MySQL Japanese characters

Hello,
I already searched the forums for a solution to my problem but couldn't find one, though I tried some of the things that were proposed. I have a MySQL database where I store Japanese characters in SJIS. I checked that they really are stored as SJIS.
I am using MySQL 4.1.3b-beta, mysql-connector-java-3.0.14, JBuilder X (JDK 1.4.2), and Windows XP.
When I try to retrieve the data I only get a square, then some character, then a square, then another character, and so on, but not the actual Kanji/Kana. I can display Japanese characters correctly when I retrieve them from a file, so I don't think it's a problem with the font.
My code is as follows:
public void testDB() {
    String result = "";
    Connection con = null;
    try {
        Properties prop = new java.util.Properties();
        prop.put("user", "");
        prop.put("password", "");
        prop.put("useUnicode", "true");
        prop.put("characterEncoding", "SJIS");
        String url = "jdbc:mysql://localhost/japanese";
        //Class.forName("org.gjt.mm.mysql.Driver");
        Class.forName("com.mysql.jdbc.Driver").newInstance();
        con = DriverManager.getConnection(url, prop);
        result = "Database connection established";

        String query1 = "SELECT * FROM jp";
        PreparedStatement pstmt = con.prepareStatement(query1);
        ResultSet rs = pstmt.executeQuery();
        int i = 0;
        while (rs.next()) {
            result = rs.getString(1);
            jTable1.setFont(new Font("Arial Unicode MS", 0, 15));
            jTable1.setValueAt(result, i, 0);
            i++;
        }
    } catch (Exception e) {
        // result = "Cannot connect to database server";
    } finally {
        if (con != null) {
            try {
                con.close();
                // result = "Database connection terminated";
            } catch (Exception e2) { /* ignore close errors */ }
        }
    }
}
I hope someone can help me, as MySQL is my last option after I discovered that I couldn't use MS Access or FoxPro because the JDBC-ODBC bridge doesn't handle Unicode correctly.
On a side note, as I am new to MySQL: after installing version 4.1.3b I no longer have mysqld.exe but only mysqld-opt.exe, so I am using that instead. I don't think that can be responsible for my problem, though.
Thanks for any help.
S

I see your original question was posted in August, so maybe you have already found an answer? It would be interesting to see how you fixed it.
I was having a similar sort of problem but managed to fix it by looking at the fonts available to Java on my machine and setting the font used by the graphics object to one of the Japanese fonts. (NB: I see in your code you are setting the font to Arial Unicode MS; maybe setting it to a Japanese font name would be enough.)
Hopefully the following will give you some help.
// This line puts the names of all fonts in the graphics environment into an array.
String[] nameArray = GraphicsEnvironment
        .getLocalGraphicsEnvironment()
        .getAvailableFontFamilyNames();
// I know that the last two fonts are Japanese fonts; I chose the font name second from the end of the array.
int listfonts = nameArray.length - 2;
// Then I set the font on the g2 object.
Font fontselec = new Font(nameArray[listfonts], Font.PLAIN, 24);
g2.setFont(fontselec);
best regards
Graham

Similar Messages

  • How to store japanese characters in mysql 5.0

    I want to store Japanese characters in a MySQL 5.0 database through a Java program and then retrieve the same characters through the program. The Java program is a form containing first name, last name, and address. I enter the corresponding Japanese translations into these fields when inserting into the database. In another form I retrieve those Japanese characters, and they should be displayed there.

    How do I handle Unicode for Japanese characters? Please give me more hints and any reference links.
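
    For what it's worth, here is a minimal sketch of the usual round trip, assuming MySQL 5.0 with a UTF-8 table, Connector/J on the classpath, and a JDK with try-with-resources (Java 7+); the credentials and the person/first_name table and column are made up for illustration, not taken from your program:
    import java.sql.*;
    import java.util.Properties;
    public class JapaneseRoundTrip {
        public static void main(String[] args) throws Exception {
            Properties prop = new Properties();
            prop.put("user", "myuser");              // hypothetical credentials
            prop.put("password", "mypassword");
            prop.put("useUnicode", "true");          // tell Connector/J to use Unicode
            prop.put("characterEncoding", "UTF-8");  // and to transfer it as UTF-8
            Class.forName("com.mysql.jdbc.Driver");
            try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/japanese", prop)) {
                // Insert a Japanese value; PreparedStatement handles the encoding.
                try (PreparedStatement ins = con.prepareStatement("INSERT INTO person (first_name) VALUES (?)")) {
                    ins.setString(1, "太郎");
                    ins.executeUpdate();
                }
                // Read it back; getString returns an ordinary Java (UTF-16) String.
                try (PreparedStatement sel = con.prepareStatement("SELECT first_name FROM person");
                     ResultSet rs = sel.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }
    The key point is that the column's character set, the characterEncoding connection property, and the encoding of whatever displays the result all have to agree.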

  • Create HTML file that can display unicode (japanese) characters

    Hi,
    Product:           Java Web Application
    Operating system:     Windows NT/2000 server, Linux, FreeBSD
    Web Server:          IIS, Apache etc
    Application server:     Tomcat 3.2.4, JRun, WebLogic etc
    Database server:     MySQL 3.23.49, MS-SQL, Oracle etc
    Java Architecture:     JSP (presentation) + Java Bean (Business logic)
    Language:          English, Japanese, chinese, italian, arabic etc
    Through our Java application we need to create HTML files that display Unicode text. Our present code works well with English and most of the European character sets, but when we try to create HTML files that display Unicode text, say Japanese, only ???? is displayed. Following is the code we have used. The output in the browser displays the Japanese characters correctly, but the created file shows only ??? in place of the Japanese characters. Can anybody tell us how to do it?
    <%
    String s = request.getParameter("txt1");
    out.println("Original Text " + s);
    // for html output
    String f_str_content = "";
    f_str_content = f_str_content + "<HTML><HEAD>";
    f_str_content = f_str_content + "<META content=\"text/html; charset=utf-8\" http-equiv=Content-Type></HEAD>";
    f_str_content = f_str_content + "<BODY> ";
    f_str_content = f_str_content + s;
    f_str_content = f_str_content + "</BODY></HTML>";
    f_str_content = new String(f_str_content.getBytes("8859_9"), "Shift_JIS");
    out.println("file = " + f_str_content);
    byte f_arr_c_buffer1[] = new byte[f_str_content.length()];
    f_str_content.getBytes(0, f_str_content.length(), f_arr_c_buffer1, 0);
    f_arr_c_buffer1 = f_str_content.getBytes();
    FileOutputStream l_obj_fout; // file output stream for writing
    // file object for the html file
    File l_obj_f5 = new File("jap127.html");
    if (l_obj_f5.exists()) {
        l_obj_f5.delete();
    }
    l_obj_f5.createNewFile();
    l_obj_fout = new FileOutputStream(l_obj_f5);
    l_obj_fout.write(f_arr_c_buffer1); // write the buffer once
    l_obj_fout.close();
    %>
    thanx.

    Try changing the charset attribute within the META tag from 'utf-8' to 'SHIFT_JIS' or 'utf-16'. One of those two ought to do the trick for you.
    Hope that helps,
    Martin Hughes
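
    As an illustration of that advice, here is a minimal sketch (not the original poster's code) that writes the file through an OutputStreamWriter so the bytes on disk use the same UTF-8 encoding the META tag declares, instead of round-tripping through getBytes("8859_9"); the sample string is a placeholder for request.getParameter("txt1"):
    import java.io.*;
    public class WriteUtf8Html {
        public static void main(String[] args) throws IOException {
            String s = "日本語のテキスト"; // placeholder for the request parameter
            String html = "<HTML><HEAD>"
                    + "<META content=\"text/html; charset=utf-8\" http-equiv=Content-Type></HEAD>"
                    + "<BODY>" + s + "</BODY></HTML>";
            // Write characters through a UTF-8 writer; no manual byte juggling.
            Writer out = new OutputStreamWriter(new FileOutputStream("jap127.html"), "UTF-8");
            try {
                out.write(html);
            } finally {
                out.close();
            }
        }
    }
    If the META charset is changed to Shift_JIS as suggested above, the writer's encoding should be changed to match.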

  • Converting garbled characters for JAPANESE characters in a custom table

    Hi all,
    I have a custom table that stores Japanese characters.
    After my company upgraded to ECC 6.0, the data in this custom table has become garbled, and a lot of it is affected.
    Is there any SAP tool that I can use to correct those garbled Japanese characters?
    Thanks,
    William Wilstroth

    Hi Nils,
    I really had a field day reading and testing around UC... To my disappointment, I do not have authorization to use SUMG and SCP, as well as a few of the other transaction codes...
    I finally told higher-level technical management that this table might need some changes...
    Does my problem have anything to do with MDMP, since it is no longer supported in ECC 6? I found code that searches for MDMP in RSVTPROT...
    My colleagues suggest that the data be corrected from table DBTABLOG... which, in my opinion, is not the right way...
    Thanks,
    William

  • Oracle Report Server Issue with Japanese Characters

    We are trying to set up an Oracle Reports server to print Japanese characters in PDF format.
    We have separate Oracle Reports servers in production for printing English, Chinese, and Vietnamese characters in PDF format; these run properly on Unix AIX 5.3. Now we have a requirement to print Japanese characters, so we set up a new server configured the same way as the Chinese/Vietnamese report servers, but we are not able to print Japanese characters.
    Here are the details of how we configured this new server.
    1.     We have modified the reports.sh to map the proper NLS_LANG (JAPANESE_AMERICA.UTF8) and other Admin folder settings.
    2.     We have configured the new report server via OPMN admin.
    3.     We have copied the arialuni.ttf to Printers folder and we have converted this same .ttf file in AFM format. This AFM file has been copied to $ORACLE_HOME/guicommon/gk/JP_Admin/AFM folder.
    4.     We have modified the uifont.ali (JP_admin folder) file for font subsetting.
    5.     We have put an entry in JP_admin/PPD/datap462.ppd as *Font ArialUnicodeMS: Standard "(Version 1.01)" Standard ROM
    6.     We have modified the Tk2Motif.rgb (JP_admin folder) file for character set mapping (Tk2Motif*fontMapCs: iso8859-1=UTF8) as we have enabled this one for other report servers as well.
    Environment Details:-
    Unix AIX version : 5300-07-05-0831
    Oracle Version : 10.1.0.4.2
    NLS_LANG : JAPANESE_AMERICA.UTF8
    Font Mapping : Font Sub Setting in uifont.ali
    Font Used for Printing : arialuni.ttf (Font Name : Arial Unicode MS)
    The error thrown in the rwEng trace (rwEng-0.trc) file is as below
    [2011/9/7 8:11:4:488] Error 50103 (C Engine): 20:11:04 ERR REP-3000: Internal error starting Oracle Toolkit.
    The error thrown when trying to execute the reports is…
    REP-0177: Error while running in remote server
    Engine rwEng-0 crashed, job Id: 67
    Our investigations and findings…
    1.     We disabled the entry Tk2Motif*fontMapCs: iso8859-1=UTF8 in Tk2Motif.rgb and then started the server. No error is thrown in the rwEng trace file and we are able to print the report in PDF format (please see the attached japarial.pdf for verification), but we see only junk characters. We checked the document settings in the PDF file and confirmed that font subsetting is being used.
    2.     If we enable the above entry, the rwEng trace throws the above error (Oracle Toolkit error) and the Reports engine crashes.
    It would be a great help if you could assist us in resolving this issue.

  • How can I get Japanese characters to show up for my music in iTunes?

    I am not exactly sure what generation my iPod is, but it says copyright 2004 on the back. It is 20 GB. It has no problems displaying Japanese characters if a particular song of mine is Japanese. I have my iPod set up to manually manage music. I like to carry my iPod between home and work. At both home and work, I use PCs with Windows XP Pro installed. I also use the latest version of iTunes (ver. 7.1.1.5).
    However, I have run into a weird problem. On my home computer, my iTunes displays Japanese characters perfectly fine, but on my work computer, whenever there is a Japanese character, iTunes does not recognize it and puts an ugly "square" character in its place. How do I get my iTunes to display these Japanese characters properly?
    Thanks.

    Well I just answered my own question. I needed to install the files for East Asian languages via the WinXP Control Panel. So if anyone else runs into this problem, there ya go!

  • Creating a PDF from a SAAS app creates boxes instead of Japanese characters

    I'm using an online app (Unleashed Software) to "print" invoices, and the printed invoices show boxes instead of Japanese characters. The really weird thing about this problem is that it occurs only on certain devices. I've tested on Macs, Windows, Android, and iOS; on some devices I get the problem and on some I don't, so it's not just a Windows problem or an iOS problem. I've also tried different browsers, from Chrome to IE to Firefox to Safari. Changing the browser doesn't seem to help on a device that won't output Japanese characters in a PDF properly.
    I'm wondering how PDFs are generated when using online software. Since I can't reproduce the problem on certain devices, it seems to me that the software is using some local settings to render the PDF incorrectly.
    Any ideas of how I could go about troubleshooting this problem?

    Hi,
    Could you please answer the following questions:
    1. What version of Crystal Reports are you using? (Go to Help > About to find out.)
    2. What font are you using on the report?
    Try changing the font to MS Gothic or Arial Unicode MS, preferably MS Gothic, and then export the report to PDF format.
    This may help you.
    Thanks,
    Praveen G

  • Specify File Encoding (Japanese Characters) for UTL_FILE in Oracle 10g

    Hi All,
    I am creating a text file using the UTL_FILE package. The database is Oracle 10g and its character set is UTF-8.
    The file is created on the DB server machine itself, which is a Windows 2003 machine with a Japanese OS. Some tables contain Japanese characters which I need to write to the file.
    When these Japanese characters are written to the text file they occupy 3 bytes instead of 1, which distorts the fixed format of the file that I need to stick to.
    Can somebody suggest whether there is a way to write the Japanese characters in 1 byte, or to change the encoding of the file to something else, e.g. Shift-JIS?
    Thanking in advance,
    Regards,
    Tushar

    Are you using the UTL_FILE.FOPEN_NCHAR function to open the files?
    Cheers, APC

  • Problem with Gui_download using ASC File type - japanese characters

    Hi,
    During the upgrade, while downloading data containing Japanese characters using the GUI_DOWNLOAD function module with file type 'ASC', the space between the two fields' data is much wider than in the data produced by the 4.6C version's WS_DOWNLOAD function module.
    Example: the gap between the first field's data and the second field's data in ECC 6.0 is 6 characters long, but in 4.6C it is 2 characters long.
    Is there any possibility of getting results similar to the 4.6C version? Please give your valuable suggestions.
    Thanks
    BalaNarasimman

    Hi Sandra
    Please find the detailed information for your questions.
    1. Internal table content before download: during debugging it was observed that the internal table content was the same in both versions. For testing, I used only brand-new data (a new transaction entry).
    2. Download with code page conversion: yes, codepage parameter 4103 was explicitly passed to the GUI_DOWNLOAD function module. The front-end code page used by the system is 4110. No errors occurred.
    3. The system is a Unicode system.
    4. The 6 characters do not refer to a byte value; it is only the gap between the two fields' data in ECC 6.0. Please see the example below.
    Example - File data after Download:
    ECC 6.0: Field1            Field2      (gap - 6 characters space between 2 fields data)  Using GUI_Download
    data       u0152©Ïu201Dԍu2020      EN                               
         4.6C: Field1            Field2       (gap - 2 characters space between 2 fields data) Using WS_Download
         data    u0152©Ïu201Dԍu2020  EN    
    Note:Special characters are Japanese characters:

  • Saving a file with a file name containing Japanese characters

    Hi,
    I hope some genius out there comes up with the solution to this problem
    Here it is :
    I am trying to save some files using a Java program from the command console, and the file names contain Japanese characters. The file names are available to me as string values from an InputStream. When I try to write to a File object whose name contains the Japanese characters, I get something like ?????.txt. I found out that I am able to save the files using the Unicode values of the Java characters.
    So I realize that the trick is to convert the streamed Japanese characters, character by character, into their respective Unicode values and then create a File object with that name. The problem is that I can't find any standard method to convert these characters into their Unicode values. Does anyone have a better solution? Remember, it's not about writing Japanese characters to a file, but about creating a file with Japanese characters in the file name!
    Regards
    Chandu

    Retrieve a byte array from the InputStream and build the String using the constructor
    String(byte[] bytes, String enc)
    where the encoding would be Shift_JIS for Japanese, I guess.
    To understand this concept: all Java Strings are Unicode internally. However, when you pass a byte array, the String has no way of knowing which encoding was used for the bytes it is being instantiated from, so if no encoding is specified it uses the system default, which is usually ISO-8859-1. That is what leads to the ? characters being displayed.
    If you know the encoding of the array, specifying it in the constructor is a real help.
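
    To make that concrete, here is a minimal sketch assuming the incoming bytes really are Shift_JIS; the sample name and output file are made up, and whether the Japanese file name survives on disk still depends on the platform's file-name encoding:
    import java.io.*;
    public class JapaneseFileName {
        public static void main(String[] args) throws IOException {
            // Pretend these bytes arrived over an InputStream; here they are just
            // the Shift_JIS encoding of a sample name, for illustration.
            byte[] nameBytes = "日本語ファイル".getBytes("Shift_JIS");
            // Decode with the encoding the bytes were actually written in,
            // rather than the system default.
            String fileName = new String(nameBytes, "Shift_JIS");
            File f = new File(fileName + ".txt");
            Writer w = new OutputStreamWriter(new FileOutputStream(f), "UTF-8");
            try {
                w.write("content");
            } finally {
                w.close();
            }
            System.out.println("Created: " + f.getAbsolutePath());
        }
    }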

  • Japanese Characters working as URL parameters, turning to question marks when in URL string itself

    I'm having some trouble getting coldfusion to see japanese
    characters in the URL string.
    To clarify, if I have something like this:
    http://my.domain.com/index.cfm?categorylevel0=Search&categorylevel1=%E3%82%A2%E3%82%B8%E3%82%A2%E3%83%BB%E3%83%93%E3%82%B8%E3%83%8D%E3%82%B9%E9%96%8B%E7%99%BA
    All of my code works correctly and the server is able to pass
    the japanese characters to the database and retrieve the correct
    data.
    If I have this instead:
    http://my.domain.com/index.cfm/Search/%E3%82%A2%E3%82%B8%E3%82%A2%E3%83%BB%E3%83%93%E3%82%B8%E3%83%8D%E3%82%B9%E9%96%8B%E7%99%BA
    My script (which works fine with English characters) parses the CGI variables and converts them into the same URL parameters I had in the first URL, using a loop and CFSET url... statements.
    In the first example, looking at the CF debug info shows me
    what I expect to see:
    URL Parameters:
    CATEGORYLEVEL0=Search
    CATEGORYLEVEL1=アジア・ビジネス開発
    In the second example it shows me this:
    URL Parameters:
    CATEGORYLEVEL0=Search
    CATEGORYLEVEL1=???·??????
    Can anyone suggest means for debugging this? I'm not sure if
    this is a CF problem, an IIS problem, a JRUN problem or something
    else altogether that causes it to lose the characters if they are
    in the URL string but NOT as a parameter.

    My suggestion was that you test with the
    first url, not the second. However, I can see a source of
    confusion. I overlooked your delimiter, "/". It should be "?" and
    "=" in this case. With these modifications, we get
    <cfif Len(cgi.query_string) neq 0>
        <cfset i = 1>
        <cfloop list="#cgi.query_string#" delimiters="&" index="currentcatname">
            <cfoutput>categorylevel#i# = #ListGetAt(currentcatname,2,"=")#</cfoutput><br>
            <cfset i = i + 1>
        </cfloop>
    </cfif>
    If it is a failing of ColdFusion, the above test should fail, too.
    Now, an adaptation of the same test to your second url:
    <cfset url2 = "http://my.domain.com/index.cfm/Search/%E3%82%A2%E3%82%B8%E3%82%A2%E3%83%BB%E3%83%93%E3%82%B8%E3%83%8D%E3%82%B9%E9%96%8B%E7%99%BA">
    <cfset query_str = ListGetAt(replacenocase(url2,".cfm/","?"),2,"?")>
    <cfif Len(query_str) neq 0>
        <cfset i = 1>
        <cfloop list="#query_str#" delimiters="/" index="currentcatname">
            <cfoutput>categorylevel#i# = #currentcatname#</cfoutput><br>
            <cfset i = i + 1>
        </cfloop>
    </cfif>

  • Japanese characters display with wrong encoding all of a sudden...

    I had no issues before when it came to typing Japanese in DW using the Windows language bar: I would just switch the keyboard to JP (Japanese) and start typing in DW Code view. But one day, after updating my main template and using the Find and Replace feature in DW, all the Japanese characters turned into question marks, diamonds with question marks, and ASCII alphanumeric codes.
    The spaces in my documents also turned into blocks. It was a mess.
    I don't know if it was something I triggered accidentally or some type of bug... I also remember copying and pasting text and Japanese characters from another website that I created (but I had done that a dozen times before and it was never a problem).
    Long story short, after not being able to find a solution I decided to manually delete the weird symbols and start over. I typed in Japanese using the Windows language bar as always, inside the same pages that had displayed those weird characters (sorry, I don't know the proper name for them), and it accepted the Japanese characters with no issues; it was working just like before.
    My question is: what happened? Was that a bug in DW or something on my end?
    I would like to know so I can fix the problem in case this happens again.
    I've always had UTF-8 as the charset and it's never been an issue (and all my pages are saved as UTF-8 as well).
    Which is why I am confused about why all the Japanese got messed up.
    Here is the head code of one of the pages that had the problem:
    Thank you.

    Without seeing an actual page, it's impossible to say what happened, but the most likely explanation is that you did something wrong. Asian characters, such as Japanese, require correct encoding. If the encoding is incorrect, you end up with mojibake.
    I suspect that what happened is that you copied and pasted from Shift-JIS or EUC-JP encoded text into a page with a different encoding. It's quite possible that your page was set to iso-8859-1 (Western European) without your realizing it.
    By the way, your head code didn't show up in your post.

  • Displaying Japanese characters in JSP page

    Hi,
    I am calling an application that returns Japanese characters from my JSP. I get the captions in Japanese from the application and I am able to display them. After the Japanese captions are displayed, the user selects particular captions by ticking the check box against each caption and presses the Save button. I then store the selected captions in a JavaScript string separated by :: and pass it to another JSP.
    The action JSP retrieves that string, splits it using a tokenizer, and stores the values in the database. When I retrieve them again from the database and display them, I no longer see the Japanese characters; it shows some other characters, maybe characters encoded as ISO-8859-1.
    My database is UTF-8 enabled and on my server I set UTF-8 as the default encoding. In my JSP pages I also set the charset and encoding type to UTF-8.
    I would appreciate it if you could help me resolve this issue.

    Post the encoding-related statements from your JSPs - there are a number of different ones that may be relevant.
    It may also be relevant which database you store the strings in (Oracle, DB2, etc.), since some require an encoding parameter to be passed.
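
    For reference, these are the kinds of encoding-related statements the reply above is asking about. Here is a minimal sketch of a servlet filter that forces UTF-8 on both the request (posted form data) and the response before any JSP runs; the class name is made up, and on the JSP side you would typically also have a page directive such as <%@ page contentType="text/html; charset=UTF-8" %>:
    import java.io.IOException;
    import javax.servlet.*;
    public class Utf8Filter implements Filter {
        public void init(FilterConfig cfg) {}
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            req.setCharacterEncoding("UTF-8");               // decode incoming parameters as UTF-8
            res.setContentType("text/html; charset=UTF-8");  // declare the outgoing encoding
            chain.doFilter(req, res);
        }
        public void destroy() {}
    }
    The database connection usually needs matching settings as well (for example MySQL's characterEncoding connection property).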

  • Issue with Japanese characters in files/filenames in terminal.

    I recently downloaded a zip file with Japanese characters in the archive and in the files within the archive. The name of the archive is "【批量下载】パノプティコン労働歌 第一等.zip"
    The characters are properly displayed in firefox, chrome, and other applications, but in my terminal some of the characters appear corrupted. Screenshot: https://i.imgur.com/4R22m0D.png
    Additionally, this leads to corruption of the files in the archive. When I try to extract the files, this is what happens:
    % unzip 【批量下载】パノプティコン労働歌 第一等.zip
    Archive: 【批量下载】パノプティコン労働歌 第一等.zip
    extracting: +ii/flac/Let's -+-ʦ1,000,000-.flac bad CRC 5f603d51 (should be debde980)
    extracting: +ii/flac/+ѦѾP++ -instrumental-.flac bad CRC 78b93a2d (should be 3501d555)
    extracting: +ii/flac/----.flac bad CRC ddeb1d3e (should be c05ae84f)
    extracting: +ii/flac/+ѦѾP++.flac bad CRC 0ccf2725 (should be be2b58f1)
    extracting: +ii/flac/Let's -+-ʦ1,000,000--instrumental-.flac bad CRC 67a39f8e (should be ece37917)
    extracting: +ii/flac/.flac bad CRC f90f3aa0 (should be 41756c2c)
    extracting: +ii/flac/ -instrumental-.flac bad CRC 3be03344 (should be 0b7a9cea)
    extracting: +ii/flac/---- -instrumental-.flac bad CRC 569b6194 (should be adb5d5fe)
    I'm not sure what could be the cause of this. I'm using uxterm with terminus as my main font and IPA gothic (a Japanese font) as my secondary font. I have a Japanese locale set up and have tried setting LANG=ja_JP.utf8 before, but the results never change.
    Also, this issue isn't just with this file. This happens with nearly all archives that have Japanese characters in them.
    Has anyone encountered this issue before or knows what might be wrong?
    Last edited by Sanbanyo (2015-05-21 03:12:56)

    Maybe 7zip or another tool has workarounds for broken file names, you could try that.
    Or you could try to go over the files in the zip archive one-by-one and write it to files out-1, out-2, ..., out-$n without concerning yourself with the file names. You could get file endings back via the mimetype.
    This script might work:
    #include <stdio.h>
    #include <zip.h>

    static const char *template = "./out-%04d.bin";

    int main(int argc, char **argv)
    {
        int err = 0;
        zip_t *arc = zip_open(argv[1], ZIP_RDONLY, &err);
        if (arc == NULL) {
            printf("Failed to open ZIP, error %d\n", err);
            return -1;
        }
        zip_int64_t n = zip_get_num_entries(arc, 0);
        printf("%s: # of packed files: %lld\n", argv[1], (long long)n);
        for (zip_int64_t i = 0; i < n; i++) {
            zip_stat_t stat;
            zip_stat_index(arc, i, ZIP_FL_UNCHANGED, &stat);
            char buf[stat.size];
            char oname[64];
            zip_file_t *f = zip_fopen_index(arc, i, ZIP_FL_UNCHANGED);
            zip_fread(f, (void *)&buf[0], stat.size);
            snprintf(oname, sizeof(oname), template, (int)i);
            FILE *of = fopen(oname, "wb");
            fwrite(&buf[0], stat.size, 1, of);
            printf("%s: %s => %llu bytes\n", argv[1], oname, (unsigned long long)stat.size);
            zip_fclose(f);
            fclose(of);
        }
        zip_close(arc);
        return 0;
    }
    Compile with
    gcc -std=gnu99 -O3 -o unzip unzip.c -lzip
    and run as
    ./unzip $funnyzipfile
    You should get template-named, numbered output files in the current directory.
    Last edited by 2ion (2015-05-21 23:09:29)

  • Acrobat Pro 9.3.1 does not convert certain Japanese characters

    I have a text document that contains a mix of Roman and Japanese characters - when I do Create PDF From File and read that text document in, there is a sequence of 2 Japanese characters that disappear - the text before them and after them appear in the PDF, but there's a void between.
    The sequence is (don't know if I can insert Japanese in here...)
    before監査証跡after
    When the PDF is generated, the first 2 Japanese characters (after the last 'e' in before) do not appear in the PDF.
    Here is the source text document (UTF-8 encoded with BOM): http://www.scribd.com/doc/28158046
    and here is the resulting PDF: http://www.scribd.com/doc/28158121
    Anyone seen this before?

    If I paste your "before監査証跡after" into Notepad and save it as UTF-8 text, I can print the file to the Acrobat 9.3.1 Pro "Adobe PDF" printer with no problems at all: the 4 kanji appear in a font Acrobat calls "MS-UIGothic". If I right-click on the saved *.txt file in Windows Explorer (Vista 64) and select "Convert to Adobe PDF" I still get all the kanji, although the first shows up in Adobe Ming, the 2nd in Adobe Song, and the last 2 in KozGoPr6N.
    I can't explain what's going on here, but perhaps this can help point you down a useful path.
    David
