Issue with Japanese characters in files/filenames in terminal.

I recently downloaded a zip file with Japanese characters in the archive name and in the names of the files within it. The name of the archive is "【批量下载】パノプティコン労働歌 第一等.zip"
The characters are properly displayed in firefox, chrome, and other applications, but in my terminal some of the characters appear corrupted. Screenshot: https://i.imgur.com/4R22m0D.png
Additionally, this leads to corruption of the files in the archive. When I try to extract the files, this is what happens:
% unzip 【批量下载】パノプティコン労働歌 第一等.zip
Archive: 【批量下载】パノプティコン労働歌 第一等.zip
extracting: +ii/flac/Let's -+-ʦ1,000,000-.flac bad CRC 5f603d51 (should be debde980)
extracting: +ii/flac/+ѦѾP++ -instrumental-.flac bad CRC 78b93a2d (should be 3501d555)
extracting: +ii/flac/----.flac bad CRC ddeb1d3e (should be c05ae84f)
extracting: +ii/flac/+ѦѾP++.flac bad CRC 0ccf2725 (should be be2b58f1)
extracting: +ii/flac/Let's -+-ʦ1,000,000--instrumental-.flac bad CRC 67a39f8e (should be ece37917)
extracting: +ii/flac/.flac bad CRC f90f3aa0 (should be 41756c2c)
extracting: +ii/flac/ -instrumental-.flac bad CRC 3be03344 (should be 0b7a9cea)
extracting: +ii/flac/---- -instrumental-.flac bad CRC 569b6194 (should be adb5d5fe)
I'm not sure what could be the cause of this. I'm using uxterm with terminus as my main font and IPA gothic (a Japanese font) as my secondary font. I have a Japanese locale set up and have tried setting LANG=ja_JP.utf8 before, but the results never change.
Also, this issue isn't just with this file; it happens with nearly all archives that have Japanese characters in their filenames.
Has anyone encountered this issue before or knows what might be wrong?
Last edited by Sanbanyo (2015-05-21 03:12:56)

Maybe 7zip or another tool has workarounds for broken file names; you could try that.
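For instance, if the names were stored in a Chinese codepage (the 【批量下载】 prefix suggests GBK rather than Shift-JIS, which is an assumption worth checking), some builds of Info-ZIP's unzip carry a -O charset option (Debian/Ubuntu ship the patch; check unzip -h for yours):
unzip -O GBK '【批量下载】パノプティコン労働歌 第一等.zip'
Note this only fixes the name decoding; the bad-CRC messages point at a separately corrupted download.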
Or you could go over the files in the zip archive one by one and write them to files out-1, out-2, ..., out-$n without concerning yourself with the file names. You could get the file extensions back via the mimetype.
This script might work:
#include <stdio.h>
#include <stdlib.h>
#include <zip.h>

static const char *template = "./out-%04d.bin";

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s archive.zip\n", argv[0]);
        return -1;
    }
    int err = 0;
    zip_t *arc = zip_open(argv[1], ZIP_RDONLY, &err);
    if (arc == NULL) {
        printf("Failed to open ZIP, error %d\n", err);
        return -1;
    }
    zip_int64_t n = zip_get_num_entries(arc, 0);
    printf("%s: # of packed files: %lld\n", argv[1], (long long)n);
    for (zip_int64_t i = 0; i < n; i++) {
        zip_stat_t stat;
        zip_stat_index(arc, (zip_uint64_t)i, ZIP_FL_UNCHANGED, &stat);
        char *buf = malloc(stat.size);  /* heap, not a stack VLA: FLAC entries may be tens of MB */
        char oname[32];                 /* fixed size; sizeof(template) would only be the size of a pointer */
        zip_file_t *f = zip_fopen_index(arc, (zip_uint64_t)i, ZIP_FL_UNCHANGED);
        zip_fread(f, buf, stat.size);
        snprintf(oname, sizeof(oname), template, (int)i);
        FILE *of = fopen(oname, "wb");
        fwrite(buf, stat.size, 1, of);
        printf("%s: %s => %llu bytes\n", argv[1], oname, (unsigned long long)stat.size);
        zip_fclose(f);
        fclose(of);
        free(buf);
    }
    zip_close(arc);
    return 0;
}
Compile with
gcc -std=gnu99 -O3 -o unzip unzip.c -lzip
and run as
./unzip $funnyzipfile
You should get template-named, numbered output files in the current directory.
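Once you have the numbered files, the file utility can recover their types (and hence sensible extensions) from the magic bytes; the exact MIME string depends on your magic database:
file --mime-type out-0000.bin
which should print something like audio/x-flac for the FLAC tracks.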
Last edited by 2ion (2015-05-21 23:09:29)

Similar Messages

  • Oracle Report Server Issue with Japanese Characters

We are trying to set up an Oracle Report Server to print Japanese characters in PDF format.
We have separate Oracle Report Servers for printing English, Chinese and Vietnamese characters in PDF format using Oracle Reports in production, running on Unix AIX version 5.3. Now we have a requirement to print Japanese characters, so we set up the new server with the same configuration as the Chinese/Vietnamese report servers. But we are not able to print the Japanese characters.
    I am providing the details which we followed to configure this new server.
    1.     We have modified the reports.sh to map the proper NLS_LANG (JAPANESE_AMERICA.UTF8) and other Admin folder settings.
    2.     We have configured the new report server via OPMN admin.
    3.     We have copied the arialuni.ttf to Printers folder and we have converted this same .ttf file in AFM format. This AFM file has been copied to $ORACLE_HOME/guicommon/gk/JP_Admin/AFM folder.
    4.     We have modified the uifont.ali (JP_admin folder) file for font subsetting.
    5.     We have put an entry in JP_admin/PPD/datap462.ppd as *Font ArialUnicodeMS: Standard "(Version 1.01)" Standard ROM
    6.     We have modified the Tk2Motif.rgb (JP_admin folder) file for character set mapping (Tk2Motif*fontMapCs: iso8859-1=UTF8) as we have enabled this one for other report servers as well.
    Environment Details:-
    Unix AIX version : 5300-07-05-0831
    Oracle Version : 10.1.0.4.2
    NLS_LANG : JAPANESE_AMERICA.UTF8
    Font Mapping : Font Sub Setting in uifont.ali
    Font Used for Printing : arialuni.ttf (Font Name : Arial Unicode MS)
    The error thrown in the rwEng trace (rwEng-0.trc) file is as below
    [2011/9/7 8:11:4:488] Error 50103 (C Engine): 20:11:04 ERR REP-3000: Internal error starting Oracle Toolkit.
    The error thrown when trying to execute the reports is…
    REP-0177: Error while running in remote server
    Engine rwEng-0 crashed, job Id: 67
    Our investigations and findings…
    1.     We disabled the entry Tk2Motif*fontMapCs: iso8859-1=UTF8 in Tk2Motif.rgb then started the server. We found that no error is thrown in the rwEng trace file and we are able to print the report also in PDF format… (Please see the attached japarial.pdf for your verification) but we are able to see only junk characters. We verified the document settings in the PDF file for ensuring the font sub set. We are able to see the font sub setting is used.
    2.     If we enable the above entry then the rwEng trace throwing the above error (oracle toolkit error) and reports engine is crashed.
It would be a great help if you could assist us in resolving this issue…


  • Collation issue with Japanese characters in Oracle8i

    Hi,
I have Japanese data in a VARCHAR2 column in an Oracle8i instance, containing both single-byte and multibyte Japanese characters. The encoding of the instance is UTF-8.
I want to sort them in such a way that equivalent single-byte and multibyte Japanese characters are treated as the same. Also, while selecting, if I specify the single-byte characters in the WHERE condition it should select both single- and double-byte characters, and vice versa.
The functionality I'm looking for is similar to what can be achieved by using the Collator class in Java with FULL_DECOMPOSITION as the decomposition mode.
Could anyone please let me know how I can do it?
    Thanks in advance.
    Best Regards,
    Sourav
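One avenue for the comparison side, as a sketch only, is Oracle's NLSSORT function with a linguistic sort. NLSSORT itself exists in 8i, but which sort names your release supports, and whether the chosen sort actually folds half-width and full-width forms together, has to be verified in the Globalization Support guide; JAPANESE_M below is a placeholder:
SELECT col FROM t ORDER BY NLSSORT(col, 'NLS_SORT = JAPANESE_M');
SELECT col FROM t
 WHERE NLSSORT(col, 'NLS_SORT = JAPANESE_M') =
       NLSSORT('ｱ', 'NLS_SORT = JAPANESE_M');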


  • Problems working with Japanese characters found in filenames to determine invalid shortcuts

    Hi,
I have written a script that uses Test-Path to query whether *.LNK shortcut files have valid target paths, to determine whether the LNK files should be deleted because they are invalid or point to temporary locations. This works perfectly for most shortcuts, with the exception of those with filenames in Japanese/Chinese/Cyrillic scripts.
The script is made up of the following action:
gci "$nextdrive\Users\$env:username\AppData\Roaming\microsoft\office\recent" -Filter *.lnk |
  % { $shell.CreateShortcut( $_.FullName ) } |
  ? { ( Test-Path ( $_.TargetPath )) -eq $false -or
      $_.TargetPath -like '*temporary internet files*' -or
      $_.TargetPath -like '*temp*' } |
  % { Remove-item -LiteralPath $_.FullName }
If I run the script when a file with a name such as 1、目录.doc.LNK exists in the folder above, the PowerShell script run from the ISE returns the following error:
"Cannot bind argument to parameter 'Path' because it is an empty string."
If I rename the .LNK file to something like 12345.doc.LNK, the PowerShell script successfully removes the file, so I know that the issue is certainly related to the actual filename itself. Is there any sort of UTF, CJK or encoding option I need to configure here for this to work successfully?
    Thanks for taking the time to read this!
    Andrew

    The article Johan linked has some sample code which seems to address your issue (using Shell.Application instead of WScript.Shell). Here's a modification of your code that uses this technique:
$shellApplication = New-Object -ComObject Shell.Application
Get-ChildItem "$nextdrive\Users\$env:username\AppData\Roaming\microsoft\office\recent" -Filter *.lnk |
ForEach-Object {
    try {
        $shortcut = $shellApplication.Namespace(0).ParseName($_.FullName).GetLink
        if ((Test-Path -Path $shortcut.Path) -eq $false -or
            $shortcut.Path -like '*temporary internet files*' -or
            $shortcut.Path -like '*temp*') {
            Remove-Item -LiteralPath $_.FullName
        }
    }
    catch {
        Write-Error -ErrorRecord $_
    }
}
    Alternatively, you can use the IShellLink COM class directly.  Looks like someone has already written a .NET wrapper for this: 
    http://www.vbaccelerator.com/home/NET/Code/Libraries/Shell_Projects/Creating_and_Modifying_Shortcuts/article.asp

  • Issue inputting japanese characters in sqlplus?

    (Reposting this from Linux forum)
    Hey all, looking for some feedback on this issue.
    Background: Using a tool to process an external data file (with a Japanese filename and contents which contain Japanese as well). This tool ultimately places the converted data into an Oracle database.
    When this tool is run from our Windows environment, everything works correctly.
    However, when running the Linux version of the tool, talking to the same database, the following is output:
    WARNING: underlying database error.
    SDE Code (-51): Underlying DBMS error
    Extended DBMS error code: 911
    ORA-00911: invalid character
    (駅)
    Not able to create business table 駅
    Delete layer "駅" ...
    SDE Code (-51): Underlying DBMS error
    Extended DBMS error code: 911
    ORA-00911: invalid character
    Unable to delete layer "駅"
    When we re-run the tool with an English filename it seems to work fine regardless of the contents of the file.
    Relevant environment variables:
    LANG=ja_JP.UTF8
    NLS_LANG=Japanese_Japan.UTF8
    NLS_LANGUAGE=japanese
    And I have also tried with:
    LANG=ja_JP.eucJP
    NLS_LANG=japanese_japan.JA16EUC
    NLS_LANGUAGE=japanese
I have noticed that if I run sqlplus directly and attempt to create a simple table with a Japanese name, I get the ORA-00911 error with japanese_japan.JA16EUC as NLS_LANG. If I change NLS_LANG to Japanese_Japan.UTF8, I can then successfully create a table with Japanese characters in its name. I still cannot run our conversion tool above.
    Any ideas or tips? I believe the database is set up correctly as it does work with Windows as the client... perhaps the conversion tool is overriding my NLS_LANG settings? Any other possibilities you can think of?
    TIA,
    Ray

    Thanks for all the input guys.
    Here's output from when I have an error creating a table:
    ORACLE_HOME is  /goa1/ora10gr2/app/oracle/product/10.2.0
    ORACLE_SID  is  jap10b
    SDEHOME is  /goa1/sde/sde93_jap/sdeexe93
    NLS_LANG is  japanese_japan.JA16EUC
    LANG is  ja_JP.eucJP
    jap10b on goa.esri.com >sqlplus user/pw
    SQL*Plus: Release 10.2.0.2.0 - Production on �� 5�� 8 17:06:56 2008
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning and Data Mining options
    SQL> CREATE TABLE 文 (
      2    I INT);
    CREATE TABLE 文 (
    ERROR at line 1:
    ORA-00911: invalid character
    SQL> SELECT * from NLS_SESSION_PARAMETERS;
PARAMETER                 VALUE
------------------------- --------------------------
NLS_LANGUAGE              JAPANESE
NLS_TERRITORY             JAPAN
NLS_CURRENCY
NLS_ISO_CURRENCY          JAPAN
NLS_NUMERIC_CHARACTERS
NLS_CALENDAR              GREGORIAN
NLS_DATE_FORMAT           RR-MM-DD
NLS_DATE_LANGUAGE         JAPANESE
NLS_SORT                  BINARY
NLS_TIME_FORMAT           HH24:MI:SSXFF
NLS_TIMESTAMP_FORMAT      RR-MM-DD HH24:MI:SSXFF
NLS_TIME_TZ_FORMAT        HH24:MI:SSXFF TZR
NLS_TIMESTAMP_TZ_FORMAT   RR-MM-DD HH24:MI:SSXFF TZR
NLS_DUAL_CURRENCY
NLS_COMP                  BINARY
NLS_LENGTH_SEMANTICS      BYTE
NLS_NCHAR_CONV_EXCP       FALSE
17 rows selected.
    SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
    VALUE
    JA16EUC
    SQL> select value from nls_database_parameters where parameter = 'NLS_NCHAR_CHARACTERSET';
    VALUE
    AL16UTF16
SQL>
And here it works fine (using UTF8 in the shell environment):
    ORACLE_HOME is  /goa1/ora10gr2/app/oracle/product/10.2.0
    ORACLE_SID  is  jap10b
    SDEHOME is  /goa1/sde/sde93_jap/sdeexe93
    NLS_LANG is  Japanese_Japan.UTF8
    LANG is  ja_JP.UTF8
    jap10b on goa.esri.com >
    jap10b on goa.esri.com >sqlplus user/pass
    SQL*Plus: Release 10.2.0.2.0 - Production on 木 5月 8 17:09:12 2008
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
    With the Partitioning and Data Mining options
    SQL> CREATE TABLE 文 (
      2    I INT);
    Table created.
    SQL> SELECT * from NLS_SESSION_PARAMETERS;
PARAMETER                 VALUE
------------------------- --------------------------
NLS_LANGUAGE              JAPANESE
NLS_TERRITORY             JAPAN
NLS_CURRENCY
NLS_ISO_CURRENCY          JAPAN
NLS_NUMERIC_CHARACTERS
NLS_CALENDAR              GREGORIAN
NLS_DATE_FORMAT           RR-MM-DD
NLS_DATE_LANGUAGE         JAPANESE
NLS_SORT                  BINARY
NLS_TIME_FORMAT           HH24:MI:SSXFF
NLS_TIMESTAMP_FORMAT      RR-MM-DD HH24:MI:SSXFF
NLS_TIME_TZ_FORMAT        HH24:MI:SSXFF TZR
NLS_TIMESTAMP_TZ_FORMAT   RR-MM-DD HH24:MI:SSXFF TZR
NLS_DUAL_CURRENCY
NLS_COMP                  BINARY
NLS_LENGTH_SEMANTICS      BYTE
NLS_NCHAR_CONV_EXCP       FALSE
17 rows selected.
    SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
    VALUE
    JA16EUC
    SQL> select value from nls_database_parameters where parameter = 'NLS_NCHAR_CHARACTERSET';
    VALUE
AL16UTF16
Input is being done from an SSH session from an XFCE Terminal on a Fedora 8 machine. I am cutting and pasting the Japanese character. Everything seems to match up, save for the shell environment variable. I wonder if somehow the character transmitted by my paste changes based on the character set setting.
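One way to test that suspicion, as a sketch: Oracle's DUMP function prints the bytes the server actually received, and format 1016 shows them in hex together with the character set name. Running this in both sessions for the same pasted character should show whether the client layer converted it on the way in:
SQL> SELECT DUMP('駅', 1016) FROM DUAL;
If the JA16EUC session reports different bytes than the UTF8 session, the paste is being re-encoded (or mangled) before it reaches the database.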

  • GUI Download Issue with Chinese characters

    Hello,
Currently we are upgrading from 4.7 to ECC. I'm using the GUI_DOWNLOAD function module to download data from SAP to the desktop. I have an issue with Chinese characters when downloading the file from SAP in ECC.
In 4.7 the Chinese characters were downloaded perfectly (I haven't used any code page), whereas in ECC the downloaded file has junk characters instead of Chinese.
Is there any change in the GUI_DOWNLOAD FM?
For your reference, below is the code present in the program:
      CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_DOWNLOAD
        EXPORTING
          FILENAME             = Z_FILENAME
    *****DCDK900543 - Begin of fixing for Unicode conversion ****
*      FILETYPE             = 'WK1'
          FILETYPE             = 'ASC'
          WRITE_FIELD_SEPARATOR = ABAP_TRUE
    *****DCDK900543 - End of fixing for Unicode conversion ****
        CHANGING
          DATA_TAB             = I_TAB_TMP
        EXCEPTIONS
          FILE_WRITE_ERROR     = 1
          NO_BATCH             = 2
          INVALID_TYPE         = 3
          UNKNOWN_ERROR        = 4
          OTHERS               = 5.
    Regards,
    Bharath.

    Hi bharat,
please check whether your ECC 6.0 system is Unicode-enabled (as you have upgraded).
If it is not Unicode, you will not be able to print the Chinese characters.
You can check whether it is Unicode as follows:
from the menu bar, choose System -> Status.
There you can see whether your ECC 6.0 system is Unicode or not.
    Regards,
    koolspy.
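If the system does turn out to be Unicode but the file on the desktop still shows junk, explicitly pinning the file encoding may help. This is only a sketch: GUI_DOWNLOAD accepts a CODEPAGE parameter, and '4110' is SAP's code page number for UTF-8, but verify that value for your release (e.g. in table TCP00) before relying on it:
  CALL METHOD CL_GUI_FRONTEND_SERVICES=>GUI_DOWNLOAD
    EXPORTING
      FILENAME              = Z_FILENAME
      FILETYPE              = 'ASC'
      CODEPAGE              = '4110'         " write the file as UTF-8
      WRITE_FIELD_SEPARATOR = ABAP_TRUE
    CHANGING
      DATA_TAB              = I_TAB_TMP
    EXCEPTIONS
      FILE_WRITE_ERROR      = 1
      OTHERS                = 5.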

  • Issues with Japanese encoding using Mail

Since I updated to 10.6, I have had an issue with Japanese-encoded (ISO 2022-JP) mails on my English Mac OS.
I have no problem reading, editing and answering any mails.
However, with some ISO 2022-JP encoded messages (sent with Thunderbird 2.0.0.23 (Windows/20090812), btw) I see the following misbehaviour:
- if I send the message and leave the encoding on "automatic", Mail sends the mail in UTF-8, which I do not want, since most Japanese computers do not handle UTF-8 mail by default (and the receiver gets panicked: "I can not read your mail T_T!")
- if I set the encoding to "ISO 2022-JP", I can not send nor save the message (see [1] at the end of the post). One should note that the error message when saving is really misleading (and yes, my hard drive has a lot of space left); it should be fixed by Apple.
- if I dig a bit deeper, I can find some characters in the original message which prevent Mail from sending my mail. This does not make any sense, since:
- those characters were properly encoded in ISO 2022-JP in the original message
- those characters are always very common ones
The only solution I have found so far is to delete the original message from my mail, which is very frustrating...
    A sample of such mail can be found at (I removed personal info. and the mail is about a drinking party):
    - http://files.me.com/trouve.antoine/73w3w9
    Help would be very appreciated.
    Thank you very much.
    Antoine
    [1] I get the following error messages:
    -> try to save:
    *This message can’t be saved to the Drafts mailbox.*
    The message contains one or more attachments that
    are too large to be saved in the Drafts mailbox. Try
    deleting some attachments.
    ->try to send
    *Invalid Text Encoding*
    Some characters in your message could not be
    converted to the “Japanese (ISO 2022-JP)” text
    encoding. Choose a different encoding from the
    “Text Encoding” menu.

    You can find out about the different versions here, for example:
    http://en.wikipedia.org/wiki/ISO/IEC_2022
    Thank you. I feel a bit stupid for not having looked in Wikipedia at first...
I sometimes wonder how such a basic problem as character sets can remain unsolved after more than 40 years of computer science...
Here is a note that addresses that problem, but I don't think it works with 10.6. Might be worth a try:
    http://discussions.apple.com/thread.jspa?threadID=121808&tstart=60
Thanks for the link.
It seems to still work: new Japanese mails are now sent in "ISO 2022-JP-2".
However, for messages with a header explicitly specifying "ISO 2022-JP" (which should be "ISO 2022-JP-2" on my Mac) it has no influence.
    The only ways I see to solve this issue would be:
    i) to force "ISO 2022-JP-2" for all mails (a bit too extreme)
    ii) to force the use of "ISO 2022-JP-2" instead of "ISO 2022-JP", but I do not think such precise configuration is possible
This mess appears to be caused by Thunderbird, which seems to mix up "ISO 2022-JP-2" and "ISO 2022-JP", but I do not have a working Thunderbird to test with right now...

  • Web Logic 10.3 upgrade causes issues with escaped characters in JSP.

We recently upgraded our application servers from WebLogic 9.2 to WebLogic 10.3 and we are having an issue with escaped characters in JSP code. Here is an example of what we are seeing:
    var convertedBody1 = document.getElementById('body').value.replace(/\$FIRST_NAME\$/g, firstName);
    This code works in Weblogic 9.2. In Weblogic 10.3 we have to make the following changes:
    var convertedBody1 = document.getElementById('body').value.replace(/\$FIRST_NAME\$/g, firstName);
    Thanks, Tom

    Hi:
    I have resolved the issue with the following in the jspx page.
Put a
<jsp:scriptlet>
response.setContentType("text/html; charset=UTF-8");
</jsp:scriptlet>
inside the <f:view> tag of the jspx file.
Please refer to the link http://www.oracle.com/global/il/support/tip/nlss11061.html for more details. It is helpful.
    Thanks & Regards
    Sridhar Doki

  • How to create Japanese characters PDF files -- Oracle9i

After modifying the uifont.ali file, I can get a Japanese-character PDF file by running the command line (rwrun.exe) on Oracle 9i AS.
If I call the report file from Oracle9i Forms (by using RUN_REPORT_OBJECT), the PDF file is created, but the Japanese characters are not displayed correctly.
    Can anyone help me?
    Thanks.

    Hi,
Please go through the following links; they should help:
http://lucamezzalira.com/2009/02/28/create-pdf-in-runtime-with-actionscript-3-alivepdf-zinc-or-air-flex-or-flash/
    http://forums.adobe.com/thread/753959
    http://blog.unthinkmedia.com/2008/09/05/exporting-pdfs-in-flex-using-alivepdf/
    Thanks and Regards,
    Vibhuti Gosavi | [email protected] | www.infocepts.com

  • Issue with Czech characters in PDFs generated from RSTXPDFT4

    Hi,
    We have a requirement to generate PDF documents from the spool of the Billing document outputs in our project.
    For this we are using the standard program RSTXPDFT4, which converts the SAP script OTF to PDF format.
But the Czech characters in the billing document output are not being displayed in the PDF generated from it.
We are already using device type I2HP4 when creating the print request, which supports the Latin-2 character set (ISO 8859-2), to which the special characters of East European languages belong.
Even then, the Czech characters are not displayed in the generated PDF.
We have raised a message with SAP for this, and SAP informed us that currently the only solution is to use Latin-2 soft fonts and to upload these soft fonts into the R/3 system using report RSTXPDF2, as they contain the Eastern European special characters plus all the other characters in ISO 8859-2.
But since character font definitions (font files) are protected by copyright, SAP informed us that they cannot provide these font files, and we have to acquire these Latin-2 font files by searching the internet.
If anyone has information on where we can get these "Adobe Type 1 Latin-2" font files with the '.PFB' extension, for the proper display of Czech characters, please let me know.

    Hi,
    Did you or anyone manage to find a reasonable solution for this issue?
    I'm currently facing something similar but with Polish characters instead.
    I tried using RSTXPDF2 to upload .PFB and .TTF files but to no avail.

  • Issue with cyrillic characters in path while exporting using script

Hi everyone!
It seems that there is an issue in CS5 with exporting a document using VBScript (not sure about other scripting languages) when the path contains Cyrillic symbols. An error pops up every time I try to execute the line docRef.Export path & fname, 2, exportOptions if the path contains non-Latin characters (though it probably only affects paths with Cyrillic characters). The command docRef.SaveAs path & fname, jpgSaveOptions, True, 2 works just fine with any kind of path supplied. In CS4, CS3 and CS2 this problem doesn't occur.
Is this a known bug, and what should I do if it's not? I really want to help improve Photoshop ;)


  • Compiling Java code with Japanese characters

I have Java code containing some Japanese characters. My compiler doesn't recognise these characters and gives me error messages.
    Please help me.

Obviously it's not the compiler's fault. You need to fix your code.
    Here is a link to the Java Language Specification.
    The link is to section 3.8 - Identifiers.
    It describes the acceptable naming:
    http://java.sun.com/docs/books/jls/second_edition/html/lexical.doc.html#40625
    Perhaps your editor is not saving the text file in an appropriate format.
    What editor are you using?
    Try vim http://www.vim.org
    or SciTE http://www.scintilla.org/SciTE.html
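If the editor does save the file as UTF-8, the compiler also needs to be told to read it that way; the -encoding flag is standard javac (the file name here is only an example):
javac -encoding UTF-8 MyClass.java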

  • Awt alignment issue with japanese character

I am using a Japanese OS (Windows XP) and running an AWT application (JDK 1.6).
For a Choice box (dropdown box), if the font size is less than 15 and there is a mixture of Japanese and numbers, the numbers and the Japanese characters are not vertically aligned.
    I am using java.awt.Choice component to display all the months
    Choice month =new Choice();
    month.insert ("1 \u6708", 0);
    month.insert ("2 \u6708", 1);
    month.insert ("3 \u6708", 2); .. so on
While running the applet, in the display window the numbers and the Japanese characters are not vertically aligned.
When a month is selected, it is slightly truncated at the top.
    My default font is [family=SansSerif,name=sansserif,style=plain,size=12]
When I change the font size to 15, the characters are properly aligned.
Has anyone faced this before? If so, can you please clarify why this behaviour occurs when the font size is less than 15, or how to resolve this issue without changing the font size?
    thanks in advance,
    ruhul

    Hi,
I have also faced the above-mentioned issue. The workaround of using full-width Unicode does resolve the alignment of Japanese with English/numbers/special characters.
There are other issues with this, to mention a few:
(a) The text is still top-aligned.
(b) A leading and trailing space is added to the value in the Choice component when a full-width Unicode value is used instead of English/number/special characters.
It would be nice if the mentioned issues could be resolved or a workaround could be suggested.
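For reference, the size-15 workaround mentioned in this thread looks like the following sketch (assuming the usual java.awt imports; the threshold of 15 comes from the poster's observation, not from documented AWT behaviour):
Choice month = new Choice();
month.setFont(new Font("SansSerif", Font.PLAIN, 15)); // below size 15 the mixed Japanese/ASCII items misalign
month.insert("1 \u6708", 0);
month.insert("2 \u6708", 1);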

  • Buffer Issue with streaming 10 MB file

    Hi there,
    Having a bit of a nightmare, essentially I have this code:
  private void streamBinaryData(String urlstr, String format, HttpServletResponse response) {
        String ErrorStr = null;
        try {
            if (urlstr == null)
                urlstr = "c:\\video\\hoff.flv";
            File f = new File(urlstr);
            response.setContentType("video/x-flv");
            response.setHeader("Content-Disposition", "filename=\"hoff.flv\"");
            Long fileSize = Long.valueOf(f.length());
            response.setContentLength(fileSize.intValue());
            InputStream in = new FileInputStream(f);
            ServletOutputStream outs = response.getOutputStream();
            int bit = 0;
            System.out.println("VIDEO STREAMER: start streaming data");
            while ((bit) != -1) {
                bit = in.read();
                outs.write(bit);
            }
            in.close();
            System.out.println("STREAMER: Finished streaming data");
            outs.flush();
            outs.close();
        }
        catch (Exception e) {
            System.out.println("DEBUG: " + e.toString());
        }
    }
which works just fine, but as the server that will be using this code will essentially just (90%) be using this servlet to stream .flv files, I want to make it more efficient by buffering all or some of the file in order to stream it.
So essentially I have tried many times, with variations around the following sub-code:
        InputStream in = new FileInputStream(f);
        ServletOutputStream outs = response.getOutputStream();
        int bit = 0;
        System.out.println("VIDEO STREAMER: start streaming data");
        byte[] buffer = new byte[fileSize.intValue()];
        outs.write(buffer);
        in.close();
        System.out.println("STREAMER: Finished streaming data");
The above pumps about 1 MB of the 10 MB file fine, but then hangs and no exception is received. Or:
        InputStream in = new FileInputStream(f);
        ServletOutputStream outs = response.getOutputStream();
        System.out.println("VIDEO STREAMER: start streaming data");
        byte[] buffer = new byte[fileSize.intValue()];
        for (int i = 0; i < buffer.length; i++) {
            outs.write(buffer[i]);
        }
        in.close();
        System.out.println("STREAMER: Finished streaming data");
which will pump out around 2-6 MB of the file. With both of the code changes above I can see the file being pumped at a much faster rate, but obviously it's no good as it does not deliver the whole file.
I know that the content length is fine. I have also tried varying response.setBufferSize(int) to larger than 8192, up to fileSize.intValue(), to allow a buffer to handle the whole file before outputting, all to no avail.
I have also increased the runtime RAM via '-Xmx64m'.
I am developing on Windows Tomcat (with NetBeans) and the production version is Unix-based.
Any help that anyone can offer will be greatly appreciated.

    Thanks for the reply.
Hi, yes sorry, I missed that (I typed the other two variations by hand). I have tried buffering part of the file and tried using a BufferedOutputStream.
Neither worked either.
So, for example, I used (done before, but what the hell, tried again anyway):
        InputStream in = new FileInputStream(f);
        ServletOutputStream outs = response.getOutputStream();
        System.out.println("VIDEO STREAMER: start streaming data");
        BufferedOutputStream bout = new BufferedOutputStream(outs);
        byte[] buffer = new byte[1024];
        int length = 0;
        while ((length = in.read(buffer, 0, buffer.length)) > 0) {
            bout.write(buffer, 0, length);
        }
This actually does worse, as I only receive around 500 KB.
I checked the link you provided; useful, but the implemented solution is still reading 1 byte at a time, hence far too many reads for when the server will (hopefully) be sending out a few files per minute.
Essentially it seems like the output stream (whether raw or a BufferedOutputStream) has issues with data being pumped to it that quickly, or it doesn't like raw byte arrays and only really works with the write(int) method. The best solution so far has been the variant mentioned above where I received between 4-6 odd MB of the file; this was nice and fast, just never complete.
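For what it's worth, the truncation in that last attempt is consistent with the BufferedOutputStream never being flushed or closed: whatever is still sitting in its buffer when the method returns is dropped. A minimal sketch of the same loop with the flush added (names follow the thread's code; the 64 KB buffer size is just a suggestion):
InputStream in = new FileInputStream(f);
OutputStream bout = new BufferedOutputStream(response.getOutputStream(), 64 * 1024);
byte[] buffer = new byte[64 * 1024];
int length;
while ((length = in.read(buffer)) != -1) {
    bout.write(buffer, 0, length);
}
in.close();
bout.flush();  // without this, the final buffered chunk never reaches the client
bout.close();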

  • Issue with Opening a PDF file

    We have just completed an upgrade of one of our servers executing Reporting Services.   The upgrade was from 2005 to 2008.
    After we have rendered a report and have it saved in PDF format, we start having issues.
If we try to open the PDF by double-clicking on the file name in Windows Explorer, Adobe Reader starts but sits unresponsive and consumes an excessive amount of CPU cycles. It never completes the open and we have to kill the Adobe Reader process in Task Manager.
If we start Adobe Reader and then use the menu to do File -> Open, the PDF file opens immediately.
We have attempted both methods using several different versions of Adobe Reader. The issue occurs on all versions prior to 9.3.
Since we are a service organization, we are unable to force our clients to upgrade to a more current version of Adobe Reader without providing assurances that this will correct the issue.
Has anyone else seen this type of issue with opening PDF files rendered by Reporting Services? And if so, what is the cause and how did you correct it?
    Thanks
    Steve

    Hi there,
    Please find attached a word document which contains the error that comes up when I try to open the PDF file. This PDF was e-mailed to me from one of the Safety companies that I receive e-mails from on a regular basis. I believe I am running Windows XP and the version of Adobe is Adobe Reader X. I hope this is enough info for you.
    Janice Nadeau
[signature deleted by host]
