Problems with non-English characters.

Hi
My problem is that '������' becomes rubbish in my C++ DLL. I hope someone can help me.
---Java code-----
class Test {
     private native void print(String str);

     public static void main(String[] args) {
          new Test().print("������");
     }

     static {
          System.loadLibrary("Test");
     }
}
----c++ code----
JNIEXPORT void JNICALL
Java_Test_print(JNIEnv *env, jobject obj, jstring jstr)
{
     /* GetStringUTFChars returns the string as (modified) UTF-8 bytes */
     const char * str = env->GetStringUTFChars(jstr, 0);
     if (str == NULL) {
          return; /* OutOfMemoryError already thrown */
     }
     wstring str2(200, ' ');
     MultiByteToWideChar(CP_ACP, 0, str, strlen(str)+1, (LPWSTR)str2.data(), strlen(str));
     printf(">>> %s\n\n", str);
     cout << " str: " << str << ", str2.data: " << str2.data() << endl;
     env->ReleaseStringUTFChars(jstr, str);
     return;
}
----Output-----
├�├�├�├�├�├╢str: ├�├�├�├�├�├╢, str2.data: 02CB6118
Thanks
Fredrik

Hi,
You will probably see the same rubbish if you try to print the characters to the console from Java. DOS prompts usually have problems printing Swedish characters. I don't know whether this affects your C++ DLL or not, but you can specify the encoding for the JVM (-Dfile.encoding). For valid encodings see:
http://www.mindprod.com/jgloss/encoding.html
/Kaj
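
For what it's worth, a likely reason for the rubbish on the native side: GetStringUTFChars hands the DLL the string as (modified) UTF-8, i.e. two bytes per Swedish letter, while MultiByteToWideChar with CP_ACP assumes the single-byte Windows ANSI code page. Converting with CP_UTF8 instead, or calling GetStringChars to get UTF-16 directly, should avoid the mismatch. Below is a minimal Java-side sketch, not code from the original post, that shows the byte difference; "åäö" is only an assumed example of the Swedish characters in question.
---Example Java code (sketch)-----
import java.nio.charset.Charset;
import java.util.Arrays;

public class EncodingCheck {
    public static void main(String[] args) {
        String s = "åäö"; // assumed stand-in for the garbled string above
        // Bytes the DLL receives via GetStringUTFChars: UTF-8, two bytes per letter
        System.out.println("UTF-8:      " + Arrays.toString(s.getBytes(Charset.forName("UTF-8"))));
        // Bytes a CP_ACP conversion expects: the ANSI code page, one byte per letter
        System.out.println("ISO-8859-1: " + Arrays.toString(s.getBytes(Charset.forName("ISO-8859-1"))));
        // The JVM default encoding, which -Dfile.encoding overrides
        System.out.println("file.encoding = " + System.getProperty("file.encoding"));
    }
}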

Similar Messages

  • Error when importing a file with non-English characters

    Hi,
    I have image files with non-English (Unicode) characters in their names, for example ABC<X>.png where <X> is a non-English character such as Japanese, Chinese, etc.
    Whenever I try to import the file into After Effects (right click -> import -> file), I always get this error:
    Finding file/dir info for the file "C:\...\ABC?.png" -- file not found (-43) (3::30)
    Can't import file "ABC?.png": unsupported filetype or extension. (0::1)
    My PC is Windows XP Professional 2002 SP2 English.
    How do I solve this problem?
    Thanks

    Adjust your system language settings. Proper file name conventions require a consistent Unicode environment, so install the respective foreign language support files or switch the language system-wide. Mixing different zones/ code ranges is always a bad idea. If your system is not in Japanese, AE will always misinterpret the characters and refuse to import. If that's not feasible, simply rename the files.
    Mylenium

  • [SOLVED!] On USB drives, problems with non-English chars and HAL

    Hello,
    I am having a problem with non-English characters (áãàçéẽê...) on files stored on my USB drive.
    On Windows they are created with the correct name, but on Linux the files have the non-English characters replaced by '?' and are not accessible.
    If I manually mount the drives using 'mount -o iocharset=utf8 /dev/sdb1 /media/usbdisk' the characters are OK, so I think I just need to get HAL to pass the correct parameters to mount. However I don't know how to do that, and haven't found any good solution.
    I tried to build a custom kernel setting the default charset as UTF-8 and it didn't work.
    Any ideas? I'm using x86-64, HAL 0.5.13-3 and my locale is pt-BR.UTF-8.
    Thanks!
    EDIT: Actually, this is not a HAL problem, but a problem with 'exo'. For the solution, I edited /etc/xdg/xfce4/mount.rc and added iocharset=utf8 to the [vfat] category.
    Last edited by Renan Birck (2009-11-28 20:54:23)

    I don't use Thunar presently, but I looked in the Thunar Volume Manager doc and I didn't find anything to change the mount options of removable drives. I am not quite sure if it's possible or not. Maybe someone using it can tell for sure.
    But if it is not possible to change the mount options, a possible solution is to disable the Thunar Volume Manager plugin and to use something else more configurable to manage the automount function.
    Personally I use the halevt package from AUR which uses configuration files in the xml format.
    It's not so easy to use but is highly configurable.
    But other tools exist as well.
    I can help you with halevt if you choose that way...

  • Problem with Non-English Fields Output to PDF by JASPER in JDev10.1.3

    I am using jsprx files (designed in iReport) to generate PDF reports out of an Oracle database.
    The non-English fields are shown correctly when I output the report to HTML or when I view it with JasperView.
    If I try making PDF files (JasperExportManager.exportReportToPdfFile), the static fields containing e.g. Arabic/Chinese characters won't be displayed, and dynamic fields from the database with non-English contents are shown as ??? or null.
    I received some suggestions about using PARAMETERS to feed the report instead of FIELDS, which I don't think can be helpful in this case or in general.
    I think this should be a common problem. These are the components I am using:
    itext-1.4.7.jar
    commons-digester-1.7.zip
    jasperreports-1.2.8.jar
    Any comment or help is appreciated.
    Thanks
    Farbod

  • Problem with non-English characters

    Hi,
    I'm using JRockit 1.5.0_03, and I have a problem with pages with non-English characters. Is it possible to change certain properties of the JVM like "user.country", "file.encoding" or "user.language"? If yes, how can I change them?
    Thanks in advance

  • [AS] Problem with non English characters in file path

    I wrote a script that exports a pdf file from ID, rasterizes it in PS, applies an action, saves it as another pdf file, and finally creates a Mail message, and attaches the file to it (the last part is written in AppleScript).
    The problem is that it doesn't work when the path to this file contains non-English characters.
    This works:
    make new attachment with properties {file name:"/Volumes/Macintosh HD/BackUp Tetard/Test.pdf"}
    but this doesn't:
    make new attachment with properties {file name:"/Volumes/Macintosh HD/BackUp Têtard /Test.pdf"}
    I vaguely remember reading somewhere that AppleScript can work with Unicode, in other words with such characters, starting from some version; I don't remember exactly which, but I believe it was Leopard.
    I am on Mac OS X 10.4.11 right now. Will updating solve this problem? Does anybody know any solution to this problem: a scripting addition, some hidden setting, etc.?
    I made a little test: with a Russian character (ё) it works, but with ê (Dutch) it doesn't. Might it have something to do with the Region setting in the International panel?
    Thanks in advance,
    Kasyan

    Kasyan, as of Leopard AppleScript treats all text as Unicode; before that you can specify 'as Unicode text'. Try a test with these.
    -- Leopard
    set x to POSIX path of (path to desktop)
    -- Pre Leopard
    set x to POSIX path of (path to desktop as Unicode text)
    -- Leopard
    set x to POSIX path of (choose file without invisibles)
    -- Pre Leopard
    set x to POSIX path of ((choose file without invisibles) as Unicode text)

  • Safari Problem with Non-English websites

    Hello every body,
    I ran into a strange problem: I cannot correctly browse some non-English websites such as کانادا مهاجرت, مهاجرت کانادا or مهاجرت به کانادا (Persian phrases for immigration to Canada); they render as a mess. Could I make a change to the HTML code to make them compatible with Safari?
    Thank you

    Taleo's job search is flat out broken on Safari. Clicking the "search" button does absolutely nothing. The latest Firefox is also broken.
    The only conclusion I can draw is that Taleo doesn't support Mac users, period. Stunningly stupid. Says a lot about the quality of Taleo's products and workers.

  • Csv upload -- suggestion needed with non-English character in csv file

    Hi All,
    I have a process which uploads a csv file into a table. It works with normal English characters, but when the csv file contains non-English characters it doesn't populate the actual columns.
    My csv file content is:
    First Name | Middle Name | Last Name
    José | # | Reema
    Sam | # | Peter
    The output comes out like this (the last name comes through blank):
    First Name | Middle Name | Last Name
    Jos鬣 | Reema | blank
    Sam | # | Peter
    http://apex.oracle.com/pls/otn/f?p=53121:1
    workspace- gil_dev
    user- apex
    password- apex12
    Thanks for your help.
    Manish

    Manish,
    PROCEDURE csv_to_array (
          -- Utility to take a CSV string, parse it into a PL/SQL table
          -- Note that it takes care of some elements optionally enclosed
          -- by double-quotes.
          p_csv_string   IN       VARCHAR2,
          p_array        OUT      wwv_flow_global.vc_arr2,
          p_separator    IN       VARCHAR2 := ';'
       )
       IS
          l_start_separator   PLS_INTEGER    := 0;
          l_stop_separator    PLS_INTEGER    := 0;
          l_length            PLS_INTEGER    := 0;
          l_idx               BINARY_INTEGER := 0;
          l_quote_enclosed    BOOLEAN        := FALSE;
          l_offset            PLS_INTEGER    := 1;
       BEGIN
          l_length := NVL (LENGTH (p_csv_string), 0);
          IF (l_length <= 0)
          THEN
             RETURN;
          END IF;
          LOOP
             l_idx := l_idx + 1;
             l_quote_enclosed := FALSE;
             IF SUBSTR (p_csv_string, l_start_separator + 1, 1) = '"'
             THEN
                l_quote_enclosed := TRUE;
                l_offset := 2;
                l_stop_separator :=
                       INSTR (p_csv_string, '"', l_start_separator + l_offset, 1);
             ELSE
                l_offset := 1;
                l_stop_separator :=
                   INSTR (p_csv_string,
                          p_separator,
                          l_start_separator + l_offset,
                          1
                         );
             END IF;
             IF l_stop_separator = 0
             THEN
                l_stop_separator := l_length + 1;
             END IF;
             p_array (l_idx) :=
                (SUBSTR (p_csv_string,
                         l_start_separator + l_offset,
                         (l_stop_separator - l_start_separator - l_offset)
                        ));
             EXIT WHEN l_stop_separator >= l_length;
             IF l_quote_enclosed
             THEN
                l_stop_separator := l_stop_separator + 1;
             END IF;
             l_start_separator := l_stop_separator;
          END LOOP;
       END csv_to_array;
    and
    PROCEDURE get_records (p_clob IN CLOB, p_records OUT varchar2_t)
       IS
          l_record_separator   VARCHAR2 (2) := CHR (13) || CHR (10);
          l_last               INTEGER;
          l_current            INTEGER;
       BEGIN
          -- If HTMLDB has generated the file,
          -- it will be a Unix text file. If user has manually created the file, it
          -- will have DOS newlines.
          -- If the file has a DOS newline (cr+lf), use that
          -- If the file does not have a DOS newline, use a Unix newline (lf)
          IF (NVL (DBMS_LOB.INSTR (p_clob, l_record_separator, 1, 1), 0) = 0)
          THEN
             l_record_separator := CHR (10);
          END IF;
          l_last := 1;
          LOOP
             l_current := DBMS_LOB.INSTR (p_clob, l_record_separator, l_last, 1);
             EXIT WHEN (NVL (l_current, 0) = 0);
          p_records (p_records.COUNT + 1) :=
             REPLACE (DBMS_LOB.SUBSTR (p_clob, l_current - l_last, l_last),
                      CHR (13), NULL); -- strip any stray carriage returns
          l_last := l_current + LENGTH (l_record_separator);
          END LOOP;
       END get_records;
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://htmldb.oracle.com/pls/otn/f?p=31517:1
    -------------------------------------------------------------------

  • N91 problem sending SMS with non-English characters...

    Hello, I own an N91 4GB version. I have upgraded to version 2.20.008. I checked recently and have not found a newer version available for download.
    When composing an SMS, a counter appears at the top of the screen showing how many characters remain and how many SMS(s) will be sent.
    When I use my native language, i.e. Greek, the counter starts as usual at 160 characters. When I type the first character it drops to 69, and from then on it decreases correctly by 1. The result is that the phone sends more than 1 SMS even if there are no more than 160 characters.
    The problem does not appear if I use English characters.
    Is there a way to fix this? Do other users have the same problem?

    Hello alsanico,
    I'm from Greece too.
    I haven't seen the N91's exact menu but I suppose it is similar to my N95's. They both run S60.
    Settings-
    General-
    Personalisation-
    Language-
    Writing Language-Ellinika
    If you have already made these settings then go to:
    Messaging-
    Options (left selection key)-
    Settings-
    Text Message-
    Character encoding-
    Reduced support
    Hope this helps...
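
    The jump from 160 to 69 follows from how SMS encoding works: a single SMS holds 160 characters in the GSM 7-bit alphabet, but one character outside that alphabet (lowercase Greek letters, for example) switches the whole message to UCS-2, which holds only 70 characters (67 per part when concatenated). The "Reduced support" setting above keeps the text in the GSM alphabet, which is why the 160 count comes back. A small illustrative sketch with these assumed limits, not code from the phone:
    public class SmsCounter {
        static int perSingleSms(boolean ucs2) { return ucs2 ? 70 : 160; }
        static int perConcatenatedPart(boolean ucs2) { return ucs2 ? 67 : 153; }

        public static void main(String[] args) {
            // One Greek (non-GSM-alphabet) character forces UCS-2 for the whole message,
            // so the counter drops from 160 to 70 - 1 = 69 remaining.
            int typed = 1;
            boolean ucs2 = true;
            System.out.println("Remaining in first SMS: " + (perSingleSms(ucs2) - typed));
        }
    }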

  • Problems with non-English characters in iTunes 7.7

    Since "upgrading" to iTunes 7.7 every file containing characters not normally found in English (e.g.: å, ß, ç, é, î, ñ, ø, ü, etc.) gets a mane change when iTunes plays the track. For example, a song called "Baião" suddenly becomes "Bai.o" and if the artist or album name contains accented characters, these get scrambled as well. The files themselves also get jettisoned from their folders in the iTunes Library, which causes big problems.

  • Problem with non-English chars

    OS : Mac OS
    Java : 1.5.0_07
    Hi,
    I have a Swing application that reads data from a database and shows it in a Swing GUI. The text returned by the database is in Arabic and is saved in a TextField object.
    But once displayed, the Arabic chars are screwed up, or actually they are not Arabic chars at all!
    For debugging I also write the result of the query to the console and to a log4j log file.
    There, it is printed in the right form.
    Here is the code:
    System.out.println("D3"+java.nio.charset.Charset.defaultCharset().name());
    System.out.println("singular "+dit.getData().getSingular());
    log4j.debug("singular "+dit.getData().getSingular());
    Font font = Font.decode("Geeza Pro");
    textl.setFont(font);
    textl.setText(dit.getData().getSingular());
    The output in the console is (and log4j) :
    D3MacRoman
    singular صوف
    The output in the Swing Textfield is
    ������
    If I configure log4j to use UTF-8, then even in the log4j log file the same screwed-up
    chars are written.
    Looks like I have to tell Swing to use MacRoman, which is the default of the OS and
    what the console and log4j use, but I don't know how to.
    Any clue??
    Thanks,
    Chris.

    Convert your strings to Unicode:
    example 1
    import java.awt.*;
    import java.awt.event.*;
    public class ApplicationFrame
        extends Frame {
      public ApplicationFrame() { this("ApplicationFrame v1.0"); }
      public ApplicationFrame(String title) {
        super(title);
        createUI();
      }
      protected void createUI() {
        setSize(500, 400);
        center();
        addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) {
            dispose();
            System.exit(0);
          }
        });
      }
      public void center() {
        Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
        Dimension frameSize = getSize();
        int x = (screenSize.width - frameSize.width) / 2;
        int y = (screenSize.height - frameSize.height) / 2;
        setLocation(x, y);
      }
    }
    import java.awt.*;
    public class BidirectionalText {
      public static void main(String[] args) {
        Frame f = new ApplicationFrame("BidirectionalText v1.0") {
          public void paint(Graphics g) {
            Graphics2D g2 = (Graphics2D)g;
            g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                RenderingHints.VALUE_ANTIALIAS_ON);
            Font font = new Font("Lucida Sans Regular", Font.PLAIN, 32);
            g2.setFont(font);
            g2.drawString("Please \u062e\u0644\u0639 slowly.", 40, 80);
          }
        };
        f.setVisible(true);
      }
    }
    example2
    Java Internationalization
    By Andy Deitsch, David Czarnecki
    ISBN: 0-596-00019-7
    O'Reilly
    import java.awt.event.*;
    import java.awt.*;
    import java.text.*;
    import javax.swing.*;
    public class ArabicDigits extends JPanel {
      static JFrame frame;
      public ArabicDigits() {
        NumberFormat nf = NumberFormat.getInstance();
        if (nf instanceof DecimalFormat) {
          DecimalFormat df = (DecimalFormat)nf;
          DecimalFormatSymbols dfs = df.getDecimalFormatSymbols();
          // set the beginning of the range to Arabic digits
          dfs.setZeroDigit('\u0660');
          df.setDecimalFormatSymbols(dfs);
        }
        // create a label with the formatted number
        JLabel label = new JLabel(nf.format(1234567.89));
        // set the font with a large enough size so we can easily
        // read the numbers
        label.setFont(new Font("Lucida Sans", Font.PLAIN, 22));
        add(label);
      }
      public static void main(String [] argv) {
        ArabicDigits panel = new ArabicDigits();
        frame = new JFrame("Arabic Digits");
        frame.addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) { System.exit(0); }
        });
        frame.getContentPane().add("Center", panel);
        frame.pack();
        frame.setVisible(true);
      }
    }
    To avoid having to type all the \u... notation manually, use the native2ascii tool (included with the SDK).
    http://java.sun.com/developer/technicalArticles/Intl/HTTPCharset/
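
    As a side note on the \u escapes used above: they are simply another spelling of the same characters, which is what native2ascii converts between. A tiny standalone check, not part of the original reply:
    public class UnicodeEscapeCheck {
        public static void main(String[] args) {
            String escaped = "\u062e\u0644\u0639";
            String literal = "خلع";
            System.out.println(escaped.equals(literal)); // prints true
        }
    }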

  • Upload text files with non-english characters

    I use an Apex page to upload text files. Then I retrieve the contents of the files from wwv_flow_files.blob_content and convert them to varchar2 with utl_raw.cast_to_varchar2, but characters like ò, à, ù become garbage.
    What could be the problem? Are characters lost when the files are stored in wwv_flow_files, or when I do the conversion?
    Some other info:
    * I see wwv_flow_files.DAD_CHARSET is set to "ascii", wwv_flow_files.FILE_CHARSET is null.
    * Trying utl_raw.cast_to_varchar2( utl_raw.cast_to_raw('àòèù') ) returns 'àòèù' correctly;
    * NLS_CHARACTERSET parameter is AL32UTF8 (not just english ASCII)

    Hi
    Have a look at "Csv upload -- suggestion needed with non-English character in csv file"; it might help you.
    Thanks,
    Manish

  • Naming files with non English characters.

    I'm using FileMaker to create PDFs through Acrobat 10.1.12. I need to use Polish, Hungarian, Czech and Slovakian characters in the file name, but the characters are not recognised and so the file name will not be created. This is on Windows; the problem does not occur on a Mac.

    Hi
    Have a look at "Csv upload -- suggestion needed with non-English character in csv file"; it might help you.
    Thanks,
    Manish

  • Does querybuilder support non-english character?

    I want to make a query using querybuilder with non-English characters (Chinese).
    I tried with http://localhost:4502/libs/cq/search/content/querydebug.html but it is not working.
    below is my query string:
    property=contenttext
    property.value=&#20320;&#22909;&#21966;
    I have converted the Chinese characters (你好嗎) to Unicode.
    Can anyone help me?

    That's a bug in the debugger UI. But it's easy to fix:
    in crxde lite, overlay /libs/cq/search/components/querydebug/querydebug.jsp by copying it to /apps/cq/search/components/querydebug/querydebug.jsp
    open /apps/cq/search/components/querydebug/querydebug.jsp
    find the line "props.load(new ByteArrayInputStream(queryParam.getBytes("ISO-8859-1")));"
    and replace with "props.load(new StringReader(queryParam));"
    Will be fixed in 5.6.1.
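
    To see why that one-line change matters, here is a small standalone sketch, assuming plain Java SE rather than the actual querydebug.jsp: Properties.load(InputStream) reads the stream as ISO-8859-1, and getBytes("ISO-8859-1") has already replaced the Chinese characters with '?' before the properties are parsed, whereas loading from a Reader keeps the characters intact.
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.StringReader;
    import java.util.Properties;

    public class PropsEncodingDemo {
        public static void main(String[] args) throws IOException {
            String queryParam = "property=contenttext\nproperty.value=你好嗎";
            // Old line: re-encoding the already-decoded string as ISO-8859-1 is lossy
            Properties broken = new Properties();
            broken.load(new ByteArrayInputStream(queryParam.getBytes("ISO-8859-1")));
            System.out.println(broken.getProperty("property.value")); // prints ???
            // Fixed line: read the characters directly, no byte round-trip
            Properties fixed = new Properties();
            fixed.load(new StringReader(queryParam));
            System.out.println(fixed.getProperty("property.value")); // prints 你好嗎
        }
    }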

  • Problem with Vcard and non-English character

    The vCard feature is what I would like to use, but I have quite a few contacts with non-English (Korean) names.
    I know the iPod can display Korean, but when I create a vCard with Korean characters and copy the vCard file into the /Contacts folder, I can see the filename as the person's name (from Windows Explorer), but I can ONLY see the first character of the file when I display contacts on the iPod.
    Does anyone have tips/tricks on displaying all the filename in IPod contacts?
    Thanks.
      Windows XP Pro  

    Because I use the string nota in a JSP page and print it into a textarea, the text comes out with no newlines, for example:
    <textarea name="nota" rows="4" cols="60"><%= nota %></textarea>
    the text in the textarea is:
    first linesecond linethird line
    but I want the text displayed in the textarea to be the same as the text in the CDATA section:
    first line
    second line
    third line
