Unicode characters are shown as "question marks" in Eclipse console

I am trying to retrieve Unicode data from a Sybase database using JDBC.
The data are stored in Sybase with the unichar and univarchar datatypes.
Following is the Java code I am trying to run.
public class Test {
    public static void main(String[] args) throws Exception {
        CoreWSServiceLocator locator_Core = new CoreWSServiceLocator();
        CoreServiceSoapBindingStub binding_Core =
                (CoreServiceSoapBindingStub) locator_Core.getCoreService();
        Contact[] con = binding_Core.getContact();
        for (int i = 0; i < con.length; ++i) {
            System.out.println(con[i].getLastName());
        }
    }
}
The result of this code in the Eclipse console should be as follows (it consists of one English and one Japanese name):
Suzuki
鈴木
However, when I run this, I get the following:
Suzuki
??
The alphabetical characters display fine in the console, but the foreign characters do not.
The default character set of the database is ISO-8859-1, but I used unichar and univarchar to store the data in Unicode, so I believe there is no issue on the database side.
I used jConnect 6.05 (com.sybase.jdbc3.jdbc.SybDriver) as the database driver.
The Java files are encoded in UTF-8.
The console encoding is UTF-8.
Is this an issue in the database driver? I have set the character set parameters to UTF-8 in both the database and the Java files.
It would be great if someone could comment on this issue.
Thanks a lot.

It might be better to ask this question on an Eclipse forum. I have a couple of suggestions, but none of them have made the output in my console look entirely correct:
1. Try to start Eclipse with these parameters: -vmargs -Dfile.encoding=UTF-8
2. Try switching the font settings for the Console under Preferences in Eclipse.
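Independent of the IDE settings, it can help to confirm what the JVM itself believes the default encoding is, since `-Dfile.encoding` is what drives it. A minimal sketch (the class name and the explicit UTF-8 PrintStream wrapper are illustrative, not part of the original code):

```java
import java.io.PrintStream;
import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) throws Exception {
        // file.encoding (and hence -Dfile.encoding=UTF-8) drives this value
        System.out.println("default charset: " + Charset.defaultCharset());
        // Wrapping System.out in an explicit UTF-8 PrintStream sidesteps the
        // JVM default entirely, so multibyte text survives either way
        PrintStream utf8Out = new PrintStream(System.out, true, "UTF-8");
        utf8Out.println("鈴木"); // the two kanji from the expected output
    }
}
```

If the first line prints something other than UTF-8 while the console is set to UTF-8, the bytes and the console's interpretation of them disagree, which produces exactly the question marks described above.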

Similar Messages

  • Arabic characters are displaying as question marks in forms 10g

We have migrated our application from Forms 6i to Forms 10g, and now in Forms 10g the Arabic characters display as question marks, while they display correctly in the old application using Forms 6i. I have already set the character set to AR8MSWIN1256 in the registry, but it didn't help. Somebody please help.

@Sarah, Al-Salamu Alikum We Rahmatu Allah we Barakatu,
Sarah Habibty, why a new installation? In order to select a new, suitable character set. Creating a new instance of the database is the better alternative, since it saves time and effort, and a backup of the current database still exists safely if needed for any purpose in the future.
@Amer, honestly speaking, modifying your NLS_LANG to AMERICAN_AMERICA.AR8MSWIN1256 works for me with both Arabic and English data in two applications. It works on my PC, but it didn't work on my boss's PC; this can happen, and I don't have a reason for it. I spent a lot of time searching, and what I got is the solution suggested by a friend of mine.
[Now please could you advise me: is it better to create a new instance of the database as Amatu Allah has suggested, or is it better to change the character set through SQL as some others have suggested?]
Again, I suggest taking the shortcut: reset the character set through SQL after taking a backup of the data that currently exists, then retest by doing the select and checking your data input and retrieval.
SQL> select * from v$nls_parameters
  2  where parameter in ('NLS_CHARACTERSET','NLS_LANGUAGE');
Watch the output; if it works, that's fine, and you save your time and effort. If it does not work with the correct NLS_CHARACTERSET, then use my previous solution.
Hope this helps...
Regards,
Amatu Allah

  • Special characters are changed to question marks

I'm using SQL Developer 3.107 for unit testing.
In a particular unit test, the expected result is a sentence (varchar2) with a special character (ë).
But when I save the result, SQL Developer changes the character to two question marks.
As a result, the unit test fails, because the expected result differs from the received result (where the 'ë' remains unchanged).
I already tried changing the encoding in the SQL Developer preferences from cp1252 to UNICODE and UTF8, but that didn't help.
Any suggestions?
    Thanks in advance

Hello:
I guess that what you observe could be an interaction between the server character set and the client character set.
    These are the results with different client characterset settings:
    NLS_LANG=american_america.WE8ISO8859P1
    select 'ë' c, dump('ë') dumped from dual;
    C DUMPED
    ë Typ=96 Len=1: 137
    NLS_LANG=american_america.WE8MSWIN1252
    select 'ë' c, dump('ë') dumped from dual;
    C DUMPED
    + Typ=96 Len=1: 191
    set NLS_LANG=american_america.WE8PC850
    select 'ë' c, dump('ë') dumped from dual;
    C DUMPED
    ë Typ=96 Len=1: 235
According to the "ISO 8859-1, 8-bit single-byte coded graphic character sets" document [http://www.open-std.org/JTC1/SC2/WG3/docs/n411.pdf], the encoding of the latin small letter e with diaeresis is 0xEB (decimal 235).
    If you set the client to WE8PC850 do you see a correct behaviour?
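The byte values in the dumps above can be reproduced without a database at all, since they are just charset encodings of U+00EB. A small illustrative sketch:

```java
public class DumpCheck {
    public static void main(String[] args) throws Exception {
        String e = "\u00EB"; // LATIN SMALL LETTER E WITH DIAERESIS, 'ë'
        // ISO-8859-1 stores it as the single byte 0xEB = decimal 235,
        // matching the code point cited from the ISO 8859-1 document
        System.out.println(e.getBytes("ISO-8859-1")[0] & 0xFF); // 235
        // UTF-8 needs two bytes for it; a cp1252 client that receives those
        // two bytes undecoded shows two junk characters (or two '?') instead
        System.out.println(e.getBytes("UTF-8").length); // 2
    }
}
```

The two-question-marks symptom in the original post is consistent with a two-byte UTF-8 sequence being interpreted one byte at a time by a single-byte client character set.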

  • Some characters are replaced by question marks!

All of a sudden my iMac (OS X 10.5.1 Leopard) is displaying question marks for some special characters.
For example, on the internet, look up the word "pediment" in the Yahoo! Dictionary (http://education.yahoo.com/reference/dictionary/entry/pediment)... Instead of showing the "dot" symbol that marks a syllable break, it shows question marks, as ped?i?ment; and instead of showing the c with a little squiggle underneath in the word facade (within the definition of pediment), it shows fa?ade.
I did not have this problem on my iMac on Tuesday (01/29/08) morning, and I have not installed anything other than the Apple updates.
I tried with Firefox and Safari, and both show the same thing. Then I went and checked my husband's iMac (OS X 10.5.1 Leopard) and it has the same problem, so I know it is not just my iMac.
Then (yes, there is more...) I started to work on my Word document and tried to insert a symbol from Insert on the toolbar... many of my symbol fonts (Symbol, Osaka, etc.) have question marks replacing special characters.
Please help! I am unable to complete my work without these special characters!!

Well, I went to the Yahoo! Dictionary "Pediment" page and it said it was set to Default, which is set to Western (ISO Latin 1), so I assumed it was Western (ISO Latin 1). But when I click on Western (ISO Latin 1), the page appears the way it's supposed to look, with special characters... What does this mean?
I reset all the settings; I chose something else as the default, quit Safari, opened it again and chose Western (ISO Latin 1) as the default to see if it changed anything... No change. It says my default is Western (ISO Latin 1), but it seems not to be.
I am officially confused and have no clue what to do...??????!!!!!
On the bright side, at least I can get my work done, even if I have to change the encoding one page at a time... Thanks.

  • PDF mail attachments are shown as question mark instead of preview or icon.

When I receive a mail with a PDF attachment, I can see the paper clip in the header of the mail. This is fine. However, Apple Mail does not show the attachment as a preview or icon. I can see only a box with a border and a question mark sign "?" in the middle. If I click the "?" symbol, Preview opens the PDF attachment properly.
It is fine for other mail attachments such as .DOC or .JPG: either I can see a preview or an icon.
Is there anybody who knows the solution to this problem?
P.S. I use OS X Yosemite 10.10.1, but the problem also existed with Mavericks.

No, I don't think so. The same attachment is shown properly on my colleague's Mac, and he is another recipient of the same e-mail. There is something wrong with my Mac.

  • Slideshow controls are shown as question marks

I know that since the end of MobileMe the pop-up slideshow hasn't worked properly. I'm trying to fix the missing controls. When you click play slideshow, the slideshow pops up, but at the bottom the play/pause, etc. controls are question marks. I've tried to follow Old Toad's tutorial listed here:
http://www.oldtoadstutorials.net/No.iW14.html
but I am still having problems.
I have multiple sites in iWeb that have already been published with slideshows, so I need to fix all of them at this point. I've changed the item that he refers to in the tutorial and uploaded it to the website's server. I've tried putting it in different folders: the scripts folder, the slideshow_files folder, and just in the root folder, but it is not working. What am I missing? Any thoughts about what I need to try next? Thanks for your help!

    If you are changing the script in the iWeb package you will need to republish every slideshow from iWeb for the updates to appear in the published version.
    You could modify the local or remote versions of the file outside of iWeb but you would need to do this every time you publish changes to your site. See [2]...
    http://www.iwebformusicians.com/Banner-Slideshow/iWeb-Slideshow-Assets.html

  • Japanese Characters are showing as Question Marks '?'

    Hi Experts,
    We are using Oracle Database with below nls_database_parameters:
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET WE8MSWIN1252
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_CSMIG_SCHEMA_VERSION 3
    NLS_RDBMS_VERSION 11.1.0.7.0
When we try to view the Japanese characters (Windows 7) in SQL Developer, Toad, or SQL*Plus, we get data like '????'.
Can anybody please explain the setup required to view the Japanese characters from the local machine and the database.
    Thanks in advance.

user542601 wrote:
[Note: If I insert the Japanese characters from SQL Developer or Toad, I am unable to see proper results.]
For JDBC connections in Oracle SQL Developer, I believe a different parameter setting is required. Try running SQL Developer with the JVM option: -Doracle.jdbc.convertNcharLiterals=true.
[I need to use this data in Oracle 6i Reports now. When I create reports using the table where I have Japanese characters stored in an NVARCHAR2 column, the value is not displayed correctly in the report.]
Regardless of Reports' support for nchar columns, 6i is very, very old and based on equally ancient database client libraries (8.0.x, if memory serves). The earliest version of the Oracle database software that supports the N literal replacement feature is 10.2, so it is obviously not available for Reports 6i.
I'm guessing the only way to fully support Japanese language symbols is to move to a UTF8 database (if not migrating to a current version of Report Services).
[Please help to provide a workaround for this. Or do I need to post this question in any other forums?]
There is a Reports forum around here somewhere. Look in the dev tools section or maybe the Middleware categories.
Edit: here it is: {forum:id=84}
Edited by: orafad on Feb 25, 2012 11:12 PM
Edited by: orafad on Feb 25, 2012 11:16 PM
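For completeness, the same JVM option can also be set programmatically, as long as it runs before the Oracle driver opens its first connection. A sketch (the property name is taken from the reply above; everything else is illustrative):

```java
public class NcharLiteralsSetup {
    public static void main(String[] args) {
        // Equivalent to passing -Doracle.jdbc.convertNcharLiterals=true on
        // the command line; must run before the first connection is created
        System.setProperty("oracle.jdbc.convertNcharLiterals", "true");
        System.out.println(System.getProperty("oracle.jdbc.convertNcharLiterals")); // true
    }
}
```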

  • Language:  Filename with characters for arabic turns question mark

OS: Solaris 9
Machine: Sun Fire 25K
There is Adobe Distiller software configured, plus a Java application. Postscript files are converted to .pdf format using Adobe Distiller. When the users convert the postscripts to PDF files through the GUI (using Exceed, for remote access), the long filenames carry the proper Arabic characters for reading purposes. This is OK.
When we use the Windows RUN box to telnet to the server and convert the postscripts to PDF, the filenames come out with question mark characters (this; is a sample; filename; ?? ??? ??; right.pdf).
We are not sure whether we have to add an Arabic package or a patch to resolve this problem.
Message was edited by:
yurioira32


  • Special Characters shown as question marks

    Hi everyone.
I have had an iPhone 4 since December 2010, but I'm having a problem. Every time I receive or send a text message with special characters such as:
ñ á é ç ó etc... I receive these characters as question marks (?). I called IUSACELL, which is my carrier here in México, and they told me that this is an iPhone problem, because they ran some tests with some iPhones they have there and everything went fine. This is quite annoying.
The problem showed up when I had iOS 4.3.5; now I'm on iOS 5 Beta 5 and it is still there!!
    Regards.
    Some images here.
    The word is "Llegué"
    The word is "Quedé"
    The word is "Años"
    The word is "Quién"
PS. Sorry for my bad English

It sounds like you have a problem with the font that is used to display those characters.
You can try some different default fonts (e.g. Arial, Verdana, Tahoma) to see if you can identify the non-working font.
You will have to reinstall or refresh that font.
    *http://en.wikipedia.org/wiki/Punctuation
    *http://en.wikipedia.org/wiki/Dash

  • Values are returned as question marks.

I have set up a UTF-8 database on my English version of WinXP.
SELECT * FROM NLS_DATABASE_PARAMETERS;
.. shows this is correct - UTF8
I have set the NLS_LANG variable in Windows to "American.America.UTF8".
Using SQL*Plus Worksheet I can update a row in my database to contain Chinese characters. I don't know what the Chinese means, but I can copy and paste from another application, and the glyphs look the same in the source and in SQL*Plus Worksheet.
I can then select the row:
SELECT * FROM THETABLE WHERE THECOLUMN = '{CHINESECHARS}';
The correct row is returned, but the column is shown as just question marks instead of the glyphs I expect.
(This also happens with the web application I am developing, which also seems to return question marks. (PHP, not JSP))
Can anyone shed any light on this problem, as I am completely stuck.

Check notes #132453.1, 152260.1 & 158577.1 on Metalink for further good details. This may or may not work for Chinese, yet it should steer you in the right direction. It's the combination of the NLS_LANG and active code page settings on the client, as well as the database character set. You say the database character set is UTF8; that part should be correct. It's then the client-side combinations that do the trick: trying different combinations until the desired results are achieved. Having not worked with Chinese, I can give you an example of what I've experienced.
These are the steps I performed to enable the client SQL*Plus (command line and GUI) and SQL*Worksheet to display/input the euro currency symbol correctly:
1) Search from the top of the registry (regedit) for ALL occurrences of NLS_LANG and make the value AMERICAN_AMERICA.WE8MSWIN1252.
2) Go to the registry entry HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage\ACP and change the active code page (ACP) to 1252. If there is an OEMACP entry there, change that also to 1252.
3) Reboot the PC.
To test, go to DOS and enter at the prompt:
E:\>chcp
Active code page: 1252
If the above message IS NOT 1252, you must have missed editing a registry entry explained above. However, you can change it for this session by entering:
E:\>chcp 1252
< Set font to Lucida Console (in Properties, Font tab) >
To get the Unicode characters you want to insert into the database, do the following:
E:\>charmap
You will see a display of different characters. If you click on advanced view at the lower right corner, a search screen comes up. Enter UNICODE for the character set, ALL for group by, and EURO for the search, and the euro currency symbol will come up. This is one way the user can enter this character into the app. You can copy the character in this function and then paste it elsewhere. Copy whatever character you want to test with; my test was with the euro currency symbol.
    E:\>sqlplus.exe <user>/<pwd>
    SQL*Plus: Release 8.1.7.0.0 - Production on Fri Jul 13 13:22:19 2001
    (c) Copyright 1999 Oracle Corporation. All rights reserved.
    Connected to: Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
    With the Partitioning option JServer Release 8.1.7.0.0 - Production
    SQL> create table t (a char(3));
    Table created.
SQL> insert into t values ('€');
1 row created.
SQL> commit;
SQL> select * from t;
A
---
€
If all the registry edits were performed correctly, this will work with the SQL*Plus GUI and SQL*Worksheet.
Check notes #132453.1, 152260.1 & 158577.1 on Metalink carefully for further details.
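The special role of code page 1252 for the euro symbol can also be checked from Java. A small illustrative sketch:

```java
public class EuroCheck {
    public static void main(String[] args) throws Exception {
        String euro = "\u20AC"; // the euro currency symbol
        // In windows-1252 the euro sign occupies code point 0x80, which is
        // why the active code page must be 1252 for it to round-trip
        System.out.println(Integer.toHexString(euro.getBytes("windows-1252")[0] & 0xFF)); // 80
        // A UTF8 database stores the same character as three bytes
        System.out.println(euro.getBytes("UTF-8").length); // 3
    }
}
```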

  • Apostrophes are replaced by question marks in output string?

Hi guys,
I think this is an encoding problem, but I cannot find it in the forum archives. My apostrophes are being replaced by question marks in my JSP output.
Any idea why, and how to get them back to apostrophes?
Cheers, ADC

Guys, the 3rd-party product has no trouble at all showing the apostrophes from the self-same database that I get question marks from.
This MUST be an encoding issue, so anyone who knows about encoding and changing it, please could you suggest something. You can take for granted that one way or another the character in the database IS an apostrophe, as verified by viewing it via the 3rd-party product we have.
You said:
[In the SQL database view they come out as squares]
Were you referring to the normal applications that are available for viewing data in the database (like SQL*Plus or Enterprise Manager)?
If yes, then if it were me I would presume it was not an apostrophe and that the third-party tool was mapping the real character to an apostrophe. A 'square' indicates an unusual character in viewers.
Are you using EBCDIC? Unless EBCDIC is involved, the apostrophe has the same value in most if not all character sets, because ASCII forms the lower range of most character sets. I know, for example, that the Korean, Japanese and Chinese character sets use ASCII, as does Unicode (with padded zeros for the wider ranges).

  • Weblogic 12c Servlet Response - Special characters show up as question mark

My web app is running on WebLogic 12c (12.1.1) using WebWork + Hibernate. The program streams data (bytes making up a PDF) from a CLOB in an Oracle database to the ASCII stream of the servlet output response. No exceptions are thrown, but the generated PDF contains blank pages. Comparing the bytes of the generated PDF, special characters show up as question marks.
Some of the bytes read in from the database use 8 bits (correct data), but the bytes the servlet returns contain only 7 (every byte with the 8th bit set becomes "1111111"). The number of bytes returned from the servlet is correct.
Code:
// response is HttpServletResponse
response.setContentType("application/pdf");
response.setHeader("Content-Disposition", "inline; filename=\"test.pdf\"");
out = response.getOutputStream();
byte[] buf = new byte[16 * 1024];
InputStream in = clob.getAsciiStream();
int size = -1;
while ((size = in.read(buf)) != -1) {
    // buf contains the correct data
    out.write(buf, 0, size);
}
// other exception handling code, etc.
out.flush();
out.close();
    "Correct" pdf byte example:
    10011100
    10011101
    1010111
    1001011
    1101111
    11011011
    Incorrect pdf byte example:
    111111
    111111
    1010111
    1001011
    1101111
    111111
I have verified that the data read from the CLOB in the database IS correct. My guess is that the WebLogic server has some strange servlet setting that causes the bytes to be written to the servlet output stream incorrectly, or that it is a character-encoding issue. Any ideas?
    Edited by: 944705 on Jul 26, 2012 10:17 AM

Solution found; I'll post the workaround for those who might encounter the same problem.
Somewhere in the layers of technology (WebWork or WebLogic, I'd guess), the servlet response is encoded into UTF-8 regardless. The encoding in the database was ISO-8859-1. Sending ISO-encoded bytes as UTF-8 caused the conflicting character codes (anything above 127) to show up as undefined.
The fix is to decode the input byte array into an ISO-8859-1 string, then encode that string into UTF-8, which can be sent by WebLogic:
String isoConvert = new String(buf, "ISO-8859-1");
out.write(isoConvert.getBytes("UTF-8"), 0, isoConvert.getBytes("UTF-8").length);
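The workaround can be demonstrated in isolation; a minimal sketch, assuming (as the post says) the stored bytes are ISO-8859-1:

```java
import java.nio.charset.StandardCharsets;

public class ReencodeDemo {
    public static void main(String[] args) {
        // Simulated database bytes: one ISO-8859-1 character above 127 (0xEB, 'ë')
        byte[] buf = {(byte) 0xEB};
        // Decode with the charset the data was actually written in...
        String isoConvert = new String(buf, StandardCharsets.ISO_8859_1);
        // ...then re-encode for the UTF-8 response stream
        byte[] utf8 = isoConvert.getBytes(StandardCharsets.UTF_8);
        // The single byte becomes the valid two-byte UTF-8 sequence 0xC3 0xAB
        System.out.println(utf8.length); // 2
        System.out.println(utf8[0] & 0xFF); // 195
    }
}
```

Note the output is now longer than the input; for binary content such as PDF bytes this changes the payload, which is why treating a binary stream as character data is fragile in the first place.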

  • Can someone help me understand how ePub CSS @fontface Unicode characters are supported in td , but not in div or other elements?

    Hi,
I'm working on a project to convert several hundred thousand life-sciences articles into ePub format, and we have run into a problem with character entities.
Being that these are scientific articles, the characters are from a wide range of Unicode charts and are essential to transmitting the meaning of the data.
The problem is that in my ePub, a character entity inside a table data cell renders the @font-face correctly, but inside any other HTML element the character renders as an empty box on our iPad 2s.
I've placed pre tags in hopes that the Unicode will not be rendered in your browser here. The code point in this example is x1d542, just in case.
So inside div we see boxes; inside td we see the character rendered properly.
    <pre>
          <div class="stix">Let &#x1d542; be a field, which will be either the complex numbers &#x02102; or the finite field &#x1d53d;</div>
          <table id="t31" rules="all">
            <tr>
              <td>&#x1d542;</td>
              <td class="stix">&#x1d542;</td>
              <td>U+1D542 MATHEMATICAL DOUBLE-STRUCK CAPITAL K </td>
            </tr>
    </pre>
    My CSS looks like this:
    <pre>
@font-face {
    font-family: 'STIX';
    src: url('STIX-Regular.otf') format('opentype');
    font-weight: normal;
    font-style: normal;
    unicode-range: U+02B0-02FF, U+07C0-07FF, U+0900-097F, U+0F00-0FD8, U+1D00-1D7F, U+1D80-1DBF, U+1D400-1D7FF, U+1E00-1EFF, U+1F00-1FFE, U+2000-206F, U+20A0-20B8, U+20D0-20F0, U+2300-23FF, U+25A0-25FF, U+2600-26FF, U+27C0-27EF, U+27F0-27FF, U+2900-297F, U+2A00-2AFF, U+2B00-2B59, U+2C60-2C7F;
}
@font-face {
    font-family: 'STIX-Math';
    src: url('STIXMath-Regular.otf') format('opentype');
    font-weight: normal;
    font-style: normal;
    unicode-range: U+02B0-02FF, U+07C0-07FF, U+0900-097F, U+0F00-0FD8, U+1D00-1D7F, U+1D80-1DBF, U+1D400-1D7FF, U+1E00-1EFF, U+1F00-1FFE, U+2000-206F, U+20A0-20B8, U+20D0-20F0, U+2300-23FF, U+25A0-25FF, U+2600-26FF, U+27C0-27EF, U+27F0-27FF, U+2900-297F, U+2A00-2AFF, U+2B00-2B59, U+2C60-2C7F;
}
.stix {
    font-family: "STIX", "STIX-Math", sans-serif;
}
</pre>
    Is it possible that this is a rendering bug, because the character is rendering in the table cell, but not in other elements?
    Have I missed something obvious?
    Thanks,
    Abe

I assume you are including the STIX font as part of your ePub files?
Perhaps the folks who write this blog might be able to help; they have done some work with font embedding:
http://www.pigsgourdsandwikis.com/2011/04/embedding-fonts-in-epub-ipad-iphone-and.html

  • Unicode characters are not inserted in database properly

Hi all,
I am developing an application that takes entries from text boxes and inserts them into a database. When I try to write in a language other than English (I use Arabic), the fields are inserted, but as ?????? in the database fields. I checked the code points of the string and everything is OK until the call to executeUpdate(). I think that the Unicode characters represented in the code point array are not transferred to the database. How can I fix this problem?
Best regards.

    In the connection string, you have to specify character encoding.
    For example,
    jdbc:mysql://localhost:3306/database?useUnicode=true&characterEncoding=utf8
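As a sketch of how that URL might be used (the host, port, and database name are placeholders from the reply, not a working configuration):

```java
public class UnicodeConnection {
    public static void main(String[] args) {
        // URL parameters exactly as in the reply; the surrounding code is
        // illustrative and assumes MySQL Connector/J is on the classpath
        String url = "jdbc:mysql://localhost:3306/database"
                   + "?useUnicode=true&characterEncoding=utf8";
        System.out.println(url);
        // DriverManager.getConnection(url, user, password) would then send
        // Java strings to the server as UTF-8 instead of replacement '?'
    }
}
```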

  • Japanese character in Code Editor of "View" is shown as Question Marks

Hi All,
I have a problem: I tried to create a view with a decode function, as shown below:
    decode(A.resource_process_name,
            'ザイコ', 'Inventory',
            'PP良後', 'Wafer BE',
            'PP良前', 'Wafer FE',
            'PP良済', 'Wafer',
            'PP良引', 'Wafer in transit',
            A.resource_process_name
      ) as step_code
The data type of A.resource_process_name is NVARCHAR2. Data in the "STEP_CODE" column is displayed correctly, after translation by the decode function, when I run a select statement on this view. I don't have any issue up to this point.
But the problem happens when I try to see/open the script/code of this view in SQL Developer (right-click the view --> Edit...). It shows me the code of the decode function as shown below:
    decode(A.resource_process_name,
            '¿¿¿', 'Inventory',
            'PP¿¿', 'Wafer BE',
            'PP¿¿', 'Wafer FE',
            'PP¿¿', 'Wafer',
            'PP¿¿', 'Wafer in transit',
            A.resource_process_name
Can you please suggest what should be done so that the code is visible in its original format, i.e., it should show the decode function with the Japanese characters?

