Problem retrieving Unicode chars.

I have an MS SQL Server database with some fields defined as nvarchar. I pasted some Chinese chars into them in Enterprise Manager and that works fine, but using rs.getString() the result is "?????" characters. I wonder what's wrong with the app. While the Chinese characters from resource bundles display correctly, the ones from the database do not.
Please help.

I have already included the UTF-8 directive and the page content type.
As a matter of fact, I can view the Chinese chars from the resource bundle I made for all my labels and buttons just fine.
So I have ruled out a font problem, since the browser can display the characters that come from the resource bundle; but when they come from the database through rs.getString() it does not work, it displays ?????. Is getString() capable of returning foreign chars? Does it have something to do with the JDBC driver?
Please enlighten me more.
Thanks.
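
One way to narrow this down is to dump the code points that rs.getString() actually returns; a rough diagnostic sketch only, with made-up URL, table, and column names:

import java.sql.*;

public class NvarcharCheck {
    public static void main(String[] args) throws Exception {
        // URL, credentials, table "labels" and column "caption" are placeholders.
        try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT caption FROM labels")) {
            while (rs.next()) {
                String s = rs.getString(1);
                for (int i = 0; i < s.length(); i++) {
                    // U+003F here means the driver itself already returned '?'
                    System.out.printf("U+%04X ", (int) s.charAt(i));
                }
                System.out.println();
            }
        }
    }
}

If the output is already U+003F for every Chinese character, the text is being lost in the driver or the connection's charset handling (many SQL Server drivers have a connection property for the client charset, so the driver documentation is worth checking); if the correct CJK code points come back, the loss happens later, in the page's response encoding or the font.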

Similar Messages

  • Problem with unicode in MIME text/html

    Hi;
    I have a Java program that sends email by relaying it through our Exchange server using SMTP. The email has both a To and a Bcc recipient in the single message sent.
    The Bcc addressee receives the email fine.
    The To address, however, has a problem with chars that are > 0x7f in the HTML. The HTML uses UTF-8, but the displayed characters look as though the UTF-8 part was ignored.
    Also weird: if I go to View > Options in Outlook for the Bcc email (which is good), it shows:
    MIME-Version: 1.0
    Content-Type: multipart/alternative;
    boundary="----=_Part_0_32437168.1135634913407"
    Return-Path: [email protected]
    X-OriginalArrivalTime: 26 Dec 2005 22:08:33.0366 (UTC) FILETIME=[E94D1F60:01C60A68]
    ------=_Part_0_32437168.1135634913407
    Content-Type: text/plain; charset=Cp1252
    Content-Transfer-Encoding: quoted-printable
    ------=_Part_0_32437168.1135634913407
    Content-Type: text/html; charset=Cp1252
    Content-Transfer-Encoding: quoted-printable
    ------=_Part_0_32437168.1135634913407--
    But for the to email (which has the problem), it only shows:
    MIME-Version: 1.0
    Content-Type: multipart/alternative;
    boundary="----=_Part_0_32437168.1135634913407"
    Return-Path: [email protected]
    X-OriginalArrivalTime: 26 Dec 2005 22:08:33.0366 (UTC) FILETIME=[E94D1F60:01C60A68]
    Does JavaMail do anything weird when it gets an email with a To and a Bcc and split it up wrong? I just downloaded and installed the latest mail.jar and activation.jar.
    thanks - dave

    OK... this didn't quite cure it for me... but having done this AND then this...
    // needs: javax.mail.internet.MimeBodyPart, javax.activation.*, java.io.*, java.nio.ByteBuffer, java.nio.charset.Charset
    MimeBodyPart htmlText = new MimeBodyPart();
    final String htmlStuff = "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
        + "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">"
        + "<html xmlns=\"http://www.w3.org/1999/xhtml\" xml:lang=\"en\" lang=\"en\">"
        + "<head>"
        + "<title>Stuff.</title>"
        + "<meta http-equiv=\"Content-Type\" content=\"application/xhtml+xml; charset=utf-8\"/>"
        + "</head><body>"
        + "<p>Currency Symbols: \u00A3\u00A2\u20A3\u20AC$</p>"
        + "</body></html>";
    DataSource htmlSource = new DataSource() {
        private final Charset cset = Charset.forName("utf-8");
        public String getContentType() { return "text/html"; }
        public InputStream getInputStream() throws IOException {
            // Hand the body to JavaMail as UTF-8 bytes.
            ByteBuffer buf = cset.encode(htmlStuff);
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            return new ByteArrayInputStream(bytes);
        }
        public String getName() { return null; }
        public OutputStream getOutputStream() throws IOException {
            throw new IOException("read-only DataSource");
        }
    };
    htmlText.setDataHandler(new DataHandler(htmlSource));
    htmlText.addHeader("Content-Transfer-Encoding", "base64");
    This works for me, as shown by the Unicode chars in the html.
    If you intend to take this to production, create a decent external DataSource class and avoid the anonymous class; that also removes the need for the final String, and the string can then come from anywhere.
    Hope this helps,
    Barry
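
    With a reasonably recent JavaMail (the setText(text, charset, subtype) overload was added in JavaMail 1.4), much the same result can usually be had without a hand-rolled DataSource; a minimal sketch, assuming htmlStuff holds the markup from above:

    MimeBodyPart htmlText = new MimeBodyPart();
    // Sets Content-Type to text/html; charset=utf-8 and encodes the body accordingly.
    htmlText.setText(htmlStuff, "utf-8", "html");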

  • GUI_DOWNLOAD problem in unicode system

    Hi Gurus,
    I am facing one problem in GUI_DOWNLOAD. We are doing Unicode remediation in one report. In the program, one internal table is declared as type c with length 255, and it is filled by importing the data from a cluster. This internal table is then passed to the WS_DOWNLOAD function module with file type 'BIN' to download it as a Word doc file. We replaced the function module with GUI_DOWNLOAD. It works fine in the non-Unicode system, but it does not download properly in the Unicode system.
    I am unable to find the cause. I tried passing different codepages at runtime; it does not solve my problem.
    << Moderator message - Everyone's problem is important. Please do not ask for help quickly. >>
    Thanks & Regards,
    Sastry R
    Edited by: Rob Burbank on Dec 13, 2010 9:39 AM

    Hi Clemens.
    I replaced the ws_download function module with gui_download.
    here is my code
    Earlier before 6.0 code as follows
    CALL FUNCTION 'WS_DOWNLOAD'
       EXPORTING
         bin_filesize            = data_len
         filename                = p_file
         filetype                = 'BIN'
       TABLES
         data_tab                = data_tab
       EXCEPTIONS
         file_open_error         = 1
         file_write_error        = 2
         invalid_filesize        = 3
         invalid_table_width     = 4
         invalid_type            = 5
         no_batch                = 6
         unknown_error           = 7
         gui_refuse_filetransfer = 8
         OTHERS                  = 9.
    IF sy-subrc <> 0 AND no_error_dlg = space.
       MESSAGE i002(sy) WITH text-i03.    "FILE OPEN ERROR
    ENDIF.
    Replaced above with following code
      DATA:lv_fname TYPE string,
           lv_ftype(10) VALUE 'BIN',
           lv_codepage type abap_encod VALUE '4102'.
    CALL METHOD cl_gui_frontend_services=>gui_download
        EXPORTING
          bin_filesize            = data_len
          filename                = lv_fname
          filetype                = lv_ftype
          codepage                = lv_codepage
        CHANGING
          data_tab                = data_tab
        EXCEPTIONS
          file_write_error        = 1
          no_batch                = 2
          gui_refuse_filetransfer = 3
          invalid_type            = 4
          no_authority            = 5
          unknown_error           = 6
          header_not_allowed      = 7
          separator_not_allowed   = 8
          filesize_not_allowed    = 9
          header_too_long         = 10
          dp_error_create         = 11
          dp_error_send           = 12
          dp_error_write          = 13
          unknown_dp_error        = 14
          access_denied           = 15
          dp_out_of_memory        = 16
          disk_full               = 17
          dp_timeout              = 18
          file_not_found          = 19
          dataprovider_exception  = 20
          control_flush_error     = 21
          not_supported_by_gui    = 22
          error_no_gui            = 23
          OTHERS                  = 24.
      IF sy-subrc <> 0 AND no_error_dlg = space.
        MESSAGE i002(sy) WITH text-i03.    "FILE OPEN ERROR
      ENDIF.
    I tried all the other codepages as well, like 4110/4103/1110/1100/1102. It is not working;
    it still gives a problem in the Unicode system. The file downloads, but not properly,
    and when I open the Word file it asks me to select an encoding type to make the document readable, along with the available text encoding formats.
    Please help me..
    Thanks & Regards,
    Sastry R

  • Display Unicode Chars in text box

    This seems like it should be simple enough, but apparently I am dense...
    I want to display a Unicode char in a text box based on its hex code input. For instance, given x2190, I want to display a Left Arrow in a text box (x2190 is the code for left arrow). I tried the Flatten To String function and then wiring the string to a text box with Force Unicode set, but that doesn't work.
    Or is there a simpler way to get non-ASCII chars like arrows and such to display? Next stop is little BMPs in a picture ring... ack
    TIA
    Bill F

    Did you get a chance to see the document at the link below?
    http://decibel.ni.com/content/docs/DOC-10153
    Anil Punnam
    CLD
    LV 2012, TestStand 4.2..........

  • Trouble with Unicode Chars

    Hi all,
    I am having trouble displaying Unicode chars both in an Applet and in the command prompt window. I am using awt. The characters I want to display are: \u2228, \u2283, \u2261. These characters display correctly from AppletViewer when I add them to ChoiceBoxes, they display correctly in some browsers (isn't Unicode fairly universal by now? Do most browsers support it?), but not correctly when I try to put them into a TextField, using this simple code:
    public void keyTyped(KeyEvent e) {
        if (e.getKeyChar() == '/') {
            e.setKeyChar('\u2228');   // logical OR
        }
        if (e.getKeyChar() == '.') {
            e.setKeyChar('\u2283');   // superset / implication
            System.out.println(e.getKeyChar());
        }
        if (e.getKeyChar() == ',') {
            e.setKeyChar('\u2261');   // identical to / equivalence
            System.out.println(e.getKeyChar());
        }
    }
    Any help is greatly appreciated, thanks!

    Er, forgot to mention, the chars display as either a black bar, sort of a square...
    That happens when there is no suitable font for the code point and the system does not know how to map it to another code point for which a suitable font exists (the latter mechanism is what lets non-Unicode fonts be used with Unicode chars).
    ...a question mark...
    That is possibly another issue. '?' usually appears when converting characters into bytes, specifically when the encoding has no rule for a particular character. For example, the "Cp1251" encoding knows nothing of Greek characters and so converts them into ?'s. Such a conversion can occur when a Java string is passed to the native window system, because those systems are often not Unicode-based (a small example of this substitution follows the links below).
    ...or an equal sign.
    That is completely illegal behaviour and usually occurs only in very old JVMs.
    Note that Swing always handles Unicode correctly.
    A relevant documentation is here:
    http://java.sun.com/j2se/1.3/docs/guide/intl/addingfonts.html
    and
    http://java.sun.com/j2se/1.3/docs/guide/intl/fontprop.html
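
    To make the '?' case concrete, a tiny sketch (class name and choice of encoding are just for illustration) showing how an encoding with no mapping for the logic symbols silently turns them into question marks:

    import java.io.UnsupportedEncodingException;

    public class ReplacementDemo {
        public static void main(String[] args) throws UnsupportedEncodingException {
            String logic = "\u2228\u2283\u2261";                     // the three logic symbols
            byte[] bytes = logic.getBytes("windows-1252");           // no mapping for these chars
            System.out.println(new String(bytes, "windows-1252"));   // prints "???"
        }
    }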

  • How to include unicode chars in text boxes?

    Hi,
    We need the unicode chars (for example: Chinese, Korean, etc...) to appear in the generated .pdf and .pcl files from the Adobe Output designer's IFD file.
    These chars are static and will not change, so we shall not be fetching them from FNF file (.dat).
    While trying to copy-paste them directly into a text box field, we get an error prompting us to convert them to some other character set.
    We could not paste them into the text boxes exactly as the text should appear.
    Please tell us if there is a way to achieve this.
    Thanks and regards,
    Gurunath
    [email protected]

    Double check that all of the selected presentation targets and the font being used for the particular text box support the characters you are trying to paste into it. For example, the PDF target only has 11 fonts available unless you use the other tabs to "create new soft font cartridges". If you have the appropriate fonts installed on your PC, you should be able to use this capability to make the font available for each of your presentation targets. (At least that is my impression - I've never needed to try it.)

  • SQL Developer & Unicode Chars

    Running SQL Developer on Windows XP.
    Tool looks good; just one thing: we have Japanese/Korean text (not much at the moment) in our Oracle 9i / Oracle 10g (AL32UTF8) databases.
    When I run the query in the sql editor, the chars appear as small rectangles.
    Exporting this as XML and loading into an XML viewer (like IE), all the chars appear fine.
    When I copy the text from the data results grid and paste into the sql editor window (after tweaking the prefs), the chars appear fine.
    so I am wondering ... how can I get the text in the data grid to show the Japanese text ?
    Is there a setting I have missed ? or would this be an enhancement ?
    We will be storing more and more international chars and having a tool capable of viewing it would be something we have not found in other tools we have trialed.

    In the Prefs -> Environment, ... set encoding to UTF-8
    In the Prefs -> Code Editor -> Fonts ... set to "Arial Unicode MS"
    Method 1 :
    Open a SQL Worksheet
    Type in the SQL; when executing it with the <F9> key, Japanese text is displayed as small rectangles.
    When I cut-and-paste the text into the window where the SQL Statement is typed in, I can see the Japanese chars.
    Method 2 :
    On the connections panel, go to "Other Users" and navigate to the table that contains the unicode chars.
    Double click on tablename in the left panel, this brings up a "toad like" display, the second tab is the "Data" tab,
    and the unicode data is displayed as miniature rectangles.
    I am unable to find a setting (in the Prefs) to set the font for the grid the data is displayed in.

  • Strange problem retreiving string from database

    Hi,
    I am running into a very strange problem retrieving a string from a database. The database I am using is Access. I am using .getString(COLUMN_NAME) to get the data from the particular record in the result set. The problem is that sometimes it does not seem to get the entire information that is in the cell. I have found that it is completely random when it will get the entire contents and when it will only get a portion. It seems like once it gets only a portion, though, every other retrieval of the same field on a different record in the result set will yield the same cut-off point. Of course, to make my life even more difficult, it is not always the same cut-off point on a different result set.
    Has anyone else experienced anything along these lines? Any help would be greatly appreciated as I am running out of ideas as to the cause.
    Thank you

    I've printed out exactly what is being retrieved from the fields in question, and that's how I narrowed it down to the retrieval of the information from the result set. I've found the exact line of code where I am suddenly receiving only part of the data, and all I'm doing there is getting the data from the result set. Nothing fancy at all. So unless the result set is becoming corrupt somehow... not sure how that would happen, though, just by cycling through it?
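
    One thing that is sometimes worth trying when a driver truncates long text values is to read the column as a character stream rather than with a single getString() call; a sketch only, with a hypothetical column name "notes":

    import java.io.IOException;
    import java.io.Reader;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class LongTextReader {
        // Reads a potentially large text column via a stream; "notes" is a placeholder name.
        static String readNotes(ResultSet rs) throws SQLException, IOException {
            StringBuilder sb = new StringBuilder();
            try (Reader r = rs.getCharacterStream("notes")) {
                if (r == null) {
                    return null;                 // SQL NULL
                }
                char[] buf = new char[4096];
                int n;
                while ((n = r.read(buf)) != -1) {
                    sb.append(buf, 0, n);
                }
            }
            return sb.toString();
        }
    }

    Whether this helps depends on the driver (the old JDBC-ODBC bridge in particular had known quirks with memo columns and the order in which columns are read), but it at least isolates the retrieval from any string conversion the driver does.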

  • Unicode Problem (KhmerOs Unicode)

    I am developing a web site with Flex Builder 2. I want to use the Khmer OS Unicode font on the site, in a TextInput and a TextArea. Both of these controls display "?" when I type. I have already embedded the font and declared the Unicode range, but they still display "?".
    <mx:Style>
    @font-face {
        font-family: "Khmer OS";
        src: url("assets/KhmerOS.ttf");
        unicode-range: U+1780-U+17FF, U+19E0-U+19FF;
    }
    </mx:Style>
    I don't know how to solve this problem.

    Unicode is not the same as UTF-8: Unicode assigns numeric code points to characters, while UTF-8 is a variable-length byte encoding of those code points (one byte for ASCII, two to four bytes for everything else).
    So the first question is: what exactly was entered into the text field, Unicode characters or already-encoded UTF-8 bytes?
    Can you post the character codes you entered when you saw this behavior?
    Next, make sure you get the correct data from the DB. To do this, debug your code and set a breakpoint right after you get the data from the DB (I guess into a string variable), then check the string using the data inspector. Look at the bytes, not at the string itself (open the string node to see the internal representation).
    Timo
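
    To illustrate the variable-length point, a tiny Java sketch (class name made up, added purely as an illustration of UTF-8 byte counts):

    public class Utf8Lengths {
        public static void main(String[] args) throws Exception {
            System.out.println("A".getBytes("UTF-8").length);        // 1 byte  (ASCII)
            System.out.println("\u00E9".getBytes("UTF-8").length);   // 2 bytes (e with acute accent)
            System.out.println("\u1780".getBytes("UTF-8").length);   // 3 bytes (Khmer letter KA)
        }
    }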

  • Open dataset in UTF8. Problems between Unicode and non Unicode

    Hello,
    I am currently testing file transfer between Unicode and non-Unicode systems.
    I transferred some Japanese KNA1 data from a non-Unicode system (Mandt, Name1, Name2, City) to a file with this option:
    set local language pi_langu.
      open dataset pe_file in text mode encoding utf-8 for output with byte-order mark.
    Now I want to read the file from a unicode system. The code looks like this:
    open dataset file in text mode encoding utf-8 for input skipping byte-order mark.
    The characters look fine, but they are shifted: Name1 is correct, but parts of the City characters now end up in Name2...
    If I open the file in a non-Unicode system with the same encoding, the data is OK again!
    Is there a problem with spaces between Unicode and non-Unicode?!

    Hello again,
    after implementing and testing this method, we saw that the conversion always takes place in the Unicode system.
    For example: we have a char(35) field in MDMP with several Japanese characters. As soon as we transfer the data into the file and look at the binary data, the field is only 28 chars long; several spaces are missing. Now if we open the file in our Unicode system using the mentioned class, the size grows back to 35 characters.
    On the other hand, if we export data from the Unicode system using this method, the size shrinks from 35 chars to 28 so the MDMP system can interpret the data.
    As soon as all systems are on Unicode, this method is obsolete/wrong, because we don't want to cut off/add the spaces; it is not needed anymore.
    The better way would be to create a "real" UTF-8 file in our MDMP system. The question is: is there a way to add the missing spaces somehow in the MDMP system?
    So it works something like this:
          OPEN DATASET p_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8 WITH BYTE-ORDER MARK.
    "MDMP (with ECC 6.0 by the way)
    if charsize = 1.
    *add missing spaces to the structure
    *transfer structure to file
    *UNICODE
    else.
    *just transfer struc to file -> no conversion needed anymore
    endif.
    I thought maybe this could somehow work with the class CL_ABAP_CONV_OUT_CE, but until now I have had no luck...
    Normally I would think that if I am creating a UTF-8 file, this work is done automatically by the TRANSFER command.
  • Getting PDF filename with unicode chars

    Hello,
    I'm trying to write a plugin that gets the file path of the current active document. The code looks like this:
    AVDoc avDoc = AVAppGetActiveDoc();
    PDDoc pdDoc = AVDocGetPDDoc(avDoc);
    ASFile file = PDDocGetFile (pdDoc);
    ASPathName filePath = ASFileAcquirePathName (file);
    This works fine for most documents, but for documents with unicode characters in the name each unicode character is replaced with '.' in filePath. For example, if the document is "测试中文关键词搜索!@#$%^&().pdf", then filePath becomes ".........!@#$%^&().pdf". Am I missing something required to get unicode filenames?
    Thanks.

    You were right, the plugin was getting a char* from ASFileSysDisplayStringFromPath. I removed that and added this which seems to have fixed my problem:
    ASText pathText = ASTextNew();
    ASFileSysDisplayASTextFromPath(ASGetDefaultFileSys(), filePath, pathText);
    wchar_t *pathString = (wchar_t*)ASTextGetUnicode(pathText);
    Thank you!

  • Problem displaying Unicode text.

    Hi guys,
    I am new to I18N. I want to display some Unicode text using a Java program, but I always get "???"... any idea?
    public class I18N {
        public static void main(String[] args) throws Exception {
            String street = "\u65E5\u672C\u8A9E";   // Japanese: "Nihongo"
            System.out.println(street);
        }
    }
    If I use Unicode values equivalent to English chars, it works fine. I know there is some problem in loading the corresponding fonts, but I am not able to nail it down. Please help; examples appreciated.

    For example, if you are on a Microsoft OS, check this site:
    http://www.hclrss.demon.co.uk/unicode/fonts.html
    or you could simply try every font you can find on your system.
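
    Besides fonts, System.out encodes the string with the platform default charset, which can also produce "???". A minimal sketch that forces UTF-8 output (it only helps if the console itself is set to UTF-8 and uses a font with CJK glyphs; the class name is made up):

    import java.io.PrintStream;

    public class I18NUtf8 {
        public static void main(String[] args) throws Exception {
            // Send UTF-8 bytes to stdout instead of the platform default encoding.
            PrintStream out = new PrintStream(System.out, true, "UTF-8");
            out.println("\u65E5\u672C\u8A9E");
        }
    }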

  • SO_NEW_DOCUMENT_ATT_SEND_API1 - problem after Unicode

    Hi experts, I'm using SO_NEW_DOCUMENT_ATT_SEND_API1 to create an email with an attached report. Prior to the Unicode upgrade, I had no problems with the attachment. Now each line of the contents_bin attachment wraps after 127 characters and the wrapped part is in Chinese! Each line is 242 characters long, so it shouldn't wrap. Can somebody suggest what I can do to prevent both the wrapping and the change from English to Chinese from char 127 onwards? Thanks.

    Hi Brigitte,
    Please refer to note 190669,
    as I have faced a similar issue.
    The problem is with the upgrade:
    due to the upgrade, the internal logic of SO_NEW_DOCUMENT_ATT_SEND_API1 has changed.
    After going through note 190669,
    we looked at the programs BCS_EXAMPLE_# (# = 1-8) and updated our program based on them.
    I used BCS_EXAMPLE_5 to send the email notification with the attachment.
    Sending email is now easier compared with SO_NEW_DOCUMENT_ATT_SEND_API1,
    as we DON'T need to use a packing list with this approach, and
    it is easy to use: we just pass the required parameters such as
    email body, subject line, receivers, attachment body, attachment name, etc.
    Regards,
    Amit Linge.
    Edited by: Amit Ashok Linge on Jun 24, 2011 8:53 AM
    Edited by: Amit Ashok Linge on Jun 24, 2011 8:57 AM

  • BeX Variable:Strange problem with InfoObject CHAR 60

    Hi everybody,
    I've created a custom InfoObject (length: 60 chars).
    In BEx I've created a variable (type: select option), but when I choose a 60-char value from the match code, it gets truncated and I get an error message (BRAIN 643) because the truncated value isn't in the master data table.
    Thanks in advance.

    Hi Riccardo,
    unfortunately the problem has no solution on the BEx side.
    Here is the full answer from SAP:
    Hello,
    We analyzed the issue on our test system and found out that it is
    not possible to enter more than 45 characters in the selection screen in the BEx Analyzer.
    Even though the length may be defined as 60 characters, it takes only
    the first 45 characters and truncates the remaining
    characters.
    A similar thing happens when we run the query and try to retrieve the
    variable value. When we select a value that is longer
    than 45 characters, it truncates the extra characters and then fills
    the text box for that variable with the truncated value.
    But since we do not have this value in the master data table,
    it generates the error message.
    Even if you try to copy and paste the value into the variable pop-up
    screen, it truncates it to 45 characters.
    Please note that this happens only when you run the query in the
    BEx Analyzer (as in this case the screens are rendered by Basis) and not on the web. To verify this you can run the same scenario in web (HTML) mode and see
    that it works fine, because in that case the screens are rendered in BW,
    which can handle lengths greater than 45 characters.
    So I would suggest you run such queries (which have variable values
    exceeding 45 characters) in HTML and not the BEx Analyzer, or refrain from
    using values exceeding 45 characters.
    As we discussed with our development, this will not be fixed in future support packages.
    Therefore, I would suggest you either use characteristic values up to a maximum of 45 characters or use web/HTML to run queries with such a scenario.
    I hope this resolves the issue.
    Thanks for your understanding and cooperation, and I'm sorry that I can't give you a better answer.

  • Problem in Unicode Data in Oracle Forms 6i

    Hello all,
    I am using Forms 6i with Oracle 10g.
    I have set NLS_LANG = American_America.UTF8.
    Now my problem: when I type some data in Marathi, using a font
    converter engine, directly into the text box on the form, I just get
    ?????? in the text box.
    But when I type the same data in Notepad and then paste it into the text box, it gets
    pasted properly and also gets inserted into Oracle. I am also able to
    retrieve it back properly, so I think it is not a problem with my
    character set. I have also set the font of the text box to Arial Unicode MS.
    I am not seeing where the problem is.
    I would be very thankful for any help regarding this.

    Try checking the job status in the Reports Servlet for the error.
