SQL Developer - displaying double byte (Japanese) characters

I have installed SQL Developer on an English Windows XP system with Asian regional support installed.
In SQL Developer > Preferences > Database, I set the language and territory to Japanese/Japan.
The strings are displaying as square boxes, which means the characters cannot be displayed. I have verified on a Japanese machine that the characters do display with the same version of SQL Developer.
Any ideas on how to get these characters to display?

Hi user625396,
Square boxes mean the font is not installed. For example, on my Linux box, for displaying Chinese (see http://isis.poly.edu/~qiming/chinese-debian-mini-howto.html), basically:
1. Create a jre/lib/fonts/fallback directory if necessary, and
2. Copy or link the font files into this directory. (On my Linux system: /usr/share/fonts/truetype/arphic/gbsn00lp.ttf )
3. I will install Japanese when the business need arises.
Please confirm whether this fallback directory works for Japanese on Windows XP, with / changed to \ and the path to a Japanese font replacing /usr/share/fonts/truetype/arphic/gbsn00lp.ttf .
I had a quick look for a Windows XP/Japanese fallback-font page, but I could not find one.
-Turloch
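As a follow-up, you can check from Java itself which installed fonts are able to render Japanese before copying one into the jre/lib/fonts/fallback directory. A small sketch using AWT's `Font.canDisplayUpTo` (the sample string and class name are just for illustration):

```java
import java.awt.Font;
import java.awt.GraphicsEnvironment;

public class FontCheck {
    public static void main(String[] args) {
        String sample = "\u65e5\u672c\u8a9e"; // "Japanese" written in kanji
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        for (Font f : ge.getAllFonts()) {
            // canDisplayUpTo returns -1 when the font can render every
            // character in the string
            if (f.canDisplayUpTo(sample) == -1) {
                System.out.println(f.getFontName() + " can display Japanese");
            }
        }
    }
}
```

If nothing is printed, no installed font covers Japanese, which matches the square-box symptom.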

Similar Messages

  • Inserting double-byte (Japanese) characters

    I am trying to insert a < 30 character Japanese string into a
    table with a column of type VARCHAR2(30), but I get the
    Exception:
    ORA-01401: inserted value too large for column
which is fairly self-explanatory...
    I'm using the thin driver against Oracle 7.3.4.
    We have a C++ application using OCI which does exactly the same
    function successfully. So, my question is how do I tell the thin
    driver the character set in use in the database? I think it would
    be by a driver property, or some NLS_LANG equivalent, but I can't
    find any documentation.
    Help someone, please. Many Thanks.
    Mark Phipps

    Mark Phipps (guest) wrote:
    : I am trying to insert a < 30 character Japanese string into a
    : table with a column of type VARCHAR2(30), but I get the
    : Exception:
    : ORA-01401: inserted value too large for column
    : which is fairly self explanatory.......
    : I'm using the thin driver against Oracle 7.3.4.
    : We have a C++ application using OCI which does exactly the same
: function successfully. So, my question is how do I tell the thin
: driver the character set in use in the database? I think it would
: be by a driver property, or some NLS_LANG equivalent, but I can't
: find any documentation.
    : Help someone, please. Many Thanks.
    : Mark Phipps
Try different drivers, i.e. classes102.zip or classes111.zip,
or check your drivers, and make sure you're including the right
one in your classpath.
    Ciao
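The usual cause of ORA-01401 here is that VARCHAR2(30) counts bytes, not characters, and Japanese characters take 2-3 bytes each in the common database character sets. A standalone sketch showing the difference (the string and class name are just examples, not tied to the poster's Oracle 7.3.4 setup):

```java
import java.nio.charset.Charset;

public class ByteLength {
    public static void main(String[] args) {
        String s = "\u65e5\u672c\u8a9e"; // 3 Japanese characters
        System.out.println("chars:     " + s.length());
        System.out.println("UTF-8:     " + s.getBytes(Charset.forName("UTF-8")).length + " bytes");
        System.out.println("Shift_JIS: " + s.getBytes(Charset.forName("Shift_JIS")).length + " bytes");
        // a VARCHAR2(30) column sized in bytes overflows long before
        // 30 Japanese characters are reached
    }
}
```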

  • How to display double byte characters with system.out.print?

Hi, I'm a newbie Java programmer having trouble using Java locales with system I/O in DOS console mode.
    Platform is winxp, jvm1.5,
    File structure is:
    C:\myProg <-root
    C:\myProg\test <-package
    C:\myProg\test\Run.java
    C:\myProg\test\MessageBundle.properties <- default properties
    C:\myProg\test\MessageBundle_zh_HK.properties <- localed properties (written in notepad and save as Unicode, window notepad contains BOM)
    inside MessageBundle.properties:
    test = Hello
inside MessageBundle_zh_HK.properties:
test = 喂 //hello, in Big5 encoding
Run.java:
package test;
import java.util.*;
public class Run{
  public static void main(String[] args){
    Locale locale = new Locale("zh","HK");
    ResourceBundle resource =
        ResourceBundle.getBundle("test.MessageBundle", locale);
    System.out.println(resource.getString("test"));
  }//main
}//class
When I run this program, it keeps displaying "Hello" instead of the encoded character...
Then, when I ran the native2ascii tool against MessageBundle_zh_HK.properties, it started to display monster characters instead.
I'm trying to figure out what I did wrong and how to display double-byte characters on the console.
    Thank you.
p.s: While googling, some said DOS can only display ASCII. To demonstrate that the DOS console is capable of displaying double-byte characters, I wrote another hello-world in Chinese using Notepad in C# and compiled it using "csc hello.cs"; sure enough, Console.Write in C# allowed me to display the character I was expecting. Since the DOS console can print double-byte characters, I must be missing something important in this Java program.

After googling a bunch, I learned that javac (hence java.exe) does not support the BOM (byte order mark).
I had to use a different editor to save my text file as Unicode without a BOM in order for native2ascii to convert it into an ASCII file.
Even with the property file in ASCII format, I'm still having trouble displaying those characters in the DOS console. In fact, I just noticed I can use System.out.println to display double-byte characters if I embed the character itself in the Java source file:
import java.io.UnsupportedEncodingException;
import java.util.*;
public class Run {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String msg = "中文";    // double-byte characters
        try {
            System.out.println(new String(msg.getBytes("UTF-8")) + " new string");  // this displays fine
        } catch (Exception e) {}
        Locale locale = new Locale("zh", "HK");
        ResourceBundle resource = ResourceBundle.getBundle("test.MessagesBundle", locale);
        System.out.println(resource.getString("Hey"));      // this displays weird characters
    }
}
So it seems to me that I must have done something wrong in the process of creating the properties file from the Unicode text file...

  • From English Char ( Single Byte ) TO Double Byte ( Japanese Char )

    Hello EveryOne !!!!
    I need Help !!
I am new to Java. I got an assignment where I need to check a String: if the string contains any non-Japanese characters ( a~z , A~Z , 0-9 ), they should be replaced with double-byte ( Japanese ) characters...
    I am using Java 1.2 ..
    Please guide me ...
    thanks and regards
    Maruti Chavan

hello ..
As you all asked for the detailed requirement, here I am pasting C code where 'a' is passed as the input character. After processing, it gives me the double-byte Japanese "a". Using this I am able to convert alphanumerics from single byte to double byte ( Japanese ).
I want the same program in Java, so please guide me.
#include <stdio.h>
#include <string.h>
int main( int argc, char *argv[] )
{
    char c[2];
    char d[3];
    strcpy( c, "a" );      /* 'a' is the input char */
    d[0] = 0xa3;           /* EUC-JP lead byte for the full-width Latin row */
    d[1] = c[0] + 0x80;
    d[2] = '\0';           /* terminate the converted string */
    printf( ":%s:\n", c ); /* original single-byte char */
    printf( ":%s:\n", d ); /* converted double-byte char */
    return 0;
}
    please ..
    thax and regards
    Maruti Chavan
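In Java the same conversion can be done on Unicode code points instead of EUC-JP bytes: the full-width ("zenkaku") forms live at U+FF01-U+FF5E, a fixed offset of 0xFEE0 above their ASCII counterparts. A sketch (not a drop-in translation of the C program, which works on raw EUC-JP bytes; the class and method names are illustrative):

```java
public class ToFullWidth {
    // Convert ASCII letters/digits/punctuation to their full-width forms.
    // Unicode places full-width forms at U+FF01..U+FF5E, exactly 0xFEE0
    // above ASCII 0x21..0x7E; the plain space maps to the ideographic
    // space U+3000.
    static String toFullWidth(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            if (c == ' ') {
                sb.append('\u3000');
            } else if (c >= '!' && c <= '~') {
                sb.append((char) (c + 0xFEE0));
            } else {
                sb.append(c); // leave everything else untouched
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toFullWidth("a")); // full-width a (U+FF41)
    }
}
```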

  • How can Mapviewer display double-byte characters

    Hi,
I am facing a problem using MapViewer to display a map with a title containing double-byte characters, e.g. Chinese characters. I think the problem occurs when map_request produces the map image.
    My environment is :
    NT server 5.0(sp 3)
    Oracle 9.01
    J2EE
    IE 6.0

Are you using the mapclient.jsp demo file? There is a bug in that file that leads to truncated map requests when the title is in Chinese. It will be fixed in the next version of MapViewer (a beta version of which will be on OTN soon).
    Thanks,
    LJ

  • Error messages in Pl Sql developer are in those funny characters

    Hi,
Can someone help me with how to change the error messages to English in PL/SQL Developer? When I run a query in PL/SQL Developer, the error messages give the error code and those characters.
    thanks.
    Edited by: 848824 on Mar 30, 2011 11:45 PM

    hi
u can try something like this.
EXCEPTION
   WHEN invalid_number OR STANDARD.INVALID_NUMBER THEN
      -- handle the error
END;
OR
RAISE FORM_TRIGGER_FAILURE;
hope this helps u....
    sarah

  • SQL Developer Unable to see Chinese characters embedded in Package

    Dear sir,
    Please kindly help.
    I am running a DB with 'AMERICAN_AMERICA.UTF8'.
    My local NLS_LANG for the Oracle Home (regedit) are also set as 'AMERICAN_AMERICA.UTF8'.
I am not quite sure what to set for the SQL Developer NLS_LANG (e.g. from the menu bar, Tools > Preferences, then select the Database > NLS_LANG item).
    Problem:
In SQL Developer, when I open the Package Browser and view my PL/SQL package content... the embedded Simplified Chinese is all scrambled and unreadable.
Note: when I do a select statement, I am able to see the Chinese data fine.
But I am not sure why the packages are not displayed correctly?!
    Please kindly assist.

    871693 wrote:
So I am able to display the data, but am not able to view the Chinese embedded in the PL/SQL package?!
How can I fix it? I'm not sure.
Does it have to do with my SQL Developer NLS_LANG?
The problem/solution involves SQL Developer's "environment".
Which company makes your SQL Developer? Which version is it?
What is the OS name & version?

  • Function module to control printing of double byte chinese characters

    Hi,
My SAPScript printing of the GR slip often overflows to the next line whenever a line item's article description text is in Chinese.
The system login is "EN", but we maintain article descriptions in English, Chinese, and a mixture of both.
This results in different field lengths when printing.
    Is there a way to control it and ensure that it will not overflow to the next line?
    How does SAP standard deals with this sort of printing, single & double byte chars?
    Please assist.

    This is the code that solved our issue.
    Besides we set the InfoObject to have NO master data attributes: it was just used as text attribute in an DSO, not as dimensional attribute in a cube. This solved the issue of the SID value generation error.
    FUNCTION z_bw_replace_sp_char.
    ""Local Interface:
    *"  IMPORTING
    *"     REFERENCE(I_STRING)
    *"  EXPORTING
    *"     REFERENCE(O_STRING)
      FIELD-SYMBOLS: <ic> TYPE x.
* Strings with other un-allowed chars
    DATA:
    ch1(12) TYPE x VALUE
    '410000204200002043000020',
    ch2(12) TYPE x VALUE
    '610000206200002063000020'.
      DATA:
      x8(4) TYPE x,
      x0(2) TYPE x VALUE '0020',
      x0200(2) TYPE x VALUE '0200'.
      DATA: v_len TYPE sy-index,
            v_cnt TYPE sy-index.
      o_string = i_string.
      v_len = STRLEN( o_string ).
* # sign
      IF v_len = 1.
        IF o_string(1) = '#'.
          o_string(1) = ' '.
        ENDIF.
      ENDIF.
* ! sign
      IF o_string(1) = '!'.
        o_string(1) = ' '.
      ENDIF.
      DO v_len TIMES.
        ASSIGN o_string+v_cnt(1) TO <ic> CASTING TYPE x.
        IF <ic> <> x0200. "$$$$$$$$$$$$$$$$$$$$$$
          IF <ic> >= '0000' AND
             <ic> <= '1F00'.  " Remove 0000---001F
            o_string+v_cnt(1) = ' '.
         ELSE.
           CONCATENATE <ic> x0 INTO x8 IN BYTE MODE.
           unassign <ic>.
           SEARCH ch1 FOR x8 IN BYTE MODE.
           IF sy-subrc <> 0.
             SEARCH ch2 FOR x8 IN BYTE MODE.
             IF sy-subrc = 0.
               o_string+v_cnt(1) = ' '.
             ENDIF.
           ELSE.
             o_string+v_cnt(1) = ' '.
           ENDIF.
          ENDIF.
        ENDIF. "$$$$$$$$$$$$$$$$$$$$$$
        v_cnt = v_cnt + 1.
      ENDDO.
    ENDFUNCTION.
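For readers who need the same cleanup outside ABAP, the core idea (blank out C0 control characters, plus the lone '#' and the leading '!' that BW rejects) can be sketched in Java. This is a rough approximation of the function above, not SAP-delivered logic, and the class name is made up:

```java
public class ReplaceSpecialChars {
    // Replace characters that BW rejects during SID generation with spaces:
    // the C0 control range (U+0000..U+001F), a lone '#', and a leading '!'.
    // A rough Java analogue of Z_BW_REPLACE_SP_CHAR, not the original logic.
    static String clean(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            sb.append(c <= '\u001f' ? ' ' : c);
        }
        String out = sb.toString();
        if (out.equals("#")) {
            out = " ";                       // a value of just '#' is invalid
        } else if (out.startsWith("!")) {
            out = " " + out.substring(1);    // a leading '!' is invalid
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(clean("ab\u0001cd")); // control char becomes a space
    }
}
```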

  • Double-byte Chinese characters between Flash Media Server and WebLogic 9.2

I have installed an FMS (Flash Media Server 4.0 developer edition) and a WebLogic server on Linux. The two servers communicate with each other via web services (Axis2 on WebLogic 9.2). I can send XML messages from FMS to the WebLogic server and receive messages, but the returned message from the WebLogic server is as follows:
    <?xml version="1.0" encoding="utf-8"?>
    <root>
    <response>
    <result>1</result>
    <reports >
    <test>x8054x901Ax6027x6D4Bx8BD5</test>
    </reports>
    </response>
    </root>
The returned message between the <test> and </test> tags can be decoded to "测试联通性" in Chinese via UTF-8. Why doesn't the returned message just show the result as:
    <test>测试联通性</test>
Later I made another test: I copied the war file on the WebLogic server to an apache-tomcat server; then the FMS sent the XML message to apache-tomcat and got the correct message as follows:
    <?xml version="1.0" encoding="utf-8"?>
    <root>
    <response>
    <result>1</result>
    <reports >
    <test>测试联通性</test>
    </reports>
    </response>
    </root>
Then what is the problem with my WebLogic? What shall I do to solve the problem? Thank you to anyone who gives me some advice.


  • Double byte characters in a String - Help needed

    Hi All,
Can anyone tell me how to find the presence of a double-byte ( Japanese ) character in a String? I need to check whether the String contains a Japanese character or not. Thanks in advance.
    Ramya

/** returns true if the char c is a "double-byte" character
    (anything outside Latin-1) */
public boolean isDoubleByte(char c) {
  // equivalent range check: c >= '\u0100' && c <= '\uffff'
  return c > '\u00ff';
}
/** returns true if the String s contains any "double-byte" characters */
public boolean containsDoubleByte(String s) {
  for (int i = 0; i < s.length(); i++) {
    if (isDoubleByte(s.charAt(i))) {
      return true;
    }
  }
  return false;
}
/** returns true if the char c is a Japanese character */
public boolean isJapanese(char c) {
  // katakana
  if (c >= '\u30a0' && c <= '\u30ff') return true;
  // hiragana
  if (c >= '\u3040' && c <= '\u309f') return true;
  // CJK Unified Ideographs
  if (c >= '\u4e00' && c <= '\u9fff') return true;
  // CJK symbols & punctuation
  if (c >= '\u3000' && c <= '\u303f') return true;
  // KangXi radicals (kanji)
  if (c >= '\u2f00' && c <= '\u2fdf') return true;
  // Kanbun
  if (c >= '\u3190' && c <= '\u319f') return true;
  // CJK Unified Ideographs Extension A
  if (c >= '\u3400' && c <= '\u4db5') return true;
  // CJK Compatibility Forms
  if (c >= '\ufe30' && c <= '\ufe4f') return true;
  // CJK Compatibility
  if (c >= '\u3300' && c <= '\u33ff') return true;
  // CJK Radicals Supplement
  if (c >= '\u2e80' && c <= '\u2eff') return true;
  // other character
  return false;
}
/** returns true if the String s contains any Japanese characters */
public boolean containsJapanese(String s) {
  for (int i = 0; i < s.length(); i++) {
    if (isJapanese(s.charAt(i))) {
      return true;
    }
  }
  return false;
}
/* NB: CJK Unified Ideographs Extension B is not reachable with a single
   16-bit char (it needs surrogate pairs). Source: http://www.alanwood.net/unicode */
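On Java 1.4 and later, `Character.UnicodeBlock` can replace most of the hand-maintained ranges above. A sketch covering the most common Japanese blocks (extend the list of blocks as needed; the class name is illustrative):

```java
public class JapaneseCheck {
    // Same test as the range checks above, but using the named Unicode
    // blocks from java.lang.Character.UnicodeBlock (Java 1.4+).
    static boolean isJapanese(char c) {
        Character.UnicodeBlock b = Character.UnicodeBlock.of(c);
        return b == Character.UnicodeBlock.HIRAGANA
            || b == Character.UnicodeBlock.KATAKANA
            || b == Character.UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS
            || b == Character.UnicodeBlock.CJK_SYMBOLS_AND_PUNCTUATION;
    }

    static boolean containsJapanese(String s) {
        for (int i = 0; i < s.length(); i++) {
            if (isJapanese(s.charAt(i))) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(containsJapanese("hello"));              // false
        System.out.println(containsJapanese("\u3053\u3093"));       // true (hiragana)
    }
}
```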

  • Problem in displaying Japanese characters in SAPScripts

    Hi All,
I am facing a strange problem in one of my SAPScripts. I have one script in both English and Japanese. The scripts already exist; I had to make some minor changes in a logo window. I did them, and I did not make any other changes in any of the windows.
When the output was viewed for the Japanese version of the script, it looked OK, displaying all the Japanese characters in the various windows. Now, during testing on the same server, the Japanese characters are not shown. Instead, some '#' (hash) symbols are displayed.
How could this happen? Did anybody face such a problem? If so, can anybody please help me out with the solution?
What should I do to get back the Japanese characters in my script?
    Regards,
    Priya

    Priya.
This is not an ABAP problem; ask your Basis team to set the printer configuration in SPAD. Don't worry, it's not an ABAP issue at all.
Sometimes the printer doesn't support special characters, so it needs to be configured at the printer.
    Amit.

  • [Bug Report] CR4E V2: Exported PDF displays Japanese characters incorrectly

We now plan to port a legacy application from VB to Java with Crystal Reports for Eclipse. It is required to export reports as PDF files, but the resulting PDFs display Japanese characters incorrectly for fields using some of the most commonly used Japanese fonts (MS Gothic & Mincho).
    Here is our sample Crystal Reports project:   [download related resources here|http://sites.google.com/site/cr4eexportpdf/example-of-cr4e-export-pdf]
    1. PDFExportSample.rpt located under ..\src contains fields with different Japanese fonts.
    2. Run SampleViewerFrameClient#main(..) to open a Java Report Viewer:
        a) At zoom rate 100%, everything is ok.
        b) Change zoom rate to 200% or 50%, some fields in Japanese font collapse.
        c) Export to PDF file,
             * Fonts "MS Gothic & Mincho": both ASCII & Japanese characters failed.
             * Fonts "Meiryo & HGKyokashotai": everything works well.
             * Open the PDF properties, and you will see all fonts are embedded with built-in encoding.
             * It is interesting to note that if you copy the collapsed Japanese characters from
               Acrobat Reader and paste them into a Notepad window, Notepad will show the correct
               Japanese characters anyway. It seems the PDF export in CR4E picks the wrong typeface
               for Japanese characters from some TTF file.
    3. Open PDFExportSample.rpt in Crystal Report 2008 Designer (trial version), and export it as PDF.
        The result PDF displays both ASCII & Japanese characters without any problem.
    Test environment as below:
    * Windows XP Professional SP3 (Japanese) with MS Office which including extra fonts (i.e. HGKyokashotai)
    * Font version: MS Gothic, Mincho, Meiryo, all in Version 5.0
        You can download MS Meiryo from Microsoft's Site:
        http://www.microsoft.com/downloads/details.aspx?familyid=F7D758D2-46FF-4C55-92F2-69AE834AC928&displaylang=en)
    * Eclipse 3.5.2
    * Crystal Reports for Eclipse, V2, 12.2.207.r916
Can this problem be fixed? If yes, how long will it take to release a patch?
We are really looking forward to a solution before abandoning CR4E.
    Thanks for any reply.

I have created a [simple PDF file|http://sites.google.com/site/cr4eexportpdf/inside-the-pdf/simple.pdf?attredirects=0&d=1] exported from CR4E. It is expected to display "漢字" (in Unicode, "\u6F22\u5B57"), but it is instead rendered as the different characters "殱塸" (in Unicode, "\u6BB1\u5878").
    Look inside into this simple PDF file (you can just open it with your favorite text editor), here is its page content:
    8 0 obj
    <</Filter [ /FlateDecode ] /Length 120>>
    stream ... endstream
    endobj
    Decode this stream, we get:
    /DeviceRGB cs
    /DeviceRGB CS
    q
    1 0 0 1 0 841.7 cm
    13 -13 569.2 -815.7  re W n
    BT
    1 0 0 1 25.75 -105.6 Tm     <-- text position
    0 Tr
    /ttf0 10 Tf                 <-- apply font
    0 0 0 sc
    ( !)Tj                      <-- show glyphs [20, 21], which index is to embedded TrueType font subset
    ET
    Q
    The only embeded font subset is defined as:
    9 0 obj /ttf0 endobj
    10 0 obj /AAAAAA+MSGothic endobj
    11 0 obj
    << /BaseFont /AAAAAA+MSGothic
    /FirstChar 32
    /FontDescriptor 13 0 R
    /LastChar 33
    /Subtype /TrueType
    /ToUnicode 18 0 R                            <-- point to a CMap object
    /Type /Font
    /Widths 17 0 R >>
    endobj
    12 0 obj [ 0 -140 1000 859 ] endobj
    13 0 obj
    << /Ascent 860
    /CapHeight 1001
    /Descent -141
    /Flags 4
    /FontBBox 12 0 R
    /FontFile2 14 0 R                            <-- point to an embedded TrueType font subset
    /FontName /AAAAAA+MSGothic
    /ItalicAngle 0
    /MissingWidth 1000
    /StemV 0
    /Type /FontDescriptor >>
    endobj
    The CMap object after decoded is:
    18 0 obj
    /CIDInit /ProcSet findresource begin 12 dict begin begincmap /CIDSystemInfo <<
    /Registry (AAAAAB+MSGothic) /Ordering (UCS) /Supplement 0 >> def
    /CMapName /AAAAAB+MSGothic def
    1 begincodespacerange <20> <21> endcodespacerange
    2 beginbfrange
    <20> <20> <6f22>                         <-- "u6F22"
    <21> <21> <5b57>                         <-- "u5B57"
    endbfrange
    endcmap CMapName currentdict /CMap defineresource pop end end
    endobj
I can write out the embedded TrueType font subset (= "14 0 obj") to a file named "[embedded.ttc|http://sites.google.com/site/cr4eexportpdf/inside-the-pdf/embedded.ttf?attredirects=0&d=1]", which is really a tiny TrueType font file containing only the wrong typefaces for "漢" & "字". It seems everything is OK, except that CR4E failed to choose the right typefaces from the TrueType file (msgothic.ttc).
    Is it any help? I am looking forward to any solution.

  • ASCII representations of double-byte characters

    My file contains ASCII representations of double-byte CJK characters (output of native2ascii). How do I restore them back to the original native characters?
    I mean, when I load the file with FileInputStream, what I get are all strings like \uabcd. How do I get the characters represented by these strings?

    My file contains ASCII representations of double-byte
CJK characters (output of native2ascii). How do
    I restore them back to the original native
    characters?
I am no expert in Unicode, so I don't know if this is correct, but I assume that if a String starts with "\u" then there will be 4 more characters that are a hexadecimal representation of the char value. If that's right, then you should be able to parse out the "\uxxxx" and convert it to a char by parsing the hex. For example:
// the variable unicode is a String like \uabcd
String hex = unicode.substring(2);
char result = (char) (Integer.parseInt(hex, 16));
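If the file is actually a properties file, note that `java.util.Properties.load()` decodes \uXXXX escapes for you. For arbitrary text, the single-escape parse above can be extended over a whole string with `java.util.regex`; a sketch (the class name is made up for illustration):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Native2AsciiDecode {
    // Decode every \uXXXX escape in a string back to its character --
    // the reverse of what native2ascii produced.
    static String unescape(String s) {
        Pattern p = Pattern.compile("\\\\u([0-9a-fA-F]{4})");
        Matcher m = p.matcher(s);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // parse the 4 hex digits and substitute the real character
            char c = (char) Integer.parseInt(m.group(1), 16);
            m.appendReplacement(sb, Matcher.quoteReplacement(String.valueOf(c)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(unescape("\\u6f22\\u5b57")); // the two kanji from above
    }
}
```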

  • Can't display MS SQL datetime value in SQL Developer 1.5.1

    Hi All,
I try to select a datetime value from MS SQL Server, and SQL Developer displays a strange value like this:
oracle.sql.TIMESTAMP@f37160
The problem does not exist in the previous version. Can anybody help to solve it?
    Thanks.

    ISSUE CLARIFICATION
    ====================
    Attempting to view records in a table from a MS SQL Server 2005 schema. All fields are ok, except
    the datetime fields.
    Getting Oracle.sql.date@E4F4012
    This worked (actually still works) with SQL Developer 1.2.1. In fact, I see both on my screen and
    1.2.1 views the fields fine, 1.5.4 does not.
    ISSUE VERIFICATION
    ===================
    Verified the issue by the screen shots
    DATA COLLECTED
    ===============
    SCREEN SHOTS
    SR6930954.994_SQLDeveloper.ppt
    CAUSE DETERMINATION
    ====================
    Missing option in SQL Server 2005 default datatype mapping rules.
    CAUSE JUSTIFICATION
    ====================
    Note 458294.1
    Bug 6360558 / Bug 6378114
    PROPOSED SOLUTION(S)
    ======================
    use the workaround provided in Note: 458294.1
    PROPOSED SOLUTION JUSTIFICATION(S)
    ====================================
    Note 458294.1
    Bug 6360558 / Bug 6378114
    SOLUTION / ACTION PLAN
    =======================
    You can use the following workaround if you want SQL Server 2005 DATETIME datatype map to Oracle DATE:
    - On the Captured Model, select 'Set Data Mapping'
    - Click on Apply in the 'Set Data Map' Window, this will create a record in the Repository
    - Open a connection with the Repository database
    - Click on Tables
    - Click on table MIGR_DATATYPE_TRANSFORM_RULE
    - Click on the "Data" Tab
    - Sort on column "SOURCE_DATA_TYPE_NAME"
    - Look for "DATETIME"
    - Change the value of column "TARGET_DATA_TYPE_NAME" from TIMESTAMP to DATE
    - Change the value of column "TARGET_SCALE" from 6 to (null)
    - Commit the change.
    Remark: This workaround is only suitable if the repository contains only 1 captured model, otherwise we need to identify the proper MAP_ID.
    KNOWLEDGE CONTENT
    =================
    Platform / Port Specific? = NO
    Knowledge content not created because the following note already addresses this issue: Note.458294.1
This problem has been identified as a bug, and it has been fixed in Version 2.0.0 (future release).
For the available workaround, refer to the above tags and Note 458294.1.
    Regards,
    Srikanth
    Oracle Support Services.
Currently, Support does not have any information on the release date for SQL Developer V2.0.0.
    Regards,
    Srikanth

  • Implementation of double-byte character support in the Database

    Hi experts,
There is an Oracle 10.2.0.5 Standard Edition database running on the Windows platform. The application team needs to add a column holding double-byte data (Chinese characters) to an existing table. By doing so, will we have to change the character set of the database? Would implementing this have any impact on the database?
Is it possible to change an existing single-byte column to a double-byte column in a table?
    Edited by: srp on Dec 18, 2010 2:21 PM

I think you should post your request to the Globalization Support forum with the following details:
what are the APIs used to write data?
what are the APIs used to read data?
what is the client OS?
what is the value of the NLS_LANG environment variable in the client environment?
    Note that command line SQL*Plus under Windows does not display Unicode data: try to use instead SQL Developer.
    Example:
    1. run in SQL*Plus:
    SQL> desc t;
    Name                                      Null?    Type
    X                                                  NUMBER(38)
    Y                                                  NVARCHAR2(10)
    SQL> insert into t values(1, unistr('\8349'));
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from t;
             X Y
             1 ■
SQL> select dump(y, 1016)  from t;
DUMP(Y,1016)
Typ=1 Len=2 CharacterSet=AL16UTF16: 83,49
Run in SQL Developer:
    select * from t
    X                      Y         
    1                      草       
