JDBC 2.0 API and Multi-Byte Characters

I use the JDBC 2.0 API with the thin driver (816) for JDK 1.2.x.
It works well with English characters,
but it gives wrong results with multi-byte characters.
Does anyone know the reason?
Thanks in advance.

I have the same problem!
quote: Originally posted by huang Jian-chang:
I use the JDBC 2.0 API with the thin driver (816) for JDK 1.2.x. It works well with English characters, but it gives wrong results with multi-byte characters. Does anyone know the reason? Thanks in advance.
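For anyone hitting the same issue: a quick way to narrow it down is to round-trip a multi-byte string through the driver and compare it in code, which separates a driver/character-set conversion problem from a display problem. This is only a minimal sketch; the connection URL, user, password, and the mb_test table are made-up placeholders.

import java.sql.*;

public class MultiByteRoundTrip {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust for your environment.
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");

        String original = "\u4e2d\u6587\u30c6\u30b9\u30c8"; // a multi-byte test string

        PreparedStatement ins = conn.prepareStatement(
                "INSERT INTO mb_test (txt) VALUES (?)");
        ins.setString(1, original);
        ins.executeUpdate();

        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("SELECT txt FROM mb_test");
        while (rs.next()) {
            // false here points at the driver / character-set conversion;
            // true points at the display layer instead.
            System.out.println(original.equals(rs.getString(1)));
        }
        conn.close();
    }
}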

Similar Messages

  • Multi-byte characters are garbled in SQL Server Business Intelligence Development Studio (Visual Studio) 2008

    Hi,
    I'm revising an existing report that was developed by my predecessor. Though it works fine in the production environment, when I open the .rdl file with SQL Server Business Intelligence Development Studio (Visual Studio) 2008 on my client
    PC, I find all the multi-byte characters are garbled. When I open it with BIDS (the same version) on the server, it shows everything correctly.
    The font for the controls (labels) is Tahoma, which originally covers only Latin characters, but multi-byte characters are supposed to be displayed in MSGOTHIC via Font Link, as they are displayed correctly on the server.
    Could anyone advise me how to solve this issue? I know I can fix it by changing the font from Tahoma to MSGOTHIC for all the controls, but I don't want to do that.
    Environment:
    My PC:Windows7 64bit /Visual Studio 9.0.30729.1 / .NET Framework 3.5 SP1
    Server:Windows Server 2003 R2 /Visual Studio 9.0.30729.1 / .NET Framework 3.5 SP1
    Garbled characters sample:
    FontLink - SystemLink
    Please let me know if you need any more information. I would appreciate your advice!

    Hi nino_miya,
    According to your description, when you display the report on the client side, the characters are garbled.
    In your scenario, please check whether the Language setting is the same as the report on the production server. Also please check whether the registry data for Tahoma on the client PC is the same as on the server. If those two settings are the same, please specify the font of each
    control as MSGOTHIC manually on the client PC.
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
    TechNet Community Support

  • Store Multi-Byte Characters in a WE8ISO8859P1 Database without Migration

    Hi, I am looking for a way to store multi-byte characters in a WE8ISO8859P1 database.
    Below are the DB NLS_PARAMETERS
    NLS_CHARACTERSET = WE8ISO8859P1
    NLS_NCHAR_CHARACTERSET = AL32UTF8
    NLS_LENGTH_SEMANTICS = BYTE
    Size of DB = 2 TB.
    DB Version = 11.2.0.4
    Currently there is a need to store Chinese characters in the NAME and ADDRESS columns only. Below is the description of the columns.
    Column Name          Data Type
    GIVEN_NAME_ONE       VARCHAR2(120 BYTE)
    GIVEN_NAME_TWO       VARCHAR2(120 BYTE)
    LAST_NAME            VARCHAR2(120 BYTE)
    ADDR_LINE_ONE        VARCHAR2(100 BYTE)
    ADDR_LINE_TWO        VARCHAR2(100 BYTE)
    ADDR_LINE_THREE      VARCHAR2(100 BYTE)
    What are my options here, without considering migrating the WE8ISO8859P1 database to AL32UTF8?
    1. Can I increase the size of the columns, i.e. make them n x 4, e.g. NAME becomes 480 bytes and ADDRESS 400 bytes? What are the pros and cons?
    2. Convert the existing columns from VARCHAR2 to NVARCHAR2 with the same size, i.e. NVARCHAR2(120 BYTE)?
    3. Extend the table with new NVARCHAR2 columns, e.g. NAME as NVARCHAR2(120 CHAR) and ADDRESS as NVARCHAR2(100 CHAR)?
    4. The database has CLOBs, BLOBs, LONGs, etc. with varied data. Is it a good idea to migrate to AL32UTF8 with minimal downtime?
    Please suggest the best alternatives. Thanks.
    Thanks
    Jitesh

    Hi Jitesh,
    NLS_NCHAR_CHARACTERSET can be either AL16UTF16 or UTF8, so most likely your DB has UTF8.
    You can definitely insert Unicode characters into N-type columns. The size of an N-type column will depend on the characters you plan to store in it.
    If you use N-types, make sure you use the N'...' syntax in your code, so that literals are marked as being in the national character set by the prepended letter 'N'.
    Although you can use them, N-types are not very well supported in third-party client/programming environments; you may need to adapt a lot of code to use N-types properly, and there are some limitations.
    While using N-types for a few columns seems at first like a good idea to avoid converting the whole database, in many cases the end conclusion is that changing the NLS_CHARACTERSET is simply the easiest and fastest way to support more languages in an Oracle database.
    So it depends on how much of your data will be Unicode data stored in N-type columns.
    If you have access to My Oracle Support, you can check Note 276914.1, "The National Character Set (NLS_NCHAR_CHARACTERSET) in Oracle 9i, 10g, 11g and 12c", for more details.
    With respect to your downtime, the actual conversion (CSALTER, or DMU if you use it) shouldn't take too much time, provided you have run CSSCAN on your DB and taken care of all your truncation, convertible and lossy data (if any).
    It would be best to run CSSCAN initially to gauge how much convertible/lossy/truncated data you need to take care of.
    $ CSSCAN FROMCHAR=WE8ISO8859P1 TOCHAR=AL32UTF8 LOG=P1TOAl32UTF8 ARRAY=1000000 PROCESS=2 CAPTURE=Y FULL=Y
    Regards,
    Suntrupth
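    If you do end up with NVARCHAR2 columns, note that from JDBC you would normally bind the values instead of building N'...' literals by hand. Below is only a minimal sketch, assuming a hypothetical table and column name and a JDBC 4.0 driver that supports setNString:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class NVarchar2Insert {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details and table name.
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "user", "password");

            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO customer_ext (given_name_one) VALUES (?)");
            // setNString sends the value as national character set data,
            // the bind-variable equivalent of an N'...' literal.
            ps.setNString(1, "\u5f20\u4f1f"); // a Chinese name
            ps.executeUpdate();
            conn.close();
        }
    }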

  • Single and multi byte settings

    Hello,
    We are trying to implement multi-byte character loading and I have a few questions:
    1) Our current character encoding is UTF-8. What character encoding should we use for multi-byte loading?
    2) In DDL, a column can be declared with BYTE or CHAR semantics, such as VARCHAR2(20 CHAR). For multi-byte data, we can either increase the size of the column or change the column definition from BYTE to CHAR. Which is the better implementation?
    3) Are there any other setting changes we need to be aware of when moving from a single-byte to a multi-byte implementation?
    Regards

    First off, I'm a bit confused. If your database's character set is UTF-8, you already have a multi-byte character set. I'm not sure what it is that you're converting in this case.
    As to changing the table definition, that depends primarily on your application(s). Generally, I find it easier to declare a field with character length semantics, which gives users in every language certainty about the number of characters a field can support. There are probably people who think the other way because they're allocating memory in a client application based on bytes and want to ensure that the definitions on the client and the server match.
    Since I don't quite understand what it is that you're converting, I'm hard pressed to come up with what "other setting changes" might be appropriate.
    Justin

  • Sapshcut and double-byte character trouble?

    Hi experts,
    I tried some commands to log on to the SAP system, as in the following examples, and reached some conclusions:
    (1) sapshcut.exe -sysname="今ウィ ちゃわ異"  -user="paragon1"  -pw="paragon1"  -client="800" -language=en  -maxgui "
    (2) sapshcut.exe -sysname="EC5"  -user="paragon1"  -pw="paragon1"  -client="800" -language=en  -maxgui "
    - In case (1), with any double-byte sysname (Japanese, ...), I cannot log on to SAP and get a "Microsoft Visual C++ Runtime Library" error message.
    - In case (2), with a sysname containing no double-byte characters, I can log on to SAP easily.
    Thus, I want to know:
    1) Does sapshcut.exe support double-byte characters?
    2) Is there a way to use sapshcut.exe with double-byte characters (Japanese, ...)?
    Kindly Regards,

  • Regular Expressions and Double Byte Characters ?

    Is it possible to use Java Regular Expressions to parse
    a file that will contain double-byte characters?
    For example, I want a regular expression to match the following line
    tag="double byte stuff" id="double byte stuff"

    The comments on the bytes/strings were helpful. Thanks.
    But I'm still confused as to what matching pattern could be used.
    For example a pattern like:
    [A-Za-z]
    I assume would not match any double byte characters.
    I also assume the following won't work either:
    [\\p{Alpha}]
    because it is POSIX, i.e. US-ASCII only.
    So how do you say "match the tag, then accept any characters
    (double-byte, ASCII, whatever), then match the closing tag", as in the
    original example?
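    For what it's worth, Java regex operates on Java strings (which are Unicode internally), so the usual approach is to avoid ASCII-only classes and simply match "anything but the closing quote". A small sketch, assuming the tag/id line format from the original question:
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TagParser {
        public static void main(String[] args) {
            // Sample line with double-byte text in the attribute values.
            String line = "tag=\"\u65e5\u672c\u8a9e\u306e\u30c6\u30ad\u30b9\u30c8\" id=\"\u6f22\u5b57123\"";

            // [^"]* matches any characters between the quotes, single- or
            // double-byte, because the pattern works on Unicode chars.
            Pattern p = Pattern.compile("tag=\"([^\"]*)\"\\s+id=\"([^\"]*)\"");
            Matcher m = p.matcher(line);
            if (m.matches()) {
                System.out.println("tag = " + m.group(1));
                System.out.println("id  = " + m.group(2));
            }
        }
    }
    On newer JDKs (7 and later), compiling the pattern with the Pattern.UNICODE_CHARACTER_CLASS flag also makes classes like \p{Alpha} match non-ASCII letters.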

  • About applets and 4-byte characters

    Hi,
    I have attempted to display 4-byte characters in a TextField, but to no avail. Here is part of my code:
    int[] b = { 131096, 19985, 131160 };
    String a = new String(b, 0, 3); // use code points to define the string
    TextField hwtext = new TextField(a, 5);
    hwtext.setFont(someFont); // someFont: some font instance here
    hwtext.setEditable(true);
    this.add("text1", hwtext);
    Only the middle character (the 2-byte one) can be seen.
    I used a similar approach but drew to a Graphics object instead, something like:
    g.setFont(textFont);
    g.drawString(buffer.toString(), 40, 100);
    (details omitted)
    That works successfully. Any suggestions?
    Dave

    I've tried your code but I got the same result in both cases, i.e. only the middle character is displayed. So it seems to me that the problem is due to the font rather than anything else.
    If you say your applet can display the characters correctly via Graphics, try using the same font for both. What font are you using, by the way?
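    One way to confirm the font theory is to ask the Font itself whether it has glyphs for the supplementary (4-byte) code points. A minimal sketch; the font name is only an example, substitute whatever font you are actually using:
    import java.awt.Font;

    public class GlyphCheck {
        public static void main(String[] args) {
            // Code points above U+FFFF take two chars (a surrogate pair) in a String.
            int[] codePoints = { 131096, 19985, 131160 };
            String s = new String(codePoints, 0, codePoints.length);

            Font font = new Font("MS Gothic", Font.PLAIN, 16); // example font name

            // canDisplayUpTo returns -1 if every character has a glyph,
            // otherwise the index of the first character it cannot display.
            int firstMissing = font.canDisplayUpTo(s);
            System.out.println(firstMissing == -1
                    ? "Font can display all characters"
                    : "Missing glyph at index " + firstMissing);
        }
    }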

  • PDF acceleration with F5 BigIP WA and double-byte characters

    We have been trying to use the F5 appliance from BigIP to accelerate the delivery of PDF files from SharePoint over the WAN.  However, we encountered problems with the double-byte files many months ago and have been trying to resolve the problem with F5.  We have turned off PDF acceleration on the F5 because of the problems.  The problem occurs when PDF files have Kanji characters in the file name.  If the file names are English (single byte) the problem does not occur, even if the content of the PDF contains Kanji characters.
    After many months of working with F5, they are now saying that the problem is with the Adobe plug-in to Internet Explorer.  Specifically they say:
    The issue is a result of Adobe's (not F5's) handling of the linearization request of PDFs with the Japanese character set over 300 KB when the Web Accelerator is enabled on the BigIP (F5) appliance. We assume the issue exists for all double-byte languages, not only Japanese. If a non-double-byte character set is used, this works fine. "Linearization" is a feature which allows the Adobe web plug-in to start displaying the PDF file while it is still being downloaded in the background.
    The F5 case number is available to anybody from Adobe if interested.
    F5 product management and the F5/Adobe relationship manager have been made aware of this and will bring the issue up with Adobe. But this is as far as F5 is willing to go toward a resolution; F5 considers this an Adobe issue, not an F5 issue.
    Does anybody know if this is truly a bug in the PDF browser plug-in? Has anybody else experienced this?

    Your searches should have also come up with the fact that CR XI R2 is not supported in .NET 2008. Only CR 2008 (12.x) and Crystal Reports Basic for Visual Studio 2008 (10.5) are supported in .NET 2008. I realize this is not good news given the release time line, but support or non-support of CR XI R2 in .NET 2008 is well documented - from [Supported Platforms|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/7081b21c-911e-2b10-678e-fe062159b453] to [KBases|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_dev/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes.do], to [Wiki|https://wiki.sdn.sap.com/wiki/display/BOBJ/WhichCrystalReportsassemblyversionsaresupportedinwhichversionsofVisualStudio+.NET].
    Best I can suggest is to try SP6:
    https://smpdl.sap-ag.de/~sapidp/012002523100015859952009E/crxir2win_sp6.exe
    MSM:
    https://smpdl.sap-ag.de/~sapidp/012002523100000634042010E/crxir2sp6_net_mm.zip
    MSI:
    https://smpdl.sap-ag.de/~sapidp/012002523100000633302010E/crxir2sp6_net_si.zip
    Failing that, you will have to move to a supported environment...
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup
    Edited by: Ludek Uher on Jul 20, 2010 7:54 AM

  • Migrating Multi-Byte Characters

    When migrating from Access 2000, all multi-byte characters
    are converted into single-byte ones. The database is running UTF8.
    Has anyone done this before?
    Thanks

    #1 should return you the encoded string.
    #2 should decode the string and return the correct characters.
    If it doesn't, it's probably because the string was improperly
    encoded.
    #3 should cause #1 to do the same as #2, but you have to set
    the property before JavaMail classes are loaded.

  • Handling Unicode and multi-byte/ANSI strings in the same application

    I'm creating my environment handle using OCIEnvNlsCreate, so all strings passed to/from Oracle are supposed to be in wide-string format.
    This is fine until I want to bind a variable that contains an ANSI character string. My application can use mixed string types.
    Which SQLT_ type do I use to bind my ANSI character string? What's the difference between SQLT_STR and SQLT_AVC?
    Or do I have to convert each ANSI string to a wide string before I bind?
    The SQL Server ODBC API handles this problem without any trouble by specifying the data type when binding to SQL_VARCHAR or SQL_WVARCHAR.
    Any help greatly appreciated as I'm totally stuck!
    Thanks,
    John

    Here's the relevant para from the documentation:
    Specifying Character Sets in OCI
    Use the OCIEnvNlsCreate function to specify client-side database and national character sets when the OCI environment is created. This function allows users to set character set information dynamically in applications, independent of the NLS_LANG and NLS_NCHAR initialization parameter settings. In addition, one application can initialize several environment handles for different client environments in the same server environment.
    Any Oracle character set ID except AL16UTF16 can be specified through the OCIEnvNlsCreate function to specify the encoding of metadata, SQL CHAR data, and SQL NCHAR data. Use OCI_UTF16ID in the OCIEnvNlsCreate function to specify UTF-16 data.
    Can somebody please tell me what I can set charset or ncharset to, apart from OCI_UTF16ID or zero, so that the call to OCIEnvNlsCreate returns OCI_SUCCESS?
    Thanks,
    John

  • JavaMail support of non-ASCII and double-byte characters?

    I'm currently doing a project involving foreign characters.
    It would be great to know if this is possible using JavaMail.
    Any insight?
    Thanks in advance.

    Yes, it is. (Not much of an "insight", was it?)
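    The main thing is to give JavaMail the charset explicitly when setting the subject and body, so non-ASCII text gets MIME-encoded instead of mangled. A minimal sketch; the SMTP host and addresses are placeholders, and the message is only built here, not sent:
    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class UnicodeMail {
        public static void main(String[] args) throws Exception {
            // Hypothetical SMTP host, used only to build the Session.
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com");
            Session session = Session.getInstance(props);

            MimeMessage msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("sender@example.com"));
            msg.setRecipients(Message.RecipientType.TO, "recipient@example.com");

            // Passing an explicit charset makes JavaMail MIME-encode the
            // non-ASCII subject (RFC 2047) and body correctly.
            msg.setSubject("\u30c6\u30b9\u30c8\u30e1\u30fc\u30eb", "UTF-8");
            msg.setText("\u4e8c\u30d0\u30a4\u30c8\u6587\u5b57\u306e\u672c\u6587\u3067\u3059\u3002", "UTF-8");
            // Transport.send(msg); // when an SMTP server is available
        }
    }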

  • Urgent: comparing multi-byte characters to single-byte characters!

    Let's say I have two strings that have the same contents but use different encodings; how do I compare them?
    String a = "GOLD";
    String b = "G O L D ";
    The method a.equals(b) doesn't seem to work.

    try this:
    String a = "GOLD";
    String b = "G O L D ";
    boolean bEqual = true;
    int j = 0;
    for (int i = 0; i < a.length(); i++)  {
       // skip the extra spaces in b
       while (j < b.length() && b.charAt(j) == ' ')
          j++;
       if (j >= b.length() || a.charAt(i) != b.charAt(j)) {
          bEqual = false;
          break;
       }
       j++;
    }
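    If the second string actually contains full-width (zenkaku) characters rather than literal spaces, which is what strings like "G O L D " often turn out to be, Unicode normalization is a cleaner way to compare than skipping characters by hand. This is only a sketch under that assumption:
    import java.text.Normalizer;

    public class WidthInsensitiveCompare {
        public static void main(String[] args) {
            String a = "GOLD";
            String b = "\uFF27\uFF2F\uFF2C\uFF24"; // full-width G, O, L, D

            // NFKC folds compatibility variants such as full-width forms
            // onto their standard (half-width) equivalents.
            String na = Normalizer.normalize(a, Normalizer.Form.NFKC);
            String nb = Normalizer.normalize(b, Normalizer.Form.NFKC);

            System.out.println(na.equals(nb)); // prints true
        }
    }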

  • DEFECT: (Serious!) Truncates display of data in multi-byte environment

    I have an oracle 10g database set up with the following nls parameters:
    NLS_CALENDAR      GREGORIAN
    NLS_CHARACTERSET      AL32UTF8
    NLS_COMP      LINGUISTIC
    NLS_CURRENCY      $
    NLS_DATE_FORMAT      DD-MON-YYYY
    NLS_DATE_LANGUAGE      AMERICAN
    NLS_DUAL_CURRENCY      $
    NLS_ISO_CURRENCY      AMERICA
    NLS_LANGUAGE      AMERICAN
    NLS_LENGTH_SEMANTICS      CHAR
    NLS_NCHAR_CHARACTERSET      UTF8
    NLS_NCHAR_CONV_EXCP      TRUE
    NLS_NUMERIC_CHARACTERS      .,
    NLS_RDBMS_VERSION      10.2.0.3.0
    NLS_SORT BINARY
    NLS_TERRITORY      AMERICA
    NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
    NLS_TIMESTAMP_TZ_FORMAT      DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_TIME_FORMAT      HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT      HH.MI.SSXFF AM TZR
    I am querying a view in SQL Server 2000 via an ODBC database link.
    When I query a 26-character-wide column in the view in SQL Developer, it will only return up to 13 characters of the data.
    When I query the exact same view in the exact same SQL Server database from the exact same Oracle database using the exact same ODBC database link with SQL Navigator, I get the full 26 characters worth of data.
    It also works just fine from the SQL command-line tool in 10g Express.
    Apparently, SQL Developer is confused about how to handle multi-byte data. If you ask it the length of the data in the column, it will tell you 26, but it will only show you 13.
    I have found a VERY PAINFUL workaround: doing a cast(column_name as varchar2(26)) when I query it. But I've got hundreds of views and queries...

    In all other respects, the settings I have appear to be working correctly.
    I can enter multi-byte characters into the SQL Worksheet to create a package, save it, and re-open the package with the multi-byte characters still visible.
    I'm using a fallback directory for my JDK with the correct font installed, so I can see and edit multi-byte data in the data grids.
    In this case, I noticed the problem on a column that only contains the standard ascii letters and digits.
    Environment->Encoding = UTF-16
    All the fonts are set to a font that properly displays western and ge'ez characters. The font has been in use for years, and is working correctly in all other circumstances.
    The Database->NLS Parameters tab under sql developer preferences shows:
    language: American
    territory : American
    sort: binary
    comp: binary
    length: char (I've also tried byte)
    If there are other settings that you think might be relevant, please let me know.
    I've done some more testing. I created an Oracle table with a single column and did an insert into ... select from statement across the database link. The correct, full-length data appeared in the Oracle table.
    So it's not a matter of whether the data is being returned or not; it is. It is simply not being displayed correctly. It appears that SQL Developer is making some unwarranted decisions about the data coming across the database link when it decides how to display it, because SQL*Plus and SQL Navigator have no such issues.
    This is really a very serious problem, because if I cannot trust the data the tool shows me, I cannot trust the tool.
    It is also an invitation to make an error based upon the erroneous data display.

  • Handling tab-delimited file generation in non-Unicode systems for multi-byte languages

    Hi,
    Requirement:
    We are generating a tab-delimited file in different languages (single-byte and multi-byte) and placing the files on the application server.
    Problem:
    Our system is a non-Unicode system, so we are facing problems with generating the tab-delimited file for multi-byte languages like Russian, Japanese, Chinese, etc.
    I am currently using data: d_tab TYPE X value '09', but it doesn't work for multi-byte languages. I can't see the tab-delimited file at the application server path.
    Any thoughts about how to proceed on this issue? Please let me know.
    Thanks & Regards,
    Pavan

    >
    Pavan Ravikanti wrote:
    > Thanks for your answer, but do you reckon cl_abap_char_utilities will be a workaround for data: d_tab TYPE X VALUE '09'?
    > Pavan.
    On a non-Unicode system the TYPE X variant works, but not on a Unicode system; there you must use the class. On the other hand, you can also use the class on a non-Unicode system, and your character variable will always be correct (one byte or two bytes depending on which kind of system your report is running on).
    What you are planning to do is to put a file with a larger set of possible characters into a system that supports a smaller one. That cannot work.
    What you can do is build a multi-code-page system where the code page is bound to the user or to the logon language. There you can read and process text files in several code pages, but not a text file in Unicode; you have to convert the Unicode text file into a non-Unicode text file before processing it.
    Remember that SAP no longer supports multi-code-page systems, and a multi-code-page system will result in much more work when converting the system to Unicode.
    Even non-Unicode systems will not be maintained by SAP in the near future.
    What you are encountering here are exactly the problems Unicode was developed to solve. A Unicode system can handle non-Unicode text files, but the other way round will always lead to problems that can't be solved.

  • Problem displaying Japanese/multi-byte characters on WebLogic Server 9.1

    Hi experts
    We are running WebLogic 9.1 on a Linux box [RHEL v4] and trying to display Japanese characters embedded in some HTML files, but the Japanese characters are converted into question marks [?]. The HTML files that contain Japanese characters are stored properly in the file system and retain the Japanese characters as they should.
    I changed the character setting in the HTML header to shift_jis, but no luck. Then I added the shift_jis encoding in the jsp-descriptor and charset-params sections of weblogic.xml, but also no luck.
    I am wondering how I can properly display multi-byte/Japanese characters on WebLogic Server without setting up internationalization tools.
    I would appreciate your advice.
    Thanks,
    yasushi

    This was fixed by removing everything except the following files from the original (8.1) domain directory:
    1. config.xml
    2. SerializedSystemIni.dat
    3. *.ldift
    4. applications directory
    Is this a bug in the upgrade tool? Or did I miss a part of the documentation?
    Thanks
    --sony
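    Back to the original question about the question marks: one common cause is the response encoding falling back to ISO-8859-1 somewhere in the chain. Below is only a minimal sketch of the servlet-level fix (a JSP would use the page contentType directive instead); the class name and markup are made up for illustration:
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class JapaneseHelloServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Declare the encoding before obtaining the writer; otherwise the
            // container falls back to ISO-8859-1 and multi-byte characters
            // are replaced with '?'.
            resp.setContentType("text/html; charset=UTF-8");
            PrintWriter out = resp.getWriter();
            out.println("<html><body>\u3053\u3093\u306b\u3061\u306f\u3001\u4e16\u754c</body></html>");
        }
    }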
