How to display double-byte characters with System.out.print?

Hi, I'm a newbie Java programmer having trouble using Java locales with System I/O in DOS console mode.
Platform: Windows XP, JVM 1.5.
File structure is:
C:\myProg <-root
C:\myProg\test <-package
C:\myProg\test\Run.java
C:\myProg\test\MessageBundle.properties <- default properties
C:\myProg\test\MessageBundle_zh_HK.properties <- localized properties (written in Notepad and saved as Unicode; Windows Notepad adds a BOM)
inside MessageBundle.properties:
test = Hello
inside MessageBundle_zh_HK.properties:
test = 喂 //"hello" in Big5 encoding
Run.java:
package test;
import java.util.*;
public class Run{
  public static void main(String[] args){
    Locale locale = new Locale("zh","HK");
    ResourceBundle resource =
        ResourceBundle.getBundle("test.MessageBundle", locale);
    System.out.println(resource.getString("test"));
  }//main
}//class
When I run this program, it keeps displaying "Hello" instead of the localized characters...
Then when I run the native2ascii tool against MessageBundle_zh_HK.properties, it starts displaying garbage characters instead.
I'm trying to figure out what I did wrong and how to display double-byte characters on the console.
Thank you.
p.s.: While googling, some people said DOS can only display ASCII. To demonstrate that the DOS console is capable of displaying double-byte characters, I wrote another hello-world in Chinese using Notepad in C# and compiled it with "csc hello.cs"; sure enough, Console.Write in C# let me display the characters I was expecting. Since the DOS console can print double-byte characters, I must be missing something important in this Java program.

After googling a bunch, I learned that javac (and hence java.exe) does not support a BOM (byte order mark).
I had to use a different editor to save my text file as Unicode without a BOM in order for native2ascii to convert it into an ASCII file.
Even with the properties file in ASCII format, I'm still having trouble displaying those characters in the DOS console. In fact, I just noticed I can use System.out.println to display double-byte characters if I embed the characters themselves in the Java source file:
import java.io.UnsupportedEncodingException;
import java.util.Locale;
import java.util.ResourceBundle;

public class Run {
     public static void main(String[] args) throws UnsupportedEncodingException {
          String msg = "中文";    //double-byte characters embedded in the source
          try {
               System.out.println(new String(msg.getBytes("UTF-8")) + " new string");  //this displays fine
          } catch (Exception e) {}
          Locale locale = new Locale("zh", "HK");
          ResourceBundle resource = ResourceBundle.getBundle("test.MessagesBundle", locale);
          System.out.println(resource.getString("Hey"));      //this displays weird characters
     }
}
So it seems to me that I must have done something wrong in the process of creating the properties file from the Unicode text file...
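A minimal sketch of the two things that usually have to line up here, assuming a Traditional Chinese console code page (e.g. after running "chcp 950"); the "Big5" charset name is an assumption, not a confirmed fix:

    // In Java 5, .properties files are read as ISO-8859-1, so non-Latin text
    // must be stored as Unicode escapes, which is what native2ascii produces,
    // e.g.  test = \u5582
    import java.io.PrintStream;
    import java.io.UnsupportedEncodingException;
    import java.util.Locale;
    import java.util.ResourceBundle;

    public class Run {
        public static void main(String[] args) throws UnsupportedEncodingException {
            // Wrap System.out in a PrintStream whose charset matches the console
            // code page ("Big5" is assumed here for a code page 950 console).
            PrintStream out = new PrintStream(System.out, true, "Big5");
            Locale locale = new Locale("zh", "HK");
            ResourceBundle resource = ResourceBundle.getBundle("test.MessageBundle", locale);
            out.println(resource.getString("test"));
        }
    }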

Similar Messages

  • How can Mapviewer display double-byte characters

    Hi,
    I am facing a problem using MapViewer to display a map whose title contains double-byte characters, e.g. Chinese characters. I think the problem occurs when map_Request produces the map image.
    My environment is :
    NT server 5.0(sp 3)
    Oracle 9.01
    J2EE
    IE 6.0

    Are you using the mapclient.jsp demo file? There is a bug in that file that will lead to truncated map requests when the title is in Chinese. It will be fixed in the next version of MapViewer (a beta version
    of which will be on OTN soon).
    Thanks,
    LJ

  • How to check double byte characters

    Hi
    My requirement: I have to accept a string (which may include double-byte characters and special characters) and check whether it contains any special characters (like %, &, ...); if so, I should display an error message.
    My solution: I started by using the ASCII values, but my code is splitting the double-byte characters into two characters.
    Code:
    package JNDI;
    public class CharASCIIValues {
         public static void main(String[] args) {
              String s = args[0];
              char ch[] = s.toCharArray();
              for (int i = 0; i < ch.length; i++) {
                   System.out.println(" " + ch[i] + "=" + (int) ch[i]);
              }
         }
    }
    I ran it with some double-byte (Japanese) characters.
    But the output I got was: ?=63 ?=63 ?=63 ?=63 ?=63 ?=63 1=49 2=50 3=51 h=104 e=101 l=108 l=108 o=111
    The ?s are the double-byte characters.
    Queries:
    Do I need to change any Java setting to support double-byte characters?
    Please help me to get past this problem... any help/information will be appreciated.

    First of all, Java strings are stored internally as UTF-16, with the char datatype being a 16-bit code unit (equivalent to an unsigned short).
    Second what exactly do you mean by double byte? Whether a character ends up encoded in two bytes or not depends on the encoding used (UTF-8, UTF-16 (both unicode), BIG5, GB2312 (both chinese), iso-8859-1(Latin-1), ASCII, etc...). This means that there are no "double byte characters" there are only "double byte characters when encoded in <your encoding>".
    Where does your string come from? What encoding are you using to read the string in the first place? Are you sure you are creating the string using the right encoding?
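    To make that last point concrete, here is a minimal sketch of reading text while naming the charset explicitly instead of relying on the platform default; "UTF-8" is an assumption and should be replaced by whatever encoding the input really uses (e.g. Shift_JIS for Japanese Windows input):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class ReadWithCharset {
            public static void main(String[] args) throws IOException {
                // Decode standard input with an explicit charset so multi-byte
                // characters are not mangled into '?' by a wrong default encoding.
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in, "UTF-8"));
                String line = in.readLine();
                for (int i = 0; i < line.length(); i++) {
                    // each char is a UTF-16 code unit; print it with its numeric value
                    System.out.println(line.charAt(i) + "=" + (int) line.charAt(i));
                }
            }
        }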

  • Problem with national characters calling System.out.print(...)

    I need to develop an application that prints Spanish characters like "ç" (ce trencada) and other accented characters.
    The problem is that when I type
    System.out.print("ç") in my application, I get a "plus-minus" symbol when executing it.
    Anyone can help me, please?
    Thank you in advance.

    Check the list of fonts available. You can use the following code for the purpose (it needs import java.awt.GraphicsEnvironment):
      public static void main(String args[]) {
            String fonts[] = getFontNames();
            for (int i = 0; i < fonts.length; i++)
                System.out.println(fonts[i]);
      }
      public static String[] getFontNames() {
          GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
          return ge.getAvailableFontFamilyNames();
      }

  • Java - Eclipse debug applet with System.out.print in jboss console

    I use System.out.print to debug my java programs using Eclipse. I'm new
    with applets, and when nothing showed up in the applet window, I tried
    using print statements. I was expecting to see them in the jboss console like
    always, but there was nothing there. I tried looking this problem up
    online and found nothing. Does anyone know how I can use print
    statements to debug my applet? (JApplet to be specific.)

    System.out.println works for me. The output statements are visible in the Console window within Eclipse.

  • How do I know that there are double-byte characters in s String?

    Hi!
    If I have a String that contains both English and Chinese words,
    how do I know whether the String contains double-byte characters (the Chinese words)?
    Following is my method and the problem I suffered...
    String A = "test(double-byte chinese)test";
    byte B[] = A.getBytes();
    if(A.length() != B.length)
    System.out.print("String contains double-byte words");
    else
    System.out.print("String does not contain double-byte words");
    If the String contains Chinese words,then A.length() will be smaller than B.length...
    I ran the program on a Windows NT workstation (Traditional Chinese version) and it works...
    Then I ran the same program on Red Hat 6.0 (English version),
    but the result was not the same as running on NT...
    because A.length() always equal to B.length...
    I guess that's because of Charset of OS...
    But I don't know how to set the Charset of Linux...
    Does anybody have other solution to my problem?
    Any suggestion will be very much appreciated!

    A String is always in Unicode. You cannot tell what kind of character is in the string unless you compare it against the Unicode ranges of characters. E.g. 3400-4DB5 is CJK Unified Ideographs Extension A. Then you at least know that it is not Latin-1 or something else.
    Klint
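    A small sketch of that idea using java.lang.Character.UnicodeBlock, which does the range comparison for you; the list of blocks checked here is illustrative, not exhaustive:

        import java.lang.Character.UnicodeBlock;

        public class CjkCheck {
            // Returns true if the string contains a character from one of the
            // common CJK blocks.
            static boolean containsCjk(String s) {
                for (int i = 0; i < s.length(); i++) {
                    UnicodeBlock block = UnicodeBlock.of(s.charAt(i));
                    if (block == UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS
                            || block == UnicodeBlock.CJK_UNIFIED_IDEOGRAPHS_EXTENSION_A
                            || block == UnicodeBlock.CJK_SYMBOLS_AND_PUNCTUATION) {
                        return true;
                    }
                }
                return false;
            }

            public static void main(String[] args) {
                System.out.println(containsCjk("test"));        // false
                System.out.println(containsCjk("test\u4e2d"));  // true
            }
        }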

  • How to preserve 2-byte characters when "Edit Link" ?

    Keynote 6.2.2 will remove any 2-byte characters present in the hyperlink of an object.
    However, the previous version doesn't have the problem.
    Since internationalized domain names are getting popular, I hope Apple can fix the bug soon.


  • How best to send double byte characters as http params

    Hi all
    I have a web app that accepts text that can be in many languages.
    I build up an HTTP request and send the text as parameters to another web server. Hence, whatever text I receive, I need to be able to represent it on an HTTP query string.
    The parameters are sent as urlencoded UTF8. They are decoded by the second webserver back into unicode and saved to the db.
    Occasionally I find a character that I am unable to convert to a UTF-8 string and send as a parameter (usually an SJIS character). When this occurs, the character is encoded as '3F' - a question mark.
    What is the best way to send double byte characters as http parameters so they always are sent faithfully and not as question marks? Is my only option to use UTF16?
    example code
    <code>
    public class UTF8Test {
        public static void main(String args[]) {
            encodeString("\u7740", "%E7%9D%80"); // encoded UTF8 string contains question mark (3F)
            encodeString("\u65E5", "%E6%97%A5"); // this other japanese character converts fine
        }
        private static void encodeString(String unicode, String expectedResult) {
            try {
                String utf8 = new String(unicode.getBytes("UTF8"));
                String utf16 = new String(unicode.getBytes("UTF16"));
                String encoded = java.net.URLEncoder.encode(utf8);
                String encoded2 = java.net.URLEncoder.encode(utf16);
                System.out.println();
                System.out.println("encoded string is:" + encoded);
                System.out.println("expected encoding result was:" + expectedResult);
                System.out.println();
                System.out.println("encoded string16 is:" + encoded2);
                System.out.println();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    </code>
    Any help would be greatly appreciated. I have been struggling with this for quite some time and I can hear the deadline approaching all too quickly
    Thanks
    Matt

    Hi Matt,
    one last visit to the round trip issue:
    in the Sun example, note that UTF8 encoding is used in the method that produces the byte array as well as in the method that creates the second string. This is equivalent to calling:
    String roundTrip = new String(original.getBytes("UTF8"), "UTF8"); // Sun example
    Whereas, in your code you were calling:
    String utf8 = new String(unicode.getBytes("UTF8")); // Matt's code
    The difference is crucial. When you call the String constructor without a second (encoding) argument, the default encoding (usually Cp1252) is used. Therefore your code is equivalent to:
    String utf8 = new String(unicode.getBytes("UTF8"), "Cp1252"); // Matt's code
    i.e. you are encoding with one transformation format and decoding back with a different transformation format, so in general you won't get your original string back.
    Regarding safely sending multi-byte characters across the Internet, I'm not completely sure what the situation is because I don't do it myself. (When our program is run as an applet, the only interaction it has with the web server is to download various files). I've seen lots of people on this forum describing problems sending multi-byte characters and I can't tell whether the problem is with the software or with the programming. Two possible methods come to mind (of course you need to find out what your third party software is doing):
    1) use the DataOutput/InputStreams writeUTF/readUTF methods
    2) use the InputStreamReader/OutputStreamWriter pair with UTF8 encoding
    See this thread:
    http://forum.java.sun.com/thread.jsp?forum=16&thread=168630
    You should stick to UTF8. It is designed so that the bytes generated by encoding non-ASCII characters can be safely transmitted across the Internet. Bytes generated by UTF16 can be just about anything.
    Here's what I suggest:
    I am running a version of the Sun tutorial that has a program running on a server to which I can send a string and the program sends back the string reversed.
    http://java.sun.com/docs/books/tutorial/networking/urls/readingWriting.html
    I haven't tried sending multi-byte characters but I will do so and test whether there are any transmission problems. (Assuming that the Sun cgi program itself correctly handles characters).
    More later,
    regards,
    Joe
    P.S.
    I thought one of the reasons for the existence of UTF8 was to
    represent things like multi-byte characters in an ASCII format?
    Not exactly. UTF8 encodes ASCII characters into single bytes with the same byte values as ASCII encoding. This means that a document consisting entirely of ASCII characters is the same whether it was encoded as UTF8 or ASCII and can consequently be read in any ASCII document reader (e.g. Notepad).
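    Building on the round-trip point above, here is a minimal sketch of sending a multi-byte character as a URL parameter by URL-encoding the string directly as UTF-8 (the character and the expected %E7%9D%80 escape are taken from the question; the rest is an illustrative assumption, not the original poster's code):

        import java.io.UnsupportedEncodingException;
        import java.net.URLDecoder;
        import java.net.URLEncoder;

        public class Utf8ParamTest {
            public static void main(String[] args) throws UnsupportedEncodingException {
                String original = "\u7740";  // the character that previously came out as '?'

                // Percent-encode the string as UTF-8; never build an intermediate
                // String from the UTF-8 bytes using the default charset.
                String encoded = URLEncoder.encode(original, "UTF-8");
                System.out.println("encoded: " + encoded);           // expected %E7%9D%80

                // The receiving side must decode with the same charset.
                String decoded = URLDecoder.decode(encoded, "UTF-8");
                System.out.println("round trip ok: " + original.equals(decoded));
            }
        }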

  • Sapshcut and  double-byte characters trouble?

    Hi experts,
    I tried some commands to log on to the SAP system, as in the following examples, and came to some conclusions:
    (1) sapshcut.exe -sysname="今ウィ ちゃわ異"  -user="paragon1"  -pw="paragon1"  -client="800" -language=en  -maxgui "
    (2) sapshcut.exe -sysname="EC5"  -user="paragon1"  -pw="paragon1"  -client="800" -language=en  -maxgui "
    - For case (1), with any "double-byte characters" sysname (Japanese, ...), I cannot log on to SAP and get a "Microsoft Visual C++ Runtime Library" error message.
    - For case (2), without a "double-byte characters" sysname, I can log on to SAP easily.
    Thus, I want to know:
    1) Does the sapshcut.exe support double-byte characters?
    2) Do we have a way to use sapshcut.exe with double-byte characters(Japanese,...)?
    Kindly Regards,

    The comments on the bytes/strings were helpful. Thanks.
    But I'm still confused as to what matching pattern could be used.
    For example a pattern like:
    [A-Za-z]
    I assume would not match any double byte characters.
    I also assume the following won't work either:
    [\\p{Alpha}]
    because it is POSIX - US-ASCII only.
    So how do you say "match the tag, then take any characters,
    double byte, ascii, whatever, then match the text tag - per the
    original example ?
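    A small sketch of one way to say that in Java regex, where \p{L} matches a letter in any script (unlike [A-Za-z] or the US-ASCII POSIX classes); the <text> tag name is made up for illustration:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class UnicodeRegex {
            public static void main(String[] args) {
                // Capture whatever sits between the tags, double-byte or not.
                Pattern p = Pattern.compile("<text>(.*?)</text>", Pattern.DOTALL);
                Matcher m = p.matcher("<text>abc\u4e2d\u6587</text>");
                if (m.find()) {
                    System.out.println("captured: " + m.group(1));
                }

                // Or require letters only, in any script:
                System.out.println("abc\u4e2d\u6587".matches("\\p{L}+"));  // true
            }
        }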

  • Text strings from VISA read don't match identical-looking text constants - could it be double byte characters?

    Our RS232-enabled instrument sends ASCII strings to COM 1 and I read strings in. For example I get the string "TPM", or at least it looks like "TPM" if I display it. However, if I send that to the selector input of a Case structure, and create a case for "TPM", whether the two appear to match varies. Sometimes it matches, and measuring its length returns 3. Sometimes it measures 7 or 11 or 12 characters long, and it doesn't match. I can reproduce a match or a mismatch by my choice of the command that went to the instrument prior to the command that causes the TPM response, but have made no sense of this clue. I have run it through Trim Whitespace, with Both Ends (the default) explicitly selected. I have also turned the string into a byte array, autoindexed a For loop on that, and only passed the bytes if they don't equal 32, or if they don't equal 0, thinking spaces or nulls might be in there, but no better.
    The Trim Whitespace function's Help remarks that it does not remove "double byte characters". But I can't find anything else about "double byte characters". Could this be the problem? Are there functions that can tell whether there are "double byte characters", or convert into or out of them? By "double byte characters", do they just mean Unicode?
    Solved!
    Go to Solution.

    Cebailey,
    The double byte characters are generally used for characters specific to languages other than English.  If you display your message in " '\' Codes Display" in a string indicator, do you see any other characters?  You could also use Hex Display to count the number of bytes in the message.  You are probably getting messages with non-printable characters that might need to be trimmed before using your application.  If you want more information on the '\' Codes Display, there's a detailed description in the LabVIEW Help.  You can also find the same information on our website: Backslash ('\') Codes Display.
    Caleb W, National Instruments

  • Using Double Byte Characters in URL For Session Variables

    When I supply the value for a session variable in the URL for an IRPT page where the value contains double byte characters, Japanese in this case, the characters are corrupted by the time they are entered for the session variables.  Does anyone know a solution to this problem or experience in this area?  Currently using xMII 11.5 SR3.

    Hi Bryan,
    I would suspect that under the covers the session variable is of datatype string.  For double byte characters, it would need to be wstring.  There is a better explanation to be found at:
    Link: [Kanji and Java Datatypes|http://www.unix.com.ua/orelly/java-ent/jenut/ch10_04.htm] or you can try google on  Kanji Datatype  OR Kanji Java Datatype
    It could also be a problem with the operating system which I ran into about 10 years ago, but I would hope that Microsoft had moved beyond that by now.
    Maybe some more technical folks could chime in to confirm or deny my explanation.
    Mike
    Edited by: Michael Appleby on Jul 8, 2008 5:23 PM

  • Double byte characters in a String - Help needed

    Hi All,
    Can anyone tell me how to detect the presence of a double-byte (Japanese) character in a String? I need to check whether the String contains a Japanese character or not. Thanks in advance.
    Ramya

    /** returns true if the String s contains any "double-byte" characters */
    public boolean containsDoubleByte(String s) {
      for (int i = 0; i < s.length(); i++) {
        if (isDoubleByte(s.charAt(i))) {
          return true;
        }
      }
      return false;
    }
    /** returns true if the char c is a double-byte character */
    public boolean isDoubleByte(char c) {
      if (c >= '\u0100' && c <= '\uffff') return true;
      return false;
      // simpler:  return c > '\u00ff';
    }
    /** returns true if the String s contains any Japanese characters */
    public boolean containsJapanese(String s) {
      for (int i = 0; i < s.length(); i++) {
        if (isJapanese(s.charAt(i))) {
          return true;
        }
      }
      return false;
    }
    /** returns true if the char c is a Japanese character. */
    public boolean isJapanese(char c) {
      // katakana:
      if (c >= '\u30a0' && c <= '\u30ff') return true;
      // hiragana
      if (c >= '\u3040' && c <= '\u309f') return true;
      // CJK Unified Ideographs
      if (c >= '\u4e00' && c <= '\u9fff') return true;
      // CJK symbols & punctuation
      if (c >= '\u3000' && c <= '\u303f') return true;
      // KangXi (kanji)
      if (c >= '\u2f00' && c <= '\u2fdf') return true;
      // KanBun
      if (c >= '\u3190' && c <= '\u319f') return true;
      // CJK Unified Ideographs Extension A
      if (c >= '\u3400' && c <= '\u4db5') return true;
      // CJK Compatibility Forms
      if (c >= '\ufe30' && c <= '\ufe4f') return true;
      // CJK Compatibility
      if (c >= '\u3300' && c <= '\u33ff') return true;
      // CJK Radicals Supplement
      if (c >= '\u2e80' && c <= '\u2eff') return true;
      // other character..
      return false;
    }
    /* NB CJK Unified Ideographs Extension B not supported with 16-bit unicode. Source: http://www.alanwood.net/unicode */

  • XSL Transform, double-byte characters and padding

    I have a stylesheet with the following variable that is being formatted to pad a parameter named textQualifierDescription to a length of 30 by calling the template called format-string.
    <xsl:variable name="textQualifierDescription2">
         <xsl:call-template name="format-string">
              <xsl:with-param name="myString" select="$textQualifierDescription"/>
              <xsl:with-param name="numbatchspaces">30</xsl:with-param>
         </xsl:call-template>
    </xsl:variable>
    <xsl:template name="format-string">
         <xsl:param name="myString" select="' ' "/>
         <xsl:param name="numbatchspaces" select="20"/>
     <xsl:param name="direction" select="'right'"/>
     <xsl:variable name="spacesstr" select="string('                                              ')"/>
     <xsl:variable name="padsize" select="$numbatchspaces - string-length($myString)"/>
         <xsl:variable name="spacepad" select="substring($spacesstr, 1, $padsize)"/>
         <xsl:choose>
              <xsl:when test="$direction = 'left'">
                   <xsl:value-of select="concat($spacepad,$myString)"/>
              </xsl:when>
              <xsl:otherwise>
                   <xsl:value-of select="concat($myString,$spacepad)"/>
              </xsl:otherwise>
         </xsl:choose>
    </xsl:template>
    I execute the XSL transform using the following statement in a stored procedure:
    transformedData := xmldata.transform(xsldata);
    The XSL transform works as expected until it encounters data that contains double-byte characters. My output is supposed to contain the following three fields as a single record:
    textQualifierDescription - padded to a length 30
    lineNumber
    id
    If my textQualifierDescription contains a value of "Texto de posición"
    Line 1 - Texto de posición             00000001POS2005
    Line 2 - Texto de posición            00000001POS2005
    Line 1 is the expected result.
    Line 2 is the actual result. When the "format-string" template is called, even though "Texto de posición" is 17 characters long, it looks as if Oracle counts the double-byte character as 2 and calculates the string-length as 18, coming up with a padsize of 12. It then creates a spacepad of 12 spaces, which is concatenated to the 17 characters for a total length of 29. I have tested the stylesheet in XMLSpy and it produces the expected result.
    Has anyone ever run into this sort of situation and is able to provide me with some sort of solution to this dilemma? This is running on 10g Release 10.2.0.4.0.


  • Double byte characters in dataset

    Hi Gurus,
    I am encountering an issue when writing data to a dataset with fixed-length fields.
    Company Code: US00
    Document Number: 1234567890
    Fiscal Year: 2014
    Line Item: 001 Short Text ( length 10 char): AF 16
    Line Item: 002 Short Text ( length 10 char): AF 16 
    Comment:X
    In Unix, it becomes like this:-
    US0012345678902014001AF16      X
    US0012345678902014001AF 16       X
    The X in the second line item was not at the correct fixed position, which caused the file to be rejected by the receiving system.
    I have tried to calculate the length of "AF 16" using the following syntax:
    1) strlen(bseg-sgtxt) = 7
    2) numofchar(bseg-sgtxt) = 6
    I would like to know if there is any way to fix the position of the fields that follow the "SGTXT" field when double-byte characters are present. I have tried padding with trailing spaces but it does not work.
    Expected result is where the X of two line items are located at same and fixed position:-
    US0012345678902014001AF16      X
    US0012345678902014001AF 16    X
    Thank you!
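    The underlying issue is that the field position is fixed in bytes, while strlen/numofchar count characters, so a double-byte character shifts everything after it. A minimal sketch of the idea, written in Java for illustration rather than ABAP, padding each field to a fixed byte width in the file's encoding ("UTF-8" here is an assumption):

        import java.io.UnsupportedEncodingException;

        public class FixedByteWidthPad {
            // Appends spaces until the text occupies at least 'width' bytes in the
            // given charset (truncation of over-long values is not handled here).
            static String padToByteWidth(String text, int width, String charset)
                    throws UnsupportedEncodingException {
                StringBuilder sb = new StringBuilder(text);
                while (sb.toString().getBytes(charset).length < width) {
                    sb.append(' ');
                }
                return sb.toString();
            }

            public static void main(String[] args) throws UnsupportedEncodingException {
                String plain = padToByteWidth("AF16", 10, "UTF-8");
                String wide  = padToByteWidth("AF\u300016", 10, "UTF-8"); // contains an ideographic space
                System.out.println(plain.getBytes("UTF-8").length); // 10
                System.out.println(wide.getBytes("UTF-8").length);  // 10
            }
        }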


  • Double byte characters turn into squares on PDF export despite using a Unicode font

    Hi all,
    We are developing an international Windows application with Visual Studio 2008, .NET 2.0 and Crystal Reports XI Release 2 SP5. We use the font Arial Unicode MS in the rpt file. We translate the fixed texts with the Crystal Translator (3.2.2.299).
    On the distributed installation of our software, the printout and preview display the double-byte characters properly (Japanese, Korean, Chinese), but when we export the report as PDF, the characters are displayed as squares. This also happens when the font Arial Unicode MS is installed on the distributed installation on Windows XP Professional.
    I searched for hours for a solution in the knowledge base articles and in the Crystal Reports forum. I found one thread which describes exactly our problem:
    "Crystal XI R2 exporting issues with double-byte character sets"
    But we have already adopted the suggested solution of using a Unicode font, and I also linked the font Lucida Sans Unicode to Arial Unicode MS, but we still face the problem.
    Due to our release on Thursday we are under a lot of pressure to solve this problem asap.
    We appreciate your help very much!
    Ronny

    Your searches should have also come up with the fact that CR XI R2 is not supported in .NET 2008. Only CR 2008 (12.x) and Crystal Reports Basic for Visual Studio 2008 (10.5) are supported in .NET 2008. I realize this is not good news given the release time line, but support or non support of CR XI R2 in .NET 2008 is well documented - from [Supported Platforms|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/7081b21c-911e-2b10-678e-fe062159b453] to [KBases|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_dev/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes.do], to [Wiki|https://wiki.sdn.sap.com/wiki/display/BOBJ/WhichCrystalReportsassemblyversionsaresupportedinwhichversionsofVisualStudio+.NET].
    Best I can suggest is to try SP6:
    https://smpdl.sap-ag.de/~sapidp/012002523100015859952009E/crxir2win_sp6.exe
    MSM:
    https://smpdl.sap-ag.de/~sapidp/012002523100000634042010E/crxir2sp6_net_mm.zip
    MSI:
    https://smpdl.sap-ag.de/~sapidp/012002523100000633302010E/crxir2sp6_net_si.zip
    Failing that, you will have to move to a supported environment...
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup
    Edited by: Ludek Uher on Jul 20, 2010 7:54 AM
