Double Byte Issues in Portal

Hi,
   Can anyone help me out with a double-byte character issue? Japanese characters are getting printed overlapped in Portal screens.
Regards.
Bharath Mohan N

Hi Jun,
The Japanese characters seem to be scrunched together. When I log into the portal with a Japanese logon and open an appraisal document, the characters are either scrunched together or appear as junk/funny characters.
I checked in the backend. These descriptions come from the HRT1002 table. When I log into R/3 in the Japanese language and check the values of HRT1002, the descriptions no longer appear right. They just appear as junk. We recently upgraded our system to ERP 2005. Can you tell me why this is happening? Any information you could provide would be really useful.
Thanks.
Bharath Mohan B

Similar Messages

  • Crystal XI R2 exporting issues with double-byte character sets

    NOTE: I have also posted this in the Business Objects General section with no resolution, so I figured I would try this forum as well.
    We are using Crystal Reports XI Release 2 (version 11.5.0.313).
    We have an application that can be run using multiple cultures/languages, chosen at login time. We have discovered an issue when exporting a Crystal report from our application while using a double-byte character set (Korean, Japanese).
    The original text when viewed through our application in the Crystal preview window looks correct:
    性能 著概要
    When exported to Microsoft Word, it also looks correct. However, when we export to PDF or even RPT, the characters are not being converted. The double-byte characters are rendered as boxes instead. It seems that the PDF and RPT exports are somehow not making use of the linked fonts Windows provides for double-byte character sets. This same behavior is exhibited when exporting a PDF from the Crystal report designer environment. We are using Tahoma, a TrueType font, in our report.
    I did discover some new behavior that may or may not have any bearing on this issue. When a text field containing double-byte characters is just sitting on the report in the report designer, the box characters are displayed where the Korean characters should be. However, when I double click on the text field to edit the text, the Korean characters suddenly appear, replacing the boxes. And when I exit edit mode of the text field, the boxes are back. And they remain this way when exported, whether from inside the design environment or outside it.
    Has anyone seen this behavior? Is SAP/Business Objects/Crystal aware of this? Is there a fix available? Any insights would be welcomed.
    Thanks,
    Jeff

    Hi Jeff
    I searched on the forums and got the following information:
    1) If font linking is enabled on your device, you can examine the registry by enumerating the subkeys of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink to determine the mappings of linked fonts to base fonts. You can add links by using Regedit to create additional subkeys. Once you have located that registry key, highlight the face name of the font you want to link to and then, from the Edit menu, click Modify. On a new line in the "Value data" field of the Edit Multi-String dialog box, enter "path and file to link to","face name of the font to link".
    2) "Fonts in general, especially TrueType and OpenType, are "Unicode"."
    Since you are using a TrueType font, it may be a Unicode type already. However, if Bud's suggestion works, then nothing better than that.
    Also, could you please check the output from the Crystal designer with a different PDF version than the current one?
    Meanwhile, I will look out for any additional/suitable information on this issue.
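    For illustration, a SystemLink entry might look like the following (the specific file and face names here are examples only, not a recommendation):

```
; Key:   HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontLink\SystemLink
; Value: Tahoma  (REG_MULTI_SZ), one "file,face" link per line
MSGOTHIC.TTC,MS UI Gothic
GULIM.TTC,Gulim
```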

  • Invoke-WebRequest - Double byte characters issue in windows 8.1

    I am trying to write a PowerShell script to download a file from a web server, but it fails. The path has double-byte characters.
    It runs successfully on Windows Server 2012 and 2012 R2, but fails on Windows 8 & 8.1.
    Is there any difference between Windows Server and client PowerShell?
    Region and language settings are the same on Windows 2012 & Windows 8.
    Script as below
    Invoke-WebRequest -Uri " http://hostname/m/%E9%...../......./...../xxx.jpg"

    Security settings are one possible cause of this.
    Since we don't have your URL we cannot reproduce this. 
    It is "different". Using "difference" had me confused for a bit; I thought you were trying to figure out the difference between two things.
    Use:
    $wc = New-Object System.Net.WebClient
    $wc.DownloadFile($url, 'c:\file.jpg')
    You will see fewer issues and it is faster.
    ¯\_(ツ)_/¯

  • SJIS - Japan Encoding Issues (Unable to handle Double Byte Numeric and Spec)

    Hi All,
    Problem:
    Unable to handle Double Byte Numeric and Special Characters (Hyphen)
    The input
    区中央京勝乞田1944-2
    Output
    区中央京勝乞田1944?2
    We have a write service created based on the JCA (Write File Adapter) with the native schema defined with SJIS Encoding as below.
    <?xml version="1.0" encoding="SJIS"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
         xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://nike.com/***/***********"
         targetNamespace="http://nike.com/***/*************"
         elementFormDefault="unqualified" attributeFormDefault="unqualified"
         nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="SJIS">
    Does anyone have a similar issue? How can we handle double-byte characters while using SJIS encoding? At the least, how can we handle the double-byte hyphen?
    Thanks in Advance

    I have modified my schema as shown below and it partially worked for me. However, I am not sure the workaround will resolve the issue at the final loading...
    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://nike.com/***/***********"
    targetNamespace="http://nike.com/***/*************"
    elementFormDefault="unqualified" attributeFormDefault="unqualified"
    nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-16">
    If anyone has a resolution or has seen this kind of issue, let me know.
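    The '?' in the output is the classic symptom of a character that has no mapping in the target code page. A minimal Python sketch of what is likely happening (assuming the input hyphen is the full-width U+FF0D, which strict Shift-JIS cannot encode but the Windows variant CP932 can):

```python
text = "1944\uff0d2"  # "1944－2" with FULLWIDTH HYPHEN-MINUS (assumed input)

# Strict Shift-JIS (JIS X 0208) has no slot for U+FF0D, so a lenient
# encoder substitutes '?', which is exactly what shows up in the file.
print(text.encode("shift_jis", errors="replace"))  # b'1944?2'

# CP932, Microsoft's Shift-JIS superset, does map U+FF0D, so nothing is lost.
print(text.encode("cp932"))
```

    If the adapter allows it, specifying the Windows code page (MS932/Windows-31J) instead of plain SJIS may therefore preserve the hyphen.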

  • PDF acceleration F5 BigIP WA and double byte characters

    We have been trying to use the F5 appliance from BigIP to accelerate the delivery of PDF files from SharePoint over the WAN.  However, we encountered problems with the double-byte files many months ago and have been trying to resolve the problem with F5.  We have turned off PDF acceleration on the F5 because of the problems.  The problem occurs when PDF files have Kanji characters in the file name.  If the file names are English (single byte) the problem does not occur, even if the content of the PDF contains Kanji characters.
    After many months of working with F5, they are now saying that the problem is with the Adobe plug-in to Internet Explorer.  Specifically they say:
    The issue is a result of Adobe's (not F5's) handling of the linearization request of PDFs with the Japanese character set over 300 KB when the Web Accelerator is enabled on the BigIP (F5) appliance. We assume the issue exists for all double-byte languages, not only Japanese. If a non-double-byte character set is used, this works fine. "Linearization" is a feature which allows the Adobe web plug-in to start displaying the PDF file while it is still being downloaded in the background.
    The F5 case number is available to anybody from Adobe if interested.
    The F5 product management  and the F5 Adobe relationship manager have been made aware of this and will bring this issue up to Adobe.  But this is as far as F5 is willing to pursue as a resolution.  F5 consider this an Adobe issue, not a F5 issue.
    Anybody know if this is truly a bug with the PDF browser plug-in?  Anybody else experienced this?


  • Double byte file name in KM NW04

    Prior to NetWeaver 04 using the HTML editor, we could name a file in a double byte language (Simplified/Traditional Chinese) by entering the English name in the "ID" field and then entering the double byte name in the "Name" field using "File/Details". While this was not elegant, it worked and would allow users to see the double byte name. Now in NetWeaver 04 we receive the error message "! name is not a canonical name" . Is there a way to name files in double byte using HTML or TEXT editors so our users can see the double byte name?

    Hi Boris,
    I'm working with Rick on the issue.
    I've tried the following (per OSS Note 507624), but we still get the "name is not canonical name" error.
    Added the following parameters via configtool to server instances:
    -Dhtmlb.useUTF8=X
    -Dfile.encoding=UTF8
    Added the following parameters via configtool to server instances:
    -Dhtmlb.useUTF8=X
    -Dfile.encoding=ISO-8859-1
    Set the following on <sid>adm account before starting portal (in combination with parameter settings above):
    setenv LC_ALL en_US.UTF-8
         locale
         LANG=en_US.UTF-8
         LC_CTYPE="en_US.UTF-8"
         LC_NUMERIC="en_US.UTF-8"
         LC_TIME="en_US.UTF-8"
         LC_COLLATE="en_US.UTF-8"
         LC_MONETARY="en_US.UTF-8"
         LC_MESSAGES="en_US.UTF-8"
         LC_ALL=en_US.UTF-8
    Unfortunately, these have not corrected the issue.
    Have I misinterpreted the recommendations in the Note or is there something else I needed to do?
    Thanks,
    Fred Bennett
    [email protected]

  • Implementation of double byte character in oracle 10.2.0.5 database

    Hi experts,
    There is an Oracle 10.2.0.5 Standard Edition database running on the Windows 2003 platform. The application team needs to add a column of a double-byte datatype (Chinese characters) to an existing table. The database character set is WE8ISO8859P1 and the national character set is AL16UTF16. After going through the Oracle documentation, our DBA team found that it is possible to insert Chinese characters into the table with the current character set.
    The client side has the following details:
    APIs used to write data--SQL Developer
    APIs used to read data--SQL Developer
    Client OS--Windows 2003
    The value of the NLS_LANG environment variable in the client environment is American; the database character set is WE8ISO8859P1 and the national character set is AL16UTF16.
    We have got a problem from the development team saying that they are not able to insert Chinese characters into an nchar or nvarchar2 column. The Chinese characters being inserted into the table are getting interpreted as '?'...
    What could be the workaround for this?
    Thanks in advance...

    For SQL Developer, see my advice in Re: Oracle 10g - Chinese Charecter issue and Re: insert unicode data into nvarchar2 column in a non-unicode DB
    -- Sergiusz

  • How best to send double byte characters as http params

    Hi all
    I have a web app that accepts text that can be in many languages.
    I build up an HTTP string and send the text as parameters to another web server. Hence, whatever text I receive, I need to be able to represent on an HTTP query string.
    The parameters are sent as URL-encoded UTF8. They are decoded by the second web server back into Unicode and saved to the DB.
    Occasionally I find a character that I am unable to convert to a UTF8 string and send as a parameter (usually an SJIS character). When this occurs, the character is encoded as '3F' - a question mark.
    What is the best way to send double-byte characters as HTTP parameters so they are always sent faithfully and not as question marks? Is my only option to use UTF16?
    example code
    <code>
    public class UTF8Test {
        public static void main(String args[]) {
            encodeString("\u7740", "%E7%9D%80"); // encoded UTF8 string contains question mark (3F)
            encodeString("\u65E5", "%E6%97%A5"); // this other japanese character converts fine
        }
        private static void encodeString(String unicode, String expectedResult) {
            try {
                String utf8 = new String(unicode.getBytes("UTF8"));
                String utf16 = new String(unicode.getBytes("UTF16"));
                String encoded = java.net.URLEncoder.encode(utf8);
                String encoded2 = java.net.URLEncoder.encode(utf16);
                System.out.println();
                System.out.println("encoded string is:" + encoded);
                System.out.println("expected encoding result was:" + expectedResult);
                System.out.println();
                System.out.println("encoded string16 is:" + encoded2);
                System.out.println();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    </code>
    Any help would be greatly appreciated. I have been struggling with this for quite some time and I can hear the deadline approaching all too quickly
    Thanks
    Matt

    Hi Matt,
    one last visit to the round trip issue:
    in the Sun example, note that UTF8 encoding is used in the method that produces the byte array as well as in the method that creates the second string. This is equivalent to calling:
    String roundTrip = new String(original.getBytes("UTF8"), "UTF8"); // Sun example
    Whereas, in your code you were calling:
    String utf8 = new String(unicode.getBytes("UTF8")); // Matt's code
    The difference is crucial. When you call the String constructor without a second (encoding) argument, the default encoding (usually Cp1252) is used. Therefore your code is equivalent to:
    String utf8 = new String(unicode.getBytes("UTF8"), "Cp1252"); // Matt's code
    i.e. you are encoding with one transformation format and decoding back with a different transformation format, so in general you won't get your original string back.
    Regarding safely sending multi-byte characters across the Internet, I'm not completely sure what the situation is because I don't do it myself. (When our program is run as an applet, the only interaction it has with the web server is to download various files). I've seen lots of people on this forum describing problems sending multi-byte characters and I can't tell whether the problem is with the software or with the programming. Two possible methods come to mind (of course you need to find out what your third party software is doing):
    1) use the DataOutput/InputStreams writeUTF/readUTF methods
    2) use the InputStreamReader/OutputStreamWriter pair with UTF8 encoding
    See this thread:
    http://forum.java.sun.com/thread.jsp?forum=16&thread=168630
    You should stick to UTF8. It is designed so that the bytes generated by encoding non-ASCII characters can be safely transmitted across the Internet. Bytes generated by UTF16 can be just about anything.
    Here's what I suggest:
    I am running a version of the Sun tutorial that has a program running on a server to which I can send a string and the program sends back the string reversed.
    http://java.sun.com/docs/books/tutorial/networking/urls/readingWriting.html
    I haven't tried sending multi-byte characters but I will do so and test whether there are any transmission problems. (Assuming that the Sun cgi program itself correctly handles characters).
    More later,
    regards,
    Joe
    P.S.
    I thought one of the reasons for the existence of UTF8 was to
    represent things like multi-byte characters in an ASCII format?
    Not exactly. UTF8 encodes ASCII characters into single bytes with the same byte values as ASCII encoding. This means that a document consisting entirely of ASCII characters is the same whether it was encoded as UTF8 or ASCII, and can consequently be read in any ASCII document reader (e.g. Notepad).
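    Joe's point about the default platform encoding can be reproduced outside Java. A small Python sketch of the same broken round trip (Cp1252 standing in for the platform default; the byte values come from Matt's two test characters):

```python
# UTF-8 bytes of the two characters from the test program
assert "\u7740".encode("utf-8") == b"\xe7\x9d\x80"  # 着
assert "\u65e5".encode("utf-8") == b"\xe6\x97\xa5"  # 日

# Decoding UTF-8 bytes as Cp1252 -- what new String(bytes) does on a
# Cp1252 platform -- mangles the data:
print(b"\xe6\x97\xa5".decode("cp1252"))  # mojibake, but every byte maps
try:
    b"\xe7\x9d\x80".decode("cp1252")     # 0x9D is undefined in Cp1252
except UnicodeDecodeError:
    print("0x9D has no Cp1252 mapping, so the character degrades to '?'")
```

    This is why \u65E5 appeared to "convert fine" while \u7740 became a question mark: all three of its UTF-8 bytes happen to exist in Cp1252, whereas 0x9D does not.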

  • Double byte characters in dataset

    Hi Gurus,
    I encounter an issue when writing data to a dataset with a fixed record length.
    Company Code: US00
    Document Number: 1234567890
    Fiscal Year: 2014
    Line Item: 001 Short Text ( length 10 char): AF 16
    Line Item: 002 Short Text ( length 10 char): AF 16 
    Comment:X
    In Unix, it becomes like this:-
    US0012345678902014001AF16      X
    US0012345678902014001AF 16       X
    The X in the second line item is not in the correct fixed position, which causes the file to be rejected by the receiving system.
    I have tried to calculate the length of "AF 16" using the following:
    1) strlen( bseg-sgtxt ) = 7
    2) numofchar( bseg-sgtxt ) = 6
    I would like to know how to correct the position of the fields following SGTXT when double-byte characters are found. I have tried to pad trailing spaces but it does not work.
    Expected result: the X of the two line items is located at the same fixed position:-
    US0012345678902014001AF16      X
    US0012345678902014001AF 16    X
    Thank you!

    /** returns true if the String s contains any "double-byte" characters */
    public boolean containsDoubleByte(String s) {
      for (int i = 0; i < s.length(); i++) {
        if (isDoubleByte(s.charAt(i))) {
          return true;
        }
      }
      return false;
    }

    /** returns true if the char c is a double-byte character */
    public boolean isDoubleByte(char c) {
      if (c >= '\u0100' && c <= '\uffff') return true;
      return false;
      // simpler: return c > '\u00ff';
    }

    /** returns true if the String s contains any Japanese characters */
    public boolean containsJapanese(String s) {
      for (int i = 0; i < s.length(); i++) {
        if (isJapanese(s.charAt(i))) {
          return true;
        }
      }
      return false;
    }

    /** returns true if the char c is a Japanese character. */
    public boolean isJapanese(char c) {
      // katakana:
      if (c >= '\u30a0' && c <= '\u30ff') return true;
      // hiragana
      if (c >= '\u3040' && c <= '\u309f') return true;
      // CJK Unified Ideographs
      if (c >= '\u4e00' && c <= '\u9fff') return true;
      // CJK symbols & punctuation
      if (c >= '\u3000' && c <= '\u303f') return true;
      // KangXi (kanji)
      if (c >= '\u2f00' && c <= '\u2fdf') return true;
      // KanBun
      if (c >= '\u3190' && c <= '\u319f') return true;
      // CJK Unified Ideographs Extension A
      if (c >= '\u3400' && c <= '\u4db5') return true;
      // CJK Compatibility Forms
      if (c >= '\ufe30' && c <= '\ufe4f') return true;
      // CJK Compatibility
      if (c >= '\u3300' && c <= '\u33ff') return true;
      // CJK Radicals Supplement
      if (c >= '\u2e80' && c <= '\u2eff') return true;
      // other character..
      return false;
    }
    /* NB CJK Unified Ideographs Extension B not supported with 16-bit unicode. Source: http://www.alanwood.net/unicode */
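    The helpers above detect double-byte characters; the fix for the fixed-width file itself is to pad each field to a fixed byte width rather than a fixed character width. A Python sketch of the idea (cp932 is an assumption about the receiver's code page; in ABAP the same calculation would use the byte length of the converted field):

```python
def pad_to_byte_width(text, width, encoding="cp932"):
    """Append trailing spaces until the encoded field occupies `width` bytes."""
    byte_len = len(text.encode(encoding))
    if byte_len > width:
        raise ValueError("text does not fit in the field")
    return text + " " * (width - byte_len)

# Single-byte text: 5 chars -> 5 bytes, padded with 5 spaces.
print(repr(pad_to_byte_width("AF 16", 10)))  # 'AF 16     '

# Full-width letters cost 2 bytes each, so fewer pad spaces are needed
# to land the next field ("X") at the same byte offset.
padded = pad_to_byte_width("\uff21\uff26 16", 10)  # 'ＡＦ 16'
assert len(padded.encode("cp932")) == 10
```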

  • Double byte characters turn into squares at PDF export use Unicode font

    Hi all,
    We are developing an international Windows application with Visual Studio 2008, .NET 2.0 and Crystal Reports XI Release 2 SP5. We use the font Arial Unicode MS in the rpt file. We translate the fixed texts with the Crystal Translator (3.2.2.299).
    On the distributed installation of our software, the printout and preview display the double-byte characters properly (Japanese, Korean, Chinese), but when we export the report as PDF, the characters are displayed as squares. This happens even when the font Arial Unicode MS is installed on the distributed installation on Windows XP Professional.
    I searched for hours for a solution in the knowledge base articles and in the Crystal Reports forum. I found one thread which describes exactly our problem:
    Crystal XI R2 exporting issues with double-byte character sets
    But we already introduced the solution to use a Unicode font, and I also linked the font Lucida Sans Unicode to Arial Unicode MS, but we still face the problem.
    Due to our release on Thursday we are under a lot of pressure to solve this problem asap.
    We appreciate your help very much!
    Ronny

    Your searches should have also come up with the fact that CR XI R2 is not supported in .NET 2008. Only CR 2008 (12.x) and Crystal Reports Basic for Visual Studio 2008 (10.5) are supported in .NET 2008. I realize this is not good news given the release time line, but support or non support of CR XI R2 in .NET 2008 is well documented - from [Supported Platforms|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/7081b21c-911e-2b10-678e-fe062159b453] to [KBases|http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes_boj/sdn_oss_boj_dev/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/scn_bosap/notes.do], to [Wiki|https://wiki.sdn.sap.com/wiki/display/BOBJ/WhichCrystalReportsassemblyversionsaresupportedinwhichversionsofVisualStudio+.NET].
    Best I can suggest is to try SP6:
    https://smpdl.sap-ag.de/~sapidp/012002523100015859952009E/crxir2win_sp6.exe
    MSM:
    https://smpdl.sap-ag.de/~sapidp/012002523100000634042010E/crxir2sp6_net_mm.zip
    MSI:
    https://smpdl.sap-ag.de/~sapidp/012002523100000633302010E/crxir2sp6_net_si.zip
    Failing that, you will have to move to a supported environment...
    Ludek
    Follow us on Twitter http://twitter.com/SAPCRNetSup
    Edited by: Ludek Uher on Jul 20, 2010 7:54 AM

  • Function module to control printing of double byte chinese characters

    Hi,
    My SAPscript printing of the GR slip often overflows to the next line whenever a line item's article description text is in Chinese.
    The system login is "EN", but we maintain article descriptions in English, Chinese, and a mixture of both.
    This results in different field lengths when printing.
    Is there a way to control it and ensure that it will not overflow to the next line?
    How does standard SAP deal with this sort of printing, with single & double-byte chars?
    Please assist.

    This is the code that solved our issue.
    Besides we set the InfoObject to have NO master data attributes: it was just used as text attribute in an DSO, not as dimensional attribute in a cube. This solved the issue of the SID value generation error.
    FUNCTION z_bw_replace_sp_char.
    *"  Local Interface:
    *"  IMPORTING
    *"     REFERENCE(I_STRING)
    *"  EXPORTING
    *"     REFERENCE(O_STRING)
      FIELD-SYMBOLS: <ic> TYPE x.
    * Strings with other un-allowed chars -
    DATA:
    ch1(12) TYPE x VALUE
    '410000204200002043000020',
    ch2(12) TYPE x VALUE
    '610000206200002063000020'.
      DATA:
      x8(4) TYPE x,
      x0(2) TYPE x VALUE '0020',
      x0200(2) TYPE x VALUE '0200'.
      DATA: v_len TYPE sy-index,
            v_cnt TYPE sy-index.
      o_string = i_string.
      v_len = STRLEN( o_string ).
    * # sign
      IF v_len = 1.
        IF o_string(1) = '#'.
          o_string(1) = ' '.
        ENDIF.
      ENDIF.
    * ! sign
      IF o_string(1) = '!'.
        o_string(1) = ' '.
      ENDIF.
      DO v_len TIMES.
        ASSIGN o_string+v_cnt(1) TO <ic> CASTING TYPE x.
        IF <ic> <> x0200. "$$$$$$$$$$$$$$$$$$$$$$
          IF <ic> >= '0000' AND
             <ic> <= '1F00'.  " Remove 0000---001F
            o_string+v_cnt(1) = ' '.
         ELSE.
           CONCATENATE <ic> x0 INTO x8 IN BYTE MODE.
           unassign <ic>.
           SEARCH ch1 FOR x8 IN BYTE MODE.
           IF sy-subrc <> 0.
             SEARCH ch2 FOR x8 IN BYTE MODE.
             IF sy-subrc = 0.
               o_string+v_cnt(1) = ' '.
             ENDIF.
           ELSE.
             o_string+v_cnt(1) = ' '.
           ENDIF.
          ENDIF.
        ENDIF. "$$$$$$$$$$$$$$$$$$$$$$
        v_cnt = v_cnt + 1.
      ENDDO.
    ENDFUNCTION.

  • Can Double Byte Codes be used in SharePoint 2013?

    Hello,
    I am looking to create a SharePoint site for Japanese users. I understand that double-byte codes will need to be used when entering information due to the Japanese language. I have been informed that this can sometimes cause an issue with SharePoint.
    I am looking to test this; however, I wanted to understand whether this can be done, and whether there are any considerations or limitations when using DBCs on SharePoint.
    Any assistance would be greatly appreciated.
    Thanks
    KP

    Hi KP,
    Yes, SharePoint 2013 supports Double Byte Unicode characters:
    If you have non-standard ASCII characters, such as high-ASCII or double-byte Unicode characters, in the SharePoint URL, each of those characters is URL-encoded into two or more ASCII characters when they are passed to the Web browser. Thus, a URL with many high-ASCII characters or double-byte Unicode characters can become longer than the original un-encoded URL. The list below gives examples of the multiplication factors:
    High-ASCII characters — for example, (!, ", #, $, %, &, [Space]): multiplication factor = 3
    Double-byte Unicode characters — for example, Japanese, Chinese, Korean, Hindi: multiplication factor = 9
    For example, when you translate the names of the sites, library, folder, and file in the URL path http://www.contoso.com/sites/marketing/documents/Shared%20Documents/Promotion/Some%20File.xlsx into Japanese, the resulting encoded URL path will become something like the following:
    http://www.contoso.com/sites/%E3%83%9E%E3%83%BC%E3%82%B1%E3%83%86%E3%82%A3%E3%83%B3%E3%82%B0/%E6%96%87%E6%9B%B8/DocLib/%E3%83%97%E3%83%AD%E3%83%A2%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3/%E3%83%95%E3%82%A1%E3%82%A4%E3%83%AB.xlsx
    This path is 224 characters, whereas the original URL path is only 94 characters.
    In most cases, one UTF-16 character equals one UTF-16 code unit. However, characters that use Unicode code points greater than U+10000 will equal two UTF-16 code units. These characters include, but are not limited to, Japanese or Chinese surrogate-pair characters. If your paths include these characters, the URL length will exceed the URL length limitation with fewer than 256 or 260 characters.
    Reference:
    https://technet.microsoft.com/en-us/library/ff919564%28v=office.14%29.aspx?f=255&MSPPError=-2147217396
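    The factor of 9 quoted above is easy to verify: each Japanese character is 3 bytes in UTF-8, and each byte becomes a 3-character %XX escape. A quick Python check:

```python
from urllib.parse import quote

name = "\u30de\u30fc\u30b1\u30c6\u30a3\u30f3\u30b0"  # マーケティング, 7 characters
encoded = quote(name)
print(encoded)  # starts with %E3%83%9E ...
assert len(encoded) == len(name) * 9  # 63 encoded characters
```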
    Best Regards,
    Eric
    TechNet Community Support
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact
    [email protected].

  • Messages in Double Byte Language

    I created a message in English and then, by logging into the required double-byte languages, translated the text into each double-byte language. The message is a warning message. The user wanted a pop-up to be displayed for the warning message, so in the options settings I activated "Dialog Box at Warning Message". "Activate Multibyte Functions to Support" is also enabled. Now the issue is that when the message is displayed, the pop-up shows the correct text for all languages except the double-byte languages; for double-byte it displays "???????????". If I disable "Activate Multibyte Functions to Support", garbage values are displayed for the double-byte languages. Can someone tell me how to solve this?

    Hi!
    This does not sound like a programming issue. First check / update the SAP GUI to the newest version; if this does not help, ask OSS. I guess in general your double-byte characters are displayed correctly; otherwise I would check the language installation of SAP, and maybe also the font installation on the GUI computer.
    Regards,
    Christian

  • Double byte language i.e Japanese or Chinese text in non Unicode System

    Hi,
    I have translated text into Chinese and Japanese in a Unicode system and want to move it into a non-Unicode system. Would Chinese/Japanese characters display correctly in a non-Unicode system when moved from a Unicode system? I am doing the translation in an ECC 6.0 or SAP 4.7 Unicode system and moving to an SAP 4.7 non-Unicode system.
    Thanks
    Balakrishna

    Hi Balakrishna,
    in general the transport between Unicode and Non-Unicode systems is supported.
    However there are restrictions, which are outlined in SAP note 638357.
    In your case it is a prerequisite that the objects to be transported are language dependent (text lang. flag is set on the language key - see SAP note 480671) and the languages are properly setup in the target systems.
    For double-byte data there is a specific issue when transferring data from Unicode to non-Unicode:
    In a non-Unicode system, one double-byte character needs two bytes, so in e.g. a 10-char field only 5 double-byte chars fit. In a Unicode system, you can insert 10 double-byte chars in a 10-char field. Hence there is a risk of truncating characters in Unicode --> non-Unicode communication.
    Please also have a look at SAP notes 1322715 and 745030.
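    Nils's point can be illustrated by comparing character count with byte count (cp932 here stands in for a non-Unicode Japanese code page):

```python
text = "\u65e5\u672c\u8a9e"  # 日本語, 3 characters

# A Unicode system counts field length in characters ...
print(len(text))  # 3

# ... but a non-Unicode Japanese system needs 2 bytes per character,
# so the same text requires a field twice as wide.
print(len(text.encode("cp932")))  # 6
```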
    Best regards,
    Nils

  • Text strings from VISA read don't match identical looking text constants - could it be double byte characters?

    Our RS232-enabled instrument sends ASCII strings to COM 1 and I read strings in. For example I get the string "TPM", or at least it looks like "TPM" if I display it. However, if I send that to the selector input of a Case structure, and create a case for "TPM", whether the two appear to match varies. Sometimes it matches, and measuring its length returns 3. Sometimes it measures 7 or 11 or 12 characters long, and it doesn't match. I can reproduce a match or a mismatch by my choice of the command that went to the instrument prior to the command that causes the TPM response, but have made no sense of this clue. I have run it through Trim Whitespace, with Both Ends (the default) explicitly selected. I have also turned the string into a byte array, autoindexed a For loop on that, and only passed the bytes if they don't equal 32, or if they don't equal 0, thinking spaces or nulls might be in there, but no better.
    The Trim Whitespace function's Help remarks that it does not remove "double byte characters". But I can't find anything else about "double byte characters". Could this be the problem? Are there functions that can tell whether there are "double byte characters", or convert into or out of them? By "double byte characters", do they just mean Unicode?

    Cebailey,
    Double-byte characters are generally used for characters specific to languages other than English. If you display your message in '\' Codes Display in a string indicator, do you see any other characters? You could also use Hex Display to count the number of bytes in the message. You are probably getting messages with non-printable characters that might need to be trimmed before use in your application. If you want more information on the '\' Codes Display, there's a detailed description in the LabVIEW Help: Backslash ('\') Codes Display.
    Caleb W, National Instruments
