FM to convert double byte chars

Hi All,
Does anyone know which function modules convert double-byte characters? Thanks.

Your requirement is not clear.
You want to convert what into what?
What's the purpose of this requirement? Kindly give more details.
Regards
Karthik D

Similar Messages

  • Double byte chars in URI

    Is it possible to send double-byte characters through a URI? Specifically, can a servlet send them to a client browser, and the browser then forward them back to a server? What would have to happen on the server and client side for this to work?
    I guess I have a basic lack of understanding of how encoding works over HTTP, both in the client browser and when sending a response from a servlet container. Can anyone tell me how this process works? What are the default encodings, and what is configurable on the client or server side? Thanks.

    I believe the rule is that you first have to encode the string into UTF-8 bytes, then apply the URL-encoding rules to that array of bytes. At least, that's how I understand the most recent rules for HTTP. But it's likely that most browsers don't follow this rule properly, so be prepared for a rough ride if you try this.
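
    As a rough Java sketch of that rule (the class name and the sample string are just for illustration):

    import java.net.URLDecoder;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class UriEncodingDemo {
        public static void main(String[] args) throws Exception {
            String original = "\u65e5\u672c\u8a9e";  // three double-byte (CJK) characters

            // First the string becomes UTF-8 bytes, then those bytes are percent-encoded.
            String encoded = URLEncoder.encode(original, StandardCharsets.UTF_8.name());
            System.out.println(encoded);             // %E6%97%A5%E6%9C%AC%E8%AA%9E

            // The receiving side must decode with the same charset.
            String decoded = URLDecoder.decode(encoded, StandardCharsets.UTF_8.name());
            System.out.println(decoded.equals(original));  // true
        }
    }

    Whether a given browser applies the same rule to characters it sends on its own is, as noted above, less predictable.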

  • DOUBLE BYTE chars

    Hi All,
    While uploading some multi-lingual text from an application server file, is there any way to treat DOUBLE BYTE characters as DOUBLE BYTE?
    Currently, they are treated as SINGLE BYTE characters.
    With Thanks and Regards,
    R.Nagarajan.
    We can -

    No response.

  • Support for Double Byte Chars in Table Names

    Can you please tell me if double byte characters are supported in table names? Thanks.

    Assuming you are using the same double byte character set as your db character set, then the answer is yes. Check out this table in the 9i Database Globalization Support Guide, for more info.
    http://technet.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96529/ch2.htm#103678
    Schema objects refer to table/index/view names etc.

  • Form English Char ( Single Byte  ) TO Double Byte ( Japanese Char )

    Hello everyone,
    I need help!
    I am new to Java. I have an assignment where I need to check a string and, if it contains any non-Japanese characters (a-z, A-Z, 0-9), replace them with their double-byte (Japanese) equivalents.
    I am using Java 1.2.
    Please guide me.
    Thanks and regards,
    Maruti Chavan

    Hello,
    As you asked for the detailed requirement, I am pasting C code below where 'a' is passed as the input character; after processing it gives me the double-byte Japanese "a". Using this I am able to convert alphanumeric characters from single byte to double byte (Japanese).
    I want to do the same in Java, so please guide me.
    #include <stdio.h>
    #include <string.h>

    int main( int argc, char *argv[] )
    {
        char c[2];
        char d[3];
        strcpy( c, "a" );        /* 'a' is the input character */
        d[0] = 0xa3;             /* EUC-JP lead byte for full-width alphanumerics */
        d[1] = c[0] + 0x80;      /* shift the ASCII code into the double-byte range */
        d[2] = '\0';             /* terminate the converted string */
        printf( ":%s:\n", c );   /* original single-byte char */
        printf( ":%s:\n", d );   /* converted double-byte char */
        return 0;
    }
    Please help.
    Thanks and regards,
    Maruti Chavan
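
    For what it's worth, here is a small Java sketch of the same idea; it assumes the EUC-JP byte arithmetic in the C code is meant to produce the full-width forms, so it simply shifts ASCII code points into the U+FF01-U+FF5E (full-width) range. The class and method names are made up for the example.

    public class ToFullWidth {
        // Map ASCII letters, digits and punctuation to their full-width (zenkaku) equivalents.
        static String toFullWidth(String in) {
            StringBuilder out = new StringBuilder(in.length());
            for (char c : in.toCharArray()) {
                if (c >= '!' && c <= '~') {
                    // Full-width forms live at U+FF01..U+FF5E, a fixed offset of 0xFEE0 from ASCII.
                    out.append((char) (c + 0xFEE0));
                } else {
                    out.append(c);
                }
            }
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(toFullWidth("a1Z"));  // prints three full-width characters
        }
    }

    Note that the enhanced for loop and StringBuilder need Java 5 or later; on Java 1.2 the same logic works with an ordinary for loop and a StringBuffer.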

  • Regardibg double byte data type in Xi(japanese character)

    Hi, I am passing Japanese characters (double byte) as input data. Can you please tell me how these should be defined - as a string, a constant, etc.? Please also give some general information about double-byte data types.
    regards,
    S.K.Karthikeyan.

    Hi Stefan,
    I got your point; it's really helpful for me.
    I have one more doubt:
    Is there an equivalent type for double-byte characters in XI?
    regards,
    S.K.Karthikeyan.

  • Function module to control printing of double byte chinese characters

    Hi,
    My SAPscript printing of the GR slip often overflows to the next line whenever a line item's article description text is in CHINESE.
    We log in to the system in "EN", but we maintain article descriptions in ENGLISH, CHINESE, and a mixture of both.
    This results in different field lengths when printing.
    Is there a way to control it and ensure that it will not overflow to the next line?
    How does standard SAP deal with this sort of printing, with single and double-byte characters?
    Please assist.

    This is the code that solved our issue.
    Besides, we set the InfoObject to have NO master data attributes: it was just used as a text attribute in a DSO, not as a dimensional attribute in a cube. This solved the issue of the SID value generation error.
    FUNCTION z_bw_replace_sp_char.
    ""Local Interface:
    *"  IMPORTING
    *"     REFERENCE(I_STRING)
    *"  EXPORTING
    *"     REFERENCE(O_STRING)
      FIELD-SYMBOLS: <ic> TYPE x.
    * Strings with other un-allowed characters
      DATA:
        ch1(12) TYPE x VALUE
          '410000204200002043000020',
        ch2(12) TYPE x VALUE
          '610000206200002063000020'.
      DATA:
      x8(4) TYPE x,
      x0(2) TYPE x VALUE '0020',
      x0200(2) TYPE x VALUE '0200'.
      DATA: v_len TYPE sy-index,
            v_cnt TYPE sy-index.
      o_string = i_string.
      v_len = STRLEN( o_string ).
    * '#' sign
      IF v_len = 1.
        IF o_string(1) = '#'.
          o_string(1) = ' '.
        ENDIF.
      ENDIF.
    * '!' sign
      IF o_string(1) = '!'.
        o_string(1) = ' '.
      ENDIF.
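    * Scan the text character by character: blank out control characters
    * (0000-001F) and any character whose byte pattern appears in ch1/ch2.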
      DO v_len TIMES.
        ASSIGN o_string+v_cnt(1) TO <ic> CASTING TYPE x.
        IF <ic> <> x0200. "$$$$$$$$$$$$$$$$$$$$$$
          IF <ic> >= '0000' AND
             <ic> <= '1F00'.  " Remove 0000---001F
            o_string+v_cnt(1) = ' '.
          ELSE.
            CONCATENATE <ic> x0 INTO x8 IN BYTE MODE.
            UNASSIGN <ic>.
            SEARCH ch1 FOR x8 IN BYTE MODE.
            IF sy-subrc <> 0.
              SEARCH ch2 FOR x8 IN BYTE MODE.
              IF sy-subrc = 0.
                o_string+v_cnt(1) = ' '.
              ENDIF.
            ELSE.
              o_string+v_cnt(1) = ' '.
            ENDIF.
          ENDIF.
        ENDIF. "$$$$$$$$$$$$$$$$$$$$$$
        v_cnt = v_cnt + 1.
      ENDDO.
    ENDFUNCTION.

  • Double byte language i.e Japanese or Chinese text in non Unicode System

    Hi,
    I have translated text into Chinese and Japanese in a Unicode system and want to move it into a non-Unicode system. Will the Chinese/Japanese characters display correctly in the non-Unicode system when moved from the Unicode system? I am doing the translation in an ECC 6.0 or SAP 4.7 Unicode system and moving it to an SAP 4.7 non-Unicode system.
    Thanks
    Balakrishna

    Hi Balakrishna,
    in general the transport between Unicode and Non-Unicode systems is supported.
    However there are restrictions, which are outlined in SAP note 638357.
    In your case it is a prerequisite that the objects to be transported are language dependent (text lang. flag is set on the language key - see SAP note 480671) and the languages are properly setup in the target systems.
    For double byte data there is a specific issue when transferring data from Unicode to Non-Unicode:
    In a non-Unicode system, one double-byte character needs two bytes, so a 10-character field can hold only 5 double-byte characters. In a Unicode system, you can insert 10 double-byte characters into a 10-character field. Hence there is a risk of truncating characters in the case of Unicode --> non-Unicode communication.
    Please also have a look at SAP notes 1322715 and 745030.
    Best regards,
    Nils
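
    To see the length difference Nils describes, here is a small Java illustration (Shift_JIS is used only as an example of a legacy double-byte code page; the string is arbitrary):

    import java.nio.charset.Charset;

    public class FieldLengthDemo {
        public static void main(String[] args) {
            String text = "\u65e5\u672c\u8a9e\u30c6\u30b9\u30c8";  // 6 Japanese characters

            // A Unicode system counts the field length in characters: 6.
            System.out.println("Characters: " + text.length());

            // A non-Unicode double-byte code page needs 2 bytes per character: 12.
            System.out.println("Bytes in Shift_JIS: "
                    + text.getBytes(Charset.forName("Shift_JIS")).length);
        }
    }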

  • How do I convert a double-byte encoded file to single-byte ASCII?

    Hello,
    I am working with XML files (apparently coded in UTF-8) which are encoded in double-byte characters.
    The problem is the characters for end of line: 00 0D 00 0A.
    This double-byte end of line is causing a problem with a legacy conversion tool (which expects 0D 0A). The file itself contains no accented/international characters, so in principle converting to single-byte should not cause any problems.
    I have tried to convert this file with tools like native2ascii and the conversion tools that are part of Notepad++, but without any luck - the "00 0D 00 0A" sequences are still present in the output.
    Can anyone point me to a tool or some code that can convert this file to single-byte?
    Thank you.

    Amiens wrote:
    native2ascii.exe -encoding UTF-16 -reverse INPUT.xml OUTPUT.xml
    gives 00 00 00 0D 00 00 00 0A
    so clearly that is not the required output.
    What you've got there is UTF-16 encoded text that has been converted to UTF-16 again. Get rid of the "-reverse" option and you should see the result you expect.
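
    If native2ascii still does not cooperate, a few lines of Java can do the re-encoding directly. This is only a sketch: the file names are placeholders, and the input is assumed to be UTF-16 (big-endian by default when no BOM is present).

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.InputStreamReader;
    import java.io.OutputStreamWriter;

    public class Utf16ToAscii {
        public static void main(String[] args) throws Exception {
            try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(new FileInputStream("INPUT.xml"), "UTF-16"));
                 BufferedWriter out = new BufferedWriter(
                     new OutputStreamWriter(new FileOutputStream("OUTPUT.xml"), "US-ASCII"))) {
                int c;
                while ((c = in.read()) != -1) {
                    out.write(c);  // CR/LF come back out as the single bytes 0D 0A
                }
            }
        }
    }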

  • Convert Asian characters to EBCDIC double byte

    How can I convert Asian characters, like Chinese or Japanese, into an EBCDIC double-byte array in Java? Is any code available to do this?
    Thanks

    If I have an Asian character in a String, I need to convert it to a double-byte EBCDIC array. Can you please let me know how to convert it?
    That is somewhat like asking how you convert it to ASCII - neither is possible as stated.
    There are a number of character sets in the world that use the EBCDIC encoding for the lower bytes and support a multibyte format for another language.
    First you need to figure out what character set(s) you can use.
    Second, you see whether Java supports an encoding for that.
    If it does, then you use String.getBytes(String charsetName).
    If it doesn't, you will have to write your own mapping function.
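
    A hedged sketch of those last two steps, assuming the JRE ships IBM's mixed-byte EBCDIC charsets (for example Cp935 for Simplified Chinese or Cp930/Cp939 for Japanese; they live in the optional extended-charsets provider, so availability is checked first):

    import java.nio.charset.Charset;
    import java.util.Arrays;

    public class EbcdicDbcsDemo {
        public static void main(String[] args) throws Exception {
            String text = "\u4e2d\u6587";   // two Chinese characters
            String csName = "Cp935";        // IBM EBCDIC with Simplified Chinese DBCS (assumed available)

            if (!Charset.isSupported(csName)) {
                System.out.println(csName + " is not available in this JRE");
                return;
            }

            // Shift-out/shift-in control bytes frame the double-byte portion of the output.
            byte[] ebcdic = text.getBytes(csName);
            System.out.println(Arrays.toString(ebcdic));

            // Round-trip back to a Java String to verify the mapping.
            System.out.println(new String(ebcdic, csName).equals(text));
        }
    }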

  • Converting a byte[] into a char[]

    Hi All,
    what's the best way to convert a byte[] into a char[]?
    Any advice appreciated. Thanks in advance.

    You will need an encoding scheme...
    A byte is binary data; a char is interpreted from bytes by the system.
    The easiest way without extra coding:
    new String(byte[], encoding).toCharArray()
    If you don't specify the encoding, the system default is used.
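
    A minimal example of that one-liner, assuming the bytes hold UTF-8 text:

    import java.nio.charset.StandardCharsets;

    public class BytesToChars {
        public static void main(String[] args) {
            byte[] data = "double-byte \u6f22\u5b57".getBytes(StandardCharsets.UTF_8);

            // Decode the bytes with a known charset, then split the result into chars.
            char[] chars = new String(data, StandardCharsets.UTF_8).toCharArray();
            System.out.println(chars.length);  // 14 chars, although the byte array is 18 bytes long
        }
    }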

  • Convert/Represent 1 byte char in 2 bytes

    Hello All,
    I would like to convert/represent a 1-byte char as 2 bytes.
    For example, "Test ��" is a 9-byte string. If I use getBytes() the byte array length is 7 bytes, but if I use getBytes("UTF8") I get 9 bytes - and yet it does not print "��".
    Now I want to represent them in 2 bytes each. How do I do that?
    Regards
    G S Sundaram

    getBytes("UTF16-BE") or getBytes("UTF16-LE"). What is �� supposed to be here?

  • ASCII representations of double-byte characters

    My file contains ASCII representations of double-byte CJK characters (output of native2ascii). How do I restore them back to the original native characters?
    I mean, when I load the file with FileInputStream, what I get are all strings like \uabcd. How do I get the characters represented by these strings?

    My file contains ASCII representations of double-byte CJK characters (output of native2ascii). How do I restore them back to the original native characters?
    I am no expert in Unicode, so I don't know if this is correct, but I assume that if a String starts with "\u" then there will be 4 more characters that are a hexadecimal representation of the char value. If that's right, then you should be able to parse out the "\uxxxx" and convert it to a char by parsing the hex. For example:
    // the variable unicode is a String like \uabcd
    String hex = unicode.substring(2);
    char result = (char) (Integer.parseInt(hex, 16));
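
    Extending that snippet to a whole string, here is a small sketch that undoes the native2ascii escaping (the class and method names are made up for the example):

    public class UnicodeUnescape {
        // Replace every backslash-u-XXXX escape in the input with the character it encodes.
        static String unescape(String in) {
            StringBuilder out = new StringBuilder(in.length());
            int i = 0;
            while (i < in.length()) {
                if (in.charAt(i) == '\\' && i + 5 < in.length() && in.charAt(i + 1) == 'u') {
                    out.append((char) Integer.parseInt(in.substring(i + 2, i + 6), 16));
                    i += 6;
                } else {
                    out.append(in.charAt(i));
                    i++;
                }
            }
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(unescape("\\u30c6\\u30b9\\u30c8"));  // prints three katakana characters
        }
    }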

  • Encoded double byte characters string conversion

    I have double-byte encoded strings stored in a properties file. A sample of such a string is below (I think it is Japanese):
    \u30fc\u30af\u306e\u30a2
    I am supposed to read it from the file, convert it to the actual string, and use it in the UI. I am not able to figure out how to do the conversion - the string contains the text as-is: a backslash character, the character 'u', and so on. How do I convert it to the correct text (either using ai::UnicodeString or otherwise)?
    Thanks.

    Where did this file come from? Some kind of Java or Ruby export? I don't think AI has anything in its SDK that would natively read that. You could just parse the string, looking for \u[4 characters]. I believe if you created a QChar and initialized it with the integer value of the four-character hex string, it would properly create the character.

  • Text strings from VISA read don't match identical looking text constants - could it be double byte characters"

    Our RS232-enabled instrument sends ASCII strings to COM 1 and I read strings in. For example I get the string "TPM", or at least it looks like "TPM" if I display it. However, if I send that to the selector input of a Case structure, and create a case for "TPM", whether the two appear to match varies. Sometimes it matches, and measuring its length returns 3. Sometimes it measures 7 or 11 or 12 characters long, and it doesn't match. I can reproduce a match or a mismatch by my choice of the command that went to the instrument prior to the command that causes the TPM response, but have made no sense of this clue. I have run it through Trim Whitespace, with Both Ends (the default) explicitly selected. I have also turned the string into a byte array, autoindexed a For loop on that, and only passed the bytes if they don't equal 32, or if they don't equal 0, thinking spaces or nulls might be in there, but no better.
    The Trim Whitespace function's Help remarks that it does not remove "double byte characters". But I can't find anything else about "double byte characters". Could this be the problem? Are there functions that can tell whether there are "double byte characters", or convert into or out of them? By "double byte characters", do they just mean Unicode?
    Solved!
    Go to Solution.

    Cebailey,
    Double-byte characters are generally used for characters specific to languages other than English. If you display your message in '\' Codes Display in a string indicator, do you see any other characters? You could also use Hex Display to count the number of bytes in the message. You are probably getting messages with non-printable characters that need to be trimmed before your application uses them. If you want more information on the '\' Codes Display, there's a detailed description in the LabVIEW Help; you can also find the same information on our website: Backslash ('\') Codes Display.
    Caleb W
    National Instruments
