Double byte chars in URI

Is it possible to send double-byte characters through a URI? Specifically, can a servlet send such a URI to a client browser, which then simply forwards it back to a server? What would have to happen on the server and client side for this to work?
I guess I have a basic lack of understanding of how encoding works over HTTP, between a client browser and a servlet container. Can anyone explain how this process works? What are the default encodings, and what is configurable on the client or server side? Thanks.

I believe the rule is that you first have to encode the string into UTF-8 bytes, then apply the URL-encoding rules to that array of bytes. At least, that's how I understand the most recent rules for HTTP. But it's likely that most browsers don't follow this rule properly, so be prepared for a rough ride if you try this.
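For what it's worth, here is a minimal Java sketch of that two-step rule (the sample text is just illustrative): the string is first turned into UTF-8 bytes, and those bytes are then percent-encoded, which is what URLEncoder does when you give it the charset name.
import java.net.URLDecoder;
import java.net.URLEncoder;

public class UriEncodingDemo {
    public static void main(String[] args) throws Exception {
        String original = "\u65E5\u672C\u8A9E";            // example double-byte text
        // Encode: UTF-8 bytes first, then percent-encoding of those bytes.
        String encoded = URLEncoder.encode(original, "UTF-8");
        System.out.println(encoded);                       // %E6%97%A5%E6%9C%AC%E8%AA%9E
        // Decode: reverse the two steps with the same charset.
        String decoded = URLDecoder.decode(encoded, "UTF-8");
        System.out.println(decoded.equals(original));      // true
    }
}
On the server side, the charset the container uses to decode the query string is configurable but container-specific, so check your servlet container's documentation to make sure it also uses UTF-8.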

Similar Messages

  • FM to convert double byte chars

    Hi All,
    Does anyone know which function modules convert double-byte characters? Thanks.

    Seems like Blue Sky is not clear
    You want to convert what into what?
    What's the purpose of this requirement? Kindly give more details.
    Regards
    Karthik D

  • DOUBLE BYTE chars

    Hi All,
    While uploading some multilingual text from an application
    server file, is there any way to treat DOUBLE BYTE characters
    as DOUBLE BYTE?
    Currently, they are treated as SINGLE BYTE characters.
    With Thanks and Regards,
    R.Nagarajan.

    No response.

  • Support for Double Byte Chars in Table Names

    Can you please tell me if double byte characters are supported in table names? Thanks.

    Assuming you are using the same double byte character set as your db character set, then the answer is yes. Check out this table in the 9i Database Globalization Support Guide, for more info.
    http://technet.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96529/ch2.htm#103678
    Schema objects refer to table/index/view names etc.

  • Regarding double byte data types in XI (Japanese characters)

    Hi, I am giving Japanese characters (double byte) as input data. Can you please tell me how to define them, e.g. as a string or a constant, etc.? And please give some general information about double-byte data types.
    regards,
    S.K.Karthikeyan.

    Hi Stefan,
    I got your point; it's really helpful for me.
    I have one more doubt:
    Is there an equivalent type for double-byte chars in XI?
    regards,
    S.K.Karthikeyan.

  • Function module to control printing of double byte chinese characters

    Hi,
    My SAPscript printing of the GR slip often overflows to the next line whenever a line item has article description text in CHINESE.
    The system login is "EN", but we maintain article descriptions in ENGLISH, CHINESE and a mixture of both.
    This results in different field lengths when printing.
    Is there a way to control it and ensure that it does not overflow to the next line?
    How does standard SAP deal with this sort of printing, with single- and double-byte chars?
    Please assist.

    This is the code that solved our issue.
    Besides that, we set the InfoObject to have NO master data attributes: it was just used as a text attribute in a DSO, not as a dimensional attribute in a cube. This solved the SID value generation error.
    FUNCTION z_bw_replace_sp_char.
    *"Local Interface:
    *"  IMPORTING
    *"     REFERENCE(I_STRING)
    *"  EXPORTING
    *"     REFERENCE(O_STRING)
      FIELD-SYMBOLS: <ic> TYPE x.
    * Strings with other un-allowed chars -
      DATA:
      ch1(12) TYPE x VALUE
        '410000204200002043000020',
      ch2(12) TYPE x VALUE
        '610000206200002063000020'.
      DATA:
      x8(4) TYPE x,
      x0(2) TYPE x VALUE '0020',
      x0200(2) TYPE x VALUE '0200'.
      DATA: v_len TYPE sy-index,
            v_cnt TYPE sy-index.
      o_string = i_string.
      v_len = STRLEN( o_string ).
    * # sign
      IF v_len = 1.
        IF o_string(1) = '#'.
          o_string(1) = ' '.
        ENDIF.
      ENDIF.
    * ! sign
      IF o_string(1) = '!'.
        o_string(1) = ' '.
      ENDIF.
      DO v_len TIMES.
        ASSIGN o_string+v_cnt(1) TO <ic> CASTING TYPE x.
        IF <ic> <> x0200. "$$$$$$$$$$$$$$$$$$$$$$
          IF <ic> >= '0000' AND
             <ic> <= '1F00'.  " Remove 0000---001F
            o_string+v_cnt(1) = ' '.
         ELSE.
           CONCATENATE <ic> x0 INTO x8 IN BYTE MODE.
           unassign <ic>.
           SEARCH ch1 FOR x8 IN BYTE MODE.
           IF sy-subrc <> 0.
             SEARCH ch2 FOR x8 IN BYTE MODE.
             IF sy-subrc = 0.
               o_string+v_cnt(1) = ' '.
             ENDIF.
           ELSE.
             o_string+v_cnt(1) = ' '.
           ENDIF.
          ENDIF.
        ENDIF. "$$$$$$$$$$$$$$$$$$$$$$
        v_cnt = v_cnt + 1.
      ENDDO.
    ENDFUNCTION.

  • Double byte language i.e Japanese or Chinese text in non Unicode System

    Hi,
    I have translated text into Chinese and Japanese in a Unicode system and want to move it into a non-Unicode system. Will the Chinese/Japanese characters display correctly in the non-Unicode system when moved from the Unicode system? I am doing the translation in an ECC 6.0 or SAP 4.7 Unicode system and moving it to an SAP 4.7 non-Unicode system.
    Thanks
    Balakrishna

    Hi Balakrishna,
    in general the transport between Unicode and Non-Unicode systems is supported.
    However there are restrictions, which are outlined in SAP note 638357.
    In your case it is a prerequisite that the objects to be transported are language dependent (text lang. flag is set on the language key - see SAP note 480671) and the languages are properly setup in the target systems.
    For double byte data there is a specific issue when transferring data from Unicode to Non-Unicode:
    In a non-Unicode system, one double-byte character needs two bytes, so e.g. a 10-char field fits only 5 double-byte characters. In a Unicode system, you can insert 10 double-byte characters into a 10-char field. Hence there is a risk of truncating characters in Unicode --> non-Unicode communication.
    Please also have a look at SAP notes 1322715 and 745030.
    Best regards,
    Nils
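    As a small illustration of that byte-count difference, here is a hedged Java sketch (the charset name Shift_JIS is only an example of a double-byte non-Unicode code page):
    public class LengthDemo {
      public static void main(String[] args) throws Exception {
        String text = "\u65E5\u672C\u8A9E\u30C6\u30B9\u30C8";   // 6 Japanese characters
        System.out.println(text.length());                      // 6 chars - fits a 10-char Unicode field
        System.out.println(text.getBytes("Shift_JIS").length);  // 12 bytes - overflows a 10-byte field
      }
    }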

  • From English Char (Single Byte) to Double Byte (Japanese Char)

    Hello EveryOne !!!!
    I need Help !!
    I am new to Java. I got an assignment where I need to check a String: if the String contains any non-Japanese character (a-z, A-Z, 0-9), it should be replaced with its double-byte (Japanese) equivalent...
    I am using Java 1.2.
    Please guide me ...
    thanks and regards
    Maruti Chavan

    hello ..
    As you all asked for the detailed requirement, here I am pasting C code where 'a' is passed as the input character. After processing, it gives me the double-byte Japanese 'a'. Using this I am able to convert alphanumerics from single byte to double byte (Japanese).
    I want the same program in Java, so please guide me.
    #include <stdio.h>
    #include <string.h>
    int main( int argc, char *argv[] )
    {
      char c[2];
      char d[3];
      strcpy( c, "a" );        /* 'a' is the input char */
      d[0] = 0xa3;             /* 0xA3: lead byte of the full-width alphanumeric row (EUC-JP) */
      d[1] = c[0] + 0x80;      /* trail byte: ASCII code + 0x80 */
      d[2] = '\0';             /* terminate the string so printf can print it */
      printf( ":%s:\n", c );   /* original single-byte char */
      printf( ":%s:\n", d );   /* converted double-byte char */
      return 0;
    }
    please ..
    thax and regards
    Maruti Chavan
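    For the Java side, here is a minimal sketch that works purely at the Unicode level rather than with EUC-JP bytes like the C code above; the class and method names are just illustrative. It relies on the fact that the full-width (zenkaku) forms of ASCII 0x21-0x7E live at U+FF01-U+FF5E, i.e. at a fixed offset of 0xFEE0:
    public class ToFullWidth {
      /** Replaces ASCII letters and digits with their full-width (double-byte) forms. */
      static String toFullWidth(String s) {
        StringBuffer sb = new StringBuffer(s.length());
        for (int i = 0; i < s.length(); i++) {
          char c = s.charAt(i);
          if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9')) {
            sb.append((char) (c + 0xFEE0));  // e.g. 'a' (U+0061) -> U+FF41
          } else {
            sb.append(c);
          }
        }
        return sb.toString();
      }
      public static void main(String[] args) {
        System.out.println(toFullWidth("abc 123"));  // prints the full-width form of "abc 123"
      }
    }
    (StringBuffer is used instead of StringBuilder so the sketch also compiles on old JDKs such as 1.2.)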

  • Invoke-WebRequest - Double byte characters issue in windows 8.1

    I am trying to write a PowerShell script to download a file from a web server, but it fails. The path contains double-byte characters.
    I can run it successfully on Windows Server 2012 and 2012 R2, but it fails on Windows 8 and 8.1.
    Is there any difference between Windows Server and client PowerShell?
    The region and language settings are the same on Windows Server 2012 and Windows 8.
    Script as below
    Invoke-WebRequest -Uri " http://hostname/m/%E9%...../......./...../xxx.jpg"

    Security settings are one possible cause of this.
    Since we don't have your URL we cannot reproduce this. 
    It is "different". Using "difference" had me confused for a bit; I thought you were trying to figure out the difference between two things.
    Use:
    $wc = New-Object System.Net.WebClient
    $wc.DownloadFile($url, 'C:\file.jpg')
    You will see fewer issues and it is faster.
    ¯\_(ツ)_/¯

  • JSF and Double Byte Character

    Hi,
    I wanted to know how to handle <h:outputText> with chinese character or double byte character.
    See sample code below :
    <%@ page language="java" contentType="text/html; charset=UTF-8"      pageEncoding="UTF-8"%>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <h:form styleClass="form" id="form1">
    <% request.setCharacterEncoding("UTF-8"); %>
    <h:inputText styleClass="inputText" id="text1"></h:inputText>
    <hx:commandExButton type="submit" value="Submit" styleClass="commandExButton" id="button1"      action="#{pc_SubmitTest.doButton1Action}"></hx:commandExButton>
    <h:outputText styleClass="outputText" id="text2"></h:outputText>
    </h:form>
    When you input a double-byte character and submit,
    the value on the output screen does not render properly.
    I tried similar code in plain JSP, and it works fine.
    Does anybody know how to solve this problem?
    Is anything needed at the page-code level?
    Thank you.
    Reinardy

    The problem was due to the fact that I was trying to generate the Excel file with a character stream instead of a byte stream.
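    One common approach to the form-submission side of this (a sketch only; the class name and the idea of registering it as a filter in web.xml are illustrative, not taken from the original post) is to force UTF-8 on every request before JSF reads the parameters:
    import java.io.IOException;
    import javax.servlet.*;

    public class Utf8EncodingFilter implements Filter {
      public void init(FilterConfig cfg) { }
      public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
          throws IOException, ServletException {
        req.setCharacterEncoding("UTF-8");   // decode incoming form data as UTF-8
        res.setCharacterEncoding("UTF-8");   // render the response as UTF-8
        chain.doFilter(req, res);
      }
      public void destroy() { }
    }
    Calling request.setCharacterEncoding inside the JSP scriptlet (as in the sample above) is often too late, because the parameters may already have been parsed by then.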

  • Double byte characters in a String - Help needed

    Hi All,
    Can anyone tell me how to find the presence of a double byte ( Japanese ) character in a String. I need to check whether the String contains a Japanese character or not. Thanx in advance.
    Ramya

    /** Returns true if the String s contains any "double-byte" characters. */
    public boolean containsDoubleByte(String s) {
      for (int i = 0; i < s.length(); i++) {
        if (isDoubleByte(s.charAt(i))) {
          return true;
        }
      }
      return false;
    }

    /** Returns true if the char c is a double-byte character. */
    public boolean isDoubleByte(char c) {
      if (c >= '\u0100' && c <= '\uffff') return true;
      return false;
      // simpler:  return c > '\u00ff';
    }

    /** Returns true if the String s contains any Japanese characters. */
    public boolean containsJapanese(String s) {
      for (int i = 0; i < s.length(); i++) {
        if (isJapanese(s.charAt(i))) {
          return true;
        }
      }
      return false;
    }

    /** Returns true if the char c is a Japanese character. */
    public boolean isJapanese(char c) {
      // Katakana
      if (c >= '\u30a0' && c <= '\u30ff') return true;
      // Hiragana
      if (c >= '\u3040' && c <= '\u309f') return true;
      // CJK Unified Ideographs
      if (c >= '\u4e00' && c <= '\u9fff') return true;
      // CJK Symbols and Punctuation
      if (c >= '\u3000' && c <= '\u303f') return true;
      // KangXi Radicals (kanji radicals)
      if (c >= '\u2f00' && c <= '\u2fdf') return true;
      // Kanbun
      if (c >= '\u3190' && c <= '\u319f') return true;
      // CJK Unified Ideographs Extension A
      if (c >= '\u3400' && c <= '\u4db5') return true;
      // CJK Compatibility Forms
      if (c >= '\ufe30' && c <= '\ufe4f') return true;
      // CJK Compatibility
      if (c >= '\u3300' && c <= '\u33ff') return true;
      // CJK Radicals Supplement
      if (c >= '\u2e80' && c <= '\u2eff') return true;
      // Any other character is not considered Japanese.
      return false;
    }
    // NB: CJK Unified Ideographs Extension B cannot be reached with a single 16-bit char.
    // Source: http://www.alanwood.net/unicode

  • ASCII representations of double-byte characters

    My file contains ASCII representations of double-byte CJK characters (output of native2ascii). How do I restore them back to the original native characters?
    I mean, when I load the file with FileInputStream, what I get are all strings like \uabcd. How do I get the characters represented by these strings?

    My file contains ASCII representations of double-byte CJK characters (output of native2ascii). How do I restore them back to the original native characters?
    I am no expert in Unicode, so I don't know if this is correct, but I assume that if a String starts with "\u" then there will be 4 more characters that are a hexadecimal representation of the char value. If that's right, then you should be able to parse out the "\uxxxx" and convert it to a char by parsing the hex. For example:
    // the variable unicode is a String like \uabcd
    String hex = unicode.substring(2);
    char result = (char) Integer.parseInt(hex, 16);
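    If you want to do it by hand for a whole line, here is a rough sketch along the same idea (the class and method names are made up; it assumes every \uXXXX escape is well formed):
    public class UnicodeUnescape {
      static String decodeUnicodeEscapes(String line) {
        StringBuffer out = new StringBuffer(line.length());
        int i = 0;
        while (i < line.length()) {
          char ch = line.charAt(i);
          if (ch == '\\' && i + 5 < line.length() && line.charAt(i + 1) == 'u') {
            String hex = line.substring(i + 2, i + 6);     // the 4 hex digits of the escape
            out.append((char) Integer.parseInt(hex, 16));  // hex code -> char
            i += 6;
          } else {
            out.append(ch);
            i++;
          }
        }
        return out.toString();
      }
      public static void main(String[] args) {
        System.out.println(decodeUnicodeEscapes("\\u3042\\u3044"));  // prints two hiragana characters
      }
    }
    Alternatively, if the file is in java.util.Properties format, Properties.load will decode the \uXXXX escapes for you.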

  • To find out whether a character is of Single byte or Double bytes

    hi,
    Is there any built-in class to find out whether a character is single-byte or double-byte? Is there any method to do so? If possible, can someone provide a sample code snippet that checks for single-byte and double-byte characters.
    thanx in advance.....

    If you are asking what size the char primitive is, it's 16 bits.
    If you want to know the numerical value of a char, you can cast it to an int and compare it to 255 to see if it fits in 1 byte.
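    For example (just a sketch of that comparison):
    char c = '\u3042';                     // a Japanese hiragana character
    boolean singleByte = ((int) c) <= 255; // false: its value does not fit in one byte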

  • SJIS - Japan Encoding Issues (Unable to handle Double Byte Numeric and Spec)

    Hi All,
    Problem:
    Unable to handle Double Byte Numeric and Special Characters (Hyphen)
    The input
    区中央京勝乞田1944-2
    Output
    区中央京勝乞田1944?2
    We have a write service created based on the JCA (Write File Adapter) with the native schema defined with SJIS Encoding as below.
    <?xml version="1.0" encoding="SJIS"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
         xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://nike.com/***/***********"
         targetNamespace="http://nike.com/***/*************"
         elementFormDefault="unqualified" attributeFormDefault="unqualified"
         nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="SJIS">
    Does anyone have a similar issue? How can we handle double-byte characters while using SJIS encoding? At the very least, how can we handle the double-byte hyphen?
    Thanks in Advance

    I have modified my schema as shown below and it worked well for me; I am at least partially successful. Yet I am not sure the workaround will resolve the issue at the final loading...
    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://nike.com/***/***********"
    targetNamespace="http://nike.com/***/*************"
    elementFormDefault="unqualified" attributeFormDefault="unqualified"
    nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-16">
    If anyone has a resolution or has had this kind of issue, let me know.
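    If it helps to narrow this down, a small Java sketch can show which characters are representable in a given encoding at all; the sample string and the guess that the offending character is the full-width hyphen (U+FF0D) are assumptions, not taken from the post:
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;

    public class EncodingCheck {
      public static void main(String[] args) {
        CharsetEncoder sjis = Charset.forName("Shift_JIS").newEncoder();
        String input = "1944\uFF0D2";  // digits plus a full-width hyphen (assumed problem char)
        for (int i = 0; i < input.length(); i++) {
          char c = input.charAt(i);
          System.out.println(c + " -> " + (sjis.canEncode(c) ? "encodable" : "NOT encodable"));
        }
      }
    }
    Characters the encoder cannot represent are typically replaced with '?' on output, which matches the symptom above.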

  • Right Truncation, varchar in SQL Server, double-byte in Oracle

    Hi.
    We have a DB Link using DG4MSQL from Oracle 11.1.0.7.0 to a SQL Server 2005 database. The Oracle database is set up to use UTF-8 so all character fields are double-byte on the oracle side.
    On the SQL Server table we have a column defined as varchar(32). When doing a select * from this table (from Oracle, over the DB link), everything works fine if that column contains values with a length of 16 characters or less. When the column contains 17 - 32 characters we get the following error:
    Error report:
    SQL Error: ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    [Oracle][ODBC SQL Server Driver]String data, right truncation {01004}
    ORA-02063: preceding 2 lines from SQLUAT
    28500. 00000 - "connection from ORACLE to a non-Oracle system returned this message:"
    *Cause:    The cause is explained in the forwarded message.
    *Action:   See the non-Oracle system's documentation of the forwarded
    message.
    It looks like Oracle (or the ODBC driver) is bringing the data back as double-byte even though it is declared as varchar. If we change the column on the SQL server side to nvarchar, everything works fine.
    Is this a bug? Or is there a setting that can be specified for the gateway so that it recognizes SQL Server varchar columns as single-byte, to prevent them from coming over as double-byte and messing up the ODBC driver? Or is there anything else we should do to prevent this problem?
    Thanks

    Hi,
    I don't think it's a problem with the hs or the dblink (but I may be wrong):
    UTF-8 is multi-byte, as you said: when you declare varchar2(32) it actually means you can insert up to 32 bytes, i.e. only 16 of your two-byte characters.
    Even a simple insert into the table with more than 16 such characters will fail.
    To overcome this, declare the column as varchar2(32 char), which tells Oracle to allocate enough bytes to hold 32 characters rather than 32 bytes. In UTF-8 this means 64 bytes (2*n); if you were using AL32UTF8 it would mean up to 4*n.
    BTW, why are you using UTF8 and not AL32UTF8?
    Hope this helps.
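    A quick Java illustration of that byte-versus-character arithmetic (using a character that happens to take two bytes in UTF-8; the class name is just for the example):
    public class ByteSemanticsDemo {
      public static void main(String[] args) throws Exception {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 17; i++) sb.append('\u00E9');  // 17 copies of e-acute
        String s = sb.toString();
        System.out.println(s.length());                    // 17 characters
        System.out.println(s.getBytes("UTF-8").length);    // 34 bytes - too big for a 32-byte column
      }
    }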
