Multi-byte character encoding issue in HTTP adapter

Hi guys,
I am facing a problem with multi-byte character conversion.
Problem:
I am posting data from SAP CRM to a third-party system, using XI as middleware and the HTTP adapter for the XI-to-third-party connection.
I have set the XML encoding to UTF-8 in the XI payload manipulation block.
When I post Chinese characters from SAP CRM, junk characters arrive at the third-party system. My assumption is that the text is being double-encoded.
Can you please guide me on how to proceed?
Please let me know if you need more info.
Regards,
Srini

Srinivas,
Can you go through this thread:
UTF-8 encoding problem in HTTP adapter
---Satish
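Satish's link aside, the double-encoding theory is easy to check in isolation. The sketch below is generic Java, not tied to XI or the HTTP adapter; it shows what happens when already-encoded UTF-8 bytes are misread as single-byte text and encoded a second time, which is exactly the "junk characters" symptom:

```java
import java.nio.charset.StandardCharsets;

public class DoubleEncodingDemo {
    public static void main(String[] args) {
        String original = "中文"; // two Chinese characters

        // Correct: encode once as UTF-8 (3 bytes per character here).
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);

        // Bug: some hop misreads the UTF-8 bytes as ISO-8859-1 ...
        String misread = new String(utf8, StandardCharsets.ISO_8859_1);
        // ... and the result gets UTF-8 encoded a second time on the way out.
        byte[] doubleEncoded = misread.getBytes(StandardCharsets.UTF_8);

        System.out.println("bytes after one encode : " + utf8.length);          // 6
        System.out.println("bytes after two encodes: " + doubleEncoded.length); // 12
        System.out.println("receiver sees: " + misread);                        // junk characters
    }
}
```

If the third party logs the raw bytes, a payload that is roughly twice the expected size for the Chinese text is a strong hint of double encoding.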

Similar Messages

  • UTF-8 encoding problem in HTTP adapter

Hi guys,
I am facing a problem with UTF-8 multi-byte character conversion.
Problem:
I am posting data from SAP CRM to a third-party system, using XI as middleware and the HTTP adapter for the XI-to-third-party connection.
In the HTTP configuration I have set the XML encoding to UTF-8 in the XI payload manipulation block.
When I post Chinese characters from SAP CRM, junk characters arrive at the third-party system. My assumption is that the text is being double-encoded.
I have checked the XML messages in Message Monitoring in XI, and I can see the Chinese characters in the XML files. But the third-party system shows them as junk characters.
Can anyone please help me with this issue?
    Please let me know if you need more info.
    Regards,
    Srini

    Srinivas
Can you please go through SAP Note 856597, Question No. 3, which may resolve your issue? Also, have you checked SAP Notes 761608, 639882, 666574, 913116, and 779981, which might help?
    ---Satish

  • Multi-byte character

If the database character set is UTF-8, can I use VARCHAR2 to store multi-byte characters, or do I still have to use
NVARCHAR2?
Also, how many bytes (max) can VARCHAR2(1) and NVARCHAR2(1) store in the case of a UTF-8 character set?

If you create VARCHAR2(1) then you possibly cannot store anything, as your first character might be multi-byte.
My recommendation would be to consider defining by character rather than by byte.
CREATE TABLE tbyte (
testcol VARCHAR2(20));
CREATE TABLE tchar (
testcol VARCHAR2(20 CHAR));
The second will always hold 20 characters without regard to the byte count.
    Demos here:
    http://www.morganslibrary.org/library.html
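To see why byte semantics bite with UTF-8, here is a small check of how many bytes common characters need (Java is used purely for illustration; the byte counts are a property of UTF-8 itself, and they are what a VARCHAR2 column defined with BYTE semantics has to accommodate):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Widths {
    public static void main(String[] args) {
        // One "character" can need 1 to 4 bytes in UTF-8, so a VARCHAR2(1)
        // column defined with BYTE semantics cannot hold most of these:
        System.out.println("A : " + "A".getBytes(StandardCharsets.UTF_8).length + " byte(s)");  // 1
        System.out.println("é : " + "é".getBytes(StandardCharsets.UTF_8).length + " byte(s)");  // 2
        System.out.println("中: " + "中".getBytes(StandardCharsets.UTF_8).length + " byte(s)"); // 3
        System.out.println("😀: " + "😀".getBytes(StandardCharsets.UTF_8).length + " byte(s)"); // 4
    }
}
```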

  • Where is the Multi-Byte Character.

    Hello All
While reading data from the DB, our middleware interface gave the following error:
java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv
I understand that this failure is because of a multi-byte character, and that the 10g driver fixes this bug.
I suggested that the integration admin team replace the current 9i driver with the 10g one, and they are on it.
In addition, I wanted to point the data input team to where exactly the failure occurred.
I asked them for, and got, a download of the dat file; my intention was to find out where exactly
the multi-byte character that caused this failure is located.
    I wrote the following code to check this.
import java.io.*;
public class X {
    public static void main(String ar[]) {
        int linenumber = 1, columnnumber = 1;
        long totalcharacters = 0;
        try {
            File file = new File("inputfile.dat");
            FileInputStream fin = new FileInputStream(file);
            byte fileContent[] = new byte[(int) file.length()];
            fin.read(fileContent);
            for (int i = 0; i < fileContent.length; i++) {
                columnnumber++; totalcharacters++;
                // note: a byte is never > 300, so this && chain can never be true
                if (fileContent[i] < 0 && fileContent[i] != 10 && fileContent[i] != 13 && fileContent[i] > 300) // if invalid
                    { System.out.println("failure at position: " + i); break; }
                if (fileContent[i] == 10 || fileContent[i] == 13) // if new line
                    { linenumber++; columnnumber = 1; }
            }
            fin.close();
            System.out.println("Finished successfully, total lines : " + linenumber + " total file size : " + totalcharacters);
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Exception at Line: " + linenumber + " columnnumber: " + columnnumber);
        }
    }
}
But this shows that the file is good; it reports no issue.
Whereas the middleware interface fails with the above exception while reading exactly the same input file.
Am I doing anything wrong in locating that multi-byte character?
I would greatly appreciate any help!
Thanks.

My challenge is to spot the multi-byte character hidden in this big dat file.
This is because the data entry team asked me to spot the record and column that have the issue, out of
the lakhs of records they sent inside this file.
Let's have the validation code like this:
   if ((fileContent[i] < 0 && fileContent[i] != 10 && fileContent[i] != 13) || fileContent[i] > 300) // if invalid
   { System.out.println("failure at position: " + i); break; }
less than 0 - I saw some negative values when I was testing with other files.
greater than 300 - an attempt to find out if any characters exceed the usual character range.
10 and 13 are for line feed / carriage return.
With this, I randomly placed Chinese and Korean characters, and the program found them.
Any alternative (better code, of course) way to catch this black sheep?
Edited by: Sanath_K on Oct 23, 2009 8:06 PM
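One JDK-only alternative, sketched under the assumption that the middleware expects valid UTF-8 (the file name is the same placeholder as in your code): instead of hand-rolled byte checks, let a strict CharsetDecoder report the exact byte offset where the file stops being valid UTF-8.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.*;
import java.nio.file.*;

public class FindBadUtf8 {
    // Returns the byte offset of the first invalid UTF-8 sequence, or -1 if the data is clean.
    static long firstInvalidOffset(byte[] data) {
        CharsetDecoder dec = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)       // fail instead of substituting
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        ByteBuffer in = ByteBuffer.wrap(data);
        CharBuffer out = CharBuffer.allocate(4096);
        while (true) {
            CoderResult r = in.hasRemaining() ? dec.decode(in, out, true) : dec.flush(out);
            if (r.isError()) return in.position();   // start of the bad byte sequence
            if (r.isOverflow()) { out.clear(); continue; } // discard decoded chars, keep going
            return -1;                               // underflow: everything decoded cleanly
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = Files.readAllBytes(Paths.get("inputfile.dat"));
        long pos = firstInvalidOffset(data);
        System.out.println(pos < 0 ? "file is valid UTF-8"
                                   : "invalid byte sequence at offset " + pos);
    }
}
```

From the returned byte offset you can then count line feeds up to that position to report the record and column back to the data entry team.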

  • How to set Multi Byte Character Set ( MBCS ) to Particular String In MFC VC++

I use the Unicode character set in my MFC application (VC++).
Right now I get output like ठ桔湡潹⁵潦⁲獵, and I want to convert this to readable English text (i.e., MBCS).
But I need Unicode for my application as a whole: when I switch the project to the multi-byte character set, the output is correct English, but other objects (TreeCtrl selection) behave wrongly. So I need to convert just that particular string to MBCS.
How can I do that in MFC?

I assume the string read from your hardware device is a plain "C" string (an ANSI string). This type of string has one byte per character; Unicode has two bytes per character.
From the situation you explained, I'd convert the string returned by the hardware to a Unicode string using, e.g., MultiByteToWideChar with CP_ACP. You may also use mbstowcs or some similar function to convert your string to a Unicode string.
Best regards
Bordon
Note: Posted code pieces may not have good programming style and may not be perfect. It is also possible that they do not work in all situations. Code pieces are only intended to explain something particular.

  • Character encoding issue in sql server

    Hi Team,
We have a table with more than 20 columns. Some of those columns hold data extracted from the data warehouse appliances as a one-time load.
The problem is that two columns which should carry the same set of values differ for some records: the values were changed while importing into the SQL Server database.
The values below are an example of the same value before and after the corruption on import:
2pk Etiquetas Navide‰as 3000-HG
2pk Etiquetas Navideñas 3000-H
Is there any way to change the first column's values into the second column's values?
Looking at the data, we can say it is a code page issue (a character encoding issue), but how do we convert it?
That is, how do we convert '2pk Etiquetas Navide‰as 3000-HG'
to get '2pk Etiquetas Navideñas 3000-H' in the SELECT query?

    Then it seems that you can do the obvious: replace it.
    DECLARE @Sample TABLE ( Payload NVARCHAR(255) );
    INSERT INTO @Sample
    VALUES ( N'2pk Etiquetas Navide‰as 3000-HG' );
    UPDATE @Sample
    SET Payload = REPLACE(Payload, N'‰', N'ñ');
    SELECT S.Payload
    FROM @Sample S;

  • Character encoding issue

I'm using the code below to send mail in the Turkish language.
    MimeMessage msg = new MimeMessage(session);
    msg.setText(message, "utf-8", "html");
    msg.setFrom(new InternetAddress(from));
    Transport.send(msg);
But my customer says that he sometimes gets unreadable characters in the mail, and I cannot work out how to solve this character encoding issue.
Should I ask him to change his mail client's character encoding settings?
If yes, which one should he set?

Send the same characters using a different mailer (e.g., Thunderbird or Outlook).
If they're received correctly, compare the message from that mailer with the message
from JavaMail. Most likely the other mailers are using a Turkish-specific charset instead
of UTF-8.
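The charset difference is easy to demonstrate with the JDK alone, independent of JavaMail: Turkish letters such as 'ş' are not in ISO-8859-1 at all, so any hop that re-encodes the text with the wrong charset destroys them, while UTF-8 and ISO-8859-9 (Latin-5, the Turkish charset) both round-trip them. A minimal sketch:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class TurkishRoundTrip {
    static String roundTrip(String s, Charset cs) {
        // Encode then decode with the same charset; lossy if the charset lacks the characters.
        return new String(s.getBytes(cs), cs);
    }

    public static void main(String[] args) {
        String turkish = "Teşekkürler"; // "Thanks" in Turkish, contains ş (U+015F)
        System.out.println("UTF-8      : " + roundTrip(turkish, StandardCharsets.UTF_8));
        System.out.println("ISO-8859-9 : " + roundTrip(turkish, Charset.forName("ISO-8859-9")));
        System.out.println("ISO-8859-1 : " + roundTrip(turkish, StandardCharsets.ISO_8859_1)); // ş becomes ?
    }
}
```

So if the customer's client displays the mail with a Latin-1/Windows-1254 assumption while the mail is UTF-8 (or vice versa), exactly the Turkish-specific letters will break.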

  • CUSTOM Service - multi Byte character issue

    Hi Experts,
I wrote a custom service. What this service does is read some data from the database and then generate a CSV report. The code is working fine, but if the data contains multi-byte characters, those characters are not shown properly in the report. Given below is my service code:
byte bytes[] = CustomServiceHelper.getReport(this.m_binder, providerName);
DataStreamWrapper wrapper = new DataStreamWrapper();
wrapper.m_dataEncoding = "UTF-8";
wrapper.m_dataType = "application/vnd.ms-excel;charset=UTF-8";
wrapper.m_clientFileName = "Report.csv";
wrapper.initWithInputStream(new ByteArrayInputStream(bytes), bytes.length);
this.m_service.getHttpImplementor().sendStreamResponse(m_binder, wrapper);
NOTE - This code works fine for multi-byte characters on my local UCM (Windows). But when I install this service on our DEV and staging servers (Solaris), the multi-byte character issue occurs.
    Thanks in Advance..!!
    Edited by: user4884609 on May 17, 2011 4:12 PM

    Please Help
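A common cause of exactly this Windows-vs-Solaris difference is text being converted somewhere with `String.getBytes()` and no charset argument, which uses the platform default encoding and therefore differs between the two machines. Whether that happens inside your `getReport` helper is an assumption on my part, but the sketch below shows the safe pattern: pass the charset explicitly, and optionally prepend a UTF-8 BOM so Excel detects the CSV's encoding.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class CsvBytes {
    // Build CSV bytes with an explicit charset instead of the platform default.
    static byte[] toUtf8Csv(String csvText, boolean withBom) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        if (withBom) {
            // UTF-8 byte order mark: helps Excel pick UTF-8 when opening the file.
            out.write(new byte[] { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF });
        }
        out.write(csvText.getBytes(StandardCharsets.UTF_8)); // NOT csvText.getBytes()
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = toUtf8Csv("id,name\n1,中文\n", true);
        System.out.println("CSV payload is " + bytes.length + " bytes");
    }
}
```

With the bytes built this way, setting `wrapper.m_dataEncoding = "UTF-8"` and the declared content actually agree on every platform.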

  • Problem to display japanese/multi-byte character on weblogic server 9.1

    Hi experts
We are running WebLogic 9.1 on a Linux box [RHEL v4] and trying to display Japanese characters embedded in some of the HTML files, but the Japanese characters are converted into question marks [?]. The HTML files that contain Japanese characters are stored properly in the file system and retain the Japanese characters as they should.
I changed the character setting in the HTML header to Shift_JIS, but no luck. Then I added the encoding scheme for Shift_JIS to the jsp-descriptor and charset-params sections in weblogic.xml, but also no luck.
I am wondering how I can properly display multi-byte/Japanese characters on WebLogic Server without setting up internationalization tools.
    I will appreciate for your advice.
    Thanks,
    yasushi

This was fixed by removing everything except the following files from the original (8.1) domain directory:
    1. config.xml
    2. SerializedSystemIni.dat
    3. *.ldift
    4. applications directory
    Is this a bug in the upgrade tool ? Or did I miss a part of the documentation ?
    Thanks
--sony

  • JSF myfaces character encoding issues

    The basic problem i have is that i cannot get the copyright symbol or the chevron symbols to display in my pages.
    I am using:
    myfaces 2.0.0
    facelets 1.1.14
    richfaces 3.3.3.final
    tomcat 6
    jdk1.6
    I have tried a ton of things to resolve this including:
    1.) creating a filter to set the character encoding to utf-8.
    2.) overridding the view handler to force calculateCharacterEncoding to always return utf-8
    3.) adding <meta http-equiv="content-type" content="text/html;charset=UTF-8" charset="UTF-8" /> to my page.
    4.) setting different combinations of 'URIEncoding="UTF-8"' and 'useBodyEncodingForURI="true"' in tomcat's server.xml
5.) etc... like trying to set the encoding on an f:view, using f:verbatim, and specifying the escape attribute on some output components.
    all with no success.
    There is a lot of great information on BalusC's site regarding this problem (http://balusc.blogspot.com/2009/05/unicode-how-to-get-characters-right.html) but I have not been able to resolve it yet.
    i have 2 test pages i am using.
If I put these symbols in a JSP (which does NOT go through the Faces servlet), they render fine, and the page info shows that the page is in UTF-8.
    <html>
    <head>
         <!-- <meta http-equiv="content-type" content="text/html;charset=UTF-8" /> -->
    </head>
    <body>     
              <br/>copy tag: &copy;
              <br/>js/jsp unicode: &#169;
              <br/>xml unicode: &#xA9;
              <br/>u2460: \u2460
              <br/>u0080: \u0080
              <br/>arrow: &#187;
              <p />
    </body>
</html>

If I put these symbols in an XHTML page (which does go through the Faces servlet), I get black diamond symbols with a question mark, even though the page info says the page is in UTF-8.
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
         xmlns:ui="http://java.sun.com/jsf/facelets"
         xmlns:f="http://java.sun.com/jsf/core"
         xmlns:h="http://java.sun.com/jsf/html"
         xmlns:rich="http://richfaces.org/rich"
         xmlns:c="http://java.sun.com/jstl/core"
           xmlns:a4j="http://richfaces.org/a4j">
    <head>
         <meta http-equiv="content-type" content="text/html;charset=UTF-8" charset="UTF-8" />
         <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
    </head>
    <body>     
         <f:view encoding="utf-8">
              <br/>amp/copy tag: &copy;
              <br/>copy tag: &copy;
              <br/>copy tag w/ pound: #&copy;
              <br/>houtupt: <h:outputText value="&copy;" escape="true"/>
              <br/>houtupt: <h:outputText value="&copy;" escape="false"/>
              <br/>js/jsp unicode: &#169;
              <br/>houtupt: <h:outputText value="&#169;" escape="true"/>
              <br/>houtupt: <h:outputText value="&#169;" escape="false"/>
              <br/>xml unicode: &#xA9;
              <br/>houtupt: <h:outputText value="&#xA9;" escape="true"/>
              <br/>houtupt: <h:outputText value="&#xA9;" escape="false"/>
              <br/>u2460: \u2460
              <br/>u0080: \u0080
              <br/>arrow: &#187;
              <br/>cdata: <![CDATA[©]]>
              <p />
         </f:view>               
    </body>
</html>

On a side note, I have another application that uses MyFaces 1.1, Facelets 1.1.11, and RichFaces 3.1.6, and the Unicode symbols work fine there.
I had another developer try my test XHTML page in his Mojarra implementation, and it works fine there using Facelets 1.1.14, but NOT with MyFaces or RichFaces.
I am convinced that somewhere between the view handler and the Faces servlet the encoding is being set or reset, but I haven't been able to pin it down.
If anyone at all can point me in the right direction, I would be eternally grateful.
Thanks in advance.

    UPDATE:
    I was unable to get the page itself to consume the various options for unicode characters like the copyright symbol.
    Ultimately the content I am trying to display is coming from a web service.
    I resolved this issue by calling the web service from my backing bean instead of using ui:include on the webservice call directly in the page.
    for example:
public String getFooter() throws Exception {
    HttpClient httpclient = new HttpClient();
    GetMethod get = new GetMethod(url);
    httpclient.executeMethod(get);
    String response = get.getResponseBodyAsString();
    return response;
}

I'd still love to have a solution for the page usage of the Unicode characters, but for the time being this solves my problem.
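For the record, HttpClient 3.x's getResponseBodyAsString() falls back to ISO-8859-1 when the response's Content-Type header carries no charset, which is one plausible reason the symbols survive on some paths and break on others. A JDK-only sketch of the safer pattern — read the raw bytes and decode them with an explicit charset instead of trusting a default (the stand-in byte array below replaces the real web service stream):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class FooterFetch {
    // Read an entire stream and decode it explicitly as UTF-8.
    static String readUtf8(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the web service response body; real code would pass
        // the input stream from url.openStream() or an HTTP client here.
        byte[] body = "© and » decode fine when read as UTF-8".getBytes(StandardCharsets.UTF_8);
        System.out.println(readUtf8(new ByteArrayInputStream(body)));
    }
}
```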

  • Byte[] character encoding for strings

    Hi All,
I tried to convert a string into byte[] using the following code:
byte[] out= [B@30c221;
String encodedString = out.toString();
It gives the output [B@30c221 when I print encodedString.
But when I convert that encodedString into byte[] using the following code:
byte[] output = encodedString.getBytes();
it gives a different output.
Is there any character encoding needed to get the exact output for this?

Sorry, but the question makes no sense, and neither does your code:
byte[] out= [B@30c221;
String encodedString = out.toString();
The first line is syntactically incorrect, and the second would print something like "[B@30c221", which isn't particularly useful.
The correct way to convert a String to a byte[] is with the getBytes() method. Be aware that the byte[] will be in the system default encoding, which means you could get different results on different platforms. To remove this platform dependency, you should specify the encoding, like so:
byte[] output = encodedString.getBytes("UTF-8");
Why are you doing this, anyway? There are very few good reasons to convert a String to a byte[] within your code; that's usually done by the I/O classes when your program communicates with the outside world, as when you write the string to a file.
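To make the answer concrete, here is a minimal round trip between String and byte[] with an explicit charset (UTF-8 chosen as an example), plus the array-printing pitfall from the question:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class StringBytesRoundTrip {
    public static void main(String[] args) {
        String text = "héllo"; // non-ASCII on purpose

        byte[] bytes = text.getBytes(StandardCharsets.UTF_8);     // String -> byte[]
        String back  = new String(bytes, StandardCharsets.UTF_8); // byte[] -> String

        // toString() on an array prints the "[B@..." type-and-hashcode form seen
        // above, not the contents; use Arrays.toString() to inspect the bytes.
        System.out.println(bytes.toString());
        System.out.println(Arrays.toString(bytes));
        System.out.println(back.equals(text)); // true
    }
}
```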

  • FF character encoding issue in Mageia 2 ?

    Hi everyone,
I'm running Mozilla Firefox 17.0.8 in a KDE distro of Linux called Mageia 2. I'm having character encoding problems with certain web pages: certain icons, like the ones next to menu entries (Login, Search box, etc.) and in section headlines, don't appear properly. Instead they appear either as some Arabic character or as little grey boxes with numbers and letters written in them.
I've tried experimenting with different encodings: Western (ISO 8859-1), (ISO 8859-15), (Windows 1252), Unicode (UTF-8), and Central European (ISO 8859-2), but none of them does the job. Currently the character encoding is set to UTF-8. The same web page in Chrome (UTF-8) shows no such problem.
    Can you help me, please?

    Thank you!
I solved my problem; however, I find the fonts are too small for certain web pages compared to Chrome (see attached pictures of nytimes.com).
Chrome's font size is set to "Medium".

  • Multi byte character set

    Hi,
I am going to create an Oracle 8i database on Linux, and I want it to support English, Italian, and Chinese.
My questions are:
1. What parameters have to be set at the OS level for this multi-byte character set?
2. How do I set this character set at the database level at creation time?
3. I am also going to migrate an existing database to the new one, but the old one contains only English and Italian. What parameters do we have to set in the new database for the migration?
Kindly provide some solutions.
rgds..

    1) I'm not sure what you're asking here.
    2) While creating the database, you would want to set the NLS_CHARACTERSET to UTF8 (I don't believe AL32UTF8 was available in 8i).
    3) How are you migrating the database? Via export & import? If so, you'd need to ensure that the NLS_LANG on the client machine(s) that do the actual export and import are set appropriately.
    Justin

  • Encoding Issues in AS2 Adapter

    Hi,
    I have a scenario : AS2 (Sender) - IDOC (Receiver).
The third party is sending an EDI message which we successfully receive in XI, and we send an IDoc to the R/3 system. Now the issue is:
The EDI message has some Cyrillic characters (ISO-8859-5 encoded) in one field. But when we receive the message in XI, the Cyrillic characters get converted into junk data, which is sent as such to the R/3 system. XI is not able to read the data in the correct encoding. I have tried using "xmlAnonymizerBean" in the Module tab of the adapter, placed just before the Call Adapter step, but this did not solve the problem.
Any idea how we can read the data in the correct encoding in XI?
I also asked them to send the data in UTF-8. They changed the encoding on their end and sent the data in UTF-8, but in this case the message did not get processed in Seeburger either (it did not reach XI).
    Any help on this?
    Vikas

    Hi,
Are you using the BIC Mapping Designer to convert EDI to XML and vice versa?
    In that case, you can refer this discussion:
    Re: Seeburger BIC Mapping Designer: Special Character (German Umlaut) Problem
    Regards,
    Ravi Kanth Talagana

  • Special Character Encoding issue

    Hi all
I am using OAS 9i and have deployed a web service. I submit a payload request whose data contains some Unicode characters like "§". The data is Base64-encoded, and the type of the element in the schema is base64Binary. When I retrieve the payload in the Java implementation code, the character is displayed as � in the console. Please advise how to fix this issue. I tried setting the JVM option file.encoding=utf-8, but it didn't work.
    Thanks
    Shiny

When you use a UDF and have programmed a SAX parser, make sure that the parser works with the correct encoding. So when the result of the web service is ISO-8859-1, assign that encoding to the parser.
In principle, the encoding should be part of the XML header. Make sure that the encoding of the response is the same as the one declared in the XML header.
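The "§" symptom can be reproduced with a JDK-only sketch, assuming the sender Base64-encodes UTF-8 text and the receiver decodes the resulting bytes with the wrong charset (java.util.Base64 is used here for illustration; older code bases may use a different Base64 helper):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SectionSignDemo {
    public static void main(String[] args) {
        String payload = "§"; // U+00A7, two bytes (0xC2 0xA7) in UTF-8

        // Sender: UTF-8 encode, then Base64 for the base64Binary element.
        String b64 = Base64.getEncoder()
                           .encodeToString(payload.getBytes(StandardCharsets.UTF_8));

        byte[] decoded = Base64.getDecoder().decode(b64);

        // Receiver decoding with the charset the sender used works ...
        String right = new String(decoded, StandardCharsets.UTF_8);      // "§"
        // ... decoding the same bytes as ISO-8859-1 shows the classic mojibake.
        String wrong = new String(decoded, StandardCharsets.ISO_8859_1); // "Â§"

        System.out.println("right: " + right);
        System.out.println("wrong: " + wrong);
    }
}
```

Base64 itself is charset-agnostic; the corruption happens only in the text-to-bytes and bytes-to-text steps on either side, which is why agreeing on (and declaring) the charset fixes it.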
