Code page conversion for Chinese characters

Hi,
we receive an XML message via the JMS sender adapter, where the code page of the sending MQ system is cp850.
One of the tags we receive contains Chinese characters, but they arrive encoded as below:
<FAPIAO><Title>马么</Title><Remark>*æ¤,波特肉*</Remark></FAPIAO>
We have tried the MessageTransformBean in the sender JMS adapter to convert to UTF-8, but that makes no difference.
If we use some other code page, e.g. BIG5, some of the characters are converted to Chinese characters, but we need the result in UTF-8.
Is this possible, or do we have to use some other code page?
Best Regards
Olof

Olof Trönnberg wrote:
Hi,
we receive an XML message via the JMS sender adapter, where the code page of the sending MQ system is cp850.
One of the tags we receive contains Chinese characters, but they arrive encoded as below:
<FAPIAO><Title>马么</Title><Remark>*æ¤,波特肉*</Remark></FAPIAO>
XML always has to be transported as binary.
Remove the encoding parameter in the communication channel.
Besides: this is obviously UTF-8, so how can you say the code page of the sending system is cp850?
It seems you have been given wrong information.

Similar Messages

  • Basis Code Page Conversion

    Hi everybody,
    Does anyone know whether a training program on "Basis Code Page Conversion for Upgrading SAP using AS/400" is going to be held? If you know anything about this, please help me attend the training.
    Please help with this.
    Thank you

    Hi Cristian,
    Take a look at the function module TRANSLATE_CODEPAGE_IN.
    It uses the class cl_abap_conv_in_ce. I don't know exactly how it works, but maybe you can find some ideas useful for your purpose.
    Regards.
    Andrea
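    For illustration, here is a minimal sketch (the code page and the input variable are assumptions) of converting raw bytes to a string with cl_abap_conv_in_ce, the class mentioned above:
    DATA: lv_bytes  TYPE xstring,
          lv_result TYPE string,
          lo_conv   TYPE REF TO cl_abap_conv_in_ce.
    " Assumed: lv_bytes holds the raw content, '1100' is the source code page
    lo_conv = cl_abap_conv_in_ce=>create( encoding = '1100'
                                          input    = lv_bytes ).
    lo_conv->read( IMPORTING data = lv_result ).
    " lv_result now holds the text in the system's Unicode representation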

  • Regarding code page conversions

    Hi,
    I have a query on code pages in a Unicode environment.
    First of all, sorry for the long mail.
    There were a couple of issues when we upgraded from 4.7 to 6.0. These issues were mainly in the code page conversions from one code page xxxx to another yyyy, and in the dataset transfers.
    The problem I am facing is understanding exactly what the code page is doing in the background and what these multilingual conversions are all about.
    There is a custom code page that my client has created in his landscape, and now in the majority of the interfaces we need to handle the data based on this code page.
    For example, in the OPEN DATASET statement we add LEGACY TEXT MODE CODE PAGE p_code IGNORING CONVERSION ERRORS MESSAGE lv_message.
    I see some characters, especially Scandinavian, Korean, Chinese and Japanese ones, causing major problems during file transfers to the UNIX and FTP environments.
    Case 1:
    We are referring to a custom code page 9xxx. Take a character such as ä: when I write a TRANSFER statement with reference to this code page, it is displayed as # on the UNIX side.
    If I take the hexadecimal value of this character and search the custom code page (transaction SCP with the hex value of such characters), there is already a value present in code page 9xxx. If a value is maintained, why am I getting a # on the UNIX server, and how can I check the consistency/validity of this code page?
    Case 2:
    Some Hungarian characters, like o with a double acute on top, cause conversion error dumps during the transfer. Again, the code page contains this hex value.
    Why is the conversion from 4102 (basically UTF-16) to code page 9xxx failing in this case, and why is 4102 involved at all?
    IGNORING CONVERSION ERRORS bypasses these strange characters (the CX_SY_CONVERSION_CODEPAGE dump), but how can I keep the correct value end to end? At the end of the day I need to transfer o with a double acute instead of a #.
    Please give an in-depth workaround instead of vague answers.
    Will appreciate your effort and time on this.
    Thanks much.
    Br,
    Vijay.

    Hi,
    I am facing the same problem when reading a file from the application server; it gives me a short dump. Can you tell me how I can resolve the code page error? In the dump analysis I get the error "Not able to convert code page '4110' to '4102'".
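    For illustration, a minimal sketch (file path and code page are assumptions) of the kind of OPEN DATASET call discussed in the question above, with the conversion error handled explicitly instead of silently turned into #:
    DATA: lv_file TYPE string VALUE '/tmp/outbound.txt',  " assumed path
          lv_line TYPE string,
          lv_msg  TYPE string.
    " Variant from the question: errors ignored, unconvertible characters become '#'
    "   OPEN DATASET lv_file FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE '9xxx'
    "        IGNORING CONVERSION ERRORS MESSAGE lv_msg.
    " Variant that lets the conversion error surface so it can be handled:
    OPEN DATASET lv_file FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE '9999'. " assumed custom code page
    TRY.
        TRANSFER lv_line TO lv_file.
      CATCH cx_sy_conversion_codepage.
        " a character in lv_line has no representation in the target code page
    ENDTRY.
    CLOSE DATASET lv_file.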

  • Blank spaces while using GUI_DOWNLOAD for Chinese characters

    Hi,
    While using GUI_DOWNLOAD for Chinese characters I used the code page option 8300 for Chinese.
    In the file that is downloaded to Notepad, some Chinese characters appear under some of the headings, and after that the other columns are shifted to the right.
    This works correctly for English characters.
    Can someone please help me?
    I am now using CL_GUI_FRONTEND_SERVICES=>GUI_DOWNLOAD.
    What special options should I pass?
    Regards,
    Subhashini

    Hi,
    I solved my problem only by using different code pages, 8400 and 8300, for Chinese and Taiwanese characters respectively.
    I fixed the lengths of the fields by converting them to a hexadecimal string and back to a string using the function modules below.
    DATA: lv_xstring TYPE xstring,
          lv_temp    TYPE string.
    DATA: lv_conv TYPE REF TO cl_abap_conv_in_ce.
    lv_temp = p_name.
    " Convert the Unicode string to a byte string in the target code page,
    " padded/truncated to the fixed length p_outlen
    CALL FUNCTION 'HR_KR_STRING_TO_XSTRING'
      EXPORTING
        codepage_to      = p_codepage
        unicode_string   = lv_temp
        out_len          = p_outlen
      IMPORTING
        xstring_stream   = lv_xstring
      EXCEPTIONS
        invalid_codepage = 1
        invalid_string   = 2
        OTHERS           = 3.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    " Convert the byte string back to a character string
    CALL FUNCTION 'HR_KR_XSTRING_TO_STRING'
      EXPORTING
        from_codepage = p_codepage
        in_xstring    = lv_xstring
        out_len       = p_outlen
      IMPORTING
        out_string    = p_string.
    " Alternative: create a conversion instance and read the bytes back
    lv_conv = cl_abap_conv_in_ce=>create( encoding = p_codepage
                                          input    = lv_xstring ).
    lv_conv->read( IMPORTING data = p_string ).
    Regards,
    Subhashini
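    For completeness, a minimal sketch (file name and data table are assumptions) of passing an explicit code page to cl_gui_frontend_services=>gui_download, which is the usual way to force a specific encoding on the frontend file:
    DATA lt_data TYPE TABLE OF string.            " assumed output table
    cl_gui_frontend_services=>gui_download(
      EXPORTING
        filename = 'C:\temp\chinese.txt'          " assumed target file
        filetype = 'ASC'
        codepage = '8400'                         " one of the Chinese code pages mentioned above
      CHANGING
        data_tab = lt_data
      EXCEPTIONS
        OTHERS   = 1 ).
    IF sy-subrc <> 0.
      " handle the download error
    ENDIF.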

  • PDF conversion for Chinese characters in a Unicode system

    I am facing a problem while converting SAP Script output to PDF format for Chinese characters.
    I am working on an ECC 5.0 Unicode system.
    Scenario:
    After saving a Purchase Order, an e-mail is sent to the customer with the PO output attached in PDF format. The e-mail is received successfully, but when the PDF is opened all the Chinese characters are displayed as junk characters, while all the English characters are displayed properly. I tried to open the PDF file in Acrobat Reader versions 6.0, 7.0 and 8.0, with no result. I used the CONVERT_OTF function module for converting the OTF format to PDF format, and I also tried using the font CNSONG.
    I also tried executing the standard program RSTXPDFT4, converting to PDF by supplying the spool. In the spool the Chinese characters are shown perfectly, but in the PDF they show up as junk.
    Can you please help and advise how to get the Chinese characters displayed in PDF on Unicode systems?
    Thanks in advance.
    Thanks in advance.

    Juraj Danko wrote:
    > Hi,
    > I have similar problem than you ... how have you solved it?
    > thanks
    > Juraj
    I found a solution, but I am not sure whether it was for this problem or for an output problem with, for example, PL in non-Unicode systems.
    I created the input for CONVERT_OTF with CALL FUNCTION 'PRINT_TEXT'.
    PRINT_TEXT has to be called with DEVICE = 'PRINTER';
    DEVICE = 'ABAP' internally uses the wrong code page.
    You also have to set otf_options-tdprinter to a valid printer;
    if it is empty, the default printer from the user settings is used.
    You can use the code example from SAP note 413295.
    Before you call CONVERT_OTF, you can also check the entries with 'FC' in the OTF input.
    The font (see the description of the OTF format in the SAP help) must be set as described in SAP note 144718.
    /Tibor
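    For illustration, a rough sketch (variable names assumed and the PRINT_TEXT interface simplified; treat it as a starting point, not a complete implementation) of the PRINT_TEXT / CONVERT_OTF combination described above:
    DATA: ls_options TYPE itcpo,
          ls_header  TYPE thead,             " assumed: filled beforehand, e.g. via READ_TEXT
          lt_lines   TYPE STANDARD TABLE OF tline,
          lt_otf     TYPE STANDARD TABLE OF itcoo,
          lt_pdf     TYPE STANDARD TABLE OF tline,
          lv_size    TYPE i.
    ls_options-tdgetotf  = 'X'.              " return OTF instead of printing
    ls_options-tdprinter = 'CNSAPWIN'.       " assumed: a device type with the Chinese fonts
    " DEVICE = 'PRINTER' is the important part; 'ABAP' uses the wrong code page
    CALL FUNCTION 'PRINT_TEXT'
      EXPORTING
        device  = 'PRINTER'
        dialog  = ' '
        header  = ls_header
        options = ls_options
      TABLES
        lines   = lt_lines
        otfdata = lt_otf.
    CALL FUNCTION 'CONVERT_OTF'
      EXPORTING
        format       = 'PDF'
      IMPORTING
        bin_filesize = lv_size
      TABLES
        otf          = lt_otf
        lines        = lt_pdf.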

  • Code page conversion

    Hi
    I have a requirement to pick up UTF-7 files. In the sender FCC I have used the code page conversion bean, but the adapter does not accept the UTF-7 format. What could be another option?
    I am using a 3rd-party adapter (XLink adapter), and the error occurs there.
    Thanks

    Hi Pratichi,
    You can develop a module which converts the file content from UTF-7 to UTF-8 or to another encoding that is valid for your adapter.
    This module must be called before your adapter module.
    Regards
    Ivan

  • Error in code page mapping for source system while loading data from ECC

    HI Gurus,
    I am working on an implementation project. Our BI sandbox recently came up, and when I run my load from 0COMP_CODE_ATTR it throws the error "Error in code page mapping for source system" (this is my first load from ECC).
    The details tab shows that the data was sent from the source system, but the data is not reaching the PSA.
    Please let me know if there are any settings that need to be made.
    Many thanks in Advance
    Jagadeesh

    Hi V,
    Thanks for your quick response. I did that, but it didn't resolve the issue. The system ID I have is 3 characters (LEC), but only 2 characters are accepted there, so I clicked the button 'Propose system IDs' and it proposed LE; however, the issue is still there.
    Do we need to make any settings in LBWE?
    Thanks and Regards
    Jagadeesh

  • Error in code page mapping for Source System

    Hi All,
    We are Loading data into BI system from MDM System.
    It was loading fine, but yesterday we got the error "Error in code page mapping for Source System",
    message class RSDS_ACCESS, number 13.
    We already tried to search for IDocs in error or unprocessed IDocs, but there aren't any.
    Any pointers on this error would be helpful.
    Regards,
    Mayank

    Our SP is currently SAPKW70019; the SAP note mentioned above requires SP 13, which we are already beyond. Unfortunately we still face the same problem, almost every day.
    Any advice?

  • HTTP-Receiver: Code page conversion error from UTF-8 to ISO-8859-1

    Hello experts,
    In one of our interfaces we are using the payload manipulation of the HTTP receiver channel to change the payload code page from UTF-8 to ISO-8859-1, and from time to time we are facing the following error:
    "Code page conversion error UTF-8 from system code page to code page ISO-8859-1"
    I'm quite sure that this error occurs because of non-ISO-8859-1 characters in the processed message. And here comes my question:
    Is it possible to change the error behaviour of the code page converter so that the error is ignored?
    Perhaps the converter could replace the offending character with, e.g., '#'?
    Thank you in advance.
    Best regards,
    Thomas

    Hello.
    I'm not 100% sure if this will help, but it is good reading material on the subject:
    [How to Work with Character Encodings in Process Integration (NW7.0)|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42]
    The part about the XSLT / Java mapping might come in handy in your situation:
    you can check for problematic characters in the code.
    Good luck,
    Imanuel Rahamim.

  • Code Page Conversion Error

    Hi,
    I have a problem while downloading a file which is generated by a standard report program. The R/3 server runs on UNIX and the target system for the file download is Windows XP. When I try to download the file, an error is displayed: 'Individual characters could not be converted from code page 4102 to code page 1100'.
    Also, when I view the file contents using the display option, all the characters are non-English characters (>> >>>>>>>>>††† etc.).
    Could some one help?
    Thanks in advance,
    Sandeep Joseph

    Hi,
    I set the parameter DCP (Default Code Page) in the system, gave it the value 4102, and now it works fine. Can anyone tell me why it was going wrong before?
    Thanks,
    Sandeep

  • UTF8 character set conversion for the Chinese language

    Hi friends,
    I would like some basic explanation of the UTF8 feature: how does it help when converting data in the Chinese language?
    I would also like to know which characters UTF8 will not support when converting from Chinese.
    Thanks & Regards
    Ramya Nomula

    Not exactly sure what you are looking for, but on MetaLink there are numerous detailed papers on NLS character sets, conversions, etc.
    The bottom line is that traditional Chinese characters (since they are more complicated) can require up to 4 bytes to store in UTF-8 character sets such as AL32UTF8. Some Middle Eastern character sets also fall into this category.
    Do a google search on "utf8 al32utf8 difference", and you will get some good explanations.
    e.g., http://decipherinfosys.wordpress.com/2007/01/28/difference-between-utf8-and-al32utf8-character-sets-in-oracle/
    Recently, one of our clients had a question on the differences between these two character sets since they were in the process of making their application global. In an upcoming whitepaper, we will discuss in detail what it takes (from a RDBMS perspective) to address localization and globalization issues. As far as these two character sets go in Oracle, the only difference between AL32UTF8 and UTF8 character sets is that AL32UTF8 stores characters beyond U+FFFF as four bytes (exactly as Unicode defines UTF-8). Oracle’s “UTF8” stores these characters as a sequence of two UTF-16 surrogate characters encoded using UTF-8 (or six bytes per character). Besides this storage difference, another difference is better support for supplementary characters in AL32UTF8 character set.
    You may also consider posting your question on the Globalization Support forum, which pertains more to these types of questions.
    Globalization Support

  • Convert spool to PDF for Chinese characters

    Hi,
    I need to convert a spool to PDF for a Chinese font.
    The spool is created successfully and the contents are displayed properly. When I try to convert the spool to PDF using 'RSTXPDFT4', the PDF is generated successfully, but when I open it the contents are missing; it is just an empty white PDF page. This happens for the Chinese font; the same script works fine for the English font.
    Please suggest.
    Thanks
    Balaji

    Hi Balaji,
    I am having a similar issue with Simplified Chinese, Traditional Chinese and Thai fonts when saving spools to PDF. Have you found any more information on why the output shows up as a blank page?
    We are printing Purchase Orders, and the T&Cs come through fine because they are saved as picture files, but the smartform does not display at all.
    Any information would be appreciated!
    Thanks,
    Josh

  • Code Page Conversion from 4110 to 4103

    Hi,
    I'm getting a short dump on the execution of the READ DATASET statement. The short dump says a character was found that cannot be displayed in one of the two code pages.
    I have used the standard SAP program RSCP_CONVERT_FILE, which converts between code pages. Unfortunately this program also short dumps when I specify source 4110, target 4104 and the file name on the application server.
    I have tried the NON-UNICODE addition, but realised that this is not the proper way of doing it in a Unicode system.
    The problem is one special symbol " in the flat file. This symbol is not recognised by the program and shows up as # when I look at it during debugging.
    Please help me in this regard.
    Kindly do not send me the documentation on OPEN DATASET and READ DATASET; I have read it several times.
    Thanks,
    Sai

    Sorry for posting in the wrong forum
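    For illustration, a minimal sketch (file path assumed; the file is assumed to be UTF-8, code page 4110) of reading such a file while either replacing unconvertible characters or catching the conversion error instead of dumping:
    DATA: lv_file TYPE string VALUE '/tmp/inbound.txt',  " assumed path
          lv_line TYPE string.
    " Unconvertible characters are replaced by '#' instead of raising a short dump
    OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING UTF-8
         REPLACEMENT CHARACTER '#' IGNORING CONVERSION ERRORS.
    TRY.
        DO.
          READ DATASET lv_file INTO lv_line.
          IF sy-subrc <> 0.
            EXIT.
          ENDIF.
        ENDDO.
      CATCH cx_sy_conversion_codepage.
        " only reached if the conversion errors are not ignored above
    ENDTRY.
    CLOSE DATASET lv_file.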

  • Issue Searching for Chinese Characters

    I am creating PDFs from SQL Server Reporting Services. The data is stored in Chinese characters and it displays fine when the files are opened. Our problem is that we are unable to search for any Chinese characters: when we copy a Chinese character from the document and paste it into the search box, the characters are not found.
    When I inspect the Fonts it has the following :
    Calibri /regular/bold/Italic (Embedded Subset)
         Type:True Type
         Encoding:Ansi
    PMingLiU (Embedded Subset)
         Type:True Type
         Encoding:Ansi
    PMingLiU (Embedded Subset)
         Type:True Type(CID)
         Encoding:Identity-H
    Do I need to install different fonts on my server or clients to make the search box recognize the Chinese characters?
    I am using Reader v10.1.7 and I have installed both Chinese font packs.

    What is your operating system?
    I cannot reproduce this with the English Reader 11.0.3 on Windows 7. I open a random Chinese document (i.e. http://newyork.china-consulate.org/chn/lszj/P020110622119203610776.pdf), then search for some Chinese characters (e.g. 中国); they are displayed correctly in the search box, and the characters are also found in the document.
    Do you use the English or Chinese Reader version?

  • BW code page error for receiving system

    Hi friends,
    We are getting the following error in BW when we try to load data:
    "Could not find code page for receiving system"
    Our BW system is Unicode and the R/3 system is non-Unicode.
    We referred to OSS notes 784381 and 613389.
    We have checked WE21, WE22, the logical system and the RFC connections, and verified the following:
    Language "EN" is specified in SM59.
    Under Special Options -> RFC Bit Options we
    made sure that "Use Found Communication Code Page" has a check mark.
    We keep getting the following error...
    Could not find code page for receiving system
    Message no. E0266
    Diagnosis
    For the logical destination BRQCLNT500, you want to determine the code
    page in which the data is sent with RFC. However, this is not currently
    possible, and the IDoc cannot yet be dispatched.
    Procedure for System Administration
    Possible causes are:
    1. The entry no longer exists in the table of logical destinations.
    2. The target system could not be accessed.
    3. The logon language is not installed in the target system.
    Please help, and I will definitely reward points.
    Thanks

    Hi Brent,
    Try checking the S10010 source system: RSA1 -> Source Systems -> right-click 'Check'. Also check the connection and authorization for S10010 in SM59. You may need to restore the source system: right-click S10010 and choose 'Restore'. Hope this helps.
