Latin-2 characters converted to Latin-1 codepage

Hi all,
I am on XI 3.0 and I have an issue like this:
We received an ORDERS IDoc with some Latin-2 characters in the ship-to party address, but the XI IDoc adapter handles this IDoc as a Latin-1 codepage document.
My first question is how to check the IDoc adapter's codepage characteristics.
The next is how to solve this issue.
Thanks in advance

Hi,
    In the outbound interface, i.e. in the IDoc message interface, try changing the code page to a Latin-2-compatible one in the imported archive. It might help.
Thanks,
Rao.Mallikarjuna
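To see what actually goes wrong when a Latin-2 document is handled as Latin-1, here is a minimal byte-level sketch (in Python, outside the ABAP stack; the byte values are real, the scenario is illustrative):

```python
# Polish "ł" (U+0142) exists in ISO-8859-2 (Latin-2) as byte 0xB3.
# A receiver that decodes that byte as ISO-8859-1 (Latin-1) gets "³" instead.
latin2_bytes = "ł".encode("iso-8859-2")
print(latin2_bytes)                        # b'\xb3'
misread = latin2_bytes.decode("iso-8859-1")
print(misread)                             # ³
# Interpreting the bytes with the wrong codepage silently corrupts the address.
assert misread == "\u00b3"
```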

Similar Messages

  • Polish after Unicode conversion - Language from different codepage how-to

    Hi,
    we have just converted a sandbox system from ASCII (SAP codepage 1100) to Unicode (SAP ECC 6.0, Solaris 10 SPARC, Oracle 10.2.0.4, etc.).
    In ASCII, the TCPDB table contained codepage 1100; now it is empty - is that OK?
    Now we would like to install the Polish language... so I load it via SMLT from client 000, apply the delta support packages, run supplementation, and then add L to the instance profile - is that OK? ... Must I also install the Polish locale on Solaris? ... Any other work?
    Regards.

    > Now we would like to install the Polish language... so I load it via SMLT from client 000, apply the delta support packages, run supplementation, and then add L to the instance profile - is that OK?
    - execute program RSCPINST, add "PL" and activate the changes
    - open RZ10, edit your profile, "Basic maintenance" and select PL, save and restart instance
    - then add the language as described with SMLT (import, delta support package, supplementation and client distribution for your production client)
    > ... must I install also Polish locale into Solaris? ... any other work?
    No. You are now on Unicode; the necessary libraries are the libicu* libraries in your kernel directory - those are, if you like, the locales for the operating system. So nothing more is necessary.
    To display the Polish characters correctly in the SAP GUI you need to
    - install the Polish codepage on the frontend system
    - switch the default font from "Arial Monospace" to some other font that supports Latin-2 (such as Courier New)
    Markus

  • Special characters converted wrongly to upper case?

    Hello,
    In material descriptions, some characters such as µ are converted to 'M' instead of 'µ' when converted to upper case. This happens in SAP ECC Unicode, but it is OK in R/3.
    Is there any solution for this problem?
    Thanks

    MAKT-MAKTG: When our client searches for values by pressing F4 on a material, a material description containing 'μ' is displayed with 'M' in the text. This happens in ECC 6, not in R/3 (in R/3 the two texts are the same).
    The different behavior you're observing between what you call R/3 and ECC 6 is not due to the different releases, but to the fact that one system is Unicode-enabled and the other is not. I suspect that you are most likely logging on to the system with a language that's linked to code page 1100 (in the non-Unicode release). Code page 1100 is similar to the Latin-1/ISO-8859-1 code page, which is designed to have most characters for western languages, but does not contain all Greek letters (SAP would most likely use code page 1700 if you set a Greek locale).
    Again, it may sound silly, but you have to distinguish between the symbol µ (Unicode code point U+00B5) and the lower case Greek letter μ (U+03BC). SAP code page 1100 contains only the symbol µ (U+00B5), which represents the prefix micro, as for example in 1 µm = 1 micrometer = 0.000001 m. Neither the lower case Greek letter μ (U+03BC) nor its corresponding upper case letter exists in code page 1100.
    The Unicode standard defines the upper case Greek letter Μ (U+039C) as the upper case to use for both the symbol µ (U+00B5) and the lower case Greek letter μ (U+03BC); see for example the Unicode mapping chart for Greek (that's why you see a different behavior in ECC 6).
    I'm not sure why, but for some reason SAP found it worthwhile mentioning that they actually also convert the symbol µ (U+00B5) to Μ (U+039C), though that essentially just means following the Unicode standard. Anyhow, if you're interested in further details, check out OSS note 1078295 - Incomplete to upper cases in old code pages.
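The case mapping described above can be observed in any Unicode-aware runtime; a minimal Python sketch (illustrative, not SAP's implementation):

```python
import unicodedata

micro = "\u00b5"   # µ MICRO SIGN (the only "mu" available in Latin-1 / SAP 1100)
mu    = "\u03bc"   # μ GREEK SMALL LETTER MU
# Per the Unicode case mappings, both uppercase to U+039C GREEK CAPITAL LETTER MU:
assert micro.upper() == mu.upper() == "\u039c"
print(unicodedata.name(micro.upper()))   # GREEK CAPITAL LETTER MU
```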

  • Problem with characters Converting OTF Spool to PDF

    Hello All,
    I'm working on an ECC 6.0 system. I have a Z report which takes a spool number and mail ID as input.
    It checks whether the spool is of type OTF or ABAP list, and accordingly uses the function modules CONVERT_OTFSPOOLJOB_2_PDF and CONVERT_ABAPSPOOLJOB_2_PDF.
    Then it downloads the PDF internal table data into a file on the application server using the OPEN DATASET statement, as shown below.
      OPEN DATASET gv_dsn FOR OUTPUT IN BINARY MODE.
    *Download the file to application server
      LOOP AT gt_pdf_output.
        TRANSFER gt_pdf_output TO gv_dsn.
      ENDLOOP.
      CLOSE DATASET gv_dsn.
    and it will ZIP the PDF as shown below.
    * open the file from application server for zipping.
      OPEN DATASET gv_dsn FOR INPUT IN BINARY MODE.
      READ DATASET gv_dsn INTO lv_content.
      CLOSE DATASET gv_dsn.
      CREATE OBJECT go_zip.
      go_zip->add( name = gv_file content = lv_content ).
      gv_zip_content    = go_zip->save( ).
    * Convert the xstring content to binary
      CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
        EXPORTING
          buffer        = gv_zip_content
        IMPORTING
          output_length = gv_file_length
        TABLES
          binary_tab    = gt_data.
    After that, the ZIP file containing the PDF is sent as an attachment to the mail ID given as input.
    Now the problem is that some Czech characters do not come out properly when the attachment is opened after the mail is received. Can anyone tell me where the problem is and the solution?
    I'm getting the message "Cannot extract the embedded font 'CourierNewL2'. Some characters may not display or print correctly." while opening the PDF in the ZIP attachment.
    Thank you.
    Best Regards,
    Sasidhar Reddy Matli.

    Hi,
    also check the following links:
    Re: how call FM otf to pdf in a report
    Re: otf to pdf
    Re: Error while converting OTF into PDF in CONVERT_OTF FM
    Re: Convert OTF to PDF problem
    Edited by: krupa jani on Jul 15, 2009 12:58 PM

  • Characters Converted in URL

    Hello,
    I know that certain characters, when in a URL, must be converted. For example, a space must be converted to %20, a percent sign must be converted to %25, and a single quote must be converted to %27.
    I would like to obtain a list of these special characters. I have done searches, but I likely do not know the proper terms to search for. Can anyone help me find this information?
    Thank you,

    Hi, look up asciitable.com... there you will find all the ASCII special characters and their hexadecimal, octal, etc. equivalents. (The term you're looking for is "percent-encoding", also called URL encoding, defined in RFC 3986.)
    Hope this was what you were looking for.
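Most languages ship percent-encoding in their standard library; a minimal Python sketch showing the examples from the question:

```python
from urllib.parse import quote, unquote

# quote() percent-encodes everything outside the unreserved set
# (letters, digits, "_.-~") plus "/" by default.
assert quote(" ") == "%20"
assert quote("%") == "%25"
assert quote("'") == "%27"
print(quote("50% off 'sale'"))   # 50%25%20off%20%27sale%27
assert unquote("%20%25%27") == " %'"
```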

  • Convert XML-String from Codepage utf-16 to ISO-8859-1

    Hi to all experts,
    our system is now Unicode with codepage 4102 (UTF-16) and we do a Simple Transformation to create an XML string.
    before Unicode: xml_data = <?xml version="1.0" encoding="iso-8859-1"?>#<transactionRequest userID=" .......
    now with Unicode: xml_data = <?xml version="1.0" encoding="utf-16"?>#<transactionRequest userID=".......
    The xml_data is transferred to an external system via HTTPS communication directly from ABAP.
    The external system sends an error response:
    <?xml version="1.0" encoding="ISO-8859-1"?>#<transactionResponse>#    <transactionErrorResponse>#        <errorResponse>#            <errorCode>SYS-0001</errorCode>#            <errorDescription>java.lang.Exception: null[ #<?xml version="1.................
    Have you any idea?
    Thanks for your help!
    Peter
    Edited by: Peter Pforr on Sep 25, 2008 9:59 AM
    Edited by: Peter Pforr on Sep 25, 2008 10:14 AM

    Darshan,
    Did you get an answer to this question? We have the same requirement: to create an XML file in ISO-8859-1 format with Attributes set to "Y" and CDATA used for the data.
    Can you please let me know, if you still remember, how you achieved it?
    Satyen...
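Outside ABAP, the re-encoding itself is straightforward. This Python sketch (illustrative - the userID value and element are modeled on the fragment in the post) decodes the UTF-16 string, rewrites the XML declaration, and re-encodes as ISO-8859-1; in ABAP, cl_abap_conv_out_ce with a target codepage would play the same role. Characters outside Latin-1 would have to be escaped as numeric character references first:

```python
xml_utf16 = ('<?xml version="1.0" encoding="utf-16"?>'
             '<transactionRequest userID="42"/>').encode("utf-16")

# Decode, rewrite the declaration, and re-encode for a Latin-1-only receiver.
text = xml_utf16.decode("utf-16")
text = text.replace('encoding="utf-16"', 'encoding="iso-8859-1"', 1)
xml_latin1 = text.encode("iso-8859-1")
assert xml_latin1.startswith(b'<?xml version="1.0" encoding="iso-8859-1"?>')
print(xml_latin1)
```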

  • Latin-2 Polish installation for reference

    Hello group.
    We are running a Latin-1 codepage ERP2004.
    We want to install a Z3 correspondence language for defining SAPscripts for outgoing documents in the Polish language. So, to be clear, we don't plan to convert to Unicode at this point and do not intend to install full Polish language support.
    However, our developers would like to have a reference to a standard SAP system with the Polish language installed, in order to have a look at the Polish translations.
    I would hate to install an ERP system from scratch in Latin-2 or Unicode just for reference purposes.
    Any other ideas?
    Jo

    Hi Jo,
    nice to hear from you :-)
    We could provide you with a logon to our IDES systems in all languages... in ECC6 if that works for you, or in ECC5 if necessary - that would be available as well.
    Regards
    Volker Gueldenpfennig, consolut.gmbh
    http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

  • Convert smart quotes and other high ascii characters to HTML

    I'd like to set up Dreamweaver CS4 Mac to automatically convert smart quotes and other high ASCII characters (m-dashes, accent marks, etc.) pasted from MS Word into HTML code. Dreamweaver 8 used to do this by default, but I can't find a way to set up a similar auto-conversion in CS 4.  Is this possible?  If not, it really should be a preference option. I code a lot of HTML emails and it is very time consuming to convert every curly quote and dash.
    Thanks,
    Robert
    Digital Arts

    I too am having a related problem with Dreamweaver CS5 (running under Windows XP), having just upgraded from CS4 (which works fine for me) this week.
    In my case, I like to convert to typographic quotes etc. in my text editor, where I can use macros I've written to speed the conversion process. So my preferred method is to key in typographic letters & symbols by hand (using ALT + ASCII key codes typed in on the numeric keypad) in my text editor, and then I copy and paste my *plain* ASCII text (no formatting other than line feeds & carriage returns) into DW's DESIGN view. DW displays my high-ASCII characters just fine in DESIGN view, and writes the proper HTML code for the character into the source code (which is where I mostly work in DW).
    I've been doing it this way for years (first with GoLive, and then with DW CS4) and never encountered any problems until this week, when I upgraded to DW CS5.
    But the problem I'm having may be somewhat different than what others have complained of here.
    In my case, some high-ASCII (above 128) characters convert to HTML just fine, while others do not.
    E.g., en and em dashes in my cut-and-paste text show as such in DESIGN mode, and the right entries
        &ndash;
        &mdash;
    turn up in the source code. Same is true for the ampersand
        &amp;
    and the copyright symbol
        &copy;
    and for such foreign letters as the e with acute accent (ALT+0233)
        &eacute;
    What does NOT display or code correctly are the typographic quotes. E.g., when I paste in (or special paste; it doesn't seem to make any difference which I use for this) text with typographic double quotes (ALT+0147 for open quote mark and ALT+0148 for close quote mark), which should appear in source code as
        &ldquo;[...]&rdquo;
    DW strips out the ASCII encoding, displaying the inch marks in DESIGN mode, and putting this
        &quot;[...]&quot;
    in my source code.
    The typographic apostrophe (ALT+0146) is treated differently still. The text I copy & paste into DW should appear as
        [...]&rsquo;[...]
    in the source code, but instead I get the foot mark (both in DESIGN and CODE views):
    I've tried adjusting the various DW settings for "encoding"
        MODIFY > PAGE PROPERTIES > TITLE/ENCODING > Encoding:
    and for fonts
        EDIT > PREFERENCES > FONTS
    but switching from "Unicode (UTF-8)" to "Western European" hasn't solved the problem (probably because in my case many of the higher ASCII characters convert just fine). So I don't think it's the encoding scheme I use that's the problem.
    Whatever the problem is, it's caused me enough headaches and time lost troubleshooting that I'm planning to revert to CS4 as soon as I post this.
    Deborah
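The conversion Dreamweaver 8 used to perform can be approximated with a simple translation table; a minimal Python sketch covering an illustrative subset of Word's typographic characters (extend the table as needed):

```python
# Map of common "high ASCII" Word characters to their HTML entities.
ENTITIES = {
    "\u201c": "&ldquo;",  "\u201d": "&rdquo;",   # curly double quotes
    "\u2018": "&lsquo;",  "\u2019": "&rsquo;",   # curly single quotes / apostrophe
    "\u2013": "&ndash;",  "\u2014": "&mdash;",   # en and em dashes
    "\u00a9": "&copy;",   "\u00e9": "&eacute;",  # copyright, e-acute
}

def to_entities(text: str) -> str:
    """Replace mapped typographic characters with HTML entities."""
    return "".join(ENTITIES.get(ch, ch) for ch in text)

print(to_entities("\u201cHello\u201d \u2014 it\u2019s done"))
# &ldquo;Hello&rdquo; &mdash; it&rsquo;s done
```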

  • SPUMG with CHARACTERS?

    Hi Gurus
    After running SPUMG we get thousands of two-character vocabulary entries.
    For example:
    þ—
    ©*
    ¹£
    »––
    These seem to be just random combinations of non-alphanumeric characters.
    We have 3 codepages to choose from for the language assignment.
    My question is: what code page should I assign if I only see non-alphanumeric characters?
    Kind Regards
    Lawrence Brown

    Hi Lawrence,
    looks like characters from a non-Latin code page such as Asian, Russian or Greek...
    If you have such a code page, please check with native speakers (logging on with the according language) whether these are valid words.
    If you do not have such code pages, or the byte combination does not make sense in any of your three code pages (when logging on with the correct languages), then these could be words from code pages you do not have in the system.
    One typical case where this occurs is ECC 6.0 when you do not use Russian. Please have a look at note 1275317.
    If you are not able to assign the words, then I would recommend leaving them empty in the vocabulary and checking in the reprocess scan which table entries caused the problem.
    However, it is always possible to repair the entries with SUMG after the Unicode conversion.
    Best regards,
    Nils Buerckel
    SAP AG
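To see why such vocabulary entries look like random symbols, it helps to decode the same byte pair under several candidate codepages; a Python sketch with a hypothetical byte pair (not one of the actual entries above):

```python
# The same two bytes mean different things in different single-byte codepages.
mystery = b"\xde\xe6"   # hypothetical SPUMG vocabulary entry
for cp in ("iso-8859-1", "iso-8859-5", "cp1251"):
    print(cp, "->", mystery.decode(cp))
# Under Latin-1 the pair is symbol salad; under a Cyrillic codepage it is
# ordinary letters - which is why native speakers must judge the entries.
```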

  • AppleScript for converting UTF-8 (styled) to ASCII

    Does anyone know of an AppleScript built to change text encoding from UTF-8 to standard 7-bit ASCII?
    I found something called TEC_OSAX 1.3.3 on the MacScripter forum but was unable to get it to work (I think because it is a Classic application). Any help on this would be appreciated as well (I can't open the readme file).
    I have a standard UTF-8 XML file that I want to prepare for upload to a web site, but I need all the extended UTF-8 characters converted into standard ASCII characters.
    I have enough AppleScript experience to do very basic operations, but not enough to build something more complex like this.
    Any help would be appreciated.
    Thanks
    Jesse
    Mac book pro   Mac OS X (10.4.2)  
    I-Mac G5   Mac OS X (10.4.2)  

    > I have a standard UTF-8 XML file that I want to prepare for upload to a web site, but I need all the extended UTF-8 characters converted into standard ASCII characters.
    There's no way to convert "extended" UTF-8 into ASCII, since the latter doesn't contain the required characters - unless you are talking about converting them into NCRs like &#1234;. If that's the idea, there's an app called UnicodeChecker which can do it.
    Sometimes UTF-8 can be converted to ISO-8859-1. Is that what's required?
    Why exactly do you need to do this? Is your server one of those few which are (mis)configured to only support Latin-1? If so, this can usually be fixed by other means, like an .htaccess file in your web space.
    If you are just talking about one or a few files, you can simply open with TextEdit set to UTF-8 and then save in the new encoding.
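The NCR approach mentioned above is available in scripting languages beyond AppleScript; a minimal Python sketch:

```python
text = "café — 50µ"
# Pure ASCII cannot represent é, —, or µ; the closest lossless option is
# numeric character references, which XML/HTML consumers understand.
ncr = text.encode("ascii", errors="xmlcharrefreplace")
print(ncr)   # b'caf&#233; &#8212; 50&#181;'
assert ncr.decode("ascii") == "caf&#233; &#8212; 50&#181;"
```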

  • Convert binary to base64

    I have a special problem. I have a BSP for uploading files from Windows XP clients. The file should be transferred to a .NET web service where it is written to disc (1:1, meaning the binary is written to disc as-is). The transfer of the file to the web service must be in Base64.
    My problem is that the web service writes a file with exactly the same size as the file I uploaded, but I cannot read the file correctly. Only text files (.csv, .txt, ...) work correctly. I compared the uploaded file with the written file and there are some characters which could not be converted in ABAP (hex 23 instead of the correct char). So I think I have a problem with the codepages.
    I use a Unicode WAS 6.20 with UTF-8.
    Please see my code:
    Layout *
    <%@page language="abap"%>
    <%@extension name="htmlb" prefix="htmlb"%>
    <%@ extension name="xhtmlb" prefix="xhtmlb" %>
    <htmlb:content design="design2003">
      <htmlb:page title = "Testupload">
            <htmlb:form method= "post"
                        encodingType = "multipart/form-data"
                        id = "FRM_Pflege">
              <htmlb:fileUpload id   = "and"
                                size = "40" />
              <htmlb:button id      = "file_upload_and"/>
        </htmlb:form>
      </htmlb:page>
    </htmlb:content>
    oninputprocessing   *
    here you see an extract from my code.
    the myfile64 will be transfered to the web service
      CASE event->server_event.
        WHEN 'file_upload_and'.
          TRY.
              fileupload ?= cl_htmlb_manager=>get_data(
                                    request = runtime->server->request
                                    name    = 'fileUpload'
                                    id      = 'and').
              IF fileupload IS NOT INITIAL.
                name         = fileupload->file_name.
                content      = fileupload->file_content.
                length       = fileupload->file_length.
                content_type = fileupload->file_content_type.
              ENDIF.
          conv = cl_abap_conv_in_ce=>create(
                   encoding = '1160'
                   endian   = 'L'
                   input    = fileupload->file_content ).
              conv->read( IMPORTING data = input_string len = len ).
              CREATE OBJECT obj.
              CALL METHOD obj->encode_base64
                EXPORTING
                  unencoded = input_string
                RECEIVING
                  encoded   = myfile64.
              CLEAR wf_string .
    the string myfile64 will be used for sending to the webservice

    Hi Peter,
    ah, OK - the file upload returns the content as XSTRING, but the XSTRING-to-string conversion method you are using converts via a codepage - I doubt that this works with binary files.
    There is a function module that converts XSTRINGs to Base64:
    SCMS_BASE64_ENCODE_STR
    Can you try this?
    Stefan.
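The key point - Base64-encode the raw bytes, never a codepage-decoded string - can be sketched in Python (SCMS_BASE64_ENCODE_STR plays the equivalent role in ABAP):

```python
import base64

# Binary content (the equivalent of an ABAP XSTRING) must be Base64-encoded
# directly as bytes; decoding it through a codepage first corrupts any byte
# sequence that is not valid text in that codepage.
binary = bytes(range(256))               # arbitrary binary payload
b64 = base64.b64encode(binary)
assert base64.b64decode(b64) == binary   # lossless round trip
print(b64[:24])
```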

  • Problem after converting Quark to indesign

    Hi Forum,
    this is a question related to my previous one.
    After the Quark file is converted to InDesign, I have problems with special characters converted to outlines and overprint applied.
    I tried to remove it in many ways (as I requested in my earlier post, for contentType.unassigned)...
    How can I solve this?

    > have some old Quark files in 4.01 that I need to access
    InDesign can open Quark 4.01 files.
    There will be some text reflow, but they do open.

  • Limit jtextfield to a specific codepage

    Hi
    In my application some text is edited, saved as XML and sent to a device that does not understand Unicode. Therefore I need a way to make sure that only characters included in a specific codepage are entered.
    I'm using the Xerces XMLSerializer class to serialize the XML document, and I set the encoding on the OutputFormat object.
    Best regards
    Mads

    Could you use Character.UnicodeBlock to test whether each character is a member of the character blocks you can support?
    http://java.sun.com/j2se/1.5.0/docs/api/index.html?java/lang/Character.UnicodeBlock.html
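An alternative to block testing is to check whether each character is representable in the target codepage at all; a Python sketch of that check (in Swing, the same test could run inside a DocumentFilter before accepting input):

```python
def fits_codepage(text: str, codepage: str = "iso-8859-2") -> bool:
    """True if every character in text is representable in the codepage."""
    try:
        text.encode(codepage)
        return True
    except UnicodeEncodeError:
        return False

assert fits_codepage("łódź")           # Polish text, fine in Latin-2
assert not fits_codepage("日本語")      # CJK, not representable in Latin-2
```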

  • Unicode Passwords

    Hi,
    I wonder how Acrobat handles a password that contains Unicode characters when generating an encryption key. That is, what byte representation (i.e. encoding of the password string) is used before padding the password in Algorithm 3.2 (PDF Reference 1.7)?
    In another thread (http://forums.adobe.com/message/2235717#2235717) it was stated that only characters that fall into PDFDocEncoding (basically ISO-8859-1) are allowed. However, I found that in Acrobat I can enter non-ISO-8859-1 characters for my password (and successfully reopen the file with Acrobat Reader on another machine using this password). However, I cannot open the file with other PDF readers (evince, PDF-XChange Viewer) that work just fine with non-Unicode passwords.
    Is there a document that explains these details? Are there any differences between the different PDF versions?
    Thanks,
    Michael

    Sorry, I forgot to mention the details... So, yes, I am talking about passwords for revisions < 5, and about the PDF Reference 1.7 - Extension Level 3 documentation. As said before, reverse engineering has shown that Unicode passwords are basically converted to windows-125x codepages on Windows machines, and to iso-8859-x codepages on Unix machines (which btw means that a PDF document encrypted on Windows with a mechanism < rev 5 and with a Unicode password cannot be opened on a Unix machine using the same password).
    Anyway, this seems to be only half of the truth, as some characters are not encoded using these codepages, but rather replaced by simpler Latin characters. I would, for example, expect Unicode character 0x011F ("ğ") to map to 0xF0 according to the iso-8859-9 table (see http://en.wikipedia.org/wiki/ISO_8859-9). However, this letter seems to be simply mapped to the plain Latin letter "g" (i.e. byte value 0x67). So, my question is: which other characters are not encoded according to the corresponding code table, but mapped to a simple letter (with a standard ASCII code, as in the "g" example above)?
    Thanks,
    Michael
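The ğ → g behavior looks like a decompose-and-strip-diacritics fallback. That this is what Acrobat does is an assumption, not documented behavior, but the general technique can be sketched in Python:

```python
import unicodedata

def ascii_fallback(text: str) -> str:
    # Decompose each character, then drop combining marks:
    # "ğ" (U+011F) -> "g" + U+0306 (combining breve) -> "g"
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

assert ascii_fallback("\u011f") == "g"
assert ascii_fallback("é") == "e"
print(ascii_fallback("ğüş"))   # gus
```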

  • Problem with display of Chinese, Japanese, and Korean business partner name

    Hi All,
    I'm trying to understand how to get GTS to properly display the names of the business partners that are in Chinese, Japanese, or Korean.
    I have set my SAP GUI localization as follows:
    - Select Customize Local Layout, then Options, then the I18N tab, and check the Activate Multibyte Function box
    - Select Customize Local Layout, then Character Set, and check Unicode and Simplified Chinese
    In the source R/3 system for these partners, if I set my localization to the right language, the business partner's name and address would be displayed correctly in the C/J/K character sets when I select the appropriate version in International Version of the address.
    However, in GTS these names/addresses would just appear as nonsensical characters (like "·ÉÀûÆÖÕÕÃ÷µç×Ó£¨ÉϺ££©ÓÐÏÞ¹«Ë¾").
    For example, if I do a lookup of a known Chinese BP by number, in the search results window the Description field of that BP is just garbage, but if I put the cursor over the Description field, the correct Chinese display (matching what is in R/3) appears for as long as the cursor is over that field. This indicates that the data was transferred correctly from R/3 to GTS, but somehow the display does not default to the right character set.
    If I select that record, the resulting detailed display also shows garbage as the Name and Address. However, here putting the cursor over the fields does not bring up the correct character displays, and I have found no way to make them appear.
    Does any one know of a way to get the displays to show correctly in GTS?
    And can anyone explain why putting the cursor over the Description (in the BP search results window) would let the correct display pop up?
    Greatly appreciate any insight anyone can provide.
    Thanks,
    Rex

    Hi Rex - As per message - this looks to be an issue for BC-I18-UNI and not the GTS application:
    For general information purposes - last response from BC-I18-UNI development via oss message:
    "However, from reading the message, I believe I understood the reason for the described effect. I assume GGT is a non-Unicode system? If yes, this is my idea:
    The data stored in the BP tables, e.g. for number 2555442, have been entered manually, probably within a session using English as the logon language. In this case the data are stored with a Latin-1 codepage. This means the stored data are actually Latin-1 data, not Chinese. By switching the font on the front-end side to Chinese, the characters appear as correct Chinese, although they are still Latin-1.
    It is possible that different GUI components show different effects in this context, as the internal font-switching mechanisms behave a bit differently in different components; e.g. control-based components differ from normal Dynpro-based fields.
    To have correct Chinese data, you need to log on with language Chinese. This holds for all non-Unicode systems. On Unicode systems, the logon language is arbitrary."
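The garbage string quoted above is consistent with GBK-encoded bytes being displayed as Latin-1; this Python sketch reproduces the first characters of the BP name (assuming the name starts with 飞利浦, "Philips"):

```python
# Chinese text encoded in GBK but decoded as Latin-1 yields exactly the
# kind of garbage quoted in the post above.
chinese = "飞利浦"        # "Philips" - assumed start of the BP name
mojibake = chinese.encode("gbk").decode("iso-8859-1")
print(mojibake)           # ·ÉÀûÆÖ
assert mojibake == "·ÉÀûÆÖ"
```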
