Menu unicode / codepage

Hello,
I am working with LabVIEW 2011 and I need to build a multilingual application.
After some research I found how to display buttons, labels, etc. in Russian and other character sets (UseUnicode = True).
Reference page: https://decibel.ni.com/content/docs/DOC-10153
But I would also like to do the same with the application's "Run Time Menu".
The menu appears to be ASCII, and I cannot find a way to switch it to Unicode.
I tried converting my Unicode text to ASCII so that it could be displayed in the menu, but that does not work because the menu uses the PC's default codepage.
My question is:
Can the menu be forced to Unicode?
If not,
can the codepage used by the application be changed according to the selected language?

Hello, I work on applications in several languages (Russian, Chinese, French, English, Portuguese, ...) for both text and menus.
I had already replied to a post about "Chinese text". For Russian it is the same: you have to change the OS option "Language for non-Unicode programs".
Windows XP configuration for non-Unicode character support:
Start -> Settings -> Control Panel -> Regional and Language Options ->
Languages -> Supplemental language support -> Install files for East Asian languages
Advanced -> Language for non-Unicode programs -> Chinese (PRC)
Restart the computer.
http://forums.ni.com/t5/Discussions-au-sujet-de-NI/Texte-chinoix-sur-face-avant/m-p/1855663#M530
I do not use the LabVIEW "Unicode" ini key.
Luc Desruelle | View my profile | LabVIEW Code & blog
Co-author of the LabVIEW book "LabVIEW : Programmation et applications"
CLA: Certified LabVIEW Architect
CLD: Certified LabVIEW Developer

Similar Messages

  • Error converting XSTRING to STRING (unicode, codepage)

    Hi all,
    I have a problem converting data from an external file into SAP.
    The file is uploaded via an application created in Web Dynpro, where I use the upload functionality. This returns the file in XSTRING format, and I then use the following to convert it (where l_xstring is the file and l_string is how I want the file):
      DATA: l_string  TYPE string,
            l_xstring TYPE xstring,
            convt     TYPE REF TO cl_abap_conv_in_ce.

      convt = cl_abap_conv_in_ce=>create( input = l_xstring ).
      convt->read( IMPORTING data = l_string ).
    This worked perfectly - until I received a file containing Russian characters.
    The SAP system (BI) is Unicode, so this should be OK.
    When I try to run it, I get a CONVT_CODEPAGE runtime error (exception CX_SY_CONVERSION_CODEPAGE).
    Also, the following might be helpful:
    At the conversion of a text from codepage '4110' to codepage '4102':
    - a character was found that cannot be displayed in one of the two
    codepages;
    - or it was detected that this conversion is not supported
    The running ABAP program 'CL_ABAP_CONV_IN_CE============CP' had to be
    terminated as the conversion
    would have produced incorrect data.
    The number of characters that could not be displayed (and therefore not
    be converted), is 18141. If this number is 0, the second error case, as
    mentioned above, has occurred.
    I have tried setting the codepage parameter of the READ method, but without success.
    Anyone ??
    -Tonni

    Friend,
    Call the FM like below (using the variables from your post):
    CALL FUNCTION 'ECATT_CONV_XSTRING_TO_STRING'
      EXPORTING
        IM_XSTRING  = l_xstring
        IM_ENCODING = 'UTF-8'
      IMPORTING
        EX_STRING   = l_string.
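    Alternatively, the cl_abap_conv_in_ce approach from the question can be made tolerant instead of being replaced. A minimal sketch, assuming the source file is UTF-8, in which unmappable bytes become '#' rather than raising CX_SY_CONVERSION_CODEPAGE:
      DATA: l_string  TYPE string,
            l_xstring TYPE xstring,
            convt     TYPE REF TO cl_abap_conv_in_ce.

      " Explicit source encoding plus a replacement character: the
      " converter substitutes '#' for anything it cannot map instead
      " of terminating the program.
      convt = cl_abap_conv_in_ce=>create( encoding    = 'UTF-8'
                                          replacement = '#'
                                          ignore_cerr = abap_true
                                          input       = l_xstring ).
      convt->read( IMPORTING data = l_string ).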

  • Displaying texts in Chinese

    Hello,
    For an application, I would like to display the interface texts in French, English, and Chinese. French and English are no problem, but for Chinese I tried copy-pasting from Google Translate and LabVIEW only displays "???".
    Does anyone know a way to display Chinese characters?
    Regards
    Nathan

    Hi, I have already answered this topic on the forum. Personally I work with several languages (Russian, Chinese, Portuguese, French, ...) for text, menus, error handling, ... with no problems.
    I had replied to a post about "Chinese text" and another about "Russian".
    You have to change the OS option "Language for non-Unicode programs":
    here
    http://forums.ni.com/t5/Discussions-au-sujet-de-NI/Texte-chinoix-sur-face-avant/td-p/1855663
    http://forums.ni.com/t5/Discussions-au-sujet-de-NI/Menu-unicode-codepage/td-p/2688015
    Windows XP configuration for non-Unicode character support:
    Start -> Settings -> Control Panel -> Regional and Language Options ->
    Languages -> Supplemental language support -> Install files for East Asian languages
    Advanced -> Language for non-Unicode programs -> Chinese (PRC)
    Restart the computer.
    At the origin of today's computer character encoding is the ASCII standard (American Standard Code for Information Interchange), a numeric encoding of 128 signs. Obviously this reduced set, while sufficient for the usual characters of American English, cannot encode the specific graphemes of other European languages, or even of a single one of them.
    Once word-processing software spread around the world, the set had to be extended to 256 code points: extended ASCII, then ANSI.
    Later, operating systems started handling several different languages: a unique code was assigned to every character used in the various languages of the world, defining a single, universal character set - the Unicode standard. In this scheme a character is encoded as a U8, U16, or U32.
    Multi-byte and Unicode must not be confused. In Unicode a character is unique on any OS; in multi-byte a character has a value but is displayed according to the OS settings.
    In your case, there is a Simplified Chinese, written left to right. LabVIEW natively supports "multi-byte" characters, not Unicode (available as an option via an ini file entry since LV2011). It therefore interprets and displays Unicode characters according to the OS, and in particular the option "Regional and Language Options -> Language for non-Unicode programs".
    If you type Chinese on your keyboard (or copy-paste from Google Translate...), you can enter Chinese, and even build a polyglot application.
    After that you will also have to handle the display of errors, the ...
    The advantage of Unicode would be the ability to display Russian and Chinese in the same application, as Internet Explorer does.
    An example for Russian:
    Luc Desruelle | View my profile | LabVIEW Code & blog
    Co-author of the LabVIEW book "LabVIEW : Programmation et applications"
    CLA: Certified LabVIEW Architect
    CLD: Certified LabVIEW Developer

  • LSMW: Codepage conversion error with a Unicode data file

    Hi all,
    I am currently developing an LSMW upload program which has to use a Unicode data file. The underlying/target system is NOT a Unicode system. The data file also contains non-Latin2 characters.
    In the step "Specify Files", I have specified my Unicode data file and the codepage type "4110 - Unicode UTF-8".
    In the step "Read Data", I then get the runtime error "CONVT_CODEPAGE", exception "CX_SY_CONVERSION_CODEPAGE".
    I would expect all non-Unicode characters to be automatically transformed to "#", but the conversion program breaks. The character transformation to "#" would be fine.
    I am really wondering why I am able to specify the Unicode codepage type at first, but then the file cannot be converted correctly.
    What am I doing wrong, and what can I do to avoid the error?
    Thanks a lot in advance for helping me out...
    Regards,
    Klaus

    Hello,
    You need to convert the file to UTF-8 format. In Notepad you can choose this option when saving the file.
    Regards,
    Oscar.
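    If the file were read in custom ABAP rather than through LSMW, the "#" substitution Klaus expected can be requested explicitly. A minimal sketch (the file path is a placeholder):
      DATA: lv_file TYPE string VALUE '/tmp/upload.txt',  " placeholder path
            lv_line TYPE string.

      " Tolerant read: unmappable characters become '#' instead of
      " raising CX_SY_CONVERSION_CODEPAGE.
      OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING UTF-8
           REPLACEMENT CHARACTER '#'
           IGNORING CONVERSION ERRORS.
      DO.
        READ DATASET lv_file INTO lv_line.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        " ... process lv_line ...
      ENDDO.
      CLOSE DATASET lv_file.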

  • GUI_DOWNLOAD problems with CR+LF when transfering from unicode system

    Hi,
    I had successfully used FM GUI_DOWNLOAD in non-Unicode systems for years. Lately I faced the challenge of rewriting my code for a Unicode system. The configuration is:
    - SAP R/3 Unicode system;
    - data to be downloaded to the presentation server in a non-Unicode codepage (cp 9504).
    I have successfully used the GUI_DOWNLOAD parameter CODEPAGE, and the data is translated correctly when checking the local file, but for some reason CRLF is replaced with '#' (the default value of the REPLACEMENT parameter of this function) - meaning that at the end of each row I get '##' instead of CRLF.
    My question is: how can I force the correct behaviour of GUI_DOWNLOAD in order to get my output file at the presentation server with CR+LF?
    Any help would be highly appreciated.
    Many thanks in advance.
    Regards,
    Ivaylo Mutafchiev
    SAP/ABAP consultant
    VBS Ltd.
    P.S. In order to find some other way to fix my problem I am still playing with instantiating CL_ABAP_CONV_OBJ and its methods create & convert, but without success so far - the resulting strings are not as expected.

    Hi,
    In fact, I never placed CRLF in my lines before your suggestion. The rest was done by the FM GUI_DOWNLOAD. It works fine even when I use a Unicode file as output - meaning I get CRLF at the end of each record in MY OUTPUT FILE ONLY, not in my internal table - I never placed CR+LF in there.
    The problem occurs when I try to use GUI_DOWNLOAD with parameter CODEPAGE = '9504' (some non-Unicode codepage) while the original data (my internal table) is in Unicode. Then (in my opinion) this function does not translate the Unicode CR+LF into the non-Unicode one (if that is possible at all, I cannot be sure), and the result is '##' in the output file.
    I checked the value of CL_ABAP_CHAR_UTILITIES=>CR_LF by assigning it to a variable - and it is '##'.
    What should I put into this class attribute in order to get it working in this scenario? I have no idea...
    The attribute type is ABAP_CR_LF - which is char 2.
    What next?
    Thanks,
    Ivaylo
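    One workaround, sketched here under the assumption that the target codepage must stay 9504: convert each line yourself with cl_abap_conv_out_ce, append the CR+LF bytes (0D0A) explicitly, and download the result as binary so GUI_DOWNLOAD performs no conversion at all (gt_lines and the file name are placeholders):
      TYPES: ty_x255 TYPE x LENGTH 255.
      DATA: lv_line    TYPE string,
            lv_xline   TYPE xstring,
            lv_file    TYPE xstring,
            lv_crlf(2) TYPE x VALUE '0D0A',
            lt_bin     TYPE STANDARD TABLE OF ty_x255,
            lv_size    TYPE i,
            lo_conv    TYPE REF TO cl_abap_conv_out_ce.

      lo_conv = cl_abap_conv_out_ce=>create( encoding = '9504' ).

      " Convert each line to the target codepage and terminate it with
      " explicit CR+LF bytes so no later conversion can touch them.
      LOOP AT gt_lines INTO lv_line.          " gt_lines: placeholder table
        lo_conv->convert( EXPORTING data   = lv_line
                          IMPORTING buffer = lv_xline ).
        CONCATENATE lv_file lv_xline lv_crlf INTO lv_file IN BYTE MODE.
      ENDLOOP.

      CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
        EXPORTING
          buffer        = lv_file
        IMPORTING
          output_length = lv_size
        TABLES
          binary_tab    = lt_bin.

      " Binary download: GUI_DOWNLOAD writes the bytes untouched.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          filename     = 'C:\out.txt'         " placeholder path
          filetype     = 'BIN'
          bin_filesize = lv_size
        TABLES
          data_tab     = lt_bin.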

  • What is a codepage and how does it work?

    Like any application, DI needs to know which "codepage" to use when processing data. Codepages are character encoding tables. They are necessary because ultimately all software operates on sequences of bytes, not on the actual letters or numbers that you see on a computer screen. Operating systems and other software applications use codepages to map sequences of bytes to specific characters. Those characters can be single-byte characters, as in English or French, or multi-byte characters, as in Korean or Chinese.
    Do not assume each language has its own codepage. Codepages provide a mapping for a specific set of characters, and in the case of codepage MS1252, many languages such as English, French, German, and Swahili all use the same codepage. This is possible because they share many characters such as "a", "b", "c", etc. MS949, on the other hand, maps only to Korean.
    Some codepages, such as UTF-8, UTF-16 and UTF-32, map to a standard generally known as Unicode. This is very useful because Unicode is the character set which includes almost all known characters in the world today. When processing both multi-byte and single-byte data, a Unicode codepage is recommended, which in the case of DI means UTF-8.
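    Although this thread is about DI, the byte-to-character mapping is easy to illustrate in a few lines of ABAP (a hedged sketch; 4110 and 1100 are SAP's codepage numbers for UTF-8 and ISO-8859-1): the same two bytes decode to one character or two depending on the codepage applied.
      DATA: lv_bytes(2) TYPE x VALUE 'C3A9',   " the UTF-8 encoding of 'é'
            lv_text     TYPE string,
            lo_conv     TYPE REF TO cl_abap_conv_in_ce.

      " Decoded as UTF-8 (codepage 4110), the two bytes form one character: 'é'.
      lo_conv = cl_abap_conv_in_ce=>create( encoding = '4110'
                                            input    = lv_bytes ).
      lo_conv->read( IMPORTING data = lv_text ).
      WRITE: / lv_text.

      " Decoded as ISO-8859-1 (codepage 1100), the same bytes form two: 'Ã©'.
      lo_conv = cl_abap_conv_in_ce=>create( encoding = '1100'
                                            input    = lv_bytes ).
      lo_conv->read( IMPORTING data = lv_text ).
      WRITE: / lv_text.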

    How can I identify which codepage is installed on our OS?
    I have Windows XP on my machine.

  • File Transfer non-unicode - unicode via client

    Hello.
    I downloaded a binary file from a SAP 4.0 system to my client (Win2k) with OPEN DATASET (to read the file on the app server) and WS_DOWNLOAD (to save it to the client).
    Now I want to upload this file from my client to a 6.40 Unicode system. Therefore I do the following:
    - GUI_UPLOAD to get the file from the client into an internal table;
    - OPEN DATASET dsn FOR OUTPUT IN BINARY MODE to save the contents of the internal table to the file system of the app server.
    This works pretty well on non-Unicode systems but does not work properly on Unicode systems.
    Which options do I have to use? Anything with codepages?
    THX
    --MIKE

    Check out the OPEN DATASET - Mode - {TEXT MODE ENCODING {DEFAULT|UTF-8|NON-UNICODE}} option.
    The additions after ENCODING determine in which character representation the content of the file is handled.
    DEFAULT
    In a Unicode system, the designation DEFAULT corresponds to the designation UTF-8, and the designation NON-UNICODE in a non-Unicode system.
    UTF-8
    The characters in the file are handled according to the Unicode character representation UTF-8.
    NON-UNICODE
    In a non-Unicode system, the data is read or written without being converted. In a Unicode system, the characters in the file are handled according to the non-Unicode codepage that would be assigned to the current text environment according to the database table TCP0C at the time of reading or writing in a non-Unicode system.
    Check out the ABAP keyword documentation.
    Regards
    Raja
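    Since the original download was binary, one hedged alternative is to keep both hops binary so that no codepage conversion takes place at all (both file names are placeholders):
      TYPES: ty_x255 TYPE x LENGTH 255.
      DATA: lt_bin  TYPE STANDARD TABLE OF ty_x255,
            lw_bin  TYPE ty_x255,
            lv_len  TYPE i,
            lv_rest TYPE i,
            lv_dsn  TYPE string VALUE '/tmp/file.bin'.  " placeholder path

      " 1) Upload from the client with no character conversion at all.
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename   = 'C:\file.bin'                    " placeholder path
          filetype   = 'BIN'
        IMPORTING
          filelength = lv_len
        TABLES
          data_tab   = lt_bin.

      " 2) Write the bytes to the application server unchanged; the last
      "    row of the 255-byte table is usually only partially filled.
      OPEN DATASET lv_dsn FOR OUTPUT IN BINARY MODE.
      lv_rest = lv_len.
      LOOP AT lt_bin INTO lw_bin.
        IF lv_rest >= 255.
          TRANSFER lw_bin TO lv_dsn.
          lv_rest = lv_rest - 255.
        ELSE.
          TRANSFER lw_bin TO lv_dsn LENGTH lv_rest.
        ENDIF.
      ENDLOOP.
      CLOSE DATASET lv_dsn.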

  • How to write read dataset statement in unicode

    Hi All,
    I am writing a program using the open dataset concept.
    I am using the following code:
        PERFORM FILE_OPEN_INPUT USING P_P_IFIL.
        READ DATASET P_P_IFIL INTO V_WA.
        IF SY-SUBRC <> 0.
          V_ABORT = C_X.
          WRITE: / TEXT-108.
          PERFORM CLOSE_FILE USING P_P_IFIL.
        ELSE.
          V_HEADER_CT = V_HEADER_CT + 1.
        ENDIF.
    READ DATASET works fine in a non-Unicode system, but in a Unicode system it dumps.
    Can you please tell me how to write READ DATASET in Unicode?
    Very urgent.
    Regards
    Venu

    Hi Venu,
    This example deals with the opening and closing of files.
    Before Unicode conversion:
    data: begin of STRUC,
            F1 type c,
            F2 type p,
          end of STRUC,
          DSN(30) type c value 'TEMPFILE'.
    STRUC-F1 = 'X'.
    STRUC-F2 = 42.

    " Write data to file
    open dataset DSN in text mode.   " <- Unicode error
    transfer STRUC to DSN.
    close dataset DSN.

    " Read data from file
    clear STRUC.
    open dataset DSN in text mode.   " <- Unicode error
    read dataset DSN into STRUC.
    close dataset DSN.
    write: / STRUC-F1, STRUC-F2.
    This example program cannot be executed in Unicode for two reasons. Firstly, in Unicode programs, the file format must be specified more precisely for OPEN DATASET and, secondly, only purely character-type structures can still be written to text files.
    Depending on whether the old file format still has to be read or whether it is possible to store the data in a new format, there are various possible conversion variants, two of which are introduced here.
    After Unicode conversion
    Case 1: New textual storage in UTF-8 format
    data: begin of STRUC2,
            F1 type c,
            F2(20) type c,
          end of STRUC2.

    " Put data into text format
    move-corresponding STRUC to STRUC2.

    " Write data to file
    open dataset DSN in text mode for output encoding utf-8.
    transfer STRUC2 to DSN.
    close dataset DSN.

    " Read data from file
    clear STRUC.
    open dataset DSN in text mode for input encoding utf-8.
    read dataset DSN into STRUC2.
    close dataset DSN.
    move-corresponding STRUC2 to STRUC.
    write: / STRUC-F1, STRUC-F2.
    The textual storage in UTF-8 format ensures that the created files are platform-independent.
    After Unicode conversion
    Case 2: Old non-Unicode format must be retained

    " Write data to file
    open dataset DSN in legacy text mode for output.
    transfer STRUC to DSN.
    close dataset DSN.

    " Read from file
    clear STRUC.
    open dataset DSN in legacy text mode for input.
    read dataset DSN into STRUC.
    close dataset DSN.
    write: / STRUC-F1, STRUC-F2.
    Using the LEGACY TEXT MODE ensures that the data is stored and read in the old non-Unicode format. In this mode, it is also possible to read or write non-character-type structures. However, be aware that data loss and conversion errors can occur in Unicode systems if there are characters in the structure that cannot be represented in the non-Unicode codepage.
    Reward pts if found useful :)
    Regards
    Sathish

  • Detect codepage in text files - Help!

    Hi, I'm a newbie with this kind of problem.
    I wrote a class that reads an input .txt, transforms the content to another format and then writes another .txt file.
    Since the input file contains some characters with accents, they cause problems during the conversion, and the output file contains some "strange" chars.
    I expected that I was reading a file saved with the UTF-8 codepage. When I save it with the ANSI codepage instead, my class works fine.
    I can't understand why the class behaves this way, since UTF-8 is the "native" codepage in Java. Anyway, I'm searching for a way to detect the input file's codepage before reading it, or to convert it into a manageable file... maybe with code examples. I've searched other threads about this problem, with no results.
    I work on the Windows XP platform, using JDK 1.6 update 5.
    Thanks in advance, Maurizio

    malcolmmc wrote:
    (You can find out what codepage you're on - open a cmd window and type chcp).That's not right. That only tells you which codepage the command shell is using. On my machine, CHCP returns "cp437", but the system default encoding according to Java is "windows-1252". Here's how you get the system default encoding: System.out.println(java.nio.charset.Charset.defaultCharset()); By the way, "codepage", "encoding" and "charset" are all synonyms for our purposes. On the other hand, "ANSI" is not a standard term. You're better off using "ASCII" to refer to the first 128 characters that are common to virtually all Western encodings as well as UTF-8 and some others. If you're talking about one of the eight-bit (256-character) encodings, you should be specific: "windows-1252", "ISO-8859-1", "cp850", etc..
    There's another problem with using Notepad, besides the ambiguously-named "ANSI" codepage. If you choose to save a file in the (equally ambiguous) "Unicode" codepage, it gets saved as UTF-8 with a BOM (see link below). Unfortunately, Java's built-in UTF-8 charset doesn't recognize the UTF-8 BOM, so the first three bytes get decoded as garbage characters (which tends to cause XML parsers to barf).
    If you really want to know what charset your file is written in, I suggest you try reading it through Java, using specific charsets like "ASCII", "UTF-8", "ISO-8859-15" or your system default encoding. Also, display the file's contents in a JTextArea, to avoid having to deal with the command shell's codepage. Finally, if you have a choice as to which encoding to save your files in, choose UTF-8 unless you have a pressing reason to use something else. UTF-8 can handle any character known to the Unicode Consortium, but it's still reasonably compact, at least when used with Western scripts.
    re BOM: http://en.wikipedia.org/wiki/Byte_Order_Mark
    must read: http://www.joelonsoftware.com/articles/Unicode.html

  • Difference between IN LEGACY TEXT MODE & TEXT MODE ENCODING NON-UNICODE

    Hi,
    We're upgrading to ECC5, and the 'open dataset' command needs amending if the program is flagged for Unicode (which usually occurs in user/FM exits). Therefore in ECC5 this command is no longer valid:
    "open dataset DSN in text mode"
    We currently interface with systems that may not have Unicode enabled, yet we have not enabled Unicode in our own system just yet.
    So we think these two commands are the most appropriate for replacing the 'old' open dataset command:
    "open dataset DSN for input in TEXT MODE encoding NON-UNICODE"
    "open dataset DSN in LEGACY TEXT MODE for input"
    However, we're not really sure what the difference between these two commands is.
    Has anyone worked with these commands?
    Could you offer some help as to their differences and when each should be used?
    Many thanks!

    Hi Robert,
    Here is an excerpt from the SAP documentation.
    ... TEXT MODE ENCODING {DEFAULT|UTF-8|NON-UNICODE}
    Effect:
    The addition IN TEXT MODE opens the file as a text file. The addition ENCODING defines how the characters are represented in the text file. When writing in a text file, the content of a data object is converted to the representation entered after ENCODING, and transferred to the file. If the data type is character-type and flat, trailing blanks are cut off. In the data type string, trailing blanks are not cut off. The end-of-line marking of the relevant platform is applied to the transferred data by default. When reading from a text file, the content of the file is read until the next end-of-line marking, converted from the format specified after ENCODING into the current character format, and transferred to a data object.
    The end-of-line marking depends on the operating system of the application server. In the MS Windows operating systems, the markings "CRLF" and "LF" are possible, while under Unix, only "LF" is used. If, when using Windows, an existing file is opened without the TYPE addition (see os_addition), the first end-of-line marking is found and used for the whole file. If a new file is created without the TYPE addition, the content of the profile parameter abap/NTfmode is used. If the profile parameter is not set, "CRLF" is used. If a file with the TYPE addition is opened and a valid value is contained in attr, this value is used.
    In Unicode programs, only the content of character-type data objects can be transferred to text files and read from text files. The addition ENCODING must be specified in Unicode programs, and can only be omitted in non-Unicode programs.
    The additions after ENCODING determine in which character representation the content of the file is handled.
    DEFAULT
    In a Unicode system, the designation DEFAULT corresponds to the designation UTF-8, and the designation NON-UNICODE in a non-Unicode system.
    UTF-8
    The characters in the file are handled according to the Unicode character representation UTF-8.
    NON-UNICODE
    In a non-Unicode system, the data is read or written without being converted. In a Unicode system, the characters in the file are handled according to the non-Unicode codepage that would be assigned to the current text environment according to the database table TCP0C at the time of reading or writing in a non-Unicode system.
    If the addition ENCODING is not specified in non-Unicode programs, the addition NON-UNICODE is used implicitly.
    ... LEGACY TEXT MODE [{BIG|LITTLE} ENDIAN] [CODE PAGE cp]
    Effect:
    Opening a legacy file. The addition IN LEGACY TEXT MODE opens the file as a legacy text file. As with legacy binary files, the byte order and the codepage with which the content of the file should be handled can also be specified. The syntax and meaning of {BIG|LITTLE} ENDIAN and CODE PAGE cp are the same as for legacy binary files.
    In contrast to legacy binary files, the trailing blanks in a legacy file are cut off when writing character-type flat data objects to a legacy text file. As for a text file, an end-of-line marking is also applied to the transferred data. In contrast to text files opened with the addition IN TEXT MODE, Unicode programs do not check whether the data objects used for reading or writing are character-type. Furthermore, the LENGTH additions of the statements READ DATASET and TRANSFER count in bytes for legacy text files, and in the units of a character represented in memory for text files.
    Note:
    As with legacy binary files, text files that have been written in a non-Unicode system can be accessed in Unicode systems as legacy text files, and the content is converted accordingly.
    Example
    A file test.dat is created as a text file, filled with data, changed, and exported. As every TRANSFER statement applies end-of-line marking to written content, after the change, the content of the file has two lines. The first line contains "12ABCD". The second line contains "890". The character "7" has been overwritten by the end-of-line marking of the first line.
    DATA: file   TYPE string VALUE `test.dat`,
          result TYPE string.
    OPEN DATASET file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    TRANSFER `1234567890` TO file.
    CLOSE DATASET file.
    OPEN DATASET file FOR UPDATE IN TEXT MODE ENCODING DEFAULT
                                 AT POSITION 2.
    TRANSFER `ABCD` TO file.
    CLOSE DATASET file.
    OPEN DATASET file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    WHILE sy-subrc = 0.
      READ DATASET file INTO result.
      WRITE / result.
    ENDWHILE.
    CLOSE DATASET file.
    Regards,
    Ravi

  • Open data set / code page

    Hello ,
    I am sending a file to a print server and I am encountering problems with special characters.
    In the first version of the program (please see the code below):
    "OPEN DATASET g_filename FOR OUTPUT IN TEXT MODE ENCODING DEFAULT."
    the special characters from the German and French alphabets were NOT printed properly, and we got some nonsense results. Other "normal" characters like A, B, ... are printed without errors.
    To prevent this error I wrote a new line of code for the open dataset (below):
    "OPEN DATASET g_filename FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE '4110' IGNORING CONVERSION ERRORS."
    This worked only when FTP was NOT used; when FTP was used I got the following short dump.
    I am working in SAP release 6.0.
    Please advise.
    SHORT DUMP Message :
    What happened?
        The conversion of texts in code page '4102' to code page '4110' is not
        supported here.
        The current ABAP program 'SAPLZPRN_AUTO_LBL' had to be interrupted because
         incorrect
        data would have been created by the conversion.
    Error analysis
        An exception occurred that is explained in detail below.
        The exception, which is assigned to class 'CX_SY_CODEPAGE_CONVERTER_INIT', was
         not caught in
        procedure "Z_TRANSFER_FILE" "(FUNCTION)", nor was it propagated by a RAISING
         clause.
        Since the caller of the procedure could not have anticipated that the
        exception would occur, the current program is terminated.
        The reason for the exception is:
        Possibly, one of the codepages '4102' or '4110' - needed for the
        conversion - is unknown to the system. Another option is, that a Unicode
         codepage was specified for a file in LEGACY MODE, which is not allowed.
        Additional parameters for the codepage conversion (as, for example, the
         replacement character) might have invalid values. You can find further
        information under 'Internal notes'.
        If the problem occurred at opening, reading, or writing of a file, then
        the file name was '/te/mm/labels/0488_20091208_051317_EC008119_01_001.dd'.
         (Further information about this file: " X ")

    Hi,
    Also check the character set supported by the Printer. Printer configuration should also be checked on SAP side to determine character set and code page using SPAD.
    Regards,
    Nishad
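    Note that the dump text above names a likely cause: '4110' (UTF-8) is a Unicode codepage, and Unicode codepages are not allowed in LEGACY MODE. A hedged sketch of the equivalent TEXT MODE statement:
      " UTF-8 must be requested via TEXT MODE ENCODING rather than
      " LEGACY ... CODE PAGE (a sketch, untested against the print server).
      OPEN DATASET g_filename FOR OUTPUT IN TEXT MODE ENCODING UTF-8
           IGNORING CONVERSION ERRORS.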

  • About upgrade from 4.6c to ecc6.0

    hi expert:
    I get a short dump after executing the following statements:
    data: begin of STRUC,
            F1 type c,
            F2(10) type c,
          end of STRUC.
    data: dsn(30) type c.
    open dataset DSN in legacy text mode for input.
    read dataset DSN into STRUC.
    close dataset DSN.
    short dump :
    Description:
        A character set conversion is not possible.
    issue content:
        At the conversion of a text from codepage '8000' to codepage '4103':
        - a character was found that cannot be displayed in one of the two
        codepages;
        - or it was detected that this conversion is not supported
        The running ABAP program 'ZJB1080' had to be terminated as the conversion
        would have produced incorrect data.
        The number of characters that could not be displayed (and therefore not
        be converted), is 0. If this number is 0, the second error case, as
        mentioned above, has occurred.
    I read some material about the Unicode conversion and found the following information:
    The use of the LEGACY TEXT MODE ensures that the data in old non-Unicode format is stored and read. In this mode, it is also possible to read and write non-character-type structures. However, you have to take into account that loss of data and conversion errors might occur in real Unicode systems if the structure contains characters that cannot be displayed in the non-Unicode codepage.
    Could someone explain this to me or provide a solution? Thanks in advance.
    Kevin

    Read the documentation at OPEN DATASET - Mode (http://help.sap.com/abapdocu/en/ABAPOPEN_DATASET_MODE.htm) and The File Interface in Unicode Programs (http://help.sap.com/abapdocu/en/ABENUNICODE_DATASET.htm).
    NB: And please use a more precise subject; there are many possible problems behind "upgrade from 4.6c to ecc6.0".
    Regards

  • GUI_DOWNLOAD not writing BOM for UTF-8

    We are using the following code to produce an Excel spreadsheet, but the parameter WRITE_BOM is not working - SAP does not write a byte-order mark to the file. The result is that Excel does not open the file correctly. When we add the BOM manually in UltraEdit, the file opens correctly.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          filename              = 'C:\export.xls'
          filetype              = 'DAT'
          trunc_trailing_blanks = 'X'
          write_field_separator = 'X'
          codepage              = '4310'
          write_bom             = 'X'
        TABLES
          data_tab              = gt_textsi
        EXCEPTIONS

    Please check the function help:
    Short Text
         If set, writes a Unicode byte order mark
    Description
         If data is written in a Unicode codepage, at the top of the file the
         respective byte order mark (BOM) is included.
         Unicode - Little Endian codepage 4103, binary values 'FFFE'
         Unicode - Big Endian codepage 4102, binary values 'FEFF'
         UTF-8 codepage 4110, binary values 'EFBBBF'
         Note: Microsoft Excel supports Unicode data only if they have been
         written in Unicode - Little Endian format.
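    Based on that help text, a plausible fix (an assumption, not verified here) is to request one of the three BOM-capable codepages; for Excel that means UTF-16 Little Endian, SAP codepage 4103:
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          filename              = 'C:\export.xls'
          filetype              = 'DAT'
          trunc_trailing_blanks = 'X'
          write_field_separator = 'X'
          codepage              = '4103'   " UTF-16LE: BOM 'FFFE' gets written
          write_bom             = 'X'
        TABLES
          data_tab              = gt_textsi.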

  • ITS cannot transform %3D into = and can't get the Work Item ID

    Hi experts,
    We have an ECC 6.0 EHP3 with an EP 7 (NetWeaver 2004s SP18). We use the UWL.
    The issue is that when we try to open a task in the UWL that launches a transaction in the ECC, it does not open and shows this error:
    Transaction SWK1+WI_ID=000000005 is unknown
    This is because the parameter "DynamicParameter" passes the value "wi_id%3D000000005004". It cannot resolve "%3D" into "=", so it loses the last 3 characters and tries to open WI_ID=000000005 instead of WI_ID=000000005004.
    The URL it tries to open is this:
    http://asvesap002.forcendm.es:50100/irj/servlet/prt/portal/prteventname/navigate/prtroot/pcd!3aportal_content!2fZDiseno!2fZDesktop!2fcom.sap.portal.defaultDesktop!2fframeworkPages!2fcom.sap.portal.frameworkpage!2fcom.sap.portal.innerpage!2fcom.sap.portal.contentarea?NavigationTarget=ROLES%3A%2F%2Fportal_content%2Fevery_user%2Fgeneral%2Fuwl%2Fcom.sap.netweaver.bc.uwl.uwlSapLaunch&System=Memorias&TCode=swk1&UseSPO1=false&AutoStart=true&DynamicParameter=wi_id%3D000000005004&CurrentWindowId=WID1248779358537&NavMode=1
    If I copy this URL into a browser and change %3D to = (DynamicParameter=wi_id=000000005004), it works fine and opens the work item.
    Here is the code for the task in the UWL XML:
    In the iView "UWL - Launch SAP Transaction", com.sap.netweaver.bc.uwl.uwlSapLaunch, I set the property "Transaction Supports Unicode Codepages" to "NO", but the error is still thrown.
    Can anyone help me?
    Thanks in advance,
    Manuel

    Hello Manuel,
    Please see note 1360904. Thanks.
    Edgar

  • ITS cannot transform %3D into = and can't get the Work Item ID

    (Same question as the previous thread.)

    The note 1360904 solves the issue.
