Truncated record in OPEN DATASET ENCODING NON-UNICODE

Hi,
I have to read a Unicode-created file into a non-Unicode SAP system, release 4.7.
When I do the OPEN DATASET with ENCODING UTF-8 I get a CONVT_CODEPAGE dump. That seems odd, because my system is non-Unicode. I don't want to use the IGNORING CONVERSION ERRORS addition, since the output would be corrupted.
But when I use ENCODING NON-UNICODE or ENCODING DEFAULT, READ DATASET mysteriously truncates the record it reads from the real 401 characters down to 361 characters. The target variable is a string.
I can see the full records in AL11.
Any ideas?
Thanks,
Pablo.

Hi,
Try using:
  OPEN DATASET filename FOR INPUT IN TEXT MODE ENCODING DEFAULT
       IGNORING CONVERSION ERRORS.
Since the records display fine in AL11, the statement above should let you read them.
Hope this helps!
Regards,
Punit
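If IGNORING CONVERSION ERRORS is not acceptable, another option is to read the file byte-wise and do the UTF-8 conversion yourself. This is only a minimal sketch (the file name is illustrative, and the availability of class CL_ABAP_CONV_IN_CE and its parameters on a 4.7/6.20 kernel should be verified); unconvertible characters become a visible '#' instead of causing a dump or being silently dropped:
" Sketch: read the whole file as raw bytes, then convert from UTF-8,
" replacing characters that do not exist in the system code page by '#'.
DATA: lv_file  TYPE string VALUE '/tmp/unicode_input.txt',  "illustrative path
      lv_xdata TYPE xstring,
      lv_text  TYPE string,
      lo_conv  TYPE REF TO cl_abap_conv_in_ce.

OPEN DATASET lv_file FOR INPUT IN BINARY MODE.
READ DATASET lv_file INTO lv_xdata.   "reads the complete remaining file content
CLOSE DATASET lv_file.

lo_conv = cl_abap_conv_in_ce=>create( encoding    = 'UTF-8'
                                      replacement = '#'
                                      ignore_cerr = 'X' ).
lo_conv->convert( EXPORTING input = lv_xdata
                  IMPORTING data  = lv_text ).
" lv_text now holds the whole file; split it at the line breaks yourself,
" e.g. at cl_abap_char_utilities=>cr_lf.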

Similar Messages

  • OPEN DATASET file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE

    Hi There,
    I have a similar issue. I am able to write Chinese characters to the application server using OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING DEFAULT or OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING UTF-8. But when I save that file to my presentation server manually, all the Chinese characters show up as junk.
    When I use OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE, I get a runtime error, and when I use OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE IGNORING CONVERSION ERRORS, there is no error but the application server output itself shows junk characters.
    Could you please suggest me what you have done?
    Regards,
    Chaitanya A

    Hi,
       Use this:
      OPEN DATASET File_path FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
           WITH SMART LINEFEED.
    it will definitely work.
    Regards,
    Manesh. R
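    If the file also has to display correctly after copying it to a Windows PC, another variant worth trying is writing UTF-8 with a byte-order mark, so that Notepad/Excel can detect the encoding. Only a sketch (the path and variable are placeholders, not from the original post):
    " Sketch: write the Chinese text as UTF-8 and prepend a byte-order mark.
    DATA: datei   TYPE string VALUE '/tmp/chinese_out.txt',  "illustrative path
          lv_line TYPE string.                               "the Chinese text to write
    OPEN DATASET datei FOR OUTPUT IN TEXT MODE
         ENCODING UTF-8 WITH BYTE-ORDER MARK.
    TRANSFER lv_line TO datei.
    CLOSE DATASET datei.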

  • Difference between IN LEGACY TEXT MODE & TEXT MODE ENCODING NON-UNICODE

    Hi,
    We're upgrading to ECC5 and the 'open dataset' command needs amending if the program is flagged for Unicode (which usually occurs in user/FM exits). Therefore, in ECC5 this command is no longer valid:
    "open dataset DSN in text mode"
    We currently interface with systems that may not have Unicode enabled, and we have not enabled Unicode in our own system just yet.
    So we think these two commands are the most appropriate replacements for the 'old' open dataset command:
    "open dataset DSN for input in TEXT MODE encoding NON-UNICODE"
    "open dataset DSN in LEGACY TEXT MODE for input"
    However, we're not really sure what the difference between these two commands is.
    Has anyone worked with these commands?
    Could you offer some help as to their differences and when each should be used?
    Many thanks!

    Hi Robert,
       Here is an excerpt from sap documentation.
    ... TEXT MODE ENCODING {DEFAULT|UTF-8|NON-UNICODE}
    Effect:
    The addition IN TEXT MODE opens the file as a text file. The addition ENCODING defines how the characters are represented in the text file. When writing in a text file, the content of a data object is converted to the representation entered after ENCODING, and transferred to the file. If the data type is character-type and flat, trailing blanks are cut off. In the data type string, trailing blanks are not cut off. The end-of-line marking of the relevant platform is applied to the transferred data by default. When reading from a text file, the content of the file is read until the next end-of-line marking, converted from the format specified after ENCODING into the current character format, and transferred to a data object.
    The end-of-line marking depends on the operating system of the application server. In the MS Windows operating systems, the markings "CRLF" and "LF" are possible, while under Unix, only "LF" is used. If, when using Windows, an existing file is opened without the TYPE addition (see os_addition), the first end-of-line marking is found and used for the whole file. If a new file is created without the TYPE addition, the content of the profile parameter abap/NTfmode is used. If the profile parameter is not set, "CRLF" is used. If a file with the TYPE addition is opened and a valid value is contained in attr, this value is used.
    In Unicode programs, only the content of character-type data objects can be transferred to text files and read from text files. The addition ENCODING must be specified in Unicode programs, and can only be omitted in non-Unicode programs.
    The additions after ENCODING determine in which character representation the content of the file is handled.
    DEFAULT
    In a Unicode system, the designation DEFAULT corresponds to UTF-8; in a non-Unicode system, it corresponds to NON-UNICODE.
    UTF-8
    The characters in the file are handled according to the Unicode character representation UTF-8.
    NON-UNICODE
    In a non-Unicode system, the data is read or written without being converted. In a Unicode system, the characters in the file are handled according to the non-Unicode code page that would be assigned to the current text environment, according to database table TCP0C, at the time of reading or writing in a non-Unicode system.
    If the addition ENCODING is not specified in non-Unicode programs, the addition NON-UNICODE is used implicitly.
    ... LEGACY TEXT MODE [{BIG|LITTLE} ENDIAN] [CODE PAGE cp]
    Effect:
    Opens a legacy text file. The addition IN LEGACY TEXT MODE opens the file as a legacy text file. As with legacy binary files, the byte order and the code page with which the content of the file should be handled can also be specified. The syntax and meaning of {BIG|LITTLE} ENDIAN and CODE PAGE cp are the same as for legacy binary files.
    In contrast to legacy binary files, trailing blanks are cut off when writing character-type flat data objects to a legacy text file. As for a text file, an end-of-line marking is also applied to the transferred data. In contrast to text files opened with the addition IN TEXT MODE, Unicode programs do not check whether the data objects used for reading or writing are character-type. Furthermore, the LENGTH additions of the statements READ DATASET and TRANSFER count in bytes for legacy text files, whereas for text files they count in the units of a character as represented in memory.
    Note:
    As with legacy binary files, text files that have been written in a non-Unicode system can be accessed in Unicode systems as legacy text files, and the content is converted accordingly.
    Example
    A file test.dat is created as a text file, filled with data, changed, and then read back. As every TRANSFER statement applies an end-of-line marking to the written content, after the change the file has two lines. The first line contains "12ABCD". The second line contains "890". The character "7" has been overwritten by the end-of-line marking of the first line.
    DATA: file   TYPE string VALUE `test.dat`,
          result TYPE string.
    OPEN DATASET file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    TRANSFER `1234567890` TO file.
    CLOSE DATASET file.
    OPEN DATASET file FOR UPDATE IN TEXT MODE ENCODING DEFAULT
                                 AT POSITION 2.
    TRANSFER `ABCD` TO file.
    CLOSE DATASET file.
    OPEN DATASET file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    WHILE sy-subrc = 0.
      READ DATASET file INTO result.
      WRITE / result.
    ENDWHILE.
    CLOSE DATASET file.
    Regards,
    Ravi
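    To make the difference concrete, here is a small sketch of the two replacement candidates from the question (the file name and code page are only examples):
    " Sketch: the two ways to replace the old statement OPEN DATASET dsn IN TEXT MODE
    " in a Unicode-checked program.
    DATA dsn TYPE string VALUE '/tmp/legacy_interface.txt'.  "illustrative
    " Variant 1: text mode with an explicit non-Unicode encoding;
    " character-type checks apply and LENGTH counts characters.
    OPEN DATASET dsn FOR INPUT IN TEXT MODE ENCODING NON-UNICODE.
    CLOSE DATASET dsn.
    " Variant 2: legacy text mode, optionally with a fixed code page;
    " no character-type check and LENGTH counts bytes.
    OPEN DATASET dsn FOR INPUT IN LEGACY TEXT MODE CODE PAGE '1100'.
    CLOSE DATASET dsn.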

  • Open Dataset non-unicode & default

    Currently I am using OPEN DATASET with ENCODING DEFAULT and it hits a code page error. I found out that the file contains characters that are not UTF-8, so when I changed the OPEN DATASET to NON-UNICODE the code page error went away. I am wondering whether there is any side effect of changing from DEFAULT to NON-UNICODE.

    OPEN DATASET p_afile FOR INPUT IN TEXT MODE ENCODING DEFAULT IGNORING CONVERSION ERRORS.
    Add IGNORING CONVERSION ERRORS to your program and keep the encoding as Unicode (DEFAULT).
    Hope it will help you.
    Regards,
    sinagam.
    Edited by: Venkata Pavan Sinagam on Sep 23, 2011 5:50 PM
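    If you would rather not suppress the errors for the whole file, an alternative is to catch the conversion error explicitly and react to it (here the loop simply stops at the first unconvertible record). Only a sketch; the file name is illustrative and the records are assumed to be read into a string:
    " Sketch: trap the code page error instead of letting the program dump.
    DATA: lv_file TYPE string VALUE '/tmp/input.txt',  "illustrative
          lv_rec  TYPE string.
    OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    DO.
      TRY.
          READ DATASET lv_file INTO lv_rec.
        CATCH cx_sy_conversion_codepage.
          WRITE: / 'Record', sy-index, 'could not be converted - stopping'.
          EXIT.
      ENDTRY.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
    ENDDO.
    CLOSE DATASET lv_file.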

  • Unable to Open unix file in UNICODE system which created NON-UNICODE system

    We are unable to open, in a Unicode system, a Unix file that was created in a non-Unicode system.
    We have two SAP systems, both ECC 6.0, but System 1 is non-Unicode and System 2 is a Unicode system.
    There is a common Unix directory/folder for both systems.
    Our requirement is to create one file in the common Unix folder and write data to it from System 1.
    In System 2 the same file is then opened in appending mode to write further data.
    The file is created in System 1 with the statement below:
    OPEN DATASET g_unix_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
    Now I have to append data from System 2 to the same file.
    I have tried the statements below in System 2 to open the file, but sy-subrc comes back as 8:
    1) OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING UTF-8.
    2) OPEN DATASET g_unix_file FOR APPENDING IN LEGACY TEXT MODE CODE PAGE cdp IGNORING CONVERSION ERRORS.
    3) OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING DEFAULT.
    4) OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING NON-UNICODE.
    I have tried all the possibilities given in the F1 help for OPEN DATASET, but there is still a problem opening the file in appending as well as output mode. However, the file opens successfully in input mode (read).
    Please advise how to resolve this issue.
    Thanks.

    The message captured is 'Permission denied'. The program is triggered with the system user ID PPID.
    How can we check the security/access rights of that user ID?
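    To see exactly what the operating system complains about, the MESSAGE addition of OPEN DATASET helps; a minimal sketch (the path is only illustrative):
    " Sketch: capture the OS error text so 'Permission denied' (or whatever
    " the real reason is) shows up directly in the program or job log.
    DATA: g_unix_file TYPE string VALUE '/common/exchange/file.txt',  "illustrative
          lv_oserr    TYPE string.
    OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING UTF-8
         MESSAGE lv_oserr.
    IF sy-subrc <> 0.
      WRITE: / 'OPEN DATASET failed:', lv_oserr.
    ENDIF.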

  • Open dataset twice once for input and once for output in unicode system

    Hi All,
    In a program
    I use OPEN DATASET to read data from a file:
    OPEN DATASET cmp_file FOR INPUT IN TEXT MODE
          ENCODING NON-UNICODE.
    Then I close the file.
    Later in the same program
    I use OPEN DATASET again to transfer data:
    OPEN DATASET cmp_file FOR OUTPUT IN TEXT MODE
          ENCODING NON-UNICODE.
    But this time I get sy-subrc = 8.
    The file cannot be opened and the subsequent TRANSFER leads to a runtime error.
    Note: I am using a Unicode system.
    The same program runs fine in a non-Unicode system.
    So, in a Unicode system, if the file already contains data, is it that
    1. I need to delete the data before opening it, or
    2. I must open the file in APPENDING mode?
    Kindly suggest.

    Hi,
    If you have write permission at OS level, you still need to check your dataset rights using AUTHORITY_CHECK_DATASET; this validates your rights against the S_DATASET authorization object.
    Example:
    TYPE-POOLS sabc.
    CALL FUNCTION 'AUTHORITY_CHECK_DATASET'
      EXPORTING
        program          = 'ZDATASET'
        activity         = sabc_act_read
        filename         = '/tmp/sapv01'
      EXCEPTIONS
        no_authority     = 1
        activity_unknown = 2.
    See type pool SABC to find out which activities are available.
    Hope this helps.
    Regards

  • Open dataset (UNICODE) for english en polish characters

    Hi,
    I have a problem in my multi-language project, where I need to handle both English and Polish.
    I have a table (let's call it TABLE) on which I manage the translations. When I am logged on in English I see squares instead of some of the Polish characters; when I am logged on in Polish I see the right characters. I don't think that in itself is the problem, because if I copy/paste the squares into Word, the correct characters come back.
    My real problem comes afterwards.
    I have a structure with the column names and an internal table with the data.
    I use the function module DDIF_TABL_GET on my table TABLE to get the column names in the requested language.
    I build a string from that, with some carriage returns.
    (Here, in English, I again get squares, but still the right characters if I paste them into Word.)
    Now I would like to write this string to a file using the right encoding.
    I have tried many combinations of OPEN DATASET and ENCODING.
    I have a file defined in SAP (transaction FILE).
    (Here, in English, I again get squares, but still the right characters if I paste them into Word.)
    If I download this file to my computer, the characters are wrong.
    Could you help me?
    My (not working) code:
      OPEN DATASET lv_filename FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
      TRANSFER u_contenu TO lv_filename.
      CLOSE DATASET lv_filename.

    Hi,
       Use this:
      OPEN DATASET File_path FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
           WITH SMART LINEFEED.
    it will definitely work.
    Regards,
    Manesh. R
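    Another option worth trying: keep the server file in UTF-8 and download it to the PC with an explicit code page and a byte-order mark, so that Word/Excel do not re-interpret the characters. This is only a sketch (the table, file name and text are placeholders, and the write_bom parameter depends on your GUI/release):
    " Sketch: download the UTF-8 content to the frontend with code page 4110
    " (UTF-8) and a byte-order mark.
    DATA: lt_lines TYPE STANDARD TABLE OF string,
          lv_line  TYPE string VALUE 'Zażółć gęślą jaźń'.  "illustrative Polish text
    APPEND lv_line TO lt_lines.
    CALL METHOD cl_gui_frontend_services=>gui_download
      EXPORTING
        filename  = 'C:\temp\translation.txt'
        codepage  = '4110'
        write_bom = 'X'
      CHANGING
        data_tab  = lt_lines
      EXCEPTIONS
        OTHERS    = 1.
    IF sy-subrc <> 0.
      WRITE / 'Download failed'.
    ENDIF.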

  • OPEN DATASET...  in a Unicode system

    I'm discussing with a developer @ SAP about the correct use of OPEN DATASET in a Unicode system. I'm not sure I'm correct with my opinion so maybe someone could shed a light on it.
    The source of discussion is the (SAP standard) program RSQUEU01. This program is used to download data from the TemSE; in our case we use it to produce a file that is to be sent to auditors (component FI-AIS).
    Our system is a little-endian Unicode system (code page 4103). If we download the created data using that program, we get a dump with "CONVT_CODEPAGE" because a character could not be converted from 4103 to 1100.
    The proposed correction by the developer is changing
    OPEN DATASET EXP_FILENAME FOR OUTPUT IN legacy BINARY MODE.
    to
    OPEN DATASET exp_filename                              
    FOR OUTPUT IN LEGACY BINARY MODE                     
    IGNORING CONVERSION ERRORS.
    I think, that correction is wrong. Since we have 12 languages in the system including some Asian the output might get corrupted.
    When I asked him about that I was told use an application server with a "correct codepage" then - I'm not sure what that means since I can't connect an ASCII application server to a Unicode system.
    I guess the statement should be
    OPEN DATASET EXP_FILENAME FOR OUTPUT IN BINARY MODE ENCODING DEFAULT.
    This makes sure that no data is cut (like doublebyte) and makes sure, the appropriate codepage (LE/BE) is used.
    Are my assumptions right?
    Markus
    (OSS 323320/2010)

    Hi Markus,
    Let's first clarify the difference ways for writing files:
    - BINARY MODE: Means that we essentially dump a sequence of bytes, which isn't necessarily related to any code page (and characters). I.e. if I'd want to save for example an executable program, the individual bytes have no meaning when interpreted as characters (unless we look at strings stored in the program). Note that legacy binary mode actually allows you to specify a code page though, but in general the recommendation is not to use the legacy option.
    - TEXT MODE: Here we have text information that has to be interpreted using a specific code page; thus usually the additional parameter ENCODING should be given, which specifies which code page is used.
    Now, let's clear up a small typo in Ajay's response:
    Yes you are right if you want to have all double byte characters too then you need to use ENCODING DEFAULT which would use 4103 in Unicode system.
    That is incorrect. In a Unicode system, encoding default (http://help.sap.com/abapdocu_70/en/ABAPOPEN_DATASET_ENCODING.htm) corresponds to UTF-8, not UTF-16.
    Back to your problem. Your suggestion doesn't work, because you cannot specify encoding default for a binary output (the legacy binary mode allows you to specify a code page, but that's misleading and I wouldn't use any legacy mode). So when you try to use the syntax you proposed, you'd get a syntax error.
    Generally the recommendation is for Unicode enabled applications to use UTF-8 files with byte order mark, i.e. something like
    open dataset EXP_FILE in text mode encoding utf-8 with byte-order mark.
    However, the real question is what your external audit application expects and it sounds as if it's not Unicode enabled...
    Enough blabber, here's what I'd do: since you're having issues with an audit-related standard SAP program, I'd post the question in the relevant forum - other people must have run into that problem. Also, I checked OSS, but couldn't make much sense out of the few notes I found (and nothing seemed relevant). Check what the expected input format is in the external audit system; possibly post a message to OSS.
    Cheers, harald
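    To illustrate the two modes Harald describes, here is a sketch with made-up names (this is not the RSQUEU01 coding):
    " Sketch: binary mode transfers raw bytes without any code page handling;
    " text mode converts characters into the code page named after ENCODING.
    DATA: exp_filename TYPE string VALUE '/tmp/audit_export.dat',  "illustrative
          lv_bytes     TYPE xstring,
          lv_text      TYPE string VALUE 'example line'.
    lv_bytes = '414243'.   "raw bytes, here 41 42 43 = 'ABC' in ASCII
    " Raw bytes - nothing is converted.
    OPEN DATASET exp_filename FOR OUTPUT IN BINARY MODE.
    TRANSFER lv_bytes TO exp_filename.
    CLOSE DATASET exp_filename.
    " Character data - converted to UTF-8; the BOM helps receiving programs.
    OPEN DATASET exp_filename FOR OUTPUT IN TEXT MODE
         ENCODING UTF-8 WITH BYTE-ORDER MARK.
    TRANSFER lv_text TO exp_filename.
    CLOSE DATASET exp_filename.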

  • What is the programming (ABAP) difference between Unicode and non Unicode?

    What is the programming(ABAP) difference between Unicode and non Unicode?
    Edited by: NIV on Apr 12, 2010 1:29 PM

    Hi
    The difference between programming for Unicode and non-Unicode is that you have to make some adjustments to your "Z" programs so that their statements comply with the Unicode standard.
    In the past, developments in SAP used multiple systems to encode the characters of different alphabets, for example ASCII, EBCDIC, or double-byte code pages.
    These coding systems mostly use 1 byte per character, which can encode up to 256 characters. However, other alphabets such as Japanese or Chinese use a much larger number of characters. That is why those systems use double-byte code pages, which use 2 bytes per character.
    In order to unify the different alphabets, it was decided to implement a single coding system that uses 2 bytes per character regardless of the language concerned. That system is called Unicode.
    Unicode is also the official way to implement ISO/IEC 10646 and is supported in many operating systems and all modern browsers.
    You can verify whether a program has been adjusted by running transaction UCCHECK. Additionally, the syntax check reports the issues if the "Unicode checks active" attribute is set in the program.
    The main adjustments/replacements are, for example:
    ASSIGN TEXT+H-SY-INDEX TO <F1>.  ->  ASSIGN TEXT+H-SY-INDEX(*) TO <F1>.
    DATA INIT(50) VALUE '/'.  ->  DATA INIT(1) VALUE '/'.
    DESCRIBE FIELD text LENGTH lengh2.  ->  DESCRIBE FIELD text LENGTH lengh2 IN CHARACTER MODE.
    t_zsmy_demreg_v1 = record_tab.  ->  MOVE-CORRESPONDING record_tab TO t_zsmy_demreg_v1.
    escape_trick = hot3.  ->  escape_trick-x1 = hot3.
    itab_txt TYPE wt  ->  ITAB_TXT TYPE TABLE OF TEXTPOOL
    DATA: string3(3) TYPE x VALUE '3'  ->  DATA: string3(6) TYPE c VALUE '3'
    OPEN DATASET file_name IN TEXT MODE.  ->  OPEN DATASET file_name FOR INPUT IN TEXT MODE ENCODING NON-UNICODE.
      or  OPEN DATASET file_name FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    TRANSLATE record FROM CODE PAGE a_codepage.  ->  TRANSLATE record USING a_codepage.
    CALL FUNCTION 'DOWNLOAD'  ->  CALL METHOD cl_gui_frontend_services=>gui_download
    CALL FUNCTION 'WS_DOWNLOAD'  ->  CALL METHOD cl_gui_frontend_services=>gui_download
    CALL FUNCTION 'UPLOAD'  ->  CALL METHOD cl_gui_frontend_services=>gui_upload
    CALL FUNCTION 'WS_UPLOAD'  ->  CALL METHOD cl_gui_frontend_services=>gui_upload
    PERFORM APPEND_XFEBRE USING HEAD+2.  ->  PERFORM APPEND_XFEBRE USING HEAD+2(98).
    Best Regars
    Fabio Rodriguez
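    As a small illustration of why these adjustments matter, here is a sketch (the values in the comments assume a UTF-16 based Unicode system):
    " Sketch: in a Unicode system one character no longer equals one byte,
    " so DESCRIBE FIELD must state the unit explicitly.
    DATA: text      TYPE c LENGTH 10,
          len_chars TYPE i,
          len_bytes TYPE i.
    DESCRIBE FIELD text LENGTH len_chars IN CHARACTER MODE.  "10
    DESCRIBE FIELD text LENGTH len_bytes IN BYTE MODE.       "20 on a UTF-16 system
    WRITE: / len_chars, len_bytes.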

  • Err with scheduling an abap program using open dataset

    Issue: I have an ABAP program which uses "open dataset ... for input ..." to read a file.
    - When running it manually, I receive the message "dataset_not_open".
    - When scheduling it, I receive the same message.
    I am attempting to run the ABAP program as part of a process chain (i.e. as a scheduled background job) in BI.
    The ABAP performs the following functions:
    1) reads a file on the server
    2) removes the delimiter and renames the file
    3) rewrites the file onto the server
    Initially I used WS_UPLOAD for reading and WS_DOWNLOAD for writing the file.
    - Both functions worked fine when run manually, but failed in the background (as part of the process chain).
    - Note 7925 states that WS_UPLOAD/WS_DOWNLOAD cannot be used in background jobs.
    - So I switched to "open dataset".
    Any suggestions as to why the "open dataset" does not work are greatly appreciated.
    B.A.

    Thank you for all the responses. Here is more info about the error message:
    sy-subrc = 8
    'invalid argument'
    I looked up the invalid argument in note 99155; it is due to "The destination file is no longer available during repeated file access." So the following steps were taken:
    - the file was regenerated, and
    - the file was placed on the server to be read.
    I have the following code:
    OPEN DATASET FILENAME FOR OUTPUT IN TEXT MODE ENCODING DEFAULT
                          MESSAGE D_MSG_TEXT.
    I have also tried the following:
       OPEN DATASET d1 FOR INPUT IN TEXT MODE ENCODING DEFAULT.
       OPEN DATASET d1 FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE.
       OPEN DATASET d1 FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
    None of them worked. System -> Status shows the system is non-Unicode.
    Thanks again for any suggestions.
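    For the read / remove-delimiter / rewrite steps themselves, something along these lines runs in a background job without any GUI (only a sketch: the paths, the ';' delimiter and ENCODING DEFAULT are assumptions, not taken from your program):
    " Sketch: server-side read, delimiter removal and rewrite.
    DATA: lv_in   TYPE string VALUE '/usr/sap/interfaces/in/data.txt',      "illustrative
          lv_out  TYPE string VALUE '/usr/sap/interfaces/in/data_new.txt',  "illustrative
          lv_msg  TYPE string,
          lv_line TYPE string.
    OPEN DATASET lv_in FOR INPUT IN TEXT MODE ENCODING DEFAULT MESSAGE lv_msg.
    IF sy-subrc <> 0.
      WRITE: / 'Open failed:', lv_msg.
    ELSE.
      OPEN DATASET lv_out FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
      DO.
        READ DATASET lv_in INTO lv_line.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        REPLACE ALL OCCURRENCES OF ';' IN lv_line WITH ``.  "strip the delimiter
        TRANSFER lv_line TO lv_out.
      ENDDO.
      CLOSE DATASET lv_in.
      CLOSE DATASET lv_out.
    ENDIF.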

  • File transfer Open dataset CSV file Problem

    Hi Experts,
    I have an issue transferring Korean characters to a .CSV file using OPEN DATASET.
    DATA: c_file(200) TYPE c VALUE 'INTERFACES\In\test8.CSV'.
    I have tried:
    OPEN DATASET c_file FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE '4103'.
    OPEN DATASET c_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE.
    OPEN DATASET c_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    Nothing is working.
    But the code below works for downloading to the presentation server. How can the same result be achieved when writing the file to the application server?
    CALL METHOD cl_gui_frontend_services=>gui_download
          EXPORTING
            filename                = 'D:/test123.xls'
            filetype                = 'ASC'
            write_field_separator   = 'X'
            dat_mode                = 'X'
            codepage                = '4103'
            write_bom               = 'X'
          CHANGING
            data_tab                = t_tab
          EXCEPTIONS
            file_write_error        = 1
            no_batch                = 2
            gui_refuse_filetransfer = 3
            invalid_type            = 4
            no_authority            = 5
            unknown_error           = 6
            header_not_allowed      = 7
            separator_not_allowed   = 8
            filesize_not_allowed    = 9
            header_too_long         = 10
            dp_error_create         = 11
            dp_error_send           = 12
            dp_error_write          = 13
            unknown_dp_error        = 14
            access_denied           = 15
            dp_out_of_memory        = 16
            disk_full               = 17
            dp_timeout              = 18
            file_not_found          = 19
            dataprovider_exception  = 20
            control_flush_error     = 21
            not_supported_by_gui    = 22
            error_no_gui            = 23
            OTHERS                  = 24.

    Hi,
    I would recommend to use OPEN DATASET ... ENCODING UTF-8 ...
    If your excel version is unable to open this format, you can convert from 4110 to 4103 with report RSCP_CONVERT_FILE.
    Please also have a look at
    File upload: Special character
    Best regards,
    Nils Buerckel
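    For writing on the application server directly, a sketch along these lines keeps the Korean characters intact (the structure, field names, content and path are made up for the example):
    " Sketch: build each CSV line and write it as UTF-8 with a byte-order mark,
    " which can represent Korean characters and is recognised by Excel on the PC.
    TYPES: BEGIN OF ty_row,
             field1 TYPE string,
             field2 TYPE string,
           END OF ty_row.
    DATA: lt_rows TYPE STANDARD TABLE OF ty_row,
          ls_row  TYPE ty_row,
          lv_file TYPE string VALUE '/interfaces/in/test8.csv',  "illustrative
          lv_line TYPE string.
    ls_row-field1 = '테스트'.   "illustrative Korean content
    ls_row-field2 = 'value'.
    APPEND ls_row TO lt_rows.
    OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE
         ENCODING UTF-8 WITH BYTE-ORDER MARK.
    LOOP AT lt_rows INTO ls_row.
      CONCATENATE ls_row-field1 ls_row-field2 INTO lv_line SEPARATED BY ','.
      TRANSFER lv_line TO lv_file.
    ENDLOOP.
    CLOSE DATASET lv_file.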

  • Open Dataset fails with "Invalid Argument"

    From an ABAP program I can't write to a share folder on a windows 2000 box.  The share permissions are set to Everyone Full control.  I can write to a share on a windows 2003 box, but not windows 2000.  Here is my code:
    data: filename type string.
    data: open_message type string.
    filename = 'servername\edi\test.txt'.
    OPEN DATASET filename FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
         MESSAGE open_message.
    write: /1 open_message.
    I noticed that the kernel has the following info:
    ICU Version                 2.6.1 Unicode Version 4.0
    I was thinking there was a problem writing to a win 2000 box with an SAP Unicode Kernel.
    Anyone have a clue?
    Thank you,

    Hi,
    The shared folder must be accessible to the OS user of your R/3 installation, because when you run the program this user is used at the OS level.
    To check this, log on to the R/3 operating system and try to access the share path. It should be accessible without providing any logon credentials.
    Regards,
    Arun

  • Open DataSet problem

    Hello experts,
    I want to download an Excel file from the Unix server; the file includes some Chinese characters.
    The program is:
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    *& Report  YTEST_17
    REPORT  ytest_17.
    *& Report  ZUPLOADTAB                                                  *
    *& Example of Uploading tab delimited file                             *
    *REPORT  zuploadtab                    .
    PARAMETERS: p_infile  LIKE rlgrap-filename
                            OBLIGATORY DEFAULT  '/usr/sap/report/BS_template.xls'.
    *PARAMETERS: p_infile  type string.
    *DATA: ld_file LIKE rlgrap-filename.
    DATA: ld_file TYPE string.
    *Internal tabe to store upload data
    TYPES: BEGIN OF t_record,
        name1 LIKE pa0002-vorna,
        name2 LIKE pa0002-name2,
        age   TYPE i,
        END OF t_record.
    DATA: it_record TYPE STANDARD TABLE OF t_record INITIAL SIZE 0,
          wa_record TYPE t_record.
    *Text version of data table
    TYPES: BEGIN OF t_uploadtxt,
      name1(10) TYPE c,
      name2(15) TYPE c,
      age(5)  TYPE c,
    END OF t_uploadtxt.
    DATA: wa_uploadtxt TYPE t_uploadtxt,
          wa_upload    TYPE t_record.
    *String value to data in initially.
    DATA: wa_string(255) TYPE c.
    *constants: con_tab(2) TYPE C VALUE '09'.
    *If you have Unicode check active in program attributes then you will
    *need to declare constants as follows:
    *class cl_abap_char_utilities definition load.
    CONSTANTS:
        con_tab  TYPE c VALUE cl_abap_char_utilities=>horizontal_tab.
    *START-OF-SELECTION
    START-OF-SELECTION.
      ld_file = p_infile.
      OPEN DATASET ld_file FOR INPUT IN TEXT MODE encoding default.
      IF sy-subrc NE 0.
      ELSE.
        DO.
          CLEAR: wa_string, wa_uploadtxt.
          READ DATASET ld_file INTO wa_string.
          IF sy-subrc NE 0.
            EXIT.
          ELSE.
            SPLIT wa_string AT con_tab INTO wa_uploadtxt-name1
                                            wa_uploadtxt-name2
                                            wa_uploadtxt-age.
            MOVE-CORRESPONDING wa_uploadtxt TO wa_upload.
            APPEND wa_upload TO it_record.
          ENDIF.
        ENDDO.
        CLOSE DATASET ld_file.
      ENDIF.
    *END-OF-SELECTION
    END-OF-SELECTION.
    *!! Text data is now contained within the internal table IT_RECORD
    *Display report data for illustration purposes
      LOOP AT it_record INTO wa_record.
        WRITE:/     sy-vline,
               (10) wa_record-name1, sy-vline,
               (10) wa_record-name2, sy-vline,
               (10) wa_record-age, sy-vline.
      ENDLOOP.
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    but a short dump happened; would you please suggest a solution?
    ShrtText
        You cannot convert the character set.
    What happened?
        While a text was being converted from code page '4110' to '4102', one of
        the following occurred:
        - a character was discovered that could not be represented in one of
        the two code pages;
        - the system established that this conversion is not supported.
        The running ABAP program, 'YTEST_17' had to be terminated, since the
        conversion could cause incorrect data to be generated.
        683 characters could not be represented (and thus could not be converted).
        If 683 = 0, a second or a different error has occurred.
    thank you
    Kevin
    any solution, please..........
    Message was edited by:
            Kevin Gao

    Hi,
    Try setting the text environment before opening the file, and open it with ENCODING NON-UNICODE. Only these two lines change; the rest of the program stays exactly as you posted it:
    SET LOCALE LANGUAGE 'JA'. "specify the language; 'JA' is for Japanese
    OPEN DATASET ld_file FOR INPUT IN TEXT MODE ENCODING NON-UNICODE.

  • OPEN DATASET in ECC6.0

    Hi Guys,
    We are upgrading from 4.6C to ECC 6.0, and a lot of our programs give a Unicode-compliance error on the OPEN DATASET statement. Even though we are moving to a Unicode system, the systems we talk to are not Unicode-compliant yet, so we do not want to read/write files in Unicode format yet. After a lot of research I am still confused between the following two statements. Which one should we use?
    OPEN DATASET O_DSN FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
    OR
    OPEN DATASET O_DSN FOR OUTPUT IN LEGACY TEXT MODE.
    Does anyone know if they are any different or both serve the same purpose?
    We basically want to retain the same functionality as was in 4.6c when it comes to read/write files...
    Please suggest
    Thanks,
    Sanket

    IN LEGACY BINARY MODE [CODE PAGE cp]
    Data is read or written in a form that is compatible with BINARY MODE in releases <= 4.6. This addition is primarily used to handle a file in the code page specified when it is opened; otherwise, at runtime the system uses the format of the system code page of the application server. This procedure is important if data is exchanged between systems using different code pages.
    IN LEGACY TEXT MODE [CODE PAGE cp]
    Data is read or written in a form that is compatible with TEXT MODE in releases <= 4.6. Otherwise, the same applies as for LEGACY BINARY MODE: the code page with which the file content is handled can be specified when the file is opened.
    For more information, check the following link:
    http://www.s001.org/ABAP-Hlp/abapopen_dataset.htm
    Hope this helps.
    Thanks,
    Balaji

  • ECC 6.0 Dataset Encoding

    Hi,
      My company is upgrading the system from 4.6C to ECC 6 and some programs encounter a problem during the upgrade:
      In "TEXT MODE" the "ENCODING" addition must be specified.
      Currently, the program uses the following syntax to read a plain text file:
      OPEN DATASET 'filename' FOR INPUT IN TEXT MODE.
      Which encoding should I use in ECC 6?
      OPEN DATASET 'filename' IN TEXT MODE for INPUT ENCODING NON-UNICODE.
      OPEN DATASET 'filename' IN TEXT MODE for INPUT ENCODING DEFAULT.
      OPEN DATASET 'filename' IN TEXT MODE for INPUT ENCODING UTF-8.
    Regards,
    Kit

    Hi Kit,
    Refer to this help.sap.com link: http://help.sap.com/saphelp_47x200/helpdata/en/79/c554dcb3dc11d5993800508b6b8b11/frameset.htm - it confirms that you should use UTF-8.
    "The textual storage in UTF-8 format ensures that the created files are platform-independent."
    Replace
    open dataset DSN in text mode.
    with
    open dataset DSN in text mode for input encoding utf-8.
    Cheers,
    Aditya
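    One extra point to keep in mind (just a sketch; whether the SKIPPING BYTE-ORDER MARK addition is available depends on your exact release): if the plain text file was written as UTF-8 with a byte-order mark, skip the mark when reading, otherwise it shows up as a stray character at the start of the first record.
    " Sketch: read a UTF-8 file and ignore a leading byte-order mark if present.
    DATA lv_file TYPE string VALUE '/tmp/plain_text.txt'.  "illustrative
    OPEN DATASET lv_file FOR INPUT IN TEXT MODE
         ENCODING UTF-8 SKIPPING BYTE-ORDER MARK.
    CLOSE DATASET lv_file.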
