ECC 6.0 Dataset Encoding

Hi,
  My company is upgrading our system from 4.6C to ECC 6.0, and some programs run into the following error during the upgrade:
  In "TEXT MODE" the "ENCODING" addition must be specified.
  Currently, the program uses the following syntax to read a plain text file:
  OPEN DATASET 'filename' FOR INPUT IN TEXT MODE.
  Which encoding should I use in ECC 6.0?
  OPEN DATASET 'filename' IN TEXT MODE FOR INPUT ENCODING NON-UNICODE.
  OPEN DATASET 'filename' IN TEXT MODE FOR INPUT ENCODING DEFAULT.
  OPEN DATASET 'filename' IN TEXT MODE FOR INPUT ENCODING UTF-8.
Regards,
Kit

Hi Kit,
Refer to the help.sap.com link [link|http://help.sap.com/saphelp_47x200/helpdata/en/79/c554dcb3dc11d5993800508b6b8b11/frameset.htm], which clearly confirms that you must use UTF-8.
Textual storage in UTF-8 format ensures that the created files are platform-independent.
Replace
open dataset DSN in text mode.
with
open dataset DSN in text mode for input encoding utf-8.
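For reference, a minimal sketch of the read loop under that change (the file name and the work area lv_line are placeholders, not from your program):

  DATA: lv_file TYPE string VALUE 'filename',        " placeholder path
        lv_line TYPE string.

  OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING UTF-8
       SKIPPING BYTE-ORDER MARK.                     " ignore a leading BOM, if present
  IF sy-subrc <> 0.
    " file could not be opened (missing file, no authorization, ...)
    RETURN.
  ENDIF.
  DO.
    READ DATASET lv_file INTO lv_line.
    IF sy-subrc <> 0.
      EXIT.                                          " end of file reached
    ENDIF.
    " process lv_line here
  ENDDO.
  CLOSE DATASET lv_file.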
Cheers,
Aditya

Similar Messages

  • Truncated record in OPEN DATASET ENCODING NON-UNICODE

    Hi,
    I have to read a Unicode-created file into a non-Unicode SAP system, version 4.7.
    When I do the OPEN DATASET using ENCODING UTF-8 I get a CONVT_CODEPAGE dump. That's odd because my system is non-Unicode. I don't want to use the IGNORING CONVERSION ERRORS addition, since the output would be corrupt.
    But when I use ENCODING NON-UNICODE or ENCODING DEFAULT, the READ DATASET mysteriously truncates the record it tries to read from the real 401 characters down to 361 characters. The variable is a string.
    I can see full records through AL11.
    Any ideas?
    Thanks,
    Pablo.

    Hi,
    Try using:
      open dataset filename in text mode encoding default for input
                                  ignoring conversion errors.
    Since the records show up completely in AL11, the code above should let you read them.
    Hope this helps!
    Regards,
    Punit
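    As a side note: before choosing between UTF-8 and NON-UNICODE it can help to check what the file actually contains. A hedged sketch using CL_ABAP_FILE_UTILITIES (the parameter names and constants are quoted from memory and should be verified in SE24):

      DATA lv_encoding TYPE i.

      " CHECK_UTF8 inspects the file content and reports the likely encoding.
      cl_abap_file_utilities=>check_utf8(
        EXPORTING file_name = '/tmp/myfile.txt'   " placeholder path
        IMPORTING encoding  = lv_encoding ).

      IF lv_encoding = cl_abap_file_utilities=>encoding_utf8.
        " open the file with ENCODING UTF-8
      ELSE.
        " open the file with ENCODING NON-UNICODE or DEFAULT
      ENDIF.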

  • Turkish Special characters issue in ECC 6.0, while uploading into table

    HI All,
    We are working with Turkish special characters in ECC 6.0.
    We are uploading a CSV file containing characters such as İ, Ğ, Ş, Ü, Ö. As soon as the file gets uploaded to the application server,
    it gets converted to ? or # or Ý.
    Also, when we read the file using OPEN DATASET with ENCODING NON-UNICODE and IGNORING CONVERSION ERRORS, the same characters get uploaded into the custom table as Ý.
    I have also tried saving the file in UNICODE / UTF-8 format, but the issue still remains.
    I tried searching SAP Notes as well.
    Any pointers to resolve the same will be helpful.
    Regards,
    Siddhesh Sanghvi

    Dear Siddhesh,
    Perhaps OSS note 508854 could help here.
    Also please be aware that Turkish only runs on ISO codepage 8859-9 (aka Latin-5 / SAP codepage 1610).
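    For example, a hedged sketch of reading the CSV with that code page named explicitly (only valid if the file really is encoded in Latin-5; the file name is a placeholder):

      DATA: lv_file TYPE string VALUE '/tmp/turkish.csv',   " placeholder path
            lv_line TYPE string.

      " Read the file with the Turkish code page instead of letting the
      " system guess, so İ, Ğ, Ş, Ü, Ö are not replaced by ? / # / Ý.
      OPEN DATASET lv_file FOR INPUT IN LEGACY TEXT MODE CODE PAGE '1610'.
      READ DATASET lv_file INTO lv_line.
      CLOSE DATASET lv_file.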
    I hope this helps.
    Best regards,
    Ian Kehoe

  • Text file attachment in UTF-8 encoding

    Hi
    I have written a program which sends mails to users with a text file attached. The problem is that when you save the text file to the local desktop (by clicking Save As), the encoding is ANSI by default. I want to make the encoding UTF-8. Is it possible to change this in the program?
    thanks
    sankar

    OPEN DATASET - encoding
    Syntax
    ... ENCODING { DEFAULT
                 | {UTF-8 [SKIPPING|WITH BYTE-ORDER MARK]}
                 | NON-UNICODE } ... .
    Alternatives:
    1. ... DEFAULT
    2. ... UTF-8 [SKIPPING|WITH BYTE-ORDER MARK]
    3. ... NON-UNICODE
    Effect: The additions after ENCODING determine the character representation in which the content of the file is handled. The addition ENCODING must be specified in Unicode programs and may only be omitted in non-Unicode programs. If the addition ENCODING is not specified in non-Unicode programs, the addition NON-UNICODE is used implicitly.
    Note: It is recommended that files are always written in UTF-8, if all readers can process this format. Otherwise, the code page can depend on the text environment and it is difficult to identify the code page from the file content.
    Alternative 1
    ... DEFAULT
    Effect: In a Unicode system, the specification DEFAULT corresponds to UTF-8, and in a non-Unicode system, it corresponds to NON-UNICODE.
    Alternative 2
    ... UTF-8 [SKIPPING|WITH BYTE-ORDER MARK]
    Addition:
    ... SKIPPING|WITH BYTE-ORDER MARK
    Effect: The characters in the file are handled according to the Unicode character representation UTF-8.
    Notes: The class CL_ABAP_FILE_UTILITIES contains the method CHECK_UTF8 for determining whether a file is a UTF-8 file.
    A UTF-16 file can only be opened as a binary file.
    Addition
    ... SKIPPING|WITH BYTE-ORDER MARK
    Effect: This addition defines how the byte order mark (BOM), with which a file encoded in the UTF-8 format can begin, is handled. The BOM is a sequence of 3 bytes that indicates that a file is encoded in UTF-8.
    SKIPPING BYTE-ORDER MARK
    is only permitted if the file is opened for reading or changing using FOR INPUT or FOR UPDATE. If there is a BOM at the start of the file, this is ignored and the file pointer is set after it. Without the addition, the BOM is handled as normal file content.
    WITH BYTE-ORDER MARK
    is only permitted if the file is opened for writing using FOR OUTPUT. When the file is opened, a BOM is inserted at the start of the file. Without the addition, no BOM is inserted.
    The addition BYTE-ORDER MARK cannot be used together with the addition AT POSITION.
    Notes: When opening UTF-8 files for reading, it is recommended to always specify the addition SKIPPING BYTE-ORDER MARK so that a BOM is not handled as file content.
    It is recommended to always write a file as UTF-8 with the addition WITH BYTE-ORDER MARK, if all readers can process this format.
    Alternative 3
    ... NON-UNICODE
    Effect: In a non-Unicode system, the data is read or written without conversion. In a Unicode system, the characters of the file are handled according to the non-Unicode codepage that would be assigned at the time of reading or writing in a non-Unicode system according to the entry in the database table TCP0C of the current text environment.
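    Putting the documented alternatives together for the attachment case: if the attachment content is first written to a file on the application server, a minimal sketch would look like the following (the file name and work area are placeholders, not from your program):

      DATA: lv_file TYPE string VALUE '/tmp/attachment.txt',   " placeholder path
            lv_line TYPE string.

      " Write the file as UTF-8 and insert a byte order mark so that
      " desktop editors recognize the encoding instead of assuming ANSI.
      OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8
           WITH BYTE-ORDER MARK.
      lv_line = 'some text'.
      TRANSFER lv_line TO lv_file.
      CLOSE DATASET lv_file.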

  • Special characters in UTF-8 UNIX file

    We have a program which downloads data from certain infotypes into a UNIX file; fields are written at specific positions in the file. Some of the fields contain "special characters".
    When we download the file in UTF-8 mode (ENCODING DEFAULT), the file displays the special characters correctly, but all the characters in that record get shifted to the left.
    When we download the file in ANSI mode, the file does not display the special characters correctly, but the characters in that record do not get shifted from their place.
    How can I find the special characters in a field, so that I can shift the field to the right accordingly and the fields in the final UNIX file keep their positions?

    Hi Ramnivas.
    Have you tried reading the characters with the class CL_ABAP_CHAR_UTILITIES (transaction SE24, Attributes tab)?
    For example, a character that shows up as # in ABAP may actually be a newline or a carriage return in the file; you can detect it and adapt it with:
    CL_ABAP_CHAR_UTILITIES=>NEWLINE or CL_ABAP_CHAR_UTILITIES=>CR_LF
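    For illustration, a small sketch of that idea (lv_field is a placeholder for one of the downloaded fields):

      DATA lv_field TYPE string.

      " Remove control characters that display as # so they do not
      " shift the fixed positions in the output record.
      REPLACE ALL OCCURRENCES OF cl_abap_char_utilities=>cr_lf
              IN lv_field WITH ''.
      REPLACE ALL OCCURRENCES OF cl_abap_char_utilities=>newline
              IN lv_field WITH ''.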
    On the other hand, if you are using OPEN DATASET to download the file, look at the ENCODING options.
    See the F1 help of OPEN DATASET, section ENCODING:
    OPEN DATASET - encoding
    Syntax
    ... ENCODING { DEFAULT
                 | {UTF-8 [SKIPPING|WITH BYTE-ORDER MARK]}
                 | NON-UNICODE } ... .
    Hope it helps
    Regards.
    Alfonso.

  • Orageo:nearestNeighbor no results

    Hi all,
    I am using Oracle 12c Spatial and Graph and I would like to retrieve the nearest neighbors of a point from a dataset encoded according to GeoSPARQL.
    I have created a spatial index on the datatype geo:wktLiteral and I add the appropriate hints to the query so that the optimizer picks a plan using the index RDF_V$GEO_IDX. The query I pose is the following:
    SELECT geo, wkt
    FROM TABLE(SEM_MATCH('
         PREFIX geo: <http://www.opengis.net/ont/geosparql#>
         SELECT ?geo ?wkt
          WHERE {
               ?geo geo:asWKT ?wkt.
               FILTER(orageo:nearestNeighbor(?wkt,"POINT(22.39 38.25)"^^geo:wktLiteral,"sdo_num_res=10"))
          }',
     SEM_MODELS('geosparqlmd'), null, null, null, null, 'HINT0={ LEADING(?wkt) INDEX(?wkt RDF_V$GEO_IDX) }', null, null));
    I do not get any error, but I do not get any results either. However, there are a lot of points around POINT(22.39 38.25) in my dataset, and if I remove the filter I get results.
    Do I use orageo:nearestNeighbor in a wrong way?
    Best regards,
    George

    George,
    orageo:nearestNeighbor behaves as expected in my test (see below). Can you post the commands you used to load the data and create the spatial index?
    Thanks,
    Matt
    Using data US.zip from http://download.geonames.org/export/dump/
    SQL*Loader control file:
    LOAD DATA
    CHARACTERSET UTF8
    TRUNCATE
    INTO TABLE GEONAMES_RAW
    FIELDS TERMINATED BY '\t'
    TRAILING NULLCOLS
    (GEONAMEID CHAR(4000) NULLIF (NAME="NULL"),
    NAME CHAR(4000) NULLIF (NAME="NULL"),
    ASCIINAME CHAR(4000) NULLIF(ASCIINAME="NULL"),
    ALTERNATENAMES CHAR(4000) NULLIF (ALTERNATENAMES="NULL"),
    LATITUDE CHAR(4000) NULLIF(LATITUDE="NULL"),
    LONGITUDE CHAR(4000) NULLIF(LONGITUDE="NULL"),
    FEATURE_CLASS CHAR(4000) NULLIF(FEATURE_CLASS="NULL"),
    FEATURE_CODE CHAR(4000) NULLIF(FEATURE_CODE="NULL"),
    COUNTRY_CODE CHAR(4000) NULLIF(COUNTRY_CODE="NULL"),
    CC2 CHAR(4000) NULLIF(CC2="NULL"),
    ADMIN1 CHAR(4000) NULLIF(ADMIN1="NULL"),
    ADMIN2 CHAR(4000) NULLIF(ADMIN2="NULL"),
    ADMIN3 CHAR(4000) NULLIF(ADMIN3="NULL"),
    ADMIN4 CHAR(4000) NULLIF(ADMIN4="NULL"),
    POPULATION CHAR(4000) NULLIF(POPULATION="NULL"),
    ELEVATION CHAR(4000) NULLIF(ELEVATION="NULL"),
    DEM CHAR(4000) NULLIF(DEM="NULL"),
    TIMEZN CHAR(4000) NULLIF(TIMEZN="NULL"),
    MOD_DATE CHAR(4000) NULLIF(MOD_DATE="NULL")
    )
    Script output:
    SQL> set echo on;
    SQL> set serverout on;
    SQL> set timing on;
    SQL> set lines 200 pages 10000;
    SQL>
    SQL> conn / as sysdba;
    Connected.
    SQL> create user rdfuser identified by rdfuser;
    User created.
    Elapsed: 00:00:01.56
    SQL> grant connect,resource,unlimited tablespace to rdfuser;
    Grant succeeded.
    Elapsed: 00:00:00.03
    SQL>
    SQL> exec sem_apis.create_sem_network('tbs_3');
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:12.75
    SQL>
    SQL> conn rdfuser/rdfuser;
    Connected.
    SQL>
    SQL> -- create table to hold geonames data
    SQL> create table
      2  geonames_raw (
      3    GEONAMEID     VARCHAR2(4000),
      4    NAME          VARCHAR2(4000),
      5    ASCIINAME     VARCHAR2(4000),
      6    ALTERNATENAMES VARCHAR2(4000),
      7    LATITUDE     VARCHAR2(4000),
      8    LONGITUDE     VARCHAR2(4000),
      9    FEATURE_CLASS  VARCHAR2(4000),
    10    FEATURE_CODE     VARCHAR2(4000),
    11    COUNTRY_CODE     VARCHAR2(4000),
    12    CC2          VARCHAR2(4000),
    13    ADMIN1          VARCHAR2(4000),
    14    ADMIN2          VARCHAR2(4000),
    15    ADMIN3          VARCHAR2(4000),
    16    ADMIN4          VARCHAR2(4000),
    17    POPULATION     VARCHAR2(4000),
    18    ELEVATION     VARCHAR2(4000),
    19    DEM          VARCHAR2(4000),
    20    TIMEZN          VARCHAR2(4000),
    21    MOD_DATE     VARCHAR2(4000)
    22  );
    Table created.
    Elapsed: 00:00:00.23
    SQL>
    SQL> -- load geonames data with sqlldr
    SQL> host sqlldr userid=rdfuser/rdfuser control=geonames.ctl data=US.txt direct=true skip=0 load=100000000 discardmax=1000000 bad=d0.bad discard=d0.rej log=d0.log errors=1000000
    SQL*Loader: Release 12.1.0.1.0 - Production on Mon Oct 14 12:06:15 2013
    Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
    Path used:      Direct, LOAD=100000000
    Load completed - logical record count 2152673.
    Table GEONAMES_RAW:
      2152672 Rows successfully loaded.
    Check the log file:
      d0.log
    for more information about the load.
    SQL>
    SQL> alter session enable parallel dml;
    Session altered.
    Elapsed: 00:00:00.01
    SQL>
    SQL> -- create triples and insert into staging table
    SQL> create table
      2  geonames_stable(
      3    rdf$stc_sub  VARCHAR2(4000) NOT NULL,
      4    rdf$stc_pred VARCHAR2(4000) NOT NULL,
      5    rdf$stc_obj  VARCHAR2(4000) NOT NULL
      6  );
    Table created.
    Elapsed: 00:00:00.01
    SQL>
    SQL> insert /*+ append */ into geonames_stable(rdf$stc_sub, rdf$stc_pred, rdf$stc_obj)
      2  select '<http://www.geonames.org/geometry_'||geonameid||'>',
      3          '<http://www.opengis.net/ont/geosparql#asWKT>',
      4          '"POINT('||trim(longitude)||' '||trim(latitude)||')"^^<http://www.opengis.net/ont/geosparql#wktLiteral>'
      5  from geonames_raw;
    2152672 rows created.
    Elapsed: 00:00:18.51
    SQL>
    SQL> -- create application table
    SQL> create table geonames_atab(triple sdo_rdf_triple_s);
    Table created.
    Elapsed: 00:00:00.25
    SQL>
    SQL> -- create model
    SQL> exec sem_apis.create_sem_model('geonames','geonames_atab','triple');
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:03.47
    SQL>
    SQL> -- bulk load data
    SQL> grant select on geonames_stable to mdsys;
    Grant succeeded.
    Elapsed: 00:00:00.80
    SQL> grant insert on geonames_atab to mdsys;
    Grant succeeded.
    Elapsed: 00:00:00.01
    SQL>
    SQL> exec sem_apis.bulk_load_from_staging_table('geonames','rdfuser','geonames_stable',flags=>' PARSE MBV_METHOD=SHADOW PARALLEL_CREATE_INDEX PARALLEL=4 ');
    PL/SQL procedure successfully completed.
    Elapsed: 00:04:28.11
    SQL>
    SQL> -- create spatial index
    SQL> conn / as sysdba;
    Connected.
    SQL> EXECUTE sem_apis.add_datatype_index('http://www.opengis.net/ont/geosparql#wktLiteral',    options=>'TOLERANCE=10 SRID=8307 DIMENSIONS=((LONGITUDE,-180,180) (LATITUDE,-90,90))');
    PL/SQL procedure successfully completed.
    Elapsed: 00:11:44.35
    SQL>
    SQL> -- run some queries
    SQL> conn rdfuser/rdfuser;
    Connected.
    SQL>
    SQL> column s$rdfterm format a45;
    SQL> column p$rdfterm format a45;
    SQL> column o$rdfterm format a80;
    SQL>
    SQL> -- simple query
    SQL> select s$rdfterm, p$rdfterm, o$rdfterm
      2  from table(sem_match(
      3  'SELECT ?s ?p ?o
      4   WHERE { ?s ?p ?o }
      5   LIMIT 10'
      6  ,sem_models('geonames')
      7  ,null,null,null
      8  ,null,' PLUS_RDFT=T '
      9  ));
    S$RDFTERM                      P$RDFTERM                     O$RDFTERM
    <http://www.geonames.org/geometry_7184618>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-79.84532 40.37332)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_6511783>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-86.6964 36.2241)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4782986>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-79.00196 37.26792)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_7179670>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-90.03046 35.14954)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_5816998>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-104.93164 44.68998)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_5004294>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-86.58177 43.64462)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4708627>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-96.24915 33.69705)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4356124>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-76.75386 39.28816)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4245170>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-88.85464 38.32222)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4900472>    <http://www.opengis.net/ont/geosparql#asWKT>  "POINT(-88.67007 41.38781)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    10 rows selected.
    Elapsed: 00:00:07.09
    SQL>
    SQL> -- nearestNeighbor
    SQL> select s$rdfterm, o$rdfterm
      2  from table(sem_match(
      3  'SELECT ?s ?o
      4   WHERE
      5   { ?s ogc:asWKT ?o .
      6      FILTER (orageo:nearestNeighbor(?o, "POINT(-88.67007 41.38781)"^^ogc:wktLiteral, "sdo_num_res=10")) }'
      7  ,sem_models('geonames')
      8  ,null,null,null
      9  ,null,' PLUS_RDFT=T '
    10  ));
    S$RDFTERM                      O$RDFTERM
    <http://www.geonames.org/geometry_4902311>    "POINT(-88.651 41.41273)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4900472>    "POINT(-88.67007 41.38781)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_6298675>    "POINT(-88.68333 41.36667)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4902345>    "POINT(-88.63368 41.39753)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4894812>    "POINT(-88.65313 41.39781)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4889470>    "POINT(-88.67837 41.36357)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4899321>    "POINT(-88.69563 41.39809)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4884055>    "POINT(-88.63535 41.37614)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4905592>    "POINT(-88.67118 41.37198)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    <http://www.geonames.org/geometry_4906650>    "POINT(-88.68285 41.39503)"^^<http://www.opengis.net/ont/geosparql#wktLiteral>
    10 rows selected.
    Elapsed: 00:00:02.23
    SQL>

  • Downloading chinese character

    I am downloading vendor names to the application server using a dataset.
    SAP is non-Unicode. I am logging on in English (EN).
    In the program I am writing OPEN DATASET ... ENCODING UTF-8.
    But the vendor names in Chinese do not download in the correct format.
    What do I need to do?

    When you download to Excel, the character formats are taken from the front end. Be sure that the Chinese font is installed on your machine, i.e. your OS should support the language format you are downloading to.
    Better to contact your Basis person to help solve the problem.

  • Open dataset in ECC 6.0 passes AUTHORITY_CHECK but still fails to download

    Hello Gurus,
    I'm having trouble getting open dataset to work in ECC 6.0.
    I've tried using ENCODING UTF-8 and DEFAULT, plus an authority check.
    The program doesn't ABAP dump, but nothing happens.
    Am I doing something wrong?
      DATA: my_full_path_auth LIKE authb-filename .
      my_full_path_auth = 'c:\temp\inv_coverage_by_material.txt'.
      MESSAGE i001(zstd) WITH 'converting itab to characters'.
      REFRESH ta_by_part_number_ascii.
      LOOP AT pta_out INTO ts_by_part_number .
        MOVE-CORRESPONDING ts_by_part_number
          TO ts_by_part_number_ascii.
        APPEND ts_by_part_number_ascii TO ta_by_part_number_ascii.
      ENDLOOP.
      MESSAGE i001(zstd) WITH 'convertion complete'.
      CALL FUNCTION 'AUTHORITY_CHECK_DATASET'
        EXPORTING
    *   PROGRAM                =
          activity               = 'WRITE'
          filename               = my_full_path_auth
        EXCEPTIONS
          no_authority           = 1
          activity_unknown       = 2
          OTHERS                 = 3 .
      IF sy-subrc <> 0.
        MESSAGE i001(zstd) WITH 'authority check failed'.
      ELSE.
        MESSAGE i001(zstd) WITH 'authority check PASSED'.
      ENDIF.
      OPEN DATASET my_full_path FOR OUTPUT
         IN TEXT MODE ENCODING DEFAULT .
      LOOP AT ta_by_part_number_ascii INTO ts_by_part_number_ascii .
        TRANSFER ts_by_part_number_ascii TO my_full_path.
      ENDLOOP.
      CLOSE DATASET my_full_path.

    Ahhh. I forgot that the file is saved on the app server.
    New question:
    I was trying to do a download of an internal table in batch mode. The GUI_DOWNLOAD function won't work for that.
    What do people do to download a file to the presentation server?
    Moderator message - Please limit yourself to one question per thread. That makes it easier for others to find solutions to their similar problems.
    So please close this one and ask a new question
    Edited by: Rob Burbank on Apr 14, 2009 1:12 PM

  • Not able to open Dataset when adding Encoding Default

    Hi Experts,
    I have an urgent requirement. I wanted to make one program Unicode compliant.
    I had to change the statement
      open dataset gv_string in text mode  for output.
    to
      open dataset gv_string in text mode encoding default for output.
    But after this change sy-subrc becomes 8 and the file cannot be opened. Can anyone please help?
    Thanks
    Arshad

    Hi,
    Maybe you do not have authorization to write the file on the application server.
    Check with Basis about your authorizations.
    Or it could be that the directory does not exist.
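    To narrow it down, the MESSAGE addition of OPEN DATASET returns the operating system's error text. A small sketch (reusing gv_string from your post; lv_msg is a placeholder):

      DATA lv_msg TYPE string.

      OPEN DATASET gv_string IN TEXT MODE ENCODING DEFAULT FOR OUTPUT
           MESSAGE lv_msg.
      IF sy-subrc <> 0.
        " lv_msg now holds the operating system error text,
        " e.g. a missing directory or a permission problem.
        WRITE: / lv_msg.
      ENDIF.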
    Regards,
    Subramanian
    Edited by: Subramanian PL on Jun 27, 2008 1:24 AM

  • Open dataset filename for output in text mode encoding default

    Hi,
    When I execute this command I get sy-subrc = 0, but no file is created. This happens in an ECC 6.0 system. In 4.6C it works OK.
    The filename has the following structure:
    'server\directory\filename'
    What am I doing wrong?
    Thank you very much for your your help.
    Regards.

    Hi,
    Can you tell what syntax you have written for the OPEN statement?
    In programs with active Unicode check, you must specify the access type (such as ... FOR INPUT, ... FOR OUTPUT, and so on) and the mode (such as ... IN TEXT MODE, ... IN BINARY MODE, and so on). If the file is opened using ... IN TEXT MODE, you must still use the addition ... ENCODING. If the Unicode check is enabled, it is possible to use file names containing blanks.
    Regards,
    Sruthi

  • OPEN DATASET file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE

    Hi There,
    I also have a similar issue. I am able to write the data to the application server in Chinese characters using OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING DEFAULT or OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING UTF-8. But when I save that file to my presentation server manually, all the Chinese characters show up as junk.
    When I use OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE, I get a runtime error, and when I use OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE IGNORING CONVERSION ERRORS, there is no error but the application server output itself shows junk characters.
    Could you please suggest me what you have done?
    Regards,
    Chaitanya A

    Hi,
       Use this
      OPEN DATASET File_path  FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
      WITH SMART LINEFEED
    it will definitely work.
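    If the file also has to stay readable after copying it to the presentation server, an alternative worth trying (a sketch, not from this thread) is to keep UTF-8 but add a byte order mark so that desktop editors detect the encoding; depending on the release, a line-feed addition such as the WITH SMART LINEFEED above can be combined with it:

      OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING UTF-8
           WITH BYTE-ORDER MARK.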
    Regards,
    Manesh. R

  • Open Dataset - unicode system ECC 6.0,special characters

    Hi,
    we have upgraded our system from 4.6B to ECC 6.0 Unicode. We face the following problem when downloading a file to the application server (Solaris) using OPEN DATASET:
    in the output file the hyphen (-) character is replaced by the special character â.
    E.g. if the data in the table is 'BANGALORE - 560038', it is displayed as 'Bangalore â 560034' in the application server file. If we download the same file to Windows it shows the proper data.
    Only when we open the file in the vi editor on the application server does the system show this special character.
    Kindly help.
    Regards,
    Sidhesh S

    In 4.6:     OPEN DATASET G_APFILE FOR INPUT IN TEXT MODE.
    In ECC 6.0: OPEN DATASET G_APFILE IN LEGACY TEXT MODE FOR INPUT.
    regards
    Giridhar

  • Open Dataset for input in BINARY MODE not working after ECC 6.0 upgrade

    Hi All,
    Our requirement is to download an XML file from the application server and there is a customized program to download these files.
    This program uses the statement,  Open dataset ...for input in BINARY MODE
    and it worked perfectly in 4.7. There were no issues. But after the upgrade to ECC 6.0 this does not work.
    When the data is read in ECC 6.0, it is shown with some special characters, it cannot be opened with an XML editor, and the file is not completely downloaded. I read through the forum and tried the following statement as well:
    Open dataset....for input in LEGACY BINARY MODE.
    After this statement, there were no special characters, but there is a blank space introduced before every character.
    Example : TEST(actual)
                      T E S T (After the legacy binary mode)
    Could you please let me know if there is any solution to rectify this problem. Appreciate your help.
    Thanks a million.
    Edited by: Manikd on May 12, 2011 3:52 PM

    But this program was already using BINARY MODE, and after the upgrade it is not working. I know it may work in TEXT MODE; however, I cannot change the whole program to TEXT MODE now.
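    One hedged option that keeps BINARY MODE: read the raw bytes into an xstring and convert them explicitly with CL_ABAP_CONV_IN_CE. The code page '4110' (UTF-8) below is only an assumption and has to match whatever actually wrote the XML file:

      DATA: lv_file    TYPE string VALUE '/path/to/file.xml',   " placeholder path
            lv_xdata   TYPE xstring,
            lv_content TYPE string,
            lo_conv    TYPE REF TO cl_abap_conv_in_ce.

      " Read the file unchanged, byte for byte.
      OPEN DATASET lv_file FOR INPUT IN BINARY MODE.
      READ DATASET lv_file INTO lv_xdata.
      CLOSE DATASET lv_file.

      " Convert the bytes to a character string with an explicit code page.
      lo_conv = cl_abap_conv_in_ce=>create( encoding = '4110' input = lv_xdata ).
      lo_conv->read( IMPORTING data = lv_content ).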

  • Question re ENCODING cp option of OPEN DATASET statement

    I'm working on a 6.0 system with 4.6 data that I've downloaded from the 4.6 system and uploaded to the 6.0 system.
    The 4.6 data has "umlauts" in it (like when "o's" have two dots above them in Scandinavian names), and when my READ DATASET executes on the 6.0 server, my TRY block catches a CX_SY_CONVERSION_CODEPAGE error.
    I'm assuming that to solve this, I will need to specify the codepage of the 4.6 server in the ENCODING codepage option of the OPEN DATASET statement that's executing on the 6.0 server.
    Will this solve the problem? If not, what do I try next?
    Also, how can I determine the system codepage of my current ABAP "text environment"? I know all the possibilities are in table TCP0P, but how do I know which one is "active"?
    Thanks guys.

    Hi,
    Refer to the thread "OPEN DATASET in ECC6.0"; it should solve your problem. Also check the ABAP documentation for the system code page and text environment.
    For Unicode systems, the system code page is UTF-16.
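    For the concrete read, one hedged option is to open the file in legacy text mode and name the 4.6 system's code page explicitly. The value '1100' (Latin-1) below is only an example; the real code page has to be looked up on the 4.6 side:

      DATA: lv_file TYPE string VALUE '/path/to/46_file.txt',   " placeholder path
            lv_line TYPE string.

      " Read the non-Unicode file with the code page of the system that wrote it.
      OPEN DATASET lv_file FOR INPUT IN LEGACY TEXT MODE CODE PAGE '1100'.
      DO.
        READ DATASET lv_file INTO lv_line.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        " process lv_line here
      ENDDO.
      CLOSE DATASET lv_file.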
    Thanks.
    Ramya.

  • Open dataset not working in ECC 5.0, returning 8

    Gurus,
    OPEN DATASET is returning 8. The file is there on the application server, as selected by the KD4 function module, but when I try to do OPEN DATASET it returns 08.
    REPORT XXX message-id zdolfemsg.
    * Data Declaration
    data: v_mfile type C,
          v_ffile type C.
    data: w_mpath type string,
          w_fpath type string.
    SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME
                                       TITLE text-001.
    PARAMETERS: p_master like rlgrap-filename obligatory,
                p_func   like rlgrap-filename obligatory.
    SELECTION-SCREEN END OF BLOCK b1.
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_master.
      CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
          CHANGING
          file_name = p_master.
    w_mpath = p_master.
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_func.
    CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
        EXPORTING
          static    = 'X'
        CHANGING
          file_name = p_func.
          w_fpath = p_func.
    start-of-selection.
      perform initilaization_for_master.
      perform initialization_for_func.
      if v_mfile ='X' and V_FFILE = 'X'.
        message e001 with sy-repid.
      ELSEIF V_mfile = 'X'.
        message e000 with p_master sy-repid.         "Error opening file
      ELSEIF V_FFILE = 'X'.
        message e000 with p_func sy-repid.         "Error opening file
      endif.
    close dataset p_master.
    close dataset p_func.
    *&      Form  initilaization
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM initilaization_for_master .
      open dataset w_mpath for output in text mode encoding default.
      if sy-subrc <> 0.
        v_mfile = 'X'.
      endif.
    ENDFORM.                    " initilaization
    *&      Form  initialization_for_func
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM initialization_for_func .
      open dataset w_fpath for input in text mode ENCODING DEFAULT.
      if sy-subrc <> 0.
        v_ffile = 'X'.
      endif.
    ENDFORM.                    " initialization_for_func
    Please help.
    Regards,
    Rajsh.

    Thanks Rob. How can I open a file on the presentation server? I have to read the data from the file and fill an internal table.
    Regards,
    Rajesh.
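    As a closing sketch for that last question (dialog mode only, not possible in background; the file path and line table are placeholders): a file on the presentation server is read with cl_gui_frontend_services=>gui_upload rather than OPEN DATASET, which only reaches the application server:

      DATA lt_lines TYPE TABLE OF string.

      " Read the local file into an internal table of lines.
      cl_gui_frontend_services=>gui_upload(
        EXPORTING
          filename = 'C:\temp\master.txt'     " placeholder path
          filetype = 'ASC'
        CHANGING
          data_tab = lt_lines
        EXCEPTIONS
          OTHERS   = 1 ).
      IF sy-subrc <> 0.
        " handle the upload error
      ENDIF.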
