UTF-8 stored in VARCHAR2 on a non-Unicode DB

Hi there,
We have a company that stores Unicode data in Oracle in the following way:
A plain VARCHAR2 column in a non-Unicode DB (the charset is actually WE8MSWIN1252) receives UTF-8 encoded data.
As client and server have the same NLS_LANG setting, no conversion takes place, and the app runs fine.
(In my eyes, a clean way to set this up would be to use NVARCHAR2 fields, but that is not an option.)
But: how can I query these columns without getting garbage for every non-ASCII character?
I imagine setting up views for that purpose, but I need the syntax to re-interpret the UTF-8 data coming from a VARCHAR2 field.
I tried the following:
SELECT CONVERT(column, 'WE8MSWIN1252', 'AL32UTF8') FROM table where ...
This gives me the right data on a client with cp1252 set up, with the restriction of 8-bit output.
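For reference, the kind of view I imagine would simply wrap that expression (a sketch; t and col are placeholder names):
CREATE OR REPLACE VIEW v_utf8_as_cp1252 AS
  SELECT CONVERT(col, 'WE8MSWIN1252', 'AL32UTF8') AS col_1252  -- treat the stored bytes as UTF-8, render as cp1252
  FROM   t;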
Now I would like a Unicode-capable application like SQL Developer to be fully capable of dealing with the Unicode data, but I guess that for that to work, I would need the DB to deliver NVARCHAR2 output from the above query?
Any help and comments appreciated.
Tom

We have a company that stores Unicode data in Oracle in the following way:
No, they don't. They are NOT storing Unicode data; they are storing individual one-byte characters and using that VARCHAR2 column as a BLOB. Ask them how, or if, they query the data.
A plain VARCHAR2 column in a non-Unicode DB (the charset is actually WE8MSWIN1252) receives UTF-8 encoded data.
No, it doesn't. It receives a string of one-byte characters in the WE8MSWIN1252 character set. It does not know, or care, what those one-byte characters represent. All you are doing is storing BINARY data in that VARCHAR2 column one byte at a time. When you query it you will get one or several bytes back, but since Oracle thinks it is character data, when it is actually binary, you can only match it by matching those one-byte characters.
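You can see this for yourself with DUMP, which prints the byte values along with the character set tag Oracle has attached (a minimal illustration; t and col are placeholder names):
SELECT DUMP(col, 1016) FROM t;
-- typical output: Typ=1 Len=2 CharacterSet=WE8MSWIN1252: c3,bc
-- i.e. the two UTF-8 bytes of 'ü' stored as two separate WE8MSWIN1252 characters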
I imagine setting up views for that purpose, but I need the syntax to re-interpret the UTF-8 data coming from a VARCHAR2 field.
You don't have UTF-8 data; you have a BLOB that you need to convert to UTF-8 data. You can use the DBMS_LOB.CONVERTTOCLOB procedure to do the conversion and specify the character set to use. See the DBMS_LOB API:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_lob.htm#i1020356
CONVERTTOCLOB Procedure
This procedure takes a source BLOB instance, converts the binary data in the source instance to character data using the character set you specify, writes the character data to a destination CLOB or NCLOB instance, and returns the new offsets.
See this AskTom article for further review
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3575852900346063772
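A minimal PL/SQL sketch of that approach, assuming placeholder names (table t, column col) and that the stored bytes really are UTF-8 (the destination could also be an NCLOB, which avoids the limits of a single-byte database character set):
DECLARE
  l_blob BLOB;
  l_clob CLOB;
  l_dest INTEGER := 1;
  l_src  INTEGER := 1;
  l_lang INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn INTEGER;
BEGIN
  -- move the raw bytes out of the VARCHAR2 untouched
  SELECT TO_BLOB(UTL_RAW.CAST_TO_RAW(col)) INTO l_blob FROM t WHERE ROWNUM = 1;
  DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
  -- reinterpret those bytes as AL32UTF8 while writing them into the CLOB
  DBMS_LOB.CONVERTTOCLOB(
    dest_lob     => l_clob,
    src_blob     => l_blob,
    amount       => DBMS_LOB.LOBMAXSIZE,
    dest_offset  => l_dest,
    src_offset   => l_src,
    blob_csid    => NLS_CHARSET_ID('AL32UTF8'),
    lang_context => l_lang,
    warning      => l_warn);
  DBMS_OUTPUT.PUT_LINE(l_clob);
END;
/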

Similar Messages

  • UTF-8 to non-Unicode RFC - encoding

    Hi,
    I receive data via SOAP (UTF-8) and send it with some simple mappings to a non-Unicode RFC receiver. How can I post special characters?
    In the sender payload I see the special characters, but they do not arrive correctly in SAP. In the RFC receiver adapter I don't see any setting for a codepage.
    Any ideas?
    Regards
    Jörg

    Hi,
    The first thing is that XI is Unicode, so if you are sending special characters that are UTF-8 encoded, you should see them in XI in the payload monitor. Can you check that first?
    Now, if the target is not Unicode, what is the encoding on the target side? You can encode these special characters in XI and then pass them on in the encoding format the target system expects. One more thing: in transaction SM59 in XI, you can specify the character set encoding for the receiving R/3 system. Make sure you check that; your encoding should be based on it. Try this out.
    thanks
    Ashish

  • Report Script output in UTF-8 code with Non-Unicode Application

    Essbase Nation,
    Report Script output (.txt) files are being encoded as UTF-8 when the application is set to non-Unicode. This encoding puts a signature character (a byte-order mark) in the first line of the text file, which in turn shows up when we import the file into Microsoft Access. Does anyone know how to change the encoding of the output file, or how to remove the UTF-8 signature character?
    Any advice is greatly appreciated.
    Thank you.
    Concerned Admin

    You may be able to find a text editor that can do the conversion. Alternatively, I have converted from one encoding to another programmatically using Java as well.
    Tim Tow
    Applied OLAP, Inc

  • Unicode and non-unicode string data types Issue with 2008 SSIS Package

    Hi All,
    I am converting a 2005 SSIS package to 2008. I have a task with SQL Server as the source and Oracle as the destination: I copy the data from a SQL Server view field nvarchar(10) to an Oracle table field varchar(10). The package executes fine on my local machine when I use the Data Conversion task to convert to DT_STR. But when I deploy the dtsx file on the server and try to run it from a SQL Server Agent job, it gives me the "unicode and non-unicode string data types" error for the field. I have checked the registry settings, and they are the same on my local machine and on the server. I tried both the Data Conversion task and the Derived Column task, but with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
    Thanks.

    What are Unicode and non-Unicode data formats?
    Unicode:
    A Unicode character takes more bytes to store in the database. Many global businesses want to grow worldwide, and to do so they support customers in different languages such as Chinese, Japanese, Korean and Arabic. Many websites these days support international languages to attract more customers, which makes life easier for both parties.
    To store such customer data, the database must support a mechanism for storing international characters. Storing these characters is not easy, and many database vendors had to revise their strategies and come up with new mechanisms to support them. The big vendors like Oracle, Microsoft and IBM provide international character support so that the data can be stored and retrieved without hiccups while doing business with international customers.
    The difference in storing character data between Unicode and non-Unicode depends on whether the non-Unicode data is stored using double-byte character sets. All non-East-Asian languages and Thai store non-Unicode characters in single bytes, so storing these languages as Unicode uses twice the space of a non-Unicode code page. On the other hand, the non-Unicode code pages of many other Asian languages specify character storage in double-byte character sets (DBCS), so for these languages there is almost no difference in storage between non-Unicode and Unicode.
    Encoding formats:
    Some common encoding formats for Unicode are UCS-2, UTF-8, UTF-16 and UTF-32. For SQL Server 7.0 and higher, Microsoft uses the UCS-2 encoding format to store Unicode data; under this mechanism, all Unicode characters are stored using 2 bytes.
    Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store the bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLE DB provider all internally represent Unicode data as UCS-2.
    The options for using SQL Server 7.0 or SQL Server 2000 as a backend server for an application that sends and receives UTF-8-encoded Unicode data include the following. For example, if your business runs a website with ASP pages, this is what happens:
    If your application uses Active Server Pages (ASP) with Internet Information Server (IIS) 5.0 on Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script.
    This instructs IIS to convert all dynamically generated strings (for example, Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
    If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
    Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This code page setting is not available on IIS 4.0 and Windows NT 4.0.
    Sorting and other operations:
    The effect of Unicode data on performance is complicated by a variety of factors, including:
    1. The difference between Unicode sorting rules and non-Unicode sorting rules
    2. The difference between sorting double-byte and single-byte characters
    3. Code page conversion between client and server
    Operations like >, < and ORDER BY are resource intensive, and it is difficult to get correct results if code page conversion between client and server is not available.
    Sorting lots of Unicode data can be slower than sorting non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page, because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
    Non-Unicode:
    Non-Unicode is the opposite of Unicode. With non-Unicode it is easy to store languages like English, but not Asian languages that need more bytes to store correctly; otherwise truncation occurs.
    Now, some advantages of not storing the data in Unicode format:
    1. It takes less space to store the data in the database, so you save a lot of disk space.
    2. Moving database files from one server to another takes less time.
    3. Backup and restore of the database take less time, which is good for DBAs.
    Non-Unicode vs. Unicode data types: comparison chart
    The primary difference between non-Unicode and Unicode data types is the ability of Unicode to easily handle the storage of foreign-language characters, which also requires more storage space.
    Types: non-Unicode uses char, varchar and text; Unicode uses nchar, nvarchar and ntext.
    Length: both store data in fixed or variable length.
    Padding: char data is padded with blanks to fill the field size (for example, if a char(10) field contains 5 characters, the system pads it with 5 blanks); nchar behaves the same. varchar stores the actual value and does not pad with blanks; nvarchar behaves the same.
    Storage: non-Unicode requires 1 byte per character; Unicode requires 2 bytes per character.
    Capacity: char and varchar can store up to 8000 characters; nchar and nvarchar can store up to 4000 characters.
    Best suited for: non-Unicode is best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters." [1] Unicode is best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly." [1]
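    A quick way to see the 1-byte vs. 2-byte storage difference for yourself; a minimal T-SQL check (the literal 'abc' is just an example):
    SELECT DATALENGTH(CAST('abc' AS varchar(10)))  AS varchar_bytes,   -- 3 bytes
           DATALENGTH(CAST('abc' AS nvarchar(10))) AS nvarchar_bytes;  -- 6 bytes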
    https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • Unable to Open unix file in UNICODE system which created NON-UNICODE system

    We are unable to open a Unix file in the Unicode system that was created in the non-Unicode system.
    We have two SAP systems, both ECC 6.0, but System 1 is non-Unicode and System 2 is Unicode.
    There is a common Unix directory for both systems.
    Our requirement is to create a file in the common Unix folder and write data to it from System 1,
    then open the same file in append mode from System 2 to write further data.
    The file in System 1 is created with the statement below:
    OPEN DATASET g_unix_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
    Now I have to append data from System 2 to the same file.
    I have tried the statements below in System 2 to open the file, but sy-subrc comes back as 8.
    1> OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING UTF-8.
    2> OPEN DATASET g_unix_file FOR APPENDING IN LEGACY TEXT MODE CODE PAGE cdp IGNORING CONVERSION ERRORS.
    3> OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING DEFAULT.
    4> OPEN DATASET g_unix_file FOR APPENDING IN TEXT MODE ENCODING NON-UNICODE.
    I have tried all the possibilities given in the F1 help for OPEN DATASET, but there is still a problem opening the file in append as well as output mode. However, the file opens successfully in input mode (read).
    Please advise how to resolve this issue.
    Thanks.

    The message captured is 'Permission denied'. The program is triggered with system user ID PPID.
    How can we check the security access of that user ID?

  • RFC destination definition with non-unicode external program

    Hello All,
    we have an issue with our RFID system connecting to the WM system (SWD).
    For most functions, the external RFID server/middleware makes RFC calls to the SAP system, i.e. from the outside system into SAP. This seems to be working fine.
    But in one case, SAP needs to print a label to an RFID printer. Here SAP calls the external RFID server/middleware using an RFC destination, i.e. from inside SAP to the outside system. This is failing.
    We think this has something to do with ECC 6.0 being a Unicode system and with the RFC destination setup. We define the RFC destination "ZRFID_SSI" in SM59, the same way as in the as-is SSI 4.6B production system. But in the SWD ECC 6.0 system, SM59 has more options, and we define the destination as 'non-Unicode' because the target RFID server/middleware is a non-Unicode Windows server. The RFC destination itself is actually working OK.
    To send a label to the printer, SWD calls function ZFSSIRF202 with remote destination ZRFID_SSI. The listener on the RFID server/middleware responds to the function call; this also seems to work. But then the RFID server/middleware cannot process the data from ZFSSIRF202; its log says the data is "0". (Actually, the first time SWD calls, the log says 0 data; the second time there is a connection failure; the third time 0 data; the fourth time a connection failure; and so on.)
    Compared with the production environment, everything is the same except that SAP is now a Unicode system. So our suspicion is that when the ECC 6.0 system sends data, it is encoded somewhat differently than when the SAP 4.6B system sends it. I have had a similar experience with our other interface (XSI), also involving an RFC destination but with XML file exchange: there, ECC 6.0 encodes the XML file as UTF-16, while SAP 4.6B encoded it as UTF-8.
    Anyone know how to help ?

    Hi Jeongbae,
    as of NW 7.0 EhP2, it is possible to directly set the code page in SM59.
    In earlier releases, this is not possible.
    In general, SAP systems use the logon language (SY-LANGU) to determine the code page, if this is available.
    Please check SAP note 788239.
    Please also have a look at SAP note 991572 for possible alternative settings, if SY-LANGU is not available.
    In addition I would recommend to have a look at SAP note 1021459.
    However, in order to analyze the problem properly, you need to know the exact short dump text (via an RFC trace).
    For XML processing, please read SAP note 1017101. UTF-16 should NOT be used in an XML file!
    Best regards,
    Nils Buerckel
    SAP AG

  • Open dataset in UTF-8. Problems between Unicode and non-Unicode

    Hello,
    I am currently testing file transfer between Unicode and non-Unicode systems.
    I transferred some Japanese KNA1 data (MANDT, NAME1, NAME2, city) from the non-Unicode system to a file with this option:
    SET LOCALE LANGUAGE pi_langu.
    OPEN DATASET pe_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8 WITH BYTE-ORDER MARK.
    Now I want to read the file from a unicode system. The code looks like this:
    OPEN DATASET file FOR INPUT IN TEXT MODE ENCODING UTF-8 SKIPPING BYTE-ORDER MARK.
    The characters look fine, but they are shifted: NAME1 is correct, but parts of the city characters now end up in NAME2...
    If I open the file in a non-Unicode system with the same coding, the data is OK again!
    Is there a problem with spaces between Unicode and non-Unicode?

    Hello again,
    after implementing and testing this method, we saw that the conversion always takes place in the Unicode system.
    For example: we have a char(35) field in MDMP with several Japanese characters. As soon as we transfer the data into the file and look at the binary data, the field is only 28 characters long; several spaces are missing. Now, if we open the file in our Unicode system using the mentioned class, the size grows back to 35 characters.
    On the other hand, if we export data from the Unicode system using this method, the size shrinks from 35 characters to 28 so that the MDMP system can interpret the data.
    As soon as all systems are on Unicode, this method is obsolete/wrong, because we no longer want to cut off or add the spaces; it is not needed anymore.
    The better way would be to create a "real" UTF-8 file in our MDMP system. The question is: is there a method to somehow add the missing spaces in the MDMP system?
    So that it works something like this:
    OPEN DATASET p_file FOR OUTPUT IN TEXT MODE ENCODING UTF-8 WITH BYTE-ORDER MARK.
    " MDMP (with ECC 6.0, by the way)
    IF charsize = 1.
    * add missing spaces to the structure
    * transfer structure to file
    " Unicode
    ELSE.
    * just transfer the structure to the file -> no conversion needed anymore
    ENDIF.
    I thought maybe this could somehow work with the class CL_ABAP_CONV_OUT_CE, but until now I have had no luck...
    Normally I would think that if I am creating a UTF-8 file, this work is done automatically by the TRANSFER statement.

  • Cannot type/paste normal international text when non-Unicode in CS6

    Hello,
    In all versions of DW (up to CS3, which I have used) I had no problem pasting/typing HTML or text with international characters in Design view when the page uses a non-Unicode yet international encoding (like Windows-Greek or ISO-Greek).
    Now, DW CS6 auto-converts all international characters typed/pasted in Design view to HTML entities (unless the page encoding is Unicode).
    For example, when the document has an encoding of:
    <meta http-equiv="Content-Type"  content="text/html;  charset=windows-1253">
    [ This is equal to Modify / Page Properties / Title/Encoding / Document Type (DTD): None & Encoding: Greek (Windows) ]
    ...in the past I was able to type/paste Greek characters/text in Design view and they were retained as such (simple text) in Code view (and this is what we need now as well).
    Yet, in DW CS6 such international characters/text (typed/pasted in Design view) are auto-converted to "&somechar;" entities, which is not what should happen; this messes up all the text. Design view shows the text correctly, but the HTML source (Code view) does not retain the international characters themselves, although it should, as long as the page uses an encoding/charset that can represent that text (e.g. a Greek encoding is compatible with Greek characters). I repeat that this worked correctly at least until DW CS3.
    Directly typing/pasting in DW CS6 design view correctly (i.e. retaining the original chars in code view) works ONLY when using Unicode.
    However, if we type/paste Greek text (with HTML tags or not) directly in Code view, DW CS6 retains the characters/text properly, and Design view displays everything properly too. Consequently, as a workaround, we can use Code view to type/paste international text/HTML when not using Unicode (UTF-8) as the page encoding. But this makes our life more difficult.
    So, has CS6 dropped support for typing/pasting international text/html directly in Design view, for non-Unicode international encodings?
    Or has something changed, so that we need to configure some setting(s) for the feature to work properly? (I haven't been able to find any setting that might affect this behavior. I also played with the Document Type (DTD) settings, but they did not affect the described behavior.)
    Please advise. This is very important.
    Thanks,
    Nick

    Thanks for the reply.
    As I have already mentioned, typing/pasting in Code View works properly.
    However, in previous versions of DW, pasting/typing in Design View was working fine, whatever the page encoding.
    I agree that pasting in Code view is not really a big deal. But having to do all editing/typing in Code view definitely is! What is the point of using a WYSIWYG editor if it can't produce correct source code (except in Unicode pages)? If we are going to do all editing in Code view, we could simply use Notepad (just an exaggeration) or another programming-oriented tool.
    I hope other people can confirm the problem and suggest solutions, or that Adobe fixes it.

  • OPEN DATASET file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE

    Hi There,
    I also have a similar issue. I am able to write the data to the application server in Chinese characters using OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING DEFAULT or OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING UTF-8. But when I save that file to my presentation server manually, all the Chinese characters show up as junk.
    When I use OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE, I get a runtime error, and when I use OPEN DATASET datei FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE IGNORING CONVERSION ERRORS, there is no error, but the application server output itself shows junk characters.
    Could you please tell me what you did to solve this?
    Regards,
    Chaitanya A

    Hi,
    Use this:
    OPEN DATASET file_path FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
      WITH SMART LINEFEED.
    It will definitely work.
    Regards,
    Manesh. R

  • Open Dataset non-unicode & default

    Currently I'm using OPEN DATASET with ENCODING DEFAULT, and it hits a codepage error. I found out that the file contains characters that are not UTF-8, so when I changed the OPEN DATASET to NON-UNICODE the codepage error went away. I'm wondering whether there is any side effect of changing from DEFAULT to NON-UNICODE.

    OPEN DATASET p_afile FOR INPUT IN TEXT MODE ENCODING DEFAULT IGNORING CONVERSION ERRORS.
    Add IGNORING CONVERSION ERRORS to your program and keep the encoding at DEFAULT (Unicode).
    Hope it will help you.
    Regards,
    sinagam.

  • File Transfer non-unicode - unicode via client

    Hello.
    I downloaded a binary file from an SAP 4.0 system to my client (Win2k) with OPEN DATASET (to read the file on the app server) and WS_DOWNLOAD (to save it to the client).
    Now I want to upload this file from my client to a 6.40 Unicode system. Therefore I do the following:
    - GUI_UPLOAD to get the file from the client into an internal table
    - OPEN DATASET dsn FOR OUTPUT IN BINARY MODE to save the contents of the internal table to the file system of the app server
    This works pretty well on non-Unicode systems but does not work properly on Unicode systems.
    Which options do I have to use? Anything with code pages?
    THX
    --MIKE

    Check out the OPEN DATASET - Mode - {TEXT MODE ENCODING {DEFAULT|UTF-8|NON-UNICODE}} option.
    The additions after ENCODING determine in which character representation the content of the file is handled.
    DEFAULT
    In a Unicode system, the designation DEFAULT corresponds to UTF-8; in a non-Unicode system, it corresponds to NON-UNICODE.
    UTF-8
    The characters in the file are handled according to the Unicode character representation UTF-8.
    NON-UNICODE
    In a non-Unicode system, the data is read or written without being converted. In a Unicode system, the characters in the file are handled according to the non-Unicode codepage that would be assigned to the current text environment (according to database table TCP0C) at the time of reading or writing in a non-Unicode system.
    Check out the ABAP Keyword Documentation.
    Regards
    Raja

  • Peoplesoft convert Oracle non-unicode database to unicode database

    I am following doc 1437384.1 to convert a PeopleSoft database from non-Unicode to Unicode.
    I use the following export statement (as user PS)
    SET NO TRACE;
    SET OUTPUT output_file.dat;
    SET NO DATA;
    EXPORT *;
    And the following import statement (as user sysadm)
    SET NO TRACE;
    SET NO DATA;
    SET INPUT output_file;
    SET LOG log_file;
    SET UNICODE ON;
    SET STATISTICS OFF;
    SET ENABLED_DATATYPE 9.0;
    IMPORT *;
    Before I do the Data Pump import, I am comparing the objects:
    On the source database:
    SQL> select object_type, count(*) from dba_objects where owner = 'SYSADM' group by object_type order by 1 asc;
    OBJECT_TYPE  COUNT(*)
    INDEX        33797
    LOB          2775
    TABLE        28829
    TRIGGER      9
    VIEW         21208
    On oracsc63 (the target DB):
    SQL> select object_type, count(*) from dba_objects where owner = 'SYSADM' group by object_type order by 1 asc;
    OBJECT_TYPE  COUNT(*)
    INDEX        23748
    LOB          2170
    TABLE        19727
    The object counts do not match. If I do the import like this, around 10,000 tables will not be converted to UTF-8.
    Any ideas how I can solve this? Who has experience with these PeopleSoft conversions?
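    For what it's worth, a MINUS query like the one below, run on the target and assuming a database link src_link pointing at the source database (the link name is a placeholder), would list the objects missing on the target:
    SELECT object_name, object_type
    FROM   dba_objects@src_link
    WHERE  owner = 'SYSADM'
    MINUS
    SELECT object_name, object_type
    FROM   dba_objects
    WHERE  owner = 'SYSADM';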

    Hello Jacques,
    please check sapnote #808505 (Secondary connection to Oracle DB w/ different character set).
    Regards
    Stefan

  • How to read a Unicode text file in a non-unicode SAP system

    I currently have a text file saved in UTF-8 format, and this file contains Thai characters.
    However, my SAP system is a non-Unicode system.
    When I use the code
    open dataset DSN for output in TEXT MODE encoding UTF-8
    the Thai characters become funny characters.
    How can I solve this in the ABAP program, without changing the system from non-Unicode to Unicode?

    Hi Eswar,
    How can I check whether the code page for Thai is already installed in the SAP system?
    Here is the code I use to test reading a file saved as UTF-8:
    DATA: l_filepath TYPE string.
    DATA: BEGIN OF l_filepath_str OCCURS 0,
            comm(1000) TYPE c,
          END OF l_filepath_str.
    DATA: l_datasetsucc LIKE rlgrap-filename.
    l_filepath = '/BAAC/Files/APP_.txt'.
    OPEN DATASET l_filepath FOR INPUT IN TEXT MODE ENCODING UTF-8.
    IF sy-subrc EQ 0.
      DO.
        CLEAR l_filepath_str.
        READ DATASET l_filepath INTO l_filepath_str.
        IF sy-subrc EQ 0.
          APPEND l_filepath_str.
        ELSE.
          EXIT.
        ENDIF.
      ENDDO.
    ELSE.
      WRITE:/ 'Error reading from file:', sy-subrc.
    ENDIF.
    CLOSE DATASET l_filepath.
    LOOP AT l_filepath_str.
      WRITE: / l_filepath_str-comm.
    ENDLOOP.
    It returns a runtime error. Any ideas?

  • Truncated record in OPEN DATASET ENCODING NON-UNICODE

    Hi,
    I have to read a Unicode-created file into a non-Unicode SAP system, release 4.7.
    When I OPEN DATASET using ENCODING UTF-8, I get a CONVT_CODEPAGE dump. That's odd, because my system is non-Unicode. I don't want to use the IGNORING CONVERSION ERRORS addition; the output would be corrupt.
    But when I use ENCODING NON-UNICODE or ENCODING DEFAULT, READ DATASET mysteriously truncates the record it reads from the real 401 characters to 361 characters. The variable is a string.
    I can see the full records through AL11.
    Any ideas?
    Thanks,
    Pablo.

    Hi,
    Try using:
    OPEN DATASET filename FOR INPUT IN TEXT MODE ENCODING DEFAULT
      IGNORING CONVERSION ERRORS.
    Since you can see the full records in AL11, the statement above should let you read them.
    Hope this will surely help you!
    Regards,
    Punit
