EA1 2.1 and Unicode data

Using version 2.1, I get rectangles in fields containing non-English characters (Amharic text, Geez characters). Version 1.5 displays the data correctly.
Both versions are set to UTF_16 in Tools->Preferences->Environment->Encoding.
Is there another parameter I need to set in 2.1 to get my Amharic characters to display?

Where exactly in the product are you unable to see the correct character rendering for a non-English locale? Is it in the data grid, result grid, report, script output, etc.? Please provide specific details to reproduce the issue.

Similar Messages

  • Unable to insert and retrieve Unicode data using Microsoft OLE DB Provider

    Hi,
    I have an ASP.NET web application that uses an OLE DB connection to an Oracle database.
    Database: Oracle 11g
    Provider: MSDAORA
    ConnectionString: "Provider=MSDAORA;Data Source=localhost;User ID=system;Password=oracle;convertNcharLiterals=true;"
    When I use the SQL Developer client and add convertNcharLiterals=true in sqldeveloper.conf, I am able to store and retrieve Unicode data.
    The character sets are as follows:
    Database character set is: WE8MSWIN1252
    National Language character set: AL16UTF16
    Select * from nls_database_parameters where parameter in ('NLS_CHARACTERSET','NLS_LENGTH_SEMANTICS','NLS_NCHAR_CHARACTERSET');
    PARAMETER                VALUE
    ----------------------   ------------
    NLS_CHARACTERSET         WE8MSWIN1252
    NLS_LENGTH_SEMANTICS     BYTE
    NLS_NCHAR_CHARACTERSET   AL16UTF16
    I have a test table:
    desc TestingUni
    Name  Null  Type
    ----  ----  -------------
    UNI1        VARCHAR2(20)
    UNI2        VARCHAR2(20)
    UNI3        NVARCHAR2(20)
    I execute the statement below from a System.Data.OleDb.OleDbCommand object.
    Insert into TestingUni(UNI3 ) values(N'汉语漢語');
    BUT when retrieving the same I get question marks (¿¿¿¿) instead of the Chinese characters (汉语漢語)
    Is there any way to add the above property (convertNcharLiterals) when querying the Oracle database over an OLE DB connection?
    Or is there any other provider for Oracle which would help me solve my problem?
    Or any other help regarding this?
    Thanks

    Use the OraOLEDB provider instead of MSDAORA, and set the environment variable ORA_NCHAR_LITERAL_REPLACE to TRUE. Doing so transparently replaces the N' internally and preserves the text literal for SQL processing.
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements003.htm#i42617
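    For comparison, the sqldeveloper.conf tweak mentioned above corresponds to the Oracle JDBC connection property oracle.jdbc.convertNcharLiterals. A minimal, hypothetical Java sketch of using it directly (connection details are placeholders; Oracle JDBC 11.1+ assumed):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.Properties;

    public class NcharLiteralDemo {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "system");
            props.setProperty("password", "oracle");
            // Equivalent of the convertNcharLiterals=true setting used in
            // sqldeveloper.conf above: N'...' literals keep their Unicode
            // content instead of passing through the database character set.
            props.setProperty("oracle.jdbc.convertNcharLiterals", "true");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", props);
                 Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(
                    "INSERT INTO TestingUni (UNI3) VALUES (N'汉语漢語')");
            }
        }
    }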

  • Question around UTL_FILE and writing Unicode data to a file.

    Database version : 11.2.0.3.0
    NLS_CHARACTERSET : AL32UTF8
    OS : Red Hat Enterprise Linux Server release 6.3 (Santiago)
    I did not work with multiple language characters and manipulating them. So, the basic idea is to write UTF8 data as a Unicode file using UTL_FILE. This is a fairly empty database and does not have any rows in at least the tables I am working on. First I inserted a row with English characters in the columns.
    I used utl_file.fopen_nchar to open and used utl_file.put_line_nchar to write it to the file on the Linux box. I open the file and I still see English characters (say "02CANMLKR001").
    Now, I updated the row with some columns having Chinese characters and ran the same script. It wrote the file. Now when I "vi" the file, I see "02CANè¹æ001" (some Unicode symbols in place of the Chinese characters, plus the regular English).
    When I FTP the file to Windows and open it using Notepad/Notepad++ it still shows the Chinese characters. Using TextPad, it shows ? in place of the Chinese characters, and the file properties say that the file is of type UNIX/UTF-8.
    My question: "Is my code working and writing the file in Unicode? If not, what are the required changes?" I know the question is a little vague, but any questions/suggestions towards answering it would really help.
    sample code:
    DECLARE
       l_file_handle   UTL_FILE.file_type;
       l_file_name     VARCHAR2 (50) := 'test.dat';
       l_rec           VARCHAR2 (250);
    BEGIN
       -- open the file for writing via the NCHAR variant of fopen
       l_file_handle := UTL_FILE.fopen_nchar ('OUTPUT_DIR', l_file_name, 'W');
       SELECT col1 || col2 || col3 INTO l_rec FROM table_name;
       -- put_line_nchar expects NVARCHAR2 data; the VARCHAR2 buffer is
       -- converted implicitly on the call
       UTL_FILE.put_line_nchar (l_file_handle, l_rec);
       UTL_FILE.fclose (l_file_handle);
    END;

    Regardless of what you think of my reply I'm trying to help you.
    I think you need to reread my reply because I can't find ANY relation at all between what I said and what you responded with.
    I wish things worked the way you mentioned and followed textbooks.
    Nothing in my reply is related to 'text books' or some 'academic' approach to development. Strictly based on real-world experience of 25+ years.
    Unfortunately lot of real world projects kick off without complete information.
    No disagreement here - but totally irrelevant to anything I said.
    Till we get the complete information, it's better to work on the idea rather than wasting project hours. I don't think it can work that way. All we do is lay some ground preparation and toy around with multiple options for the actual coding, even when we do not have the exact requirements.
    No disagreement here - but totally irrelevant to anything I said.
    And I think it's a good practice rather than waiting for complete information and pushing others.
    You can't just wait. But you also can't just go ahead on your own. You have to IMMEDIATELY 'push others' as soon as you discover any issues affecting your team's (or your) ability to meet the requirements. As I said above:
    Your problems are likely:
    1. lack of adequate requirements as to what the vendor really requires in terms of format and content
    2. lack of appropriate sample data - either you don't have the skill set to create it yourself or you haven't gotten any from someone else.
    3. lack of knowledge of the character sets involved to be able to create/conduct the proper tests
    If you discover something missing with the requirements (what character sets need to be used, what file format to use, whether BOMs are required in the file, etc) you simply MUST bring that to your manager's attention as soon as you suspect it might be an issue.
    It is your manager's job, not yours, to make sure you have the tools needed to do the job. One of those tools is the proper requirements. If there is ANYTHING wrong, or if you even THINK something is wrong with those requirements it is YOUR responsibility to notify your manager ASAP.
    Send them an email, leave a yellow-sticky on their desk but notify them. Nothing in what I just said says, or implies, that you should then just sit back and WAIT until that issue is resolved.
    If you know you will need sample data you MUST make sure the project plan includes SOME means to obtain sample data within the timeline needed by your project. As I repeated above, if you don't have the skill set to create it yourself, someone else will need to do it.
    I did not work with multiple language characters and manipulating them.
    Does your manager know that? If the project requires 'work with multiple language characters and manipulating them' someone on the project needs to have experience doing that. If your manager knows you don't have that experience but wants you to proceed anyway and/or won't provide any other resource that does have that experience that is ok. But that is the manager's responsibility and that needs to be documented. At a minimum you need to advise your manager (I prefer to do it with an email) along the following lines:
    Hey - manager person - As you know I have little or no experience to 'work with multiple language characters and manipulating them' and those skills are needed to properly implement and test that the requirements are met. Please let me know if such a resource can be made available.
    And I'm serious about that. Sometimes you have to make your manager do their job. That means you ALWAYS need to keep them advised of ANY issue that might affect the project. Once your manager is made aware of an issue it is then THEIR responsibility to deal with it. They may choose to ignore it, pretend they never heard about it or actually deal with it. But you will always be able to show that you notified them about it.
    Now, I updated the row with some columns having Chinese characters and ran the same script.
    Great - as long as you actually know Chinese that is; and how to work with Chinese characters in the context of a database character set, querying, creating files, etc.
    If you don't know Chinese or haven't actually worked with Chinese characters in that context then the project still needs a resource that does know it.
    You can't just try to bluff your way through something like character sets and code conversions. You either know what a BOM (byte order mark) is or you don't. You have either learned when BOMs are needed or you haven't.
    That said, we are in process of getting the information and sample data that we require.
    Good!
    Now make sure you have notified your manager of any 'holes' in the requirements and keep them up to date with any other issues that arise.
    NONE of the above suggests, or implies, that you should just sit back and wait until that is done. But any advice offered on the forums about specifics of your issue (such as whether you need to even worry about BOMs) is premature until the vendor or the requirements actually document the precise character set and file format needed.
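    To make the BOM point concrete: a UTF-8 BOM is nothing more than the bytes EF BB BF written ahead of the UTF-8 encoded text. A minimal Java sketch, purely illustrative (the file name and sample row are made up):

    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    public class BomDemo {
        public static void main(String[] args) throws Exception {
            try (FileOutputStream out = new FileOutputStream("test_bom.dat")) {
                // UTF-8 BOM: some Windows editors rely on it to detect the
                // encoding, while many Unix tools choke on it - exactly why
                // the requirements must say whether the vendor wants one.
                out.write(new byte[] {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF});
                Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8);
                w.write("02CAN中文001\n"); // mixed ASCII and Chinese sample
                w.flush();
            }
        }
    }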

  • Unable to show Unicode Data in Oracle RESTful Service JSON

    Hi Everyone.
    I have stored Unicode data in an Oracle database, and when I retrieve it in a SQL query it shows correctly. But when I retrieve the data as JSON using an Oracle RESTful web service (GET), it comes back with unknown characters, as shown below.
    next: {},$ref: "http://000.00.00.00:8085/ords/mobile/sch/loginm/?user=SURESH&pwd=123&page=1"
    items: [
    uri: {},$ref: "http://000.00.00.00:8085/ords/mobile/sch/loginm/41"
    stud_id: 41,
    stud_code: "1001",
    stud_name: "அப்துல் ஜப்பார்"
    My Database Setup as below:
    SQL> SELECT name, value$ FROM sys.props$;

    NAME                           VALUE$
    ------------------------------ ------------------------
    DICT.BASE                      2
    DEFAULT_TEMP_TABLESPACE        TEMP
    DEFAULT_PERMANENT_TABLESPACE   USERS
    DEFAULT_EDITION                ORA$BASE
    Flashback Timestamp TimeZone   GMT
    TDE_MASTER_KEY_ID
    DBTIMEZONE                     -07:00
    DST_UPGRADE_STATE              NONE
    DST_PRIMARY_TT_VERSION         11
    DST_SECONDARY_TT_VERSION       0
    DEFAULT_TBS_TYPE               SMALLFILE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               AL32UTF8
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              11.2.0.1.0
    GLOBAL_DB_NAME                 MOBILE
    EXPORT_VIEWS_VERSION           8
    WORKLOAD_CAPTURE_MODE
    WORKLOAD_REPLAY_MODE

    SQL> select DECODE(parameter, 'NLS_CHARACTERSET', 'CHARACTER SET',
      2  'NLS_LANGUAGE', 'LANGUAGE',
      3  'NLS_TERRITORY', 'TERRITORY') name,
      4  value from v$nls_parameters
      5  WHERE parameter IN ( 'NLS_CHARACTERSET', 'NLS_LANGUAGE', 'NLS_TERRITORY');

    NAME          VALUE
    ------------- ------------
    LANGUAGE      AMERICAN
    TERRITORY     AMERICA
    CHARACTER SET AL32UTF8
    Awaiting your solution.
    -- Abdul Jabbar

    Kumar,
    FTPing the PG.xml to the mds folder will not get the page into the MDS directory.
    You have to import the file using XML Importer.
    I understand you have done the import, but it was not successful.
    Could you please post the script you used to import the PG.xml,
    and the output you got when you ran it?
    Maybe you can refer to this URL for the scripts:
    http://apps2fusion.com/at/61-kv/331-oa-framework-scripts
    With regards,
    Kali.
    OSSI.

  • How to load Unicode data files with fixed record lengths?

    Hi!
    To load Unicode data files with fixed record lengths (in terms of characters, not bytes!) using SQL*Loader manually, I found two ways:
    Alternative 1: one record per row
    SQL*Loader control file example (without POSITION, since POSITION always refers to bytes!):
    LOAD DATA
    CHARACTERSET UTF8
    LENGTH SEMANTICS CHAR
    INFILE unicode.dat
    INTO TABLE STG_UNICODE
    TRUNCATE
    (
    A CHAR(2) ,
    B CHAR(6) ,
    C CHAR(2) ,
    D CHAR(1) ,
    E CHAR(4)
    )
    Datafile:
    001111112234444
    01NormalDExZWEI
    02ÄÜÖßêÊûÛxöööö
    03ÄÜÖßêÊûÛxöööö
    04üüüüüüÖÄxµôÔµ
    Alternative 2: variable-length records
    LOAD DATA
    CHARACTERSET UTF8
    LENGTH SEMANTICS CHAR
    INFILE unicode_var.dat "VAR 4"
    INTO TABLE STG_UNICODE
    TRUNCATE
    (
    A CHAR(2) ,
    B CHAR(6) ,
    C CHAR(2) ,
    D CHAR(1) ,
    E CHAR(4)
    )
    Datafile:
    001501NormalDExZWEI002702ÄÜÖßêÊûÛxöööö002604üuüüüüÖÄxµôÔµ
    Problems
    Implementing these two alternatives in OWB, I encounter the following problems:
    * How to specify LENGTH SEMANTICS CHAR?
    * How to suppress the POSITION definition?
    * How to define a flat file with variable-length records, and how to specify the number of bytes containing the length definition?
    Or is there another way that can be implemented using OWB?
    Any help is appreciated!
    Thanks,
    Carsten.

    Hi Carsten
    If you need to support the LENGTH SEMANTICS CHAR clause in an external table, then one option is to use an unbound external table and capture the access parameters manually. To create an unbound external table, you can skip the selection of a base file in the external table wizard. Then, when the external table is edited, you will get an Access Parameters tab where you can define the parameters. In 11gR2 the File-to-Oracle (external table) mapping can also add this clause via an option.
    Cheers
    David

  • Reading Unicode data from a file...

    I am writing an application that needs to read some configuration data from a file. An end user edits the configuration file to provide the configuration data. The Java code reads this file and uses the configuration data supplied by the user.
    The user can also save non-ASCII characters as part of the configuration data. Hence, I do not want to use Java properties files. What other options are available that allow me to read Unicode data into my Java code and also allow the user to save the configuration file as Unicode?

    Java characters are Unicode characters, so read file data that consists of Unicode characters into Java chars or Strings.
    You can read the data as primitive char values using the DataInputStream class, and the InputStreamReader class can decode Unicode (e.g. UTF-16) byte streams into characters.
    Data can be written using the OutputStreamWriter class.
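    A minimal sketch of the InputStreamReader approach (the file name and the choice of UTF-8 are assumptions; use whatever encoding the user saves the file in, and document it):

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    public class ConfigReader {
        public static void main(String[] args) throws Exception {
            // InputStreamReader decodes bytes into Unicode chars with an
            // explicit charset; never rely on the platform default for a
            // file that is shared between machines.
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(new FileInputStream("config.txt"),
                            StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }

    Note that since Java 6, java.util.Properties.load(java.io.Reader) also accepts a Reader, so even a properties file can hold non-ASCII text if you construct the Reader with an explicit charset.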

  • Help with writing and retrieving data from a table field with type "LCHR"

    Hi Experts,
    I need help with writing and reading data from a database table field which has the type "LCHR". I have given an example of the original code, but I don't know what to change it to in order to fix it while still reading in the original data that's stored in the LCHR field.
    Basically we have two function modules, one that saves list data to a database table and one that reads this data back in. Both function modules have an identical table which has an array of fields of type INT4, CHAR, and type P. The INT4 field is the first one.
    Incidentally, this worked in the 4.7 non-Unicode system but is now dumping in the new ECC6 Unicode system.
    Thanks in advance,
    C
    SAVING THE LIST DATA TO DB
    DATA: L_WA(800).
    LOOP AT T_TAB into L_WA.
    ZDBTAB-DATALEN = STRLEN( L_WA ).
    MOVE: L_WA to ZDBTAB-RAWDATA.
    ZDBTAB-LINENUM = SY-TABIX.
    INSERT ZDBTAB.
    READING THE DATA FROM DB
    DATA: BEGIN OF T_DATA,
                 SEQNR type ZDBTAB-LINENUM,
                 DATA type ZDBTAB-RAWDATA,
               END OF T_TAB.
    * Select the data.
    SELECT linenum rawdata from ZDBTAB into table T_DATA
         WHERE repid = w_repname
         AND rundate = w_rundate
         ORDER BY linenum.
    * Populate calling internal table.
    LOOP AT T-DATA.
    APPEND T_DATA to T_TAB.
    ENDLOOP.

    Hi Anuj,
    The unicode flag is active.
    When I run our report and then try to save the list data, a dump happens at the following point:
    LOOP AT T_TAB into L_WA.
    As I say, T_TAB consists of different fields and field types, whereas L_WA is CHAR 800. The dump mentions UC_OBJECTS_NOT_CONVERTIBLE.
    When I try to load a saved list, the dump happens at the following point:
    APPEND T_DATA-RAWDATA to T_TAB.
    T_DATA-RAWDATA is type LCHR and T_TAB consists of different fields and field types.
    In both examples the dumps mention UC_OBJECTS_NOT_CONVERTIBLE.
    Regards
    C

  • Inserting and retrieving data from an AL32UTF8 database using SQL Developer

    hi guys,
    Before I post my questions, I think it's better for me to provide you with my understanding first, so that it is easier to see where/if I have gone wrong.
    I am using Windows XP and Oracle 10g.
    Non-Unicode client - a client program that needs to use the OS code page for mapping the Unicode data retrieved from the database, as well as for displaying/inserting the characters from that code page to the database.
    E.g. sqlplusw.exe
    Therefore, when using a non-Unicode client
    1) we have to set the OS code page (Control Panel - Regional and Language settings - Advanced - Language for non-Unicode programs) to the code page that contains the characters we are going to display/insert.
    2) we also have to set the NLS_LANG character set to the character set of the code page we are going to insert, so that when we do an insert (e.g. in Thai), Oracle will know, and automatic conversion to Unicode can take place. This is also true when we retrieve Unicode data from the database, so that conversion to the correct character set can take place.
    INSERTING
    THAI ---> conversion ----> UNICODE
    RETRIEVING
    THAI <---- conversion <---- UNICODE
    I hope my basic understanding is correct up to this point.
    Unicode client - a client program that supports displaying/inserting Unicode characters without the need to set the OS code page (Control Panel - Regional and Language settings - Advanced - Language for non-Unicode programs).
    E.g. iSQL*Plus (HTTP) or SQL Developer
    However,
    1) there is still a need to set NLS_LANG so that correct conversion can take place between the client and the database.
    For example, when retrieving, if we set the NLS_LANG character set to ZHS16GBK (Chinese) and the data stored as Unicode in the database is, say, Thai, then the conversion would be wrong.
    Since it is a Unicode-capable client, the NLS_LANG character set should be set to Unicode as well.
    Here come my questions
    *Important - if you are busy and have no time for the rest of the questions, please help with the starred ones at least.
    *Q1) If I were to use a Unicode client, what should I set my NLS_LANG character set to?
    AMERICAN_AMERICA.UTF8?
    *Q2) Where do I set the NLS_LANG character set information in SQL Developer? I know there is a Metalink note on setting NLS_LANG for iSQL*Plus, but I can't seem to google any result for SQL Developer.
    Q3) Is my basic understanding right up to this point? If not, please explain in more general terms, as I am really not familiar with character sets, code pages, Unicode, glyphs and fonts.
    Q4) If a Unicode client does not need to refer to the OS code page (set in Regional and Language settings), is there a Unicode code page for the client to refer to, or is there any Windows API available?
    Q5) There is still a need to set the NLS_LANG so that correct conversion can take place between the client and the database. For example, when retrieving, if we set the NLS_LANG character set to ZHS16GBK (Chinese) and the data stored as Unicode in the database is, say, Thai, then the conversion would be wrong. Am I right on this point for a Unicode-capable client?
    Thanks for spending the time to read my questions; I hope to hear advice from you soon.
    Million thanks again for sharing.
    Best Regards,
    Noob but willing to learn

    The requirement to always set NLS_LANG is not true for JDBC, which ignores NLS_LANG altogether. Java programs fetch text data into String variables, which use Unicode UTF-16 by design. JDBC sets up character set conversion so that data is converted between UTF-16 and the database or national character set.
    The requirement to set NLS_LANG is not generally true for OCI, either. The first call in an OCI program can be OCIEnvNlsCreate(). This call has two parameters that allow the caller to define the character set to use for VARCHAR2/CHAR/LONG/CLOB/statement text and the character set to use for NVARCHAR2/NCHAR/NCLOB. Only if these character sets are specified as 0 is the NLS_LANG character set used. Also, OCI programs can specify different character sets for each bind or define variable (i.e. input/output buffer). Note: OCI programs always use NLS_LANG to initialize the language and territory settings for the client program and the database session; only the character set can be overridden in OCIEnvNlsCreate().
    OCIEnvNlsCreate() can specify the client character set as UTF-16 (in platform endianness). This is not possible with NLS_LANG.
    Various interfaces building on OCI, such as Oracle ODBC and ODP.NET, explicitly initialize OCI with a Unicode character set, and thus ignore the NLS_LANG character set as well.
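    To illustrate the JDBC point, a minimal sketch (connection details and the emp table are placeholders); NLS_LANG plays no part at any step:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcUnicodeFetch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT ename FROM emp")) {
                while (rs.next()) {
                    // The driver converts database-charset bytes straight to
                    // a UTF-16 Java String; no client code page is consulted.
                    System.out.println(rs.getString(1));
                }
            }
        }
    }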
    Thnx,
    Sergiusz

  • Pass unicode data in SQL Server 2008 stored procedure parameter

    Hi,
    I want to pass Unicode data in a SQL stored procedure, but I am not sure how to apply the N prefix with the NVARCHAR datatype.
    For example, I create a table LocalizationTest and want to insert a few records.
    Create table LocalizationTest
    (
        Field1 NVarchar(255)
    )
    INSERT INTO LocalizationTest
    VALUES(N'123 Illini Dr, 東皮奧里亞, 8989')
    The above statement works.
    To explain it more: script A works but script B does not.
    DECLARE @X NVARCHAR(255)
    [A]
    SET @X=N'123 Illini Dr, 東皮奧里亞, 8989'
    INSERT INTO LocalizationTest
    VALUES(@X)
    [B]
    SET @X= '123 Illini Dr, 東皮奧里亞, 8989'
    INSERT INTO LocalizationTest
    VALUES(@X)
    What is the correct way to execute script B because @X will be passed in a SQL Stored Procedure?
    Thanks
    Sharma M.

    If you do not pass the value as a Unicode value, there is nothing left of the Unicode characters but question marks. Once this has happened, AFAIK there is no way to undo it (other than resetting the value as Unicode characters again).
    Affirmative.
    Demo - this is what happens if you convert Unicode (nvarchar) to varchar with an implicit conversion (trying to force two bytes into one byte).
    3F hex is '?'.
    -- demo table uses a non-Unicode VARCHAR column on purpose
    CREATE TABLE #LocalizationTest (Field1 VARCHAR(255))
    -- 1) N'...' literal inserted directly into the varchar column
    INSERT INTO #LocalizationTest
    VALUES(N'123 Illini Dr, 東皮奧里亞, 8989')
    DECLARE @X NVARCHAR(255), @dSQL NVARCHAR(MAX)
    SET @X=N'123 Illini Dr, 東皮奧里亞, 8989'
    -- 2) nvarchar variable, still landing in the varchar column
    INSERT INTO #LocalizationTest VALUES(CONVERT(NVARCHAR(255),@X))
    -- 3) dynamic SQL with a proper N'...' literal
    SET @dSQL = N'INSERT INTO #LocalizationTest VALUES(N'''+@X+''')'
    PRINT @dSQL
    EXEC sp_ExecuteSQL @dSQL
    -- all three rows degrade to '?' (0x3F) on assignment to VARCHAR
    SELECT *, Field1Bin=convert(binary(30), Field1) FROM #LocalizationTest
    DROP TABLE #LocalizationTest
    123 Illini Dr, ?????, 8989 0x31323320496C6C696E692044722C203F3F3F3F3F2C203839383900000000
    123 Illini Dr, ?????, 8989 0x31323320496C6C696E692044722C203F3F3F3F3F2C203839383900000000
    123 Illini Dr, ?????, 8989 0x31323320496C6C696E692044722C203F3F3F3F3F2C203839383900000000
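    If the value comes from application code, the same rule applies end to end: keep it in an N-typed channel the whole way. A hypothetical Java/JDBC sketch (the procedure name and connection details are made up); setNString sends the parameter as nvarchar:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class NvarcharParamDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost:1433;databaseName=Test",
                    "sa", "password");
                 CallableStatement cs = conn.prepareCall(
                     "{call dbo.InsertLocalization(?)}")) {
                // setNString marks the parameter as nvarchar, so the East
                // Asian characters survive; a varchar-typed parameter would
                // degrade them to '?' before the procedure body even runs.
                cs.setNString(1, "123 Illini Dr, 東皮奧里亞, 8989");
                cs.execute();
            }
        }
    }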
    Kalman Toth Database & OLAP Architect

  • 4.7EEx1.10 to ECC6.0 upgrade and Unicode conversion

    Hi Experts,
    We are going to initiate the upgrade next month, so I have started preparing the plan and strategy for it.
    Our current setup is 4.7EEx110 / Win 2003 R2 64-bit / Oracle 10.2.0.4.0 (non-Unicode), and we have recently migrated onto this setup from Win2k 32-bit. The current hardware is Unicode compatible.
    With respect to the strategy for achieving this upgrade and Unicode conversion, I am planning as follows.
    Step 1) Perform the Unicode conversion on the current landscape (both export and import on the same servers)
    Step 2) Set up a temporary landscape as part of the dual-maintenance strategy and migrate data from the current systems to the temporary systems using the backup/restore method.
    Step 3) Perform the SAP version upgrade on the current landscape and set up transport routes from the temporary to the current landscape in order to keep them in sync.
    Step 4) After a successful upgrade, decommission the temporary landscape.
    Please provide your suggestions and valuable advice if there is anything wrong with my strategy and execution plan.
    Regards,
    Dheeraj

    Hi,
    Thanks. I have already referred to these notes; I am seeking advice with respect to my upgrade approach.
    However, I have planned to perform it in the following manner.
    1) Refresh Sandbox with Prod data and perform the upgrade to ECC6.0 EHP5, and subsequently the Unicode conversion, on the same server (since both export and import have to be performed on the same hardware, as we have recently migrated to this hardware, which is Unicode compatible)
    2) Set up a temporary landscape for DEV & QAS and establish a transport connection to the Production system in order to move urgent changes
    3) Keep track of the changes transported during the upgrade phase so that the same can be implemented in the upgraded systems, i.e. DEV & QAS
    4) After the Sandbox migration and signoff, we will perform the DEV & QAS upgrade and Unicode conversion on the same hardware (Note: since these are running on VMware, can we export the data from the upgraded system and import it onto a new VM?)
    5) Plan the production cutover and upgrade the Prod system to ECC6.0 EHP5, then do the Unicode conversion. I am planning to perform the upgrade over one weekend and then the Unicode conversion activity the next weekend (is that the right way?)
    My production setup: DB on one physical host and CI on a separate virtual host
    6) After the stabilization phase, we are planning the OS & DB upgrades as follows:
          a) Windows upgrade from 2003 R2 to Windows 2008 R2
          b) Oracle upgrade from 10.2 to 11.2
    If anyone thinks there is anything wrong with my above approach and it needs changes, please revert.
    I have one more doubt: I am going to upgrade 4.7EEx110 (WAS 620, Basis SP64) to ECC6.0 EHP5. I presume that I can upgrade straight from the current version to ECC6.0 EHP5 without installing an EHP first. Kindly confirm.
    Thanks

  • Integrating MDMP and Unicode systems with IDoc interfaces

    Hi,
    We are working on integrating SAP R/3 6.20 (MDMP) with a SAP PI 7.0 SP10 (Unicode) system. The source will send MATMAS or CLFMAS IDocs with Thai/Japanese characters, and PI should transform and post them to the SAP ECC 5.0 target system [IDoc-to-IDoc scenario].
    ( SAP R/3 Legacy 620 - Non-Unicode / MDMP ) ----> ( XI - Unicode ) ----> ( ECC - Unicode )
    I had a look at a few SAP notes (745030, 656350 and 613389) and it looks like there is no standard way/best practice to handle this scenario.
    References:
    1. PDF of TechEd Session ID IM101: Dealing with Multi-Language Garbage Data - Lessons Learned
    2. SAP Note 745030 - MDMP - Unicode Interface_Solution Overview.pdf
    3. MDMP_Unicode_Transfer_final.doc from SAP Note 745030
    4. SAP Note 656350 - Master Data Transfer UNICODE to MDMP Systems with ALE.pdf
    5. SAP Note 613389 - ALE SAP system group with Unicode systems (Solution-2)
    My understanding per the SAP notes (please correct me if I'm wrong):
    a. For MDMP integration we can't use the standard ALE configuration; instead we have to use a custom configuration (IDoc collection setting in the partner profile, scheduling RSEOUT002/RSEOUT00_MDMP, and using function module IDOC_INBOUND_ASYNCHRONOUS_2).
    b. The RFC destinations should use the proper logon language for correct MDMP IDoc transfer (for sending IDocs with Japanese characters, the logon language should be JP).
    c. If we want to transfer IDocs in more than one language, we need to create multiple partner profiles/RFC destinations, each with a specific logon language.
    Please guide us in integrating these systems if you have done similar integrations; here are my questions:
    1] Is there any configuration change required at the PI layer?
    2] Do we need to install code pages in the PI Unicode system for all the languages used, or is a Unicode system capable of handling all of them?
    3] Is it necessary to install any SAP add-on package in the R/3 MDMP system in order to support MDMP-to-Unicode data transfer?
    4] If we want to send MATMAS/CLFMAS IDocs with Thai/Japanese characters from the same system, what changes are required at the source system? (The source may send a MATMAS/CLFMAS IDoc with either Thai or Japanese characters, but not both in a single IDoc.)
    5] Can we use regular ALE & partner profile settings for handling multi-byte characters, or do we need to use IDoc collection and the RSEOUT002/RSEOUT00_MDMP report for the transfer?
    6] Are there any restrictions on the IDoc types (MATMAS, CLFMAS, etc.) supported in the MDMP-Unicode integration solution?
    7] Is there any best-practice document available for this scenario?
    8] Do we need to involve SAP AG for MDMP-to-Unicode system integrations (as per SAP Note 656350)?
    Thanks and Regards,
    Ananth

    Hi Ananth,
    as you have already mentioned, you need different RFC destinations for each language. So you have to make sure that the IDocs use the right destination according to their content.
    If you have messages from PI to MDMP it is the same: you need different channels with different logon languages as well. You need an identifier in the message that can be used for selecting the correct channel.
    There should not be a restriction on any IDoc type, but it is not possible to post a message with different languages (which require different code pages) in one IDoc.
    For correct conversion from a non-Unicode system to Unicode, the code pages have to be installed in the OS of the PI server.
    Regards
    Stefan

  • Importing non-Unicode data into a Unicode 10gR2 database

    Hi:
    I will have to import non-Unicode data into a Unicode 10gR2 database. The systems the data is coming from are the following: CODA, Timberline, COMMS, CMS, LIMS. These are all RDBMS, SQL-enabled systems. We are talking about pretty big amounts of data (a couple hundred GB combined).
    Did anybody go through a similar exercise?
    I know I'll have to set nls_length_semantics to CHAR.
    What other recommendations could you guys give?
    TIA,
    Greg

    I think "nls_length_semantics" isn't mandatory at this point, and you must extract a little quantity of information from every source and do some probes injecting them into the Oracle10g database.

  • LSMW: Codepage conversion error with a Unicode data file

    Hi all,
    I am currently developing an LSMW upload program which has to use a Unicode data file. The underlying/target system is NOT a Unicode system. The data file also contains non-Latin2 characters.
    In the step "Specify Files", I have specified my Unicode data file and selected the codepage type "4110 - Unicode UTF-8".
    In the step "Read Data", I then get the runtime error CONVT_CODEPAGE, exception CX_SY_CONVERSION_CODEPAGE.
    I would expect all non-convertible characters to be automatically transformed to "#", but the conversion program breaks. Transforming the characters to "#" would be fine.
    I am really wondering why I am able to specify the Unicode codepage type at first, but then the file cannot be converted correctly.
    What am I doing wrong, and what can I do to avoid the error?
    Thanks a lot in advance for helping me out...
    Regards,
    Klaus

    Hello,
    You need to convert the file to UTF-8 format. In Notepad you can choose this encoding when saving the file.
    Regards,
    Oscar.

  • Encoding Problem: non-Unicode Data to Unicode format of XI

    Hi SDN,
    I have a JDBC sender to SAP BW scenario. The database is MS SQL Server.
    The code page of the DB is CP1CIAS (description: SQL Server Sort Order 52 on Code Page 1252 for non-Unicode data).
    Some fields with values like ZAK&#x0;ADY TWORZYW SZTUCZNYCH are failing in XI mapping with the error
    Fatal Error: com.sap.engine.lib.xml.parser.Parser~
    XMLParser: #0 not allowed in Character data sections
    in the trace.
    Please help: how do I get past these code page errors? Would installing this code page on the XI server help?

    There is no such global setting; this is because your source has Unicode data, I trust, and the only other thing to try would be this:
    Arthur My Blog

  • Unicode data in non-UTF8 Oracle 8.1.7

    Hi,
    I have to migrate Unicode data from a UTF-8 Oracle 9.0.2 database to a non-UTF8 Oracle 8.1.7 database. The tables are small and I am reading and writing the data using Java code. The columns which contained the Unicode data have been made NCHAR in Oracle 8.1.7.
    When I try to insert the data, I get the error:
    java.sql.SQLException: ORA-12704: character set mismatch
    Can I have Unicode data stored in NCHAR columns in a non-UTF8 database?
    Is there any documentation available on the same?
    Thanks,
    Shipra

    Check out the Oracle Unicode Database Support paper on OTN - http://technet.oracle.com/tech/globalization/content.html
    Basically, NCHAR prior to Oracle9i cannot be Unicode. If you need to store Unicode data in 8.1.7, you need to use UTF8 as the database character set.
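    For reference, once both the database and the JDBC driver are Oracle9i or later, the Java-side piece of this is marking the bind as national-character-set data. A hedged sketch (table name, column, and credentials are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import oracle.jdbc.OraclePreparedStatement;

    public class NcharBindDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger")) {
                OraclePreparedStatement ps = (OraclePreparedStatement)
                    conn.prepareStatement("INSERT INTO t (ncol) VALUES (?)");
                // Flag the bind as NCHAR-form data; without this the value is
                // sent in the database character set, which is what raises
                // ORA-12704 against NCHAR/NVARCHAR2 columns.
                ps.setFormOfUse(1, OraclePreparedStatement.FORM_NCHAR);
                ps.setString(1, "Unicode text here");
                ps.executeUpdate();
                ps.close();
            }
        }
    }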
    Nat
