Language Conversion from Unicode (UTF-8) to Another Character Set

Hi,
I am creating a file programmatically containing vendor master data (FTP interface).
The vendor name and vendor address are maintained in the local language (Taiwanese) in the SAP system; these values are stored in the Unicode (UTF-8) character set.
The Unicode data has to be converted to BIG5 for Taiwan before this information is sent in the file.
How can I perform this conversion and change the character set of the values I am retrieving from table LFA1 to BIG5?
Is it possible to do this conversion in SAP? Does SAP allow this?
/Mike
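
One way to do this on the ABAP side is to let the kernel convert the data while the file is written on the application server. The sketch below is only an illustration, not a confirmed solution: the SAP code page number '8300' for BIG5, the file path and the vendor number are assumptions (code page numbers can be checked in transaction SCP).

* Minimal sketch: write name/address from LFA1 in BIG5 (assumed code page '8300')
DATA: lv_file TYPE string VALUE '/tmp/vendor_big5.txt',   " hypothetical path
      lv_line TYPE string,
      ls_lfa1 TYPE lfa1.

SELECT SINGLE * FROM lfa1 INTO ls_lfa1
  WHERE lifnr = '0000100001'.                             " example vendor

CONCATENATE ls_lfa1-name1 ls_lfa1-stras ls_lfa1-ort01
       INTO lv_line SEPARATED BY ';'.

* LEGACY TEXT MODE converts from the system format (Unicode) to the
* code page given here while writing
OPEN DATASET lv_file FOR OUTPUT IN LEGACY TEXT MODE CODE PAGE '8300'.
TRANSFER lv_line TO lv_file.
CLOSE DATASET lv_file.

If some characters cannot be mapped to BIG5, the OPEN DATASET additions REPLACEMENT CHARACTER and IGNORING CONVERSION ERRORS can be used. For an in-memory conversion, see the cl_abap_conv_out_ce sketch further down.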

Hi Manik,
  I have a similar requirement: I need to convert Unicode Chinese characters to GB2312-encoded Chinese characters. I already posted in the forums but did not get the required solution.
Can you please provide the solution you implemented and confirm whether it can also be used to solve the above problem?
Hoping for your reply.
Regards,
Prakash
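
The same conversion can be done in memory with the conversion classes, which also covers the GB2312 case. Again only a sketch: the code page number '8400' for GB2312 and the example text are assumptions, so verify the number in transaction SCP first.

* Minimal sketch: convert a Unicode string to GB2312-encoded bytes in memory
DATA: lv_text  TYPE string VALUE 'some Chinese text',   " example input
      lv_bytes TYPE xstring,
      lo_conv  TYPE REF TO cl_abap_conv_out_ce.

* '8400' is assumed to be the SAP code page for GB2312 - verify in transaction SCP
lo_conv = cl_abap_conv_out_ce=>create( encoding    = '8400'
                                       replacement = '#'
                                       ignore_cerr = abap_true ).

lo_conv->convert( EXPORTING data   = lv_text
                  IMPORTING buffer = lv_bytes ).

* lv_bytes now holds the GB2312 bytes; write them to a file opened
* IN BINARY MODE, or hand them over to the FTP interface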

Similar Messages

  • Error " conversion error between two character sets" in PI MONI

    Hi Experts
    I am doing a file-to-IDoc scenario. I am getting the following error in PI MONI: "conversion error between two character sets".
    Please suggest how to solve this issue.
    Thanks in advance.

    Hi Mickael
    Below is the complete error message found in PI MONI.
    <SAP:Error SOAP:mustUnderstand="1" xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/">
      <SAP:Category>XIServer</SAP:Category>
      <SAP:Code area="INTERNAL">SYSTEM_DUMP</SAP:Code>
      <SAP:P1 />
      <SAP:P2 />
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:Stack>PI Server : XBTO80__0000 : Conversion error between two character sets.</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>

  • Character conversion from Unicode to WE8ISO8859P1

    Hi,
    I have a text field in which I also need to enter Unicode characters. When I enter the Unicode character '€' in the text field and click the "Save" button, the error "java.sql.SQLException: Cannot map Unicode to Oracle character" is thrown. The Oracle database character set is "WE8ISO8859P1". Do I need to convert the Unicode character to WE8ISO8859P1 explicitly in the controller? If so, please mention the method through which I can achieve this. Or is there any profile option or setup change through which I can avoid this error? Can anyone please suggest how to proceed?
    Thanks,
    Sreeja

    Please identify your database character set:
    SQL> select * from NLS_DATABASE_PARAMETERS;
    If the NLS_CHARACTERSET is WE8ISO8859P1, it is not capable of storing the Euro symbol (please do a Google search to find various references).
    To store the Euro symbol, you will most likely need to change the database character set to UTF8 - please see the MOS Docs mentioned in this thread for details - Adding Greek & German language to R12
    HTH
    Srini

  • Importing from a different character set

    Oracle 8.1.7 / Windows NT
    I'm trying to import a dump file which was created with character set WE8ISO8859P9. My database uses character set UTF8. Some of the records can't be inserted because of error "ORA-1401: Value too large for column". Is this because of the different character sets? If I switch my session to WE8ISO8859P9, imp says "character set conversion from x to y not supported."
    How can I get these last records inserted? Here's an excerpt from the log:
    Connected to: Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
    With the Partitioning option
    JServer Release 8.1.7.0.0 - Production
    Export file created by EXPORT:V08.00.05 via conventional path
    Warning: the objects were exported by NOC_ADMIN, not by you
    Import done in WE8ISO8859P9 character set and UTF8 NCHAR character set
    Import server uses UTF8 character set (possible charset conversion)
    Export server uses WE8ISO8859P9 NCHAR character set (possible charset conversion)
    . importing NOC_ADMIN's objects into NOC_ADMIN
    . . importing table "ACCESSROUTERIFS_" 782 rows imported
    . . importing table "ITEM_"
    IMP-00019: row rejected due to ORACLE error 1401
    IMP-00003: ORACLE error 1401 encountered
    ORA-01401: inserted value too large for column
    Column 1 33886
    Column 2
    Column 3
    Column 4 1323
    Column 5
    Column 6 11
    Column 7 18600
    Column 8 18600
    Column 9 20-NOV-2000:00:00:00
    Column 10 processing
    Column 11 inactive
    Column 12
    Column 13
    Column 14 35682.0
    Column 15
    Column 16
    Column 17
    Column 18 05.12.00: KD weiss noch nix neues, er wird uns inf...
    Column 19
    Column 20 kschmid
    Column 21 09-FEB-2001:15:50:21
    Column 22
    Column 23 12
    Column 24
    Column 25 06-NOV-2000:00:00:00
    null

    Please try Oracle RDBMS support. This issue is to do with Oracle Import.

  • Validate that a string is composed from a defined character set

    Hi experts,
    I need to validate that a string entered as a parameter is composed only of the following character set:
    the 26 letters of the alphabet (both upper and lower case), an apostrophe "'" (as in O'Brien), and a blank between characters (as in Mc Donald). So the total number of valid characters for any of these fields is 54.
    Could you provide efficient code for this?
    Please help.
    Rewards guaranteed.

    Hi,
    Check the below code.
    data: var(52) type c value 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'.
    data: v1(14) type c value 'welcome to SDN'.
    data: v2 type i.
    data: v3 type i.
      v3 = 0.
      v2 = strlen( v1 ).
      " check every character of v1 against the allowed set in var
      do v2 times.
        if v1+v3(1) = space.
          write: / 'test contains character space'.
        elseif v1+v3(1) ca var.
          write: / 'test'.
        elseif v1+v3(1) = '/'.      " was v1(v3) - offset and length were mixed up
          write: / 'test contains character /'.
        endif.
        v3 = v3 + 1.
      enddo.
    Regards,
    Shravan G.
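
    An alternative sketch that checks the whole string in one statement with the CO ("contains only") operator; the variable names and the example value are made up here, and the allowed set is built from the 54 characters named in the requirement above:

    " Sketch: accept only upper/lower case letters, apostrophe and blank (54 characters)
    DATA: lv_allowed TYPE string,
          lv_name    TYPE string VALUE `Mc Donald O'Brien`.   " example input only

    lv_allowed = `ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz'`.

    IF lv_name CO lv_allowed.                    " CO = "contains only"
      WRITE: / 'Valid: only letters, apostrophe and blank'.
    ELSE.
      WRITE: / 'Invalid character found at offset', sy-fdpos.
    ENDIF.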

  • Language conversion from one to other in runtime

    Hello Friends,
    I have a requirement to display the material description in EN and in Chinese as well. In table MAKT I have the description only in EN, but my requirement is to take the description (in English), convert it at runtime, and populate it in Chinese on a SAPscript form.
    I hope someone can give a solution for my question.
    Valuable answers will be rewarded.
    Thanks,
    Murali.

    Hi,
    For this to work, the translation has to be maintained in Chinese in MAKT. It is not possible to convert from English to Chinese dynamically at runtime; once maintained, the description can be read by language key (see the sketch below).
    Edited by: Ramachandra Kamath on Feb 15, 2008 8:17 AM
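
    If the Chinese description is maintained, the report only needs to read MAKT with the Chinese language key and fall back to English when no translation exists. A minimal sketch; the material number is just an example, and the internal language keys should be verified in table T002:

    " Sketch: read the material description in Chinese if maintained, else in English
    DATA: lv_maktx TYPE makt-maktx,
          lv_matnr TYPE makt-matnr VALUE '000000000000000100'.   " example material
    SELECT SINGLE maktx FROM makt INTO lv_maktx
      WHERE matnr = lv_matnr
        AND spras = '1'.                " assumed internal key for Chinese - see T002
    IF sy-subrc <> 0.
      SELECT SINGLE maktx FROM makt INTO lv_maktx
        WHERE matnr = lv_matnr
          AND spras = 'E'.              " fall back to English
    ENDIF.
    WRITE: / lv_maktx.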

  • Language conversion from english - chinese

    Hi,
    I have a requirement to produce a report after converting the employee name and address details from English to Chinese.
    Please let me know if anyone has an idea on the same.
    Thanks & Regards,
    Raju
    Edited by: V R K Raju P on Feb 19, 2010 2:06 PM

    Raju,
    Elaborate your exact requirement. Do you want to pull a report with the employee name and address in Chinese, or do you want to know how to maintain an employee name and address in Chinese?
    For the latter, you may have to handle it by enhancing infotypes 0001 and 0006 so that the user can enter the details in Chinese, and the same will be stored in the PA0001 and PA0006 tables. Your custom report can later read these tables to display the Chinese details of these fields.
    Regards
    Nanda

  • Conversion from Unicode to Font

    Greetings everyone,
    Does anyone have any idea how I can convert a Devanagari Unicode (UTF-8) document into a normal ASCII text document (with Devanagari fonts) that can be edited with Word and used with non-Unicode software?
    Thanks a lot in advance,
    Vibhu
    Message was edited by: bullet350

    It has been a while since I was trying to find a page layout program that supported Devanagari. About six months ago I found iCalamus. It didn't support editing or input, but if you had a text file you could paste it in place.
    I contacted the developer and he said they would be adding input/editing support. I just downloaded the latest version and it seems that they have done so!
    I haven't done any extensive testing to see whether all the conjuncts work properly. Quick testing suggests it is a bit buggy, but fairly good - at least on screen. I haven't tried printing anything at all.

  • Conversion error, from character set 4102 to character set 4103

    Hi,
    We've developed a JCO server (in Java) and an ABAP report that calls the function provided by the JCO server.
    MetaData:
         static {
              repository = new Repository("SMSRepository");
              fmeta = new JCO.MetaData("ZSMSSEND");
              fmeta.addInfo("TO",      JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
              fmeta.addInfo("CONTENT", JCO.TYPE_CHAR, 255, 0, 0, JCO.IMPORT_PARAMETER, null);
              fmeta.addInfo("RETN",    JCO.TYPE_CHAR, 255, 0, 0, JCO.EXPORT_PARAMETER, null);
              repository.addFunctionInterfaceToCache(fmeta);
         }
    Server parameters:
           Properties prop = new Properties();
           prop.put("jco.server.gwhost","shaw2k07");
           prop.put("jco.server.gwserv","sapgw01");
           prop.put("jco.server.progid","JCOSERVER01");
           prop.put("jco.server.unicode","1");
           srv = new SMSServer(prop,repository);
    If we run the JCO server either on my client machine (from Developer Studio) or on the WAS machine (as a stand-alone Java program), everything is OK. On the ABAP side, the SM59 Unicode test returns that the destination is a Unicode system, and the ABAP report that calls the function runs smoothly.
    But when we package this JCO server into a web application and deploy it to WAS, a problem occurs. The SM59 Unicode test still says the destination is a Unicode system, but the ABAP report ends with an ABAP dump:
    Conversion error between two character set
    RFC_CONVERSION_FIELD
    Conversion error "RETN" from character set 4102 to character set 4103
    A conversion error occurred during the execution of a Remote Function
    Call. This happened either when the data was received or when it was
    sent. The latter case can only occur if the data is sent from a Unicode
    system to a non-Unicode system.
    I read the jrfc.trc log; it shows the server receives data in code page 4103 (that is fine) but sends data in code page 4102 (that is the problem). 4102 is UTF-16 big endian and 4103 is UTF-16 little endian. Our system is Windows on Intel 32-bit architecture, so based on Note 552464 it should be 4103.
    Why does it send data (the Java JCO server sending the output parameter back to ABAP) in 4102?
    What is the problem? Thank you very much!
    Best Regards,
    Xiaoming Yang
    Message was edited by:
            Xiaoming Yang

    Hello Experts,
    Any replies on this?
    I am also getting a similar kind of error.
    Do you have any idea on this?
    Thanks and Best Regards,
    Suresh

  • Risk involved converting Oracle character set to Unicode (AL32UTF8 or UTF8)

    Hi All -
    I am a PL/SQL developer and quite new to database administration, with very little knowledge base on this.
    Currently I am working on a project where we have a requirement to store data in multiple languages in the database.
    After my findings via Google I am clear that our database character set needs to be changed to Unicode (AL32UTF8 or UTF8). Before moving forward I would like to know what risks are involved in doing this.
    A few questions:
    Would this change take a long time and involve a lot of effort?
    Can we revert back once this change is done, with no data loss?
    Will there be any changes required while writing SQL on tables having multi-language data?
    As of now the requirement to store data in multiple languages is specific to some tables only, not the whole DB. Are there any other options for storing data in different languages (Spanish, Japanese, Chinese, Italian, German, and French) in just one specific table?
    Thanks...
    Edited by: user633761 on Jun 7, 2009 9:15 PM

    >Will there be any changes required while writing SQL on tables having multi-language data?
    If you move from a single-byte character set to a multi-byte character set, you should take into account that one character may use 1, 2, 3 or 4 bytes of storage: http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch2charset.htm#i1006683
    This may impact SQL or PL/SQL code that works on character string lengths.
    Note also that using exp/imp to change the database character set is not so simple; see the following message:
    Re: charset conversion from WE8ISO8859P1 (8.1.7.0) to AL32UTF8(9.0.1.1)
    >Are there any other options for storing data in different languages (Spanish, Japanese, Chinese, Italian, German, and French) in just one specific table?
    Using NCHAR character types is another possibility:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch11charsetmig.htm#sthref1493
    Edited by: P. Forstmann on Jun 8, 2009 9:10 AM

  • Installing Character Sets for different languages

    We have a need to have the character sets installed for 14 different languages. Our BW system will receive data from other systems with different languages installed. We don't necessarily need to log in with these languages, but we just need the character sets in order to properly read the data. Do we need to go through the entire language installation process for each language in order to get the character sets installed? That seems like a rather lengthy process and I'm looking for other options. Also, once I get all of these character sets into my development system, is it possible to transport these character sets into my QA and production systems?

    1. Start the FM 7.0 installation from the CD.
    2. Work your way to the Setup screen and select "Custom" installation.
    3. Select only the "Dictionaries" components (i.e. uncheck the other components) and click on the "Change" button.
    4. This will pop up the available "sub-components", i.e. the dictionaries.
    5. Select the ones you want and click on the "Continue" button and then the "Next" button to install only the new dictionaries.

  • Need to change character set from WE8MSWIN1252 to AL32UTF8.

    Hi,
    We have installed the database with character set WE8MSWIN1252, but of late we have understood that the client requirement is AL32UTF8. Is there any easy way to change it?
    Thanks in-advance.
    Rgds
    DBA.

    Pl also see these Docs
    AL32UTF8 / UTF8 (Unicode) Database Character Set Implications          [Document 788156.1]
    Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode)          [Document 260192.1]
    If you are using EBS, then also see
    For 11i - Appendix A of "Oracle Applications 11i Internationalization Guide          [Document 333785.1]"
    For R12 - Appendix A of "Globalization Guide for Oracle Applications Release 12          [Document 393861.1]"
    Changing the characterset of an existing database/application is not a trivial task - will require thorough planning and testing.
    HTH
    Srini

  • Change character set

    Hi
    Can anyone tell me how to change the character set?
    I tried with ALTER SESSION but it doesn't work.
    thanks

    Article from Metalink
    Doc ID: Note:66320.1
    Subject: Changing the Database Character Set or the Database National Character Set
    Type: BULLETIN
    Status: PUBLISHED
    Content Type: TEXT/PLAIN
    Creation Date: 23-OCT-1998
    Last Revision Date: 12-DEC-2003
    PURPOSE ======= To explain how to change the database character set or national character set of an existing Oracle8(i) or Oracle9i database without having to recreate the database. 1. SCOPE & APPLICATION ====================== The method described here is documented in the Oracle 8.1.x and Oracle9i documentation. It is not documented but it can be used in version 8.0.x. It does not work in Oracle7. The database character set is the character set of CHAR, VARCHAR2, LONG, and CLOB data stored in the database columns, and of SQL and PL/SQL text stored in the Data Dictionary. The national character set is the character set of NCHAR, NVARCHAR2, and NCLOB data. In certain database configurations the CLOB and NCLOB data are stored in the fixed-width Unicode encoding UCS-2. If you are using CLOB or NCLOB please make sure you read section "4. HANDLING CLOB AND NCLOB COLUMNS" below in this document. Before changing the character set of a database make sure you understand how Oracle deals with character sets. Before proceeding please refer to [NOTE:158577.1] "NLS_LANG Explained (How Does Client-Server Character Conversion Work?)". See also [NOTE:225912.1] "Changing the Database Character Set - an Overview" for general discussion about various methods of migration to a different database character set. If you are migrating an Oracle Applications instance, read [NOTE:124721.1] "Migrating an Applications Installation to a New Character Set" for specific steps that have to be performed. If you are migrating from 8.x to 9.x please have a look at [NOTE:140014.1] "ALERT: Oracle8/8i to Oracle9i Using New "AL16UTF16"" and other referenced notes below. Before using the method described in this note it is essential to do a full backup of the database and to use the Character Set Scanner utility to check your data. See the section "2. USING THE CHARACTER SET SCANNER" below. Note that changing the database or the national character set as described in this document does not change the actual character codes, it only changes the character set declaration. If you want to convert the contents of the database (character codes) from one character set to another you must use the Oracle Export and Import utilities. This is needed, for example, if the source character set is not a binary subset of the target character set, i.e. if a character exists in the source and in the target character set but not with the same binary code. All binary subset-superset relationships between characters sets recognized by the Oracle Server are listed in [NOTE:119164.1] "Changing Database Character Set - Valid Superset Definitions". Note: The varying width character sets (like UTF8) are not supported as national character sets in Oracle8(i) (see [NOTE:62107.1]). Thus, changing the national character set from a fixed width character set to a varying width character set is not supported in Oracle8(i). NCHAR types in Oracle8 and Oracle8i were designed to support special Oracle specific fixed-width Asian character sets, that were introduced to provide higher performance processing of Asian character data. Examples of these character sets are : JA16EUCFIXED ,JA16SJISFIXED , ZHT32EUCFIXED. For a definition of varying width character sets see also section "4. HANDLING CLOB AND NCLOB COLUMNS" below. WARNING: Do not use any undocumented Oracle7 method to change the database character set of an Oracle8(i) or Oracle9i database. This will corrupt the database. 2. 
USING THE CHARACTER SET SCANNER ================================== Character data in the Oracle 8.1.6 and later database versions can be efficiently checked for possible character set migration problems with help of the Character Set Scanner utility. This utility is included in the Oracle Server 8.1.7 software distribution and the newest Character Set Scanner version can be downloaded from the Oracle Technology Network site, http://otn.oracle.com The Character Set Scanner on OTN is available for limited number of platforms only but it can be used with databases on other platforms in the client/server configuration -- as long as the database version matches the Character Set Scanner version and platforms are either both ASCII-based or both EBCDIC-based. It is recommended to use the newest Character Set Scanner version available from the OTN site. The Character Set Scanner is documented in the following manuals: - "Oracle8i Documentation Addendum, Release 3 (8.1.7)", Chapter 3 - "Oracle9i Globalization Support Guide, Release 1 (9.0.1)", Chapter 10 - "Oracle9i Database Globalization Support Guide, Release 2 (9.2)", Chapter 11 Note: The Character Set Scanner coming with Oracle 8.1.7 and Oracle 9.0.1 does not have a separate version number. It reports the database release number in its banner. This version of the Scanner does not check for illegal character codes in a database if the FROMCHAR and TOCHAR (or FROMNCHAR and TONCHAR) parameters have the same value (i.e. you simulate migration from a character set to itself). The Character Set Scanner 1.0, available on OTN, reports its version number as x.x.x.1.0, where x.x.x is the database version number. This version adds a few bug fixes and it supports FROMCHAR=TOCHAR provided it is not UTF8. The Character Set Scanner 1.1, available on OTN and with Release 2 (9.2) of the Oracle Server, reports its version number as v1.1 followed by the database version number. This version adds another bug fixes and the full support for FROMCHAR=TOCHAR. None of the above versions of the Scanner can correctly analyze CLOB or NCLOB values if the database or the national character set, respectively, is multibyte. The Scanner reports such values randomly as Convertible or Lossy. The version 1.2 of the Scanner will mark all such values as Changeless (as they are always stored in the Unicode UCS-2 encoding and thus they do not change when the database or national character set is changed from one multibyte to another). Character Set Scanner 2.0 will correctly check CLOBs and NCLOBs for possible data loss when migrating from a multibyte character set to its subset. To verify that your database contains only valid codes, specify the new database character set in the TOCHAR parameter and/or the new national character set in the TONCHAR parameter. Specify FULL=Y to scan the whole database. Set ARRAY and PROCESS parameters depending on your system's resources to speed up the scanning. FROMCHAR and FROMNCHAR will default to the original database and national character sets. The Character Set Scanner should report only Changless data in both the Data Dictionary and in application data. If any Convertible or Exceptional data are reported, the ALTER DATABASE [NATIONAL] CHARACTER SET statement must not be used without further investigation of the source and type of these data. 
In situations in which the ALTER DATABASE [NATIONAL] CHARACTER SET statement is used to repair an incorrect database character set declaration rather than to simply migrate to a new wider character set, you may be advised by Oracle Support Services analysts to execute the statement even if Exceptional data are reported. For more information see also [NOTE:225912.1] "Changing the Database Character Set - a short Overview". 3. CHANGING THE DATABASE OR THE NATIONAL CHARACTER SET ====================================================== Oracle8(i) introduces a new documented method of changing the database and national character sets. The method uses two SQL statements, which are described in the Oracle8i National Language Support Guide: ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set> ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set> The database name is optional. The character set name should be specified without quotes, for example: ALTER DATABASE CHARACTER SET WE8ISO8859P1 To change the database character set perform the following steps. Note that some of them have been erroneously omitted from the Oracle8i documentation: 1. Use the Character Set Scanner utility to verify that your database contains only valid character codes -- see "2. USING THE CHARACTER SET SCANNER" above. 2. If necessary, prepare CLOB columns for the character set change -- see "4. HANDLING CLOB AND NCLOB COLUMNS" below. Omitting this step can lead to corrupted CLOB/NCLOB values in the database. If SYS.METASTYLESHEET (STYLESHEET) is populated (9i and up only) then see [NOTE:213015.1] "SYS.METASTYLESHEET marked as having convertible data (ORA-12716 when trying to convert character set)" for the actions that need to be taken. 3. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all. 4. Execute the following commands in Server Manager (Oracle8) or sqlplus (Oracle9), connected as INTERNAL or "/ AS SYSDBA": SHUTDOWN IMMEDIATE; -- or NORMAL <do a full database backup> STARTUP MOUNT; ALTER SYSTEM ENABLE RESTRICTED SESSION; ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0; ALTER SYSTEM SET AQ_TM_PROCESSES=0; ALTER DATABASE OPEN; ALTER DATABASE CHARACTER SET <new_character_set>; SHUTDOWN IMMEDIATE; -- OR NORMAL STARTUP RESTRICT; 5. Restore the parallel_server parameter in INIT.ORA, if necessary. 6. Execute the following commands: SHUTDOWN IMMEDIATE; -- OR NORMAL STARTUP; The double restart is necessary in Oracle8(i) because of a SGA initialization bug, fixed in Oracle9i. 7. If necessary, restore CLOB columns -- see "4. HANDLING CLOB AND NCLOB COLUMNS" below. To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both statements together if you wish. Error Conditions ---------------- A number of error conditions may be reported when trying to change the database or national character set. In Oracle8(i) the ALTER DATABASE [NATIONAL] CHARACTER SET statement will return: ORA-01679: database must be mounted EXCLUSIVE and not open to activate - if you do not enable restricted session - if you startup the instance in PARALLEL/SHARED mode - if you do not set the number of queue processes to 0 - if you do not set the number of AQ time manager processes to 0 - if anybody is logged in apart from you. This error message is misleading. The command requires the database to be open but only one session, the one executing the command, is allowed. 
For the above error conditions Oracle9i will report one of the errors: ORA-12719: operation requires database is in RESTRICTED mode ORA-12720: operation requires database is in EXCLUSIVE mode ORA-12721: operation cannot execute when other sessions are active Oracle9i can also report: ORA-12718: operation requires connection as SYS if you are not connect as SYS (INTERNAL, "/ AS SYSDBA"). If the specified new character set name is not recognized, Oracle will report one of the errors: ORA-24329: invalid character set identifier ORA-12714: invalid national character set specified ORA-12715: invalid character set specified The ALTER DATABASE [NATIONAL] CHARACTER SET command will only work if the old character set is considered a binary subset of the new character set. Oracle Server 8.0.3 to 8.1.5 recognizes US7ASCII as the binary subset of all ASCII-based character sets. It also treats each character set as a binary subset of itself. No other combinations are recognized. Newer Oracle Server versions recognize additional subset/superset combinations, which are listed in [NOTE:119164.1]. If the old character set is not recognized as a binary subset of the new character set, the ALTER DATABASE [NATIONAL] CHARACTER SET statement will return: - in Oracle 8.1.5 and above: ORA-12712: new character set must be a superset of old character set - in Oracle 8.0.5 and 8.0.6: ORA-12710: new character set must be a superset of old character set - in Oracle 8.0.3 and 8.0.4: ORA-24329: invalid character set identifier You will also get these errors if you try to change the characterset of a US7ASCII database that was started without a (correct) ORA_NLSxx parameter. See [NOTE:77442.1] It may be necessary to switch off the superset check to allow changes between formally incompatible character sets to solve certain character set problems or to speed up migration of huge databases. Oracle Support Services may pass the necessary information to customers after verifying the safety of the change for the customers' environments. If in Oracle9i an ALTER DATABASE NATIONAL CHARACTER SET is issued and there are N-type colums who contain data then this error is returned: ORA-12717:Cannot ALTER DATABASE NATIONAL CHARACTER SET when NCLOB data exists The error only speaks about Nclob but Nchar and Nvarchar2 are also checked see [NOTE:2310895.9] for bug [BUG:2310895] 4. HANDLING CLOB AND NCLOB COLUMNS ================================== Background ---------- In a fixed width character set codes of all characters have the same number of bytes. Fixed width character sets are: all single-byte character sets and those multibyte character sets which have names ending with 'FIXED'. In Oracle9i the character set AL16UTF16 is also fixed width. In a varying width character set codes of different characters may have different number of bytes. All multibyte character sets except those with names ending with FIXED (and except Oracle9i AL16UTF16 character set) are varying width. Single-byte character sets are character sets with names of the form xxx7yyyyyy and xxx8yyyyyy. Each character code of a single-byte character set occupies exactly one byte. Multibyte character sets are all other character sets (including UTF8). Some -- usually most -- character codes of a multibyte character set occupy more than one byte. CLOB values in a database whose database character set is fixed width are stored in this character set. CLOB values in an Oracle 8.0.x database whose database character set is varying width are not allowed. They have to be NULL. 
CLOB values in an Oracle >= 8.1.5 database whose database character set is varying width are stored in the fixed width Unicode UCS-2 encoding. The same holds for NCLOB values and the national character set. The UCS-2 storage format of character LOB values, as implemented in Oracle8i, ensures that calculation of character positions in LOB values is fast. Finding the byte offset of a character stored in a varying width character set would require reading the whole LOB value up to this character (possibly 4GB). In the fixed width character sets the byte offsets are simply character offsets multiplied by the number of bytes in a character code. In UCS-2 byte offsets are simply twice the character offsets. As the Unicode character set contains all characters defined in any other Oracle character set, there is no data loss when a CLOB/NCLOB value is converted to UCS-2 from the character set in which it was provided by a client program (usually the NLS_LANG character set). CLOB Values and the Database Character Set Change ------------------------------------------------- In Oracle 8.0.x CLOB values are invalid in varying width character sets. Thus you must delete all CLOB column values before changing the database character set to a varying width character set. In Oracle 8.1.5 and later CLOB values are valid in varying width character sets but they are converted to Unicode UCS-2 before being stored. But UCS-2 encoding is not a binary superset of any other Oracle character set. Even codes of the basic ASCII characters are different, e.g. single-byte code for "A"=0x41 becomes two-byte code 0x0041. This implies that even if the new varying width character set is a binary superset of the old fixed width character set and thus VARCHAR2/LONG character codes remain valid, the fixed width character codes in CLOB values will not longer be valid in UCS-2. As mentioned above, the ALTER DATABASE [NATIONAL] CHARACTER SET statement does not change character codes. Thus, before changing a fixed width database character set to a varying width character set (like UTF8) in Oracle 8.1.5 or later, you first have to export all tables containing non-NULL CLOB columns, then truncate these tables, then change the database character set and, finally, import the tables back to the database. The import step will perform the required conversion. If you omit the steps above, the character set change will succeed in Oracle8(i) (Oracle9i disallows the change in such situation) and the CLOBs may appear to be correctly legible but as their encoding is incorrect, they will cause problems in further operations. For example, CREATE TABLE AS SELECT will not correctly copy such CLOB columns. Also, after installation of the 8.1.7.3 server patchset the CLOB columns will not longer be legible. LONG columns are always stored in the database character set and thus they behave like CHAR/VARCHAR2 in respect to the character set change. BLOBs and BFILEs are binary raw datatypes and their processing does not depend on any Oracle character set setting. NCLOB Values and the National Character Set Change -------------------------------------------------- The above discussion about changing the database character set and exporting and importing CLOB values is theoretically applicable to the change of the national character set and to NCLOB values. 
But as varying width character sets are not supported as national character sets in Oracle8(i), changing the national character set from a fixed width character set to a varying width character set is not supported at all. Preparing CLOB Columns for the Character Set Change --------------------------------------------------- Take a backup of the database. If using Advanced Replication or deferred transactions functionality, make sure that there are no outstanding deferred transactions with CLOB parameters, i.e. DEFLOB view must have no rows with non-NULL CLOB_COL column; to make sure that replication environment remains consistent use only recommended methods of purging deferred transaction queue, preferably quiescing the replication environment. Then: - If changing the database character set from a fixed width character set to a varying with character set in Oracle 8.0.x, set all CLOB column values to NULL -- you are not allowed to use CLOB columns after the character set change. - If changing the database character set from a fixed width character set to a varying width character set in Oracle 8.1.5 or later, perform table-level export of all tables containing CLOB columns, including SYSTEM's tables. Set NLS_LANG to the old database character set for the Export utility. Then truncate these tables. Restoring CLOB Columns after the Character Set Change ----------------------------------------------------- In Oracle 8.1.5 or later, after changing the character set as described above (steps 3. to 6.), restore CLOB columns exported in step 2. by importing them back into the database. Set NLS_LANG to the old database character set for the Import utility to avoid IMP-16 errors and data loss. RELATED DOCUMENTS: ================== [NOTE:13856.1] V7: Changing the Database Character Set -- This note has limited distribution, please contact Oracle Support [NOTE:62107.1] The National Character Set in Oracle8 [NOTE:119164.1] Changing Database Character set - Valid Superset definitions [NOTE:118242.1] ALERT: Changing the Database or National Character Set Can Corrupt LOB Values <Note.158577.1> NLS_LANG Explained (How Does Client-Server Character Conversion Work?) [NOTE:140014.1] ALERT: Oracle8/8i to Oracle9i using New "AL16UTF16" [NOTE:159657.1] Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i (incl. 9.2) [NOTE:124721.1] Migrating an Applications Installation to a New Character Set Oracle8i National Language Support Guide Oracle8i Release 3 (8.1.7) Readme - Section 18.12 "Restricted ALTER DATABASE CHARACTER SET Command Support (CLOB and NCLOB)" Oracle8i Documentation Addendum, Release 3 (8.1.7) - Chapter 3 "New Character Set Scanner Utility" Oracle8i Application Developer's Guide - Large Objects (LOBs), Release 2 - Chapter 2 "Basic Components" Oracle8 Application Developer's Guide, Release 8.0 - Chapter 6 "Large Objects (LOBs)", Section "Introduction to LOBs" Oracle9i Globalization Guide, Release 1 (9.0.1) Oracle9i Database Globalization Guide, Release 2 (9.2) For further NLS / Globalization information you may start here: [NOTE:150091.1] Globalization Technology (NLS) Library index .
    Joel Pérez

  • Transferring Data between Databases with Character Sets UTF08 and US7ASCII

    Hi,
    I am trying to transfer data from Oracle 10g (character set: UTF8) to Oracle 8i (character set: US7ASCII). I have tried the transfer using DB links and found that there is no way the data can be transferred from 10g to Oracle 8i.
    The last option available is to use a staging database for the transfer. The staging database would also be Oracle 10g, but its character set would be US7ASCII. I am expecting that since the character set is US7ASCII, this would be compatible with Oracle 8i (US7ASCII). Secondly, the transfer from 10g to the staging 10g should also work, since the staging 10g would support the UTF8 character set.
    Kindly tell me if this option would work or if there is any other way around.
    Thanks
    Nitin
    Message was edited by:
    Nits

    You possibly have a fundamental problem, which is more important than any technical issue. If your UTF8 (Unicode) database stores non-English characters, you will lose these characters when transferring.
    Werner

  • Oracle database character set

    Hello all -
    Please help..................
    I will be exporting/importing 6-7 users/schemas and their data from one database to another database on Solaris. The users are already created.
    I am confused about the NLS_LANG variable and the database character set.
    I have the following questions -
    1. What is the impact of the NLS_LANG variable setting of the user session while importing/exporting the data?
    2. Why do we need to set this NLS_LANG user session variable before export/import?
    3. If the NLS_LANG variable is not set (does not have any value), what would happen?
    4. If I have to set the NLS_LANG variable, what should I set it to?
    5. How can I see the character set of my database?
    6. Where can I get more info about the database character set, and what are the valid values for the database character set and the NLS_LANG variable?
    Any help would really appreciated...
    Thanks a lot.....
    RAMA

    1. What is the impact of the NLS_LANG variable setting of the user session while importing/exporting the data?
    On export, the data will be converted from the database character set to the character set specified by NLS_LANG. On import, the database will assume that the data is in the character set specified by NLS_LANG and use that value to perform the conversion to the database character set if the two values do not match.
    2. Why do we need to set this NLS_LANG user session variable before export/import?
    If your database character set is the same as your OS, you don't necessarily have to set NLS_LANG. For instance, if you have a US7ASCII db, and your OS locale is set to AMERICAN_AMERICA.US7ASCII, there won't be any problems. The only time it's really important to set this is when the db and OS settings don't match.
    3. If the NLS_LANG variable is not set (does not have any value), what would happen?
    If your database character set doesn't match your OS, the data could be garbled because the db will incorrectly transcode the data on import/export.
    4. If I have to set the NLS_LANG variable, what should I set it to?
    Depends on what your database character set is set to (see below).
    5. How can I see the character set of my database?
    select * from nls_database_parameters and look for the value set for the NLS_CHARACTERSET parameter. Don't get confused by the NLS_NCHAR_CHARACTERSET, that's for NCHAR datatypes.
    So, for instance, if your NLS_CHARACTERSET value is set to UTF8, you would set NLS_LANG to .UTF8 (the dot is important because NLS_LANG is shorthand for language_territory.characterset; the leading dot lets you omit the language and territory and set only the character set). For example:
    setenv NLS_LANG .UTF8
    6. Where can I get more info about the database character set, and what are the valid values for the database character set and the NLS_LANG variable?
    It's all in the Oracle documentation.
    hope this helps.
    Tarisa.
