[SOLVED] default character set in mousepad

Is there a way to set the default character set in mousepad? I want it to always save in utf-8, but it defaults to iso-8859-1.
It's a big pain when you're trying to save something that you've copied from a browser. I always have to go back and type the filename again (and change to utf-8) after getting the error message about not being able to save.
Also, is there a way to set the default locale to UTF-8 instead of ISO-8859-1?
Last edited by mrbug (2008-10-31 16:18:12)

You put me on the right track; the problem was that I left the .utf8 part out of the LC_ALL= line in /etc/profile
EDIT: Oops, accidentally put a - between utf and 8. That won't work!
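Here's a minimal sketch of what the corrected lines in /etc/profile can look like (en_US is just an example; the locale has to be one that is actually generated on the system, e.g. uncommented in /etc/locale.gen followed by a locale-gen run, and listed by `locale -a`):

    # /etc/profile -- note the .utf8 suffix on the locale name
    export LANG=en_US.utf8
    export LC_ALL=en_US.utf8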
Last edited by mrbug (2008-10-31 16:25:35)

Similar Messages

  • UTF8 as system default character set

    Hi,
    I'm new to OS X, and I moved to a new company a few days ago. We work with Tiger workstations and servers, several Linux internet servers, and Windows laptops.
    One of the problems I have is finding a way to set UTF-8 as the default character set for the system, not just as a user-level preference. That would solve many of my other problems.
    There is probably an easy way to do that?
    Any tip or trick would be greatly appreciated.
    Thanks.

    One of the problems I have is finding a way to set UTF-8 as the default character set for the system, not just as a user-level preference. That would solve many of my other problems.
    What are your problems exactly? If you give some specifics, someone can probably help.
    Generally speaking, Unicode UTF-8 is what OS X uses by default for a lot of things. The exception would be the file system, which uses UTF-16 NFD. There is certainly no way to change that.

  • System wide default character set

    Hi,
    I moved to a new company a few days ago where we use Tiger workstations and servers.
    I've been trying to figure out a way to set UTF-8 as the default character set for a whole system, not just as a user preference.
    I suppose there is a way to do that, but I'm afraid I won't have more time to find it.
    Any help is welcome.
    thx fred

    I've been trying to figure out a way to set UTF-8 as the default character set for a whole system, not just as a user preference.
    If you could explain exactly why you think you need to do that, someone might be able to help.
    Generally speaking, Unicode UTF-8 is what OS X uses by default for a lot of things. The exception would be the file system, which uses UTF-16 NFD. There is no way to change that.

  • Change default character set of JVM

    Is there a way to change the default character set of the JVM to, say, UTF-8?
    System.out.println("Default Character Set: " +  new java.io.OutputStreamWriter(new java.io.ByteArrayOutputStream()).getEncoding());
    System.out.println("File Encoding: " + System.getProperty("file.encoding")); On Windows
    ==========
    Default Character Set: Cp1252
    File Encoding: Cp1252
    On Linux
    ========
    Default Character Set: ASCII
    File Encoding: ANSI_X3.4-1968
    I would like to save on the effort of changing the many lines of code that look like
       new BufferedWriter(new OutputStreamWriter(out)); to
       new BufferedWriter(new OutputStreamWriter(out, "UTF-8"));
    Thanks

    Try this:
    -Dfile.encoding=utf-8
    as a VM argument.
    /Kaj
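    A quick way to check whether the flag has taken effect (just a sketch; the class name is arbitrary, and whether file.encoding is honoured for already-created streams depends on the JVM version):

        public class EncodingCheck {
            public static void main(String[] args) {
                // Started as:  java -Dfile.encoding=UTF-8 EncodingCheck
                // both lines should report UTF-8 (possibly printed as "UTF8").
                System.out.println(System.getProperty("file.encoding"));
                System.out.println(new java.io.OutputStreamWriter(
                        new java.io.ByteArrayOutputStream()).getEncoding());
            }
        }

    If that still prints the platform default, keeping the explicit new OutputStreamWriter(out, "UTF-8") form in the code is the safer option.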

  • Default character set

    I installed database 10.2.0.1 with the basic installation (not the advanced option).
    What is the default character set in the basic installation?
    How do I show the character set from the command line?
    Can I change it to AL32UTF8?
    like:
    SQL>alter database character set AL32UTF8;
    Thanks

    Thanks.
    I'm using Linux, so the default character set is "WE8ISO8859P1".
    But it seems I cannot alter it to AL32UTF8, which has the globalization support.
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    WE8ISO8859P1
    SQL> alter database character set AL32UTF8;
    alter database character set AL32UTF8
    ERROR at line 1:
    ORA-12712: new character set must be a superset of old character set
    ----------------------

  • Default Character Set using JSObject

    Hi All,
    This problem has been nagging me for a while, and I am now resorting to this forum for an answer.
    I have a JSP page with an embedded applet. Inside the applet, I read the HTML page using JSObject.
    The problem occurs when using JSObject to get the values of controls from an HTML page with Japanese characters.
    The HTML page is encoded in UTF-8; however, when I get values from the controls using JSObject in the applet, the values come back as ???. Latin characters are supported but not Japanese characters. So I'm wondering what character set JSObject supports when converting a JavaScript string to a Java String.
    The following code is executed:
    JSObject win = JSObject.getWindow(this);
    JSObject doc = (JSObject) win.getMember("document");
    JSObject forms = (JSObject) doc.getMember("forms");
    JSObject form = (JSObject) forms.getSlot(0);
    JSObject title = (JSObject) form.getMember("title");
    String titleValue = (String) title.getMember("value");
    I've also tried form.eval("document.forms[0].title.value") and that returns the same ??? for Japanese characters.
    Any ideas?
    Kent
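    A small diagnostic that might narrow this down, purely as a sketch (it reuses titleValue from the snippet above): dump the UTF-16 code units of the string JSObject handed back. If they are already 0x003F ('?'), the characters were lost in the JavaScript-to-Java conversion itself, not in any later output step.

        static void dumpCodeUnits(String s) {
            // Prints each UTF-16 code unit of the string in hex.
            for (int i = 0; i < s.length(); i++) {
                System.out.printf("U+%04X ", (int) s.charAt(i));
            }
            System.out.println();
        }
        // e.g. dumpCodeUnits(titleValue);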

    Hi Larry,
    The characters that appear at the beginning of each file -  - are the BOM, or byte order mark, for UTF-8, which is automatically added to the file on creation. These files are UTF-8 encoded, to allow for the support of multi-byte characters. An updated version of the Exporter Tool removes these BOM characters. Please contact Support to obtain this updated version of the Exporter tool.
    Alternatively, you can try the following:
    If the character set of your Oracle database is not UTF-8, then you have two options:
    1) If possible, change the character set of your database to UTF-8. To check the current database characterset, check the "NLS_DATABASE_PARAMETERS" table.
    or
    2) Open the generated .dat files using Notepad, then use the File | Save As menu option, and set the "Encoding" to ANSI, then save the file. The BOM will now be removed from the .dat files.
    I hope this helps.
    Regards,
    Hilary

  • Will Reinstallation and Data migration solve the Character Set Problem????

    Hi to all,
    We are having a problem with Arabic data in an Oracle 9i database. When we add data using SQL*Plus or an ASP.NET web application, it is stored as ????? (question marks). When the database was installed on a Windows 2003 server, the server did not have Arabic support; this was only realised after Oracle 9i was installed, and we added the Arabic language packs to the server later.
    So that is why Arabic support was missing. We changed the NLS_LANG parameter in the server's registry, but this had no effect on the database; the only change we observed was that we could now type Arabic data in SQL*Plus on the server machine. But after saving a record, the retrieved data still comes back as ??????. We then tried inserting a row from a client that does have Arabic support, using SQL*Plus with Arabic data in the INSERT statement, and when we retrieved it the data was visible without any problem.
    So what is the problem: is the database storing the Arabic data correctly or not? We think there is a problem with the server's Oracle configuration. Is there any other way to get the server working with Arabic data, or will a reinstall of Oracle solve the problem?
    Or can we do something on the server machine so that we don't need the reinstallation?
    If we have to reinstall the server, what are the best practices and steps we should follow so that we have a seamless migration between the old and the new Oracle database?
    Thanks and Regards

    With NLS_LANG on the client and NLS_CHARACTERSET in the database equal AR8MSWIN1256, you should be able to support Arabic data without problems. If you have problems, it usually means a configuration issue on the client, for example, NLS_LANG not set for the right Oracle Home, or lack of Arabic support in a particular tool.
    You need to add Arabic message files to the database Oracle Home (i.e. Arabic should be selected in the Product Languages dialog box in the Installer) only if you care about the language of error messages returned from the database to clients. If you do not display these ORA-xxxx error messages to end users, it should be OK to leave them in English. Also, if you want Arabic error messages, month names, and weekday names, you should change AMERICAN to ARABIC in NLS_LANG.
    -- Sergiusz
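    For reference, a hedged sketch of the kind of client-side check this implies (the table and column names are made up for illustration):

        REM Windows command prompt: set NLS_LANG for this session, then start SQL*Plus
        C:\> set NLS_LANG=AMERICAN_AMERICA.AR8MSWIN1256
        C:\> sqlplus

        -- Inside SQL*Plus, check what is actually stored, independent of display settings.
        -- DUMP shows the raw byte codes: if they are all 63 (ASCII '?'), the data was
        -- already corrupted on insert and no client setting will bring it back.
        SELECT DUMP(customer_name, 1016) FROM customers WHERE ROWNUM <= 5;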

  • What affects the default character set environment of JVM?

    For example, String.getBytes(): since no character set is specified, it uses the system default character set. What is this system default character set determined by? I have tried on the Linux platform, where it seems to follow the LANG environment variable. Is this right? What about other platforms?
    Also, are there JVM parameters that can override the system default character set? I have tried -Dfile.encoding but found no effect.
    Thanks!

    Hi,
    I guess it can be Unicode.
    Regards
    Prashant
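    For what it's worth, a small probe like the one below (a sketch, not an authoritative answer) makes it easy to experiment: run it with different LANG/LC_ALL values and with/without -Dfile.encoding and compare the output. On most Unix JVMs the default follows the locale environment; whether -Dfile.encoding overrides it is JVM- and version-specific.

        import java.nio.charset.Charset;

        public class DefaultCharsetProbe {
            public static void main(String[] args) {
                // What the JVM resolved from the environment / system properties.
                System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
                System.out.println("defaultCharset() = " + Charset.defaultCharset());
            }
        }

    Example runs: LANG=en_US.UTF-8 java DefaultCharsetProbe   versus   java -Dfile.encoding=UTF-8 DefaultCharsetProbe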

  • How do i change the default character set?

    I'm from Brazil, and here we have special characters like "�", "�", "�", etc.
    I need the JVM to understand that from the database and print it correctly on my page (I'm using servlets).
    How do I do that?
    thanks to all.

    Java works just fine with Brazilian characters... I should know, I developed software for a Brazilian company for the last 5 years using Java.
    OK, I believe you do. I just want to know how. I'm using servlets, so to print out the page I use:
    PrintWriter out = response.getWriter();
    out.println("some_text");
    out.println(resultset.getString("some_field"));
    If "some_text" contains special characters, they are printed correctly. No problem about that.
    the problem is when the string from the database/resultset has special characters. Those aren't printed correctly. And that's what i want to fix.
    Any suggestions?
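    One hedged sketch of the usual fix, assuming the page should be served as ISO-8859-1 (the class name is made up; the same idea applies with UTF-8): declare the response charset explicitly before getting the writer, and make sure the JDBC driver is converting the database bytes with the right encoding (for Oracle that is governed by NLS_LANG / the driver's character set handling).

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        public class ClientePageServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Declare the charset before calling getWriter(), so accented
                // characters are both encoded and labelled consistently.
                resp.setContentType("text/html; charset=ISO-8859-1");
                PrintWriter out = resp.getWriter();
                out.println("<html><body>");
                // ... out.println(resultset.getString("some_field")); as before ...
                out.println("</body></html>");
            }
        }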

  • XML data from BLOB to CLOB - character set conversion

    Hi All,
    I'm trying to solve a problem with a character set conversion in PL/SQL in the following scenario:
    1. source is an XML as a BLOB variable.
    2. target is an XML as a CLOB variable.
    3. the problem I have is the following:
    - database character set is set to UTF-8
    - XML character set could be anything (UTF-8, ISO 8859-1, ISO 8859-2, ASCII, ...)
    - I need to write a procedure which converts the source BLOB content into the target CLOB taking into account the XML encoding and converts it into the DB default character set (UTF8).
    I've been able to implement a simple conversion function. However, this function assumes a fixed XML encoding of ISO-8859-1. The main part of the function looks as follows:
    buffer := UTL_RAW.cast_to_varchar2(
                 UTL_RAW.convert(
                    DBMS_LOB.SUBSTR(source_blob_variable, 16000, pos),
                    'American_America.UTF8',
                    'American_America.we8iso8859p1'));
    Does anyone have an idea how to rewrite the code to handle "any" XML encoding in the source BLOB file? In other words, is there a function in Oracle which converts XML character set names into Oracle character set values (ISO-8859-1 to we8iso8859p1, UTF-8 to UTF8, ...)?
    Thanks a lot for any help.
    Julius

    I want to pass a BLOB to some "createXML" procedure and get a proper XMLType in the UTF8 character set, properly converted from whatever character set the input is in. As per the documentation, the generated XML always has the encoding set on the client side depending on NLS_LANG (default UTF-8), regardless of the input encoding, so I don't see a need to parse the PI of the XML:
    C:\>echo %NLS_LANG%
    %NLS_LANG%
    C:\>sqlplus
    SQL*Plus: Release 11.1.0.6.0 - Production on Wed Apr 30 08:54:12 2008
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> var cur refcursor
    SQL>
    SQL> declare
      2     b   blob := utl_raw.cast_to_raw ('<a>myxml</a>');
      3  begin
      4     open :cur for select xmlroot (xmltype (utl_raw.cast_to_varchar2 (b))) xml from dual;
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    SQL>
    SQL> print cur
    XML
    <?xml version="1.0" encoding="UTF-8"?><a>myxml</a>
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    C:\>set NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1
    C:\>sqlplus
    SQL*Plus: Release 11.1.0.6.0 - Production on Mi Apr 30 08:55:02 2008
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    SQL> var cur refcursor
    SQL>
    SQL> declare
      2     b   blob := utl_raw.cast_to_raw ('<a>myxml</a>');
      3  begin
      4     open :cur for select xmlroot (xmltype (utl_raw.cast_to_varchar2 (b))) xml from dual;
      5  end;
      6  /
    PL/SQL-Prozedur erfolgreich abgeschlossen.
    SQL>
    SQL> print cur
    XML
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <a>myxml</a>
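    Coming back to the original question about mapping IANA encoding names to Oracle character set names: a hedged sketch of one possible approach, assuming Oracle 10g or later where UTL_I18N.MAP_CHARSET and DBMS_LOB.CONVERTTOCLOB are available (the variable contents are placeholders):

        DECLARE
           src_blob      BLOB := UTL_RAW.CAST_TO_RAW('<a>test</a>');  -- stand-in for the real XML BLOB
           dest_clob     CLOB;
           xml_encoding  VARCHAR2(40) := 'ISO-8859-1';   -- as parsed from the XML prolog
           ora_charset   VARCHAR2(40);
           csid          NUMBER;
           dest_offset   INTEGER := 1;
           src_offset    INTEGER := 1;
           lang_context  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
           warning       INTEGER;
        BEGIN
           -- Map the IANA name to the Oracle character set name, then to the
           -- numeric character set id that DBMS_LOB expects.
           ora_charset := UTL_I18N.MAP_CHARSET(xml_encoding,
                                               UTL_I18N.GENERIC_CONTEXT,
                                               UTL_I18N.IANA_TO_ORACLE);
           csid := NLS_CHARSET_ID(ora_charset);

           DBMS_LOB.CREATETEMPORARY(dest_clob, TRUE);
           DBMS_LOB.CONVERTTOCLOB(dest_clob, src_blob, DBMS_LOB.LOBMAXSIZE,
                                  dest_offset, src_offset, csid,
                                  lang_context, warning);
        END;
        /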

  • Change character set

    Hi
    Can anyone tell me how to change the character set?
    I tried with ALTER SESSION but it doesn't work.
    thanks

    Article from Metalink
    Doc ID: Note:66320.1
    Subject: Changing the Database Character Set or the Database National Character Set
    Type: BULLETIN
    Status: PUBLISHED
    Content Type: TEXT/PLAIN
    Creation Date: 23-OCT-1998
    Last Revision Date: 12-DEC-2003
    PURPOSE ======= To explain how to change the database character set or national character set of an existing Oracle8(i) or Oracle9i database without having to recreate the database. 1. SCOPE & APPLICATION ====================== The method described here is documented in the Oracle 8.1.x and Oracle9i documentation. It is not documented but it can be used in version 8.0.x. It does not work in Oracle7. The database character set is the character set of CHAR, VARCHAR2, LONG, and CLOB data stored in the database columns, and of SQL and PL/SQL text stored in the Data Dictionary. The national character set is the character set of NCHAR, NVARCHAR2, and NCLOB data. In certain database configurations the CLOB and NCLOB data are stored in the fixed-width Unicode encoding UCS-2. If you are using CLOB or NCLOB please make sure you read section "4. HANDLING CLOB AND NCLOB COLUMNS" below in this document. Before changing the character set of a database make sure you understand how Oracle deals with character sets. Before proceeding please refer to [NOTE:158577.1] "NLS_LANG Explained (How Does Client-Server Character Conversion Work?)". See also [NOTE:225912.1] "Changing the Database Character Set - an Overview" for general discussion about various methods of migration to a different database character set. If you are migrating an Oracle Applications instance, read [NOTE:124721.1] "Migrating an Applications Installation to a New Character Set" for specific steps that have to be performed. If you are migrating from 8.x to 9.x please have a look at [NOTE:140014.1] "ALERT: Oracle8/8i to Oracle9i Using New "AL16UTF16"" and other referenced notes below. Before using the method described in this note it is essential to do a full backup of the database and to use the Character Set Scanner utility to check your data. See the section "2. USING THE CHARACTER SET SCANNER" below. Note that changing the database or the national character set as described in this document does not change the actual character codes, it only changes the character set declaration. If you want to convert the contents of the database (character codes) from one character set to another you must use the Oracle Export and Import utilities. This is needed, for example, if the source character set is not a binary subset of the target character set, i.e. if a character exists in the source and in the target character set but not with the same binary code. All binary subset-superset relationships between characters sets recognized by the Oracle Server are listed in [NOTE:119164.1] "Changing Database Character Set - Valid Superset Definitions". Note: The varying width character sets (like UTF8) are not supported as national character sets in Oracle8(i) (see [NOTE:62107.1]). Thus, changing the national character set from a fixed width character set to a varying width character set is not supported in Oracle8(i). NCHAR types in Oracle8 and Oracle8i were designed to support special Oracle specific fixed-width Asian character sets, that were introduced to provide higher performance processing of Asian character data. Examples of these character sets are : JA16EUCFIXED ,JA16SJISFIXED , ZHT32EUCFIXED. For a definition of varying width character sets see also section "4. HANDLING CLOB AND NCLOB COLUMNS" below. WARNING: Do not use any undocumented Oracle7 method to change the database character set of an Oracle8(i) or Oracle9i database. This will corrupt the database. 2. 
USING THE CHARACTER SET SCANNER ================================== Character data in the Oracle 8.1.6 and later database versions can be efficiently checked for possible character set migration problems with help of the Character Set Scanner utility. This utility is included in the Oracle Server 8.1.7 software distribution and the newest Character Set Scanner version can be downloaded from the Oracle Technology Network site, http://otn.oracle.com The Character Set Scanner on OTN is available for limited number of platforms only but it can be used with databases on other platforms in the client/server configuration -- as long as the database version matches the Character Set Scanner version and platforms are either both ASCII-based or both EBCDIC-based. It is recommended to use the newest Character Set Scanner version available from the OTN site. The Character Set Scanner is documented in the following manuals: - "Oracle8i Documentation Addendum, Release 3 (8.1.7)", Chapter 3 - "Oracle9i Globalization Support Guide, Release 1 (9.0.1)", Chapter 10 - "Oracle9i Database Globalization Support Guide, Release 2 (9.2)", Chapter 11 Note: The Character Set Scanner coming with Oracle 8.1.7 and Oracle 9.0.1 does not have a separate version number. It reports the database release number in its banner. This version of the Scanner does not check for illegal character codes in a database if the FROMCHAR and TOCHAR (or FROMNCHAR and TONCHAR) parameters have the same value (i.e. you simulate migration from a character set to itself). The Character Set Scanner 1.0, available on OTN, reports its version number as x.x.x.1.0, where x.x.x is the database version number. This version adds a few bug fixes and it supports FROMCHAR=TOCHAR provided it is not UTF8. The Character Set Scanner 1.1, available on OTN and with Release 2 (9.2) of the Oracle Server, reports its version number as v1.1 followed by the database version number. This version adds another bug fixes and the full support for FROMCHAR=TOCHAR. None of the above versions of the Scanner can correctly analyze CLOB or NCLOB values if the database or the national character set, respectively, is multibyte. The Scanner reports such values randomly as Convertible or Lossy. The version 1.2 of the Scanner will mark all such values as Changeless (as they are always stored in the Unicode UCS-2 encoding and thus they do not change when the database or national character set is changed from one multibyte to another). Character Set Scanner 2.0 will correctly check CLOBs and NCLOBs for possible data loss when migrating from a multibyte character set to its subset. To verify that your database contains only valid codes, specify the new database character set in the TOCHAR parameter and/or the new national character set in the TONCHAR parameter. Specify FULL=Y to scan the whole database. Set ARRAY and PROCESS parameters depending on your system's resources to speed up the scanning. FROMCHAR and FROMNCHAR will default to the original database and national character sets. The Character Set Scanner should report only Changless data in both the Data Dictionary and in application data. If any Convertible or Exceptional data are reported, the ALTER DATABASE [NATIONAL] CHARACTER SET statement must not be used without further investigation of the source and type of these data. 
In situations in which the ALTER DATABASE [NATIONAL] CHARACTER SET statement is used to repair an incorrect database character set declaration rather than to simply migrate to a new wider character set, you may be advised by Oracle Support Services analysts to execute the statement even if Exceptional data are reported. For more information see also [NOTE:225912.1] "Changing the Database Character Set - a short Overview". 3. CHANGING THE DATABASE OR THE NATIONAL CHARACTER SET ====================================================== Oracle8(i) introduces a new documented method of changing the database and national character sets. The method uses two SQL statements, which are described in the Oracle8i National Language Support Guide: ALTER DATABASE [<db_name>] CHARACTER SET <new_character_set> ALTER DATABASE [<db_name>] NATIONAL CHARACTER SET <new_NCHAR_character_set> The database name is optional. The character set name should be specified without quotes, for example: ALTER DATABASE CHARACTER SET WE8ISO8859P1 To change the database character set perform the following steps. Note that some of them have been erroneously omitted from the Oracle8i documentation: 1. Use the Character Set Scanner utility to verify that your database contains only valid character codes -- see "2. USING THE CHARACTER SET SCANNER" above. 2. If necessary, prepare CLOB columns for the character set change -- see "4. HANDLING CLOB AND NCLOB COLUMNS" below. Omitting this step can lead to corrupted CLOB/NCLOB values in the database. If SYS.METASTYLESHEET (STYLESHEET) is populated (9i and up only) then see [NOTE:213015.1] "SYS.METASTYLESHEET marked as having convertible data (ORA-12716 when trying to convert character set)" for the actions that need to be taken. 3. Make sure the parallel_server parameter in INIT.ORA is set to false or it is not set at all. 4. Execute the following commands in Server Manager (Oracle8) or sqlplus (Oracle9), connected as INTERNAL or "/ AS SYSDBA": SHUTDOWN IMMEDIATE; -- or NORMAL <do a full database backup> STARTUP MOUNT; ALTER SYSTEM ENABLE RESTRICTED SESSION; ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0; ALTER SYSTEM SET AQ_TM_PROCESSES=0; ALTER DATABASE OPEN; ALTER DATABASE CHARACTER SET <new_character_set>; SHUTDOWN IMMEDIATE; -- OR NORMAL STARTUP RESTRICT; 5. Restore the parallel_server parameter in INIT.ORA, if necessary. 6. Execute the following commands: SHUTDOWN IMMEDIATE; -- OR NORMAL STARTUP; The double restart is necessary in Oracle8(i) because of a SGA initialization bug, fixed in Oracle9i. 7. If necessary, restore CLOB columns -- see "4. HANDLING CLOB AND NCLOB COLUMNS" below. To change the national character set replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both statements together if you wish. Error Conditions ---------------- A number of error conditions may be reported when trying to change the database or national character set. In Oracle8(i) the ALTER DATABASE [NATIONAL] CHARACTER SET statement will return: ORA-01679: database must be mounted EXCLUSIVE and not open to activate - if you do not enable restricted session - if you startup the instance in PARALLEL/SHARED mode - if you do not set the number of queue processes to 0 - if you do not set the number of AQ time manager processes to 0 - if anybody is logged in apart from you. This error message is misleading. The command requires the database to be open but only one session, the one executing the command, is allowed. 
For the above error conditions Oracle9i will report one of the errors: ORA-12719: operation requires database is in RESTRICTED mode ORA-12720: operation requires database is in EXCLUSIVE mode ORA-12721: operation cannot execute when other sessions are active Oracle9i can also report: ORA-12718: operation requires connection as SYS if you are not connect as SYS (INTERNAL, "/ AS SYSDBA"). If the specified new character set name is not recognized, Oracle will report one of the errors: ORA-24329: invalid character set identifier ORA-12714: invalid national character set specified ORA-12715: invalid character set specified The ALTER DATABASE [NATIONAL] CHARACTER SET command will only work if the old character set is considered a binary subset of the new character set. Oracle Server 8.0.3 to 8.1.5 recognizes US7ASCII as the binary subset of all ASCII-based character sets. It also treats each character set as a binary subset of itself. No other combinations are recognized. Newer Oracle Server versions recognize additional subset/superset combinations, which are listed in [NOTE:119164.1]. If the old character set is not recognized as a binary subset of the new character set, the ALTER DATABASE [NATIONAL] CHARACTER SET statement will return: - in Oracle 8.1.5 and above: ORA-12712: new character set must be a superset of old character set - in Oracle 8.0.5 and 8.0.6: ORA-12710: new character set must be a superset of old character set - in Oracle 8.0.3 and 8.0.4: ORA-24329: invalid character set identifier You will also get these errors if you try to change the characterset of a US7ASCII database that was started without a (correct) ORA_NLSxx parameter. See [NOTE:77442.1] It may be necessary to switch off the superset check to allow changes between formally incompatible character sets to solve certain character set problems or to speed up migration of huge databases. Oracle Support Services may pass the necessary information to customers after verifying the safety of the change for the customers' environments. If in Oracle9i an ALTER DATABASE NATIONAL CHARACTER SET is issued and there are N-type colums who contain data then this error is returned: ORA-12717:Cannot ALTER DATABASE NATIONAL CHARACTER SET when NCLOB data exists The error only speaks about Nclob but Nchar and Nvarchar2 are also checked see [NOTE:2310895.9] for bug [BUG:2310895] 4. HANDLING CLOB AND NCLOB COLUMNS ================================== Background ---------- In a fixed width character set codes of all characters have the same number of bytes. Fixed width character sets are: all single-byte character sets and those multibyte character sets which have names ending with 'FIXED'. In Oracle9i the character set AL16UTF16 is also fixed width. In a varying width character set codes of different characters may have different number of bytes. All multibyte character sets except those with names ending with FIXED (and except Oracle9i AL16UTF16 character set) are varying width. Single-byte character sets are character sets with names of the form xxx7yyyyyy and xxx8yyyyyy. Each character code of a single-byte character set occupies exactly one byte. Multibyte character sets are all other character sets (including UTF8). Some -- usually most -- character codes of a multibyte character set occupy more than one byte. CLOB values in a database whose database character set is fixed width are stored in this character set. CLOB values in an Oracle 8.0.x database whose database character set is varying width are not allowed. They have to be NULL. 
CLOB values in an Oracle >= 8.1.5 database whose database character set is varying width are stored in the fixed width Unicode UCS-2 encoding. The same holds for NCLOB values and the national character set. The UCS-2 storage format of character LOB values, as implemented in Oracle8i, ensures that calculation of character positions in LOB values is fast. Finding the byte offset of a character stored in a varying width character set would require reading the whole LOB value up to this character (possibly 4GB). In the fixed width character sets the byte offsets are simply character offsets multiplied by the number of bytes in a character code. In UCS-2 byte offsets are simply twice the character offsets. As the Unicode character set contains all characters defined in any other Oracle character set, there is no data loss when a CLOB/NCLOB value is converted to UCS-2 from the character set in which it was provided by a client program (usually the NLS_LANG character set). CLOB Values and the Database Character Set Change ------------------------------------------------- In Oracle 8.0.x CLOB values are invalid in varying width character sets. Thus you must delete all CLOB column values before changing the database character set to a varying width character set. In Oracle 8.1.5 and later CLOB values are valid in varying width character sets but they are converted to Unicode UCS-2 before being stored. But UCS-2 encoding is not a binary superset of any other Oracle character set. Even codes of the basic ASCII characters are different, e.g. single-byte code for "A"=0x41 becomes two-byte code 0x0041. This implies that even if the new varying width character set is a binary superset of the old fixed width character set and thus VARCHAR2/LONG character codes remain valid, the fixed width character codes in CLOB values will not longer be valid in UCS-2. As mentioned above, the ALTER DATABASE [NATIONAL] CHARACTER SET statement does not change character codes. Thus, before changing a fixed width database character set to a varying width character set (like UTF8) in Oracle 8.1.5 or later, you first have to export all tables containing non-NULL CLOB columns, then truncate these tables, then change the database character set and, finally, import the tables back to the database. The import step will perform the required conversion. If you omit the steps above, the character set change will succeed in Oracle8(i) (Oracle9i disallows the change in such situation) and the CLOBs may appear to be correctly legible but as their encoding is incorrect, they will cause problems in further operations. For example, CREATE TABLE AS SELECT will not correctly copy such CLOB columns. Also, after installation of the 8.1.7.3 server patchset the CLOB columns will not longer be legible. LONG columns are always stored in the database character set and thus they behave like CHAR/VARCHAR2 in respect to the character set change. BLOBs and BFILEs are binary raw datatypes and their processing does not depend on any Oracle character set setting. NCLOB Values and the National Character Set Change -------------------------------------------------- The above discussion about changing the database character set and exporting and importing CLOB values is theoretically applicable to the change of the national character set and to NCLOB values. 
But as varying width character sets are not supported as national character sets in Oracle8(i), changing the national character set from a fixed width character set to a varying width character set is not supported at all. Preparing CLOB Columns for the Character Set Change --------------------------------------------------- Take a backup of the database. If using Advanced Replication or deferred transactions functionality, make sure that there are no outstanding deferred transactions with CLOB parameters, i.e. DEFLOB view must have no rows with non-NULL CLOB_COL column; to make sure that replication environment remains consistent use only recommended methods of purging deferred transaction queue, preferably quiescing the replication environment. Then: - If changing the database character set from a fixed width character set to a varying with character set in Oracle 8.0.x, set all CLOB column values to NULL -- you are not allowed to use CLOB columns after the character set change. - If changing the database character set from a fixed width character set to a varying width character set in Oracle 8.1.5 or later, perform table-level export of all tables containing CLOB columns, including SYSTEM's tables. Set NLS_LANG to the old database character set for the Export utility. Then truncate these tables. Restoring CLOB Columns after the Character Set Change ----------------------------------------------------- In Oracle 8.1.5 or later, after changing the character set as described above (steps 3. to 6.), restore CLOB columns exported in step 2. by importing them back into the database. Set NLS_LANG to the old database character set for the Import utility to avoid IMP-16 errors and data loss. RELATED DOCUMENTS: ================== [NOTE:13856.1] V7: Changing the Database Character Set -- This note has limited distribution, please contact Oracle Support [NOTE:62107.1] The National Character Set in Oracle8 [NOTE:119164.1] Changing Database Character set - Valid Superset definitions [NOTE:118242.1] ALERT: Changing the Database or National Character Set Can Corrupt LOB Values <Note.158577.1> NLS_LANG Explained (How Does Client-Server Character Conversion Work?) [NOTE:140014.1] ALERT: Oracle8/8i to Oracle9i using New "AL16UTF16" [NOTE:159657.1] Complete Upgrade Checklist for Manual Upgrades from 8.X / 9.0.1 to Oracle9i (incl. 9.2) [NOTE:124721.1] Migrating an Applications Installation to a New Character Set Oracle8i National Language Support Guide Oracle8i Release 3 (8.1.7) Readme - Section 18.12 "Restricted ALTER DATABASE CHARACTER SET Command Support (CLOB and NCLOB)" Oracle8i Documentation Addendum, Release 3 (8.1.7) - Chapter 3 "New Character Set Scanner Utility" Oracle8i Application Developer's Guide - Large Objects (LOBs), Release 2 - Chapter 2 "Basic Components" Oracle8 Application Developer's Guide, Release 8.0 - Chapter 6 "Large Objects (LOBs)", Section "Introduction to LOBs" Oracle9i Globalization Guide, Release 1 (9.0.1) Oracle9i Database Globalization Guide, Release 2 (9.2) For further NLS / Globalization information you may start here: [NOTE:150091.1] Globalization Technology (NLS) Library index .
    Joel Pérez

  • Unable to migrate table, character set from WE8MSWIN1252 to AL32UTF8

    Hi,
    On our source db the character set is AL32UTF8.
    On our own db, we used the default character set of WE8MSWIN1252.
    When migrating one of the tables, we get this error: ORA-29275: partial multibyte character
    So when trying to alter our character set from WE8MSWIN1252 to AL32UTF8, we get this error:
    ALTER DATABASE CHARACTER SET AL32UTF8
    ERROR at line 1:
    ORA-12712: new character set must be a superset of old character set
    I would really rather not reinstall the db and migrate the tables again. Thanks.

    See this related thread - Re: Want to change characterset of DB
    You can use the ALTER DATABASE CHARACTER SET command in very few cases. You will most likely have to recreate the database and re-migrate the data.
    HTH
    Srini
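    For what it's worth, a hedged pair of checks that can help before deciding on a rebuild (WE8MSWIN1252 is not a binary subset of AL32UTF8, which is why the ALTER fails; the table and column names below are placeholders):

        -- Run on both databases to compare the declared character sets.
        SELECT parameter, value
          FROM nls_database_parameters
         WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

        -- For rows that raise ORA-29275, DUMP shows the raw bytes that do not form
        -- a valid character sequence in the declared character set.
        SELECT DUMP(problem_column, 1016)
          FROM problem_table
         WHERE ROWNUM <= 10;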

  • How to get the character set which i want it to be?

    I have run into a problem when using JDBC to connect to an Oracle 8i database on Solaris. Because I use getBinaryStream() to read from the DB as raw bytes, the conversion seems to use the default character set of the OS or database (such as EUC), not the one the client wants. Is there any way to take control of this in code, i.e. to choose the character set when I turn the bytes from the database into text, so that it honours NLS_LANG the same way a SQL*Plus session does with the user's environment settings?

    First of all read Note 581312 - Oracle database: licensing restrictions:
    As of point 3, it follows that direct access to the Oracle database is only allowed for tools from the areas of system administration and monitoring. If other software is used, the following actions, among other things, are therefore forbidden at database level:
    Creating database users
    Create database objects
    Querying/changing/creating data in the database
    Using ODBC or other SAP external access methods
    Are you trying this on the database server itself? If yes, then you need to install the Hebrew codepages as well as Hebrew fonts in order to display the data correctly.
    Markus
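    A hedged sketch of the client-side approach the original question seems to be after: read the bytes and decode them with an explicit character set instead of the platform default (the column name and "EUC-JP" are only examples):

        import java.io.ByteArrayOutputStream;
        import java.io.InputStream;
        import java.sql.ResultSet;

        public class ExplicitCharsetRead {
            // Decodes a column read as a binary stream with a chosen character set,
            // independent of the OS or database default.
            static String readAs(ResultSet rs, String column, String charset) throws Exception {
                InputStream in = rs.getBinaryStream(column);
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                in.close();
                return buf.toString(charset);   // e.g. "EUC-JP" or "UTF-8"
            }
        }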

  • Wrong character set in RFC Response.

    Hi all,
    Interesting little problem: in calling an ERP system via the RFC Adapter, I am expecting English text in a string field; however, I end up getting a series of, I assume, Chinese characters. I don't know why. It has nothing to do with a language text lookup, as I am explicitly forcing a return of 'Call Made!' to one of the string fields as part of my debugging.
    So the question is: where can I go to find out what the default character set is for the PI system?
    Thanks.

    Thanks Stefan,
    the front-end codepage/character set is UTF-8, and the RFC user, in fact all users, are logging on with a language of 'EN'. It seems to be just this particular field. Another field, for example Material Description, is coming through as English and is readable; however, this other field, as populated by an RFC call, comes through in a different character set regardless of whether I set it deliberately via code, e.g.
    ev_sortf = 'Call Made!'.
    or let it be populated from a select statement. When I look at the field configuration and the WSDL, it is set as type STRING in both the data type and in the RFC definition, and I cannot see anything that stands out as a reason not to use the English character set.

  • MySQL to Oracle Migration Issue - Unknown character set index for field.

    Hi,
    Looking for help..!!!
    Migrating a MySQL version 4 database to Oracle 10g using Oracle Migration Workbench. It went well until the last step, but during the data migration OMWB gave the following error:
    +++++++++++
    Unable to migrate data from source table gets.sales_order_01 to destination table gets_ora.sales_order_oracle10: gets_ora.sales_order_oracle10; Unknown character set index for field "12596" received from server.
    +++++++++++
    Log file shows:
    java.sql.SQLException: Unknown character set index for field '12596' received from server.
         at com.mysql.jdbc.Connection.getCharsetNameForIndex(Connection.java:1692)
         at com.mysql.jdbc.Field.<init>(Field.java:161)
         at com.mysql.jdbc.MysqlIO.unpackField(MysqlIO.java:510)
         at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:285)
         at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1326)
         at com.mysql.jdbc.MysqlIO.sqlQuery(MysqlIO.java:1225)
         at com.mysql.jdbc.Connection.execSQL(Connection.java:2278)
         at com.mysql.jdbc.Connection.execSQL(Connection.java:2225)
         at com.mysql.jdbc.Statement.executeQuery(Statement.java:1163)
         at oracle.mtg.migrationServer.LoadTableData._migrateTableData(LoadTableData.java:563)
         at oracle.mtg.migrationServer.LoadTableData.run(LoadTableData.java:326)
         at oracle.mtg.migration.WorkerThread.run(Worker.java:268)
    I appreciate your help in this regard.
    Regards,
    K

    Hi K,
    What's the default character set of your MySQL and Oracle databases?
    What version of the JDBC driver are you using?
    Are there any Unicode characters used in your data?
    Have you tried an offline data move?
    Thanks
    Dermot.
    Message was edited by:
    dooneill
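    In case it helps the diagnosis: older Connector/J versions throw "Unknown character set index" when the MySQL server reports a character set index the driver does not recognise. A hedged sketch of what to look at on the MySQL 4 side (if the server supports these variables):

        SHOW VARIABLES LIKE 'character_set%';
        SHOW VARIABLES LIKE 'collation%';

    Upgrading the Connector/J driver, or pinning the encoding in the JDBC URL with the standard useUnicode=true&characterEncoding=UTF-8 connection properties, are the usual workarounds to try; the exact URL of course depends on your environment.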
