The National Character Set (AL16UTF16) in Oracle 9i

Product : ORACLE SERVER
Date written : 2002-05-27
The National Character Set (AL16UTF16) in Oracle 9i
===================================================
Description
~~~~~~~~~~~
Oracle 9i supports only UTF8 and AL16UTF16 as the NCHAR (national) character set.
AL16UTF16 is a character set newly introduced in Oracle 9i; it stores 16-bit Unicode data.
Consequently, when an Oracle 8/8i client configured with any of the various other NCHAR
character sets connects to Oracle 9i, problems can arise because that character set is
no longer supported.
The datatypes that use the national character set are as follows.
National Language Support character datatypes:
NCHAR
NVARCHAR2
NCLOB
In Oracle 9i, databases are created with AL16UTF16 as the default national character set.
Since 8/8i does not support this character set, the NLS patch must be applied on the
8/8i client side to prevent character corruption and to ensure that data is stored
correctly. (This makes it possible to use NCHAR, NVARCHAR2, and NCLOB column data.)
Problems occur in the following cases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- When using the NCHAR, NVARCHAR2, or NCLOB datatypes between 8/8i and 9i
- When an Oracle 8/8i client reads or modifies NCHAR, NVARCHAR2, or NCLOB data in 9i
Possible Symptoms
~~~~~~~~~~~~~~~~~
a. When an 8/8i client connects to Oracle 9i and accesses data in an NCHAR, NVARCHAR2,
or NCLOB column, queries behave as follows.
eg: Assume a table NC defined as ( A VARCHAR2(10), B NVARCHAR2(100) )
From an Oracle8 client running WE8DEC:
SQL> insert into nc values('hello','hello');
SQL> select * from nc;
A B
hello h e l l o
^^ Note the extra spaces between characters
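Why the extra spaces appear: AL16UTF16 stores every character as two bytes (UTF-16, big-endian), and an unpatched single-byte client interprets the raw bytes one at a time, so each letter is preceded by a NUL byte that renders as a blank. A sketch of the byte layout (Python is used here purely to illustrate the encoding; it is not part of the original note):

```python
# AL16UTF16 stores every character as 2 bytes (UTF-16 big-endian).
raw = "hello".encode("utf-16-be")
# raw == b'\x00h\x00e\x00l\x00l\x00o'

# A client that assumes a single-byte character set sees 10 "characters";
# the NUL before each letter renders as a blank, giving " h e l l o".
naive = raw.decode("latin-1")
assert len(naive) == 10
```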
b. An ORA-24365 error occurs when accessing the data through a database link.
SQL> select * from [email protected];
ERROR at line 1:
ORA-02068: following severe error from V9.WORLD
ORA-24365: error in character conversion
Workarounds
~~~~~~~~~~~
Either apply the patch so that the client can interpret the new character set,
or use UTF8 as the national character set.
Result of running the query after applying the patch:
Eg:
SQL> select convert(b,'WE8DEC') from nc;
CONVERT(B,'WE8DEC')
hello
If you must migrate from Oracle 8/8i to Oracle 9i, follow the steps below.
Migration Path
~~~~~~~~~~~~~~
If you are using NCHAR in Oracle 8/8i, first migrate to Oracle server 8.2 as in the
following steps (except when the character set is UTF8):
1) Export all tables containing NCHAR, NVARCHAR2, or NCLOB columns.
2) Drop all tables containing NCHAR, NVARCHAR2, or NCLOB columns.
3) Upgrade to Oracle 8.2 using the Oracle8 Migration Utility.
4) Import all the exported tables.
The following is information about the patches.
Patches
~~~~~~~
Oracle intends to make patches available for 8.1.7 and 8.0.6
on all platforms where these releases are available. These patches
can be used with any patch set release on that platform.
Eg: The patch for 8.0.6 on HPUX 32 bit can be used with 8.0.6.0, 8.0.6.1,
8.0.6.2 or 8.0.6.3.
The patches must be installed in the ORACLE_HOME of any tools being
used to connect to the Oracle9i database, and in the ORACLE_HOME of
any Oracle8/8i server which uses database links to an Oracle9i instance.
Oracle does not intend to make patches available for any other releases
unless explicitly requested to do so. Any such request would need to be
made whilst the target release is still in its "Error Correction Support"
period. (Note : 8.1.6 will be desupported from October 2001)
References
~~~~~~~~~~
Note. 140014.1
Notification of desupport of other NCHAR character sets besides UTF8 and AL16UTF16 <Note:102633.1>
The base bug for tracking patches for this issue is <Bug:1634613>

This is embarrassing, I feel just like a fool. I was reading the Oracle 9i R2 documentation instead of the Oracle 9i R1 documentation; that's why I was using "EXTENT MANAGEMENT LOCAL". I definitely need to get some sleep.
You are right. I removed "EXTENT MANAGEMENT LOCAL" and additionally changed "DATAFILE" to "TEMPFILE" in the specification of the default temporary tablespace, and the DB was successfully created.
Thanks a lot!

Similar Messages

  • National Character Set in Oracle 8.x

    Product : ORACLE SERVER
    Date written : 2002-05-02
    National Character Set in Oracle 8.x
    =======================================
    PURPOSE
    This note explains the National Character Set, a new feature of Oracle8.
    Explanation
    1. Overview
    Oracle 8.0 added the ability to define a national character set in addition to the database character set.
    2. National Character Set
    The national character set is the character set used when storing data in
    NCHAR, NVARCHAR2, and NCLOB columns, and it can be defined separately from the database character set.
    The national character set is specified at database creation time, and a
    character set identifier is stored in the column metadata.
    3. Datatypes that use the national character set
    1) NCHAR - a datatype that stores fixed-length national character set data.
         The column length is defined as a number of characters if the national
         character set is fixed-width, and in bytes if it is a varying-width
         character set.
    2) NVARCHAR2 - a datatype that stores variable-length national character set data.
         As with NCHAR, the column length is defined in bytes or as a number of
         characters depending on whether the character set is fixed-width or
         varying-width.
    3) NCLOB - a datatype that stores up to 4 GB of national character set data.
         Only fixed-width character sets can be used.
    The NCHAR, NVARCHAR2, and NCLOB datatypes cannot be used as attributes of
    Oracle 8.0 objects.
    4. Why use a national character set
    The reasons for using a national character set are as follows:
    1) A fixed-width multi-byte character set can be used even when the database
    character set must have ASCII or EBCDIC as a subset.
    2) It allows the use of a character set other than the one used by the database.
    3) NCHAR and its related datatypes comply with the SQL'92 standard.
    Fixed-width multi-byte character sets have a performance advantage over variable-width multi-byte character sets.
    Variable-width multi-byte character sets such as JA16SJIS or ZHT32TRIS contain thousands of characters, and within the same character set a character may occupy 1 byte, 2 bytes, or more. This unavoidably adds the overhead of determining how many bytes each character occupies.
    Fixed-width character sets, on the other hand, carry no such overhead, so when
    using a national character set, choosing a fixed-width one allows more
    efficient processing.
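The cost difference described above can be sketched outside Oracle: with a fixed-width encoding, the character count is simple arithmetic on the byte length, while a variable-width encoding forces a scan of the bytes. An illustration (the Python encodings here stand in for Oracle's fixed- and variable-width national character sets; they are not Oracle code):

```python
text = "한글ABC"  # 2 Korean characters + 3 ASCII letters = 5 characters

# Fixed-width (UTF-16BE, like AL16UTF16): every character is exactly 2 bytes,
# so the character count is just byte_length / 2 -- no scanning needed.
fixed = text.encode("utf-16-be")
assert len(fixed) // 2 == 5

# Variable-width (UTF-8): characters are 1-4 bytes, so counting characters
# requires examining the lead bytes one by one.
var = text.encode("utf-8")
count = sum(1 for b in var if b & 0xC0 != 0x80)  # skip continuation bytes
assert count == 5
assert len(var) == 9  # byte length alone does not give the character count
```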
    Some fixed-width multi-byte character sets are simply subsets of a
    variable-width multi-byte character set, and in that case they may not
    include the single-byte characters (7-bit ASCII, EBCDIC) used as identifiers
    in PL/SQL and SQL. This is why a national character set is needed in
    addition to the database character set.
    5. Considerations
    The national character set can be used to store, in NCHAR/NVARCHAR2/NCLOB
    columns, characters that cannot be represented in the character set used for
    CHAR/VARCHAR2/CLOB columns. However, the following restrictions must be considered:
    1) It may not be possible to store a mix of different languages in one column.
    2) It may not be possible to write NCHAR/NVARCHAR2 literals in a SQL statement.
    3) Only the database's default character set can be used in object types.
    4) Character sets of languages other than the database character set and the
    national character set cannot be represented.
    For multilingual applications, using Unicode (UTF8) as the database
    character set is more convenient.
    Using the national character set in SQL or PL/SQL literals is possible only
    when the national character set is a subset of the database's default
    character set.
    To use a national character as a literal, prefix it with 'N'.
    Example:
    WHERE nchar_column = N'<characters>';
    Here <characters> must be characters that belong to both the database
    character set and the national character set.
    In some cases you may need to use national character set characters that are
    not included in the database character set. In that case, use the
    CHR(n USING NCHAR_CS) function to produce an NVARCHAR2 value.
    Example:
    WHERE nchar_column = CHR(12345 USING NCHAR_CS) || CHR(23456 USING NCHAR_CS);
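CHR(n USING NCHAR_CS) builds a national-character-set character directly from a numeric value, much like chr() in other languages. A hedged illustration of what the two example values resolve to (assuming a Unicode national character set, where for BMP characters the argument is the Unicode code point; under the fixed-width legacy character sets of Oracle 8.0 the numeric value follows that character set's encoding instead):

```python
# Under a Unicode national character set, CHR(n USING NCHAR_CS) maps a
# code point to its character, analogous to Python's chr().
assert chr(12345) == "\u3039"   # 12345 = 0x3039
assert chr(23456) == "\u5ba0"   # 23456 = 0x5BA0
# Concatenating them mirrors CHR(...) || CHR(...) in the SQL example above.
s = chr(12345) + chr(23456)
assert len(s) == 2
```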
    6. Cautions
    Variable-width character sets cannot be used with NCHAR/NVARCHAR2 in OCI or
    Pro*C. The program can be compiled, but the following error occurs at run
    time:
    ORA-12704: Character set mismatch
    Support for variable-width character sets with NCHAR/NVARCHAR2 in OCI/Pro*C
    is planned for Oracle 8.1.6.
    7. Fixed-Width Multi-Byte Character Sets
    The character sets newly supported in Oracle8 are as follows:
    JA16SJISFIXED
    JA16EUCFIXED
    JA16DBCSFIXED
    ZHT32TRISFIXED
    KO16KSC5601FIXED
    KO16DBCSFIXED
    Reference Document
    <Note:62107.1>

    Chinese and other Asian Linguistic Sorts are available in Oracle9i only.

  • Unicode Migration using National Characterset data types - Best Practice ?

    I know that Oracle discourages the use of the national characterset and national characterset data types(NCHAR, NVARCHAR) but that is the route my company has decide to take and I would like to know what is the best practice regarding this specifically in relation to stored procedures.
    The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and if there are any hard and fast rules that need to be followed.
    Specific questions that I have are :
    1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types ?
    2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types ?
    3. Do string literals need to be prefixed with 'N' in all cases ? e.g.
    in variable assignments - v_module_name := N'ABCD'
    in variable comparisons - IF v_sp_access_mode = N'DL'
    in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
    in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
    If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
    Database details are as follows and the application is written in COBOL and this is also being changed to be Unicode compliant:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    NLS_CHARACTERSET = WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET = AL16UTF16

    ##1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2, same for VARCHAR variables.
    VARCHAR columns/parameters/variables should not be used, as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
    ##3. Not sure I understand, are you saying that unicode columns(NVARCHAR2, NCHAR) in the database will only be able to store character strings made up from WE8MSWIN1252 characters ?
    No, I meant literals. You cannot include non-WE8MSWIN1252 characters into a literal. Actually, you can include them under certain conditions but they will be transformed to an escaped form. See also the UNISTR function.
    ## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
    ## to keep the code/schemas consistent between the two databases
    First, you have to keep two sets of scripts anyway because syntax of DDL is different between SQL Server and Oracle. There is therefore little benefit of just keeping the data type names the same while so many things need to be different. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types or at least I would use some placeholder syntax to replace placeholders with appropriate data types per target system in the application installer.
    ## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
    I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
    -- Sergiusz

  • Database characterset and National Characterset

    Hi,
    I am asking a basic question here in this forum.
    What is a Database Characterset and what is National Characterset? I did google, but did not get too much of help.
    Thanks
    -Neel

    When looking for basic Oracle terms or concepts, better reference the source directly.
    Via http://tahiti.oracle.com you'll find the Globalization Support Guide. Relevant chapter to begin with is 2 - Choosing a Character Set.
    Simply put, database character set is for "char" data types and national character set is for "nchar" data types.
    For most applications the latter is never needed and current recommendation is to choose AL32UTF8 for database character set, which basically makes national character set i.e. unicode data type solution obsolete. See chapter 6 in the same book for more information.

  • DISPLAYNAME_CONVERSION_ERROR in XML DB

    Hello,
    After I have installed Oracle 10g Release 2 patch set 1 (10.2.0.2.0), I created a new database (the previous one was disabled after the update process; I suppose I did not do all the steps correctly). This new database has the following NLS parameters:
    nls_language: SPANISH
    nls_territory: SPAIN
    Database Characterset: WE8ISO8859P1
    National Characterset: AL16UTF16
    OK!, now I have registered my schema (encoding='ISO8859-1') and I uploaded some examples files based in this XSD (encoding='ISO8859-1') to the XML DB Repository through WebDAV.
    After this, I closed the WebDAV session and opened a new session with the File Manager. When I access the folder that contains the files, Oracle shows a DISPLAYNAME_CONVERSION_ERROR. I have found that this error is shown when the file name contains the 'ñ' character.
    If I make the connection with a Web Browser (iExplorer or Firefox) all files names are showed correctly.
    With Windows FTP Client, the 'ñ' is showed as '±'.
    I tried the same with a commercial FTP Client and the names are showed correctly.
    I was thinking a database character set based on ISO-8859-1 would be a better standard option than others.
    Must I delete the current database and create another with a database character set based on WE8MSWIN1252 (or similar), or is there an easier way to solve this problem?
    In my previous configuration (10.2.0.1) everything was working fine based on WE8MSWIN1252.
    The operating system is Windows XP Service Pack 2.
    Thanks in advanced,
    David.

    There does not appear to be a fast, supported way to change the database character set. However, I was under the impression you were creating a new database anyway, in which case I would recommend selecting the AL32UTF8 character set.

  • Implementation of double byte character in oracle 10.2.0.5 database

    Hi experts,
    There is an Oracle 10.2.0.5 Standard Edition database running on the Windows 2003 platform. The application team needs to add a double-byte (Chinese characters) column to an existing table. The database character set is WE8ISO8859P1 and the national character set is AL16UTF16. After going through the Oracle documentation, our DBA team found that it is possible to insert Chinese characters into the table with the current character sets.
    The client side has the following details:
    APIs used to write data--SQL Developer
    APIs used to read data--SQL Developer
    Client OS--Windows 2003
    The value of NLS_LANG environment variable in client environment is American and the database character set is WE8ISO8859P1 and National Character set is AL16UTF16.
    We have got a problem from the development team saying that they are not able to insert Chinese characters into the NCHAR or NVARCHAR2 column of the table. The Chinese characters being inserted are getting interpreted as *?*...
    What could be the workaround for this ??
    Thanks in advance...

    For SQL Developer, see my advices in Re: Oracle 10g - Chinese Charecter issue and Re: insert unicode data into nvarchar2 column in a non-unicode DB
    -- Sergiusz

  • Can you suggest a best way to store and read arabic from oracle database?

    Hi ,
    can you suggest a best way to store and read arabic from oracle database?
    My oracle database is Oracle Database 10g Release 10.1.0.5.0 - 64bit Production on unix HP-UX ia64.
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CHARACTERSET WE8ISO8859P1
    I have presently stored the data in nvarchar2 field. But i am not able to display it correctly.

    Using the national characterset should work but there are other factors that you have to consider when working with NCHAR/NVARCHAR2/NCLOB.
    If possible, changing the character set is usually the best solution, if it's a possibility for you.
    For more info:
    Dear Gurus: Can u pls explain the difference between VARCHAR2 & NVARCHAR2??

  • Cannot Store Chinese Characters In Oracle 9.2.0.7 Database

    Hi,
    I'm having trouble localizing my Oracle 9.2.0.7 / ASP web application for our Chinese-speaking users.
    My Oracle 9.2.0.7 Database has NLS_NCHAR_CHARACTERSET set to AL16UTF16.
    I've set up a test table thus:
    CREATE TABLE "TBL_TEST_CH"
    field1                          NVARCHAR2(40),
    field2                          NVARCHAR2(40)
    I have the Chinese character set installed on my database / web server (same box), as well as a test client machine. I can see Chinese characters in my web browser, and can enter them in a test ASP page I've set up. When I execute an insert statement via ADO, the insert statement seems to work, but the result is that the data seems to be stored as upside-down question marks.
    I thought perhaps the data was being somehow scrambled between the web app and the database, so I set up an external table import the Chinese data from a Unicode text file:
    CREATE TABLE kenny.ch_import
         FIELD1          NVARCHAR2(255),
         FIELD2          NVARCHAR2(255)
         ORGANIZATION EXTERNAL (TYPE oracle_loader
         DEFAULT DIRECTORY ext_dat_dir
         ACCESS PARAMETERS
         (RECORDS DELIMITED BY ":"
         FIELDS TERMINATED BY "~"
         missing field values are null)
         LOCATION (ext_dat_dir:'test_ch.txt'))
         reject limit unlimited
    However, when I query the data in the external table using my web application, it comes back with garbage like "ÿþ1" and the like.
    To attempt to determine if the database is capable of storing the Chinese characters, I've performed the following test:
    1) I insert a Chinese character in an NVARCHAR2 field in my table by using the UNISTR function thus:
    insert into tbl_test_ch (field1) values (unistr('\3E00'))
    2) I interrogated the value using the dump function thus:
    select dump(field1, 1016) FROM tbl_test_ch
    I'm struggling to understand the output. Obviously the character set being used is "AL16UTF16" (which I would expect to be able to store Chinese characters), but the return_format argument I've provided to the function (1016) should return the hexadecimal code point of the character that's being stored. I would expect this to be the same as I inserted ("3E00"), but I'm getting the following output:
    DUMP(FIELD1,1016)
    Typ=1 Len=2 CharacterSet=AL16UTF16: 3e,0
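For what it's worth, the dump output above is actually consistent with correct storage: DUMP(..., 1016) lists the stored bytes one by one, and the two bytes 3e,0 (0x3E 0x00) are exactly the UTF-16 big-endian encoding of U+3E00. A quick check of that byte layout (illustrative Python, added editorially; it verifies only the encoding, not Oracle's behavior):

```python
# AL16UTF16 is UTF-16 big-endian; U+3E00 is stored as the byte pair
# 0x3E 0x00, which DUMP(..., 1016) prints byte-by-byte as "3e,0".
assert "\u3e00".encode("utf-16-be") == b"\x3e\x00"
```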
    I'd really appreciate any suggestions on what I could do next to determine exactly where the problem lies. I've not been able to convince myself that the database is correctly storing the Chinese character data, but I appreciate equally that the problem could lie elsewhere.
    Thanks in advance,
    Kenny McEwan.

    Thanks, Serguisz.
    My technology stack is as follows:
    ASP 3.0 web application, running on IIS6.
    On the web servier, I have MDAC 2.8 SP2 on Windows Server 2003 SP1.
    On the Oracle database server, I have Windows Server 2003 SP1.
    My Oracle database version is 9.2.0.7.
    The client I've been using in this investigation is Internet Explorer 6.0.2900.
    It does look like you're right about characters coming from the application are being corrupted. To support this, I tried to insert the chinese character 博 as well as the Unihan character 中 from a web page in my application. I then used the dump function to interrogate the contents of the field I input to thus:
    select dump(field1, 1016) FROM tbl_test_ch
    DUMP(FIELD1,1016)
    Typ=1 Len=2 CharacterSet=AL16UTF16: 0,bf
    Typ=1 Len=2 CharacterSet=AL16UTF16: 0,bf
    Both characters seem to have suffered the same corruption.
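The byte pair 0,bf decodes to a telltale character: 0x00BF is '¿' (the inverted question mark), Oracle's usual substitution character for input it cannot convert. So both Chinese characters were replaced by '¿' before they reached the NVARCHAR2 column, i.e. the corruption happened in the client/conversion path, not in storage. A check of that decoding (illustrative Python, not part of the original thread):

```python
# 0x00BF read as UTF-16BE is the inverted question mark, Oracle's typical
# replacement for characters that could not be converted.
assert b"\x00\xbf".decode("utf-16-be") == "\u00bf"  # '¿'
```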
    The problem seems to happen in the other direction as well - even after verifying that the character detailed in the previous post was stored correctly, it is still displayed by my web app as an upside down question mark.
    Do you have any suggestions on how to proceed?
    Best regards,
    Kenny.

  • How to insert unicode characters in oracle

    Hi, I want to add special Unicode characters to an Oracle database. Can anyone guide me on how to do this?
    I know we have the NVARCHAR2 datatype, which supports multilingual languages, but I am unable to insert the values from the SQL prompt. Can anyone guide me on how to insert the values?
    Also, please tell me whether any special care has to be taken if we are accessing it through .NET?

    output of
    select * from nls_database_parameters where parameter like '%SET';
    is PARAMETER VALUE
    NLS_CHARACTERSET WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET AL16UTF16
    when i query :select testmsg, dump(testmsg,1016) from test ;
    i get
    TESTMSG DUMP(TESTMSG,1016)
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    dsdas Typ=1 Len=10 CharacterSet=AL16UTF16: 0,64,0,73,0,64,0,61,0,73
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    What I basically want is to store some special characters like éµΩΦЛήαδӨװΘ§³¼αγ in my Oracle database, but I am unable to do that.
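Decoding the dumped byte sequence confirms what actually reached the database: reading the AL16UTF16 byte pairs as UTF-16BE reproduces the garbled value exactly, showing that any character outside WE8MSWIN1252 had already been folded to '¿' before insertion. An illustrative check (editorial Python, not part of the original post):

```python
# The bytes DUMP(..., 1016) reported for the first row above.
dumped = bytes([0, 0xe9, 0, 0xb5, 0, 0x4f, 0, 0x46, 0, 0xbf,
                0, 0xbf, 0, 0x61, 0, 0x64, 0, 0xbf])
# Reading the AL16UTF16 bytes as UTF-16BE reproduces the stored value:
# unmappable characters were replaced with '¿' before storage.
assert dumped.decode("utf-16-be") == "éµOF¿¿ad¿"
```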

  • Oracle 11g decode issue with null

    Hi,
    we want to migrate from Oracle 10g to Oracle 11g and have an issue with decode.
    The database has the following character set settings:
    NLS_CHARACTERSET = AL32UTF8 in Oracle 11g and UTF8 in Oracle 10g
    NLS_NCHAR_CHARACTERSET = AL16UTF16
    If I try a select with decode which has null as first result argument I will get a wrong value.
    select decode(id, null, null, name) from tab1;
    ("name" is a NVARCHAR2 field. Table tab1 has only one entry and "id" is not null.)
    This select returns a value whose characters are interleaved with 0 bytes.
    In Oracle 10g the value is returned without the 0 bytes.
    If I surround the decode with dump I get the following results:
    select dump(decode(id, null, null, name), 1016) from tab1;
    Oracle 10g: Typ=1 Len=6 CharacterSet=AL32UTF8: 4d,61,72,74,69,6e
    Oracle 11g: Typ=1 Len=12 CharacterSet=US7ASCII: 0,4d,0,61,0,72,0,74,0,69,0,6e
    NLS_LANG has no effect on the character set of 'null' in Oracle 11g.
    Non null literals work:
    select dump(decode(id, null, 'T', name), 1016) from tab1;
    Oracle 10g: Typ=1 Len=6 CharacterSet=UTF8: 4d,61,72,74,69,6e
    Oracle 11g: Typ=1 Len=6 CharacterSet=AL32UTF8: 4d,61,72,74,69,6e
    select dump(decode(id, null, N'T', name), 1016) from tab1;
    Oracle 10g: Typ=1 Len=12 CharacterSet=AL16UTF16: 0,4d,0,61,0,72,0,74,0,69,0,6e
    Oracle 11g: Typ=1 Len=12 CharacterSet=AL16UTF16: 0,4d,0,61,0,72,0,74,0,69,0,6e
    Here the scripts for creating the table and the entry:
    create table tab1 (
    id NUMBER(3),
    name NVARCHAR2(10)
    );
    insert into tab1 (id, name) values (1, N'Martin');
    commit;
    Is it possible to change the character set?
    Could you please help me?
    Regards
    Martin

    > This doesn't have the problem.
    Looks like this doesn't solve the problem (of returning a value whose characters are interleaved with 0 bytes):
    SQL> select * from v$version where rownum = 1
    BANNER                                                                         
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production         
    1 row selected.
    SQL> select dump(decode(id, null, null, name), 1016) from tab1
    union all
    select dump(case id when null then null else name end, 1016) cs from tab1
    DUMP(DECODE(ID,NULL,NULL,NAME),1016)                                           
    Typ=1 Len=12 CharacterSet=US7ASCII: 0,4d,0,61,0,72,0,74,0,69,0,6e              
    Typ=1 Len=12 CharacterSet=AL16UTF16: 0,4d,0,61,0,72,0,74,0,69,0,6e             
    2 rows selected.
    You need to explicitly convert the third parameter to char:
    SQL> select dump(decode(id, null, to_char(null), name), 1016) from tab1
    DUMP(DECODE(ID,NULL,TO_CHAR(NULL),NAME),1016)                                  
    Typ=1 Len=6 CharacterSet=WE8MSWIN1252: 4d,61,72,74,69,6e                       
    1 row selected.

  • How login in BI Publisher from MyApp (Oracle Forms 11.1.2)

    Hi.
    Anyone can help me, as I login from my application in Oracle Forms 11.1.2, BI Publisher 11.1.6
    Thank you very much
    Carlos

    I would not use the national character set, but change the database character set instead. Oracle is perfectly capable of supporting multibyte character sets like UTF-16 or UTF-8.
    Take a look how to verify if you can change the database characterset using csalter or if you'd need to create a new database:
    http://www.oracle-base.com/articles/10g/CharacterSetMigration.php
    If you change the database character set to e.g. AL32UTF8, your VARCHAR2 columns are multibyte enabled (and depending on your NLS_LENGTH_SEMANTICS, the precision you specified is in BYTEs or CHARs), and you won't need to change your source code at all.
    The national character set is there to support multiple character sets in one database; as you simply want to support multibyte data, it is far easier to change the database character set. Change the database character set to AL32UTF8 (a variable-width character set of up to 4 bytes per character) and you won't have problems with character sets at all in the future.
    Also, changing to multibyte NVARCHAR2 won't be an easy "alter table modify blabla", as the data itself also needs to be migrated. The whole migration, plus the change of your entire code, plus the bug that this doesn't even work in Forms, makes a reinstall of your database and a character set migration via exp/imp by far the easiest solution.
    cheers

  • Problem Access2000-- ODBC-- Oracle 9.2 reexecuting on scrolling down result

    I have the following problem:
    I have an access 2000 frontend-db with linked oracle tables and views via Oracle-ODBC driver Version 9.2. I have relative complex linked Oracle-views which are collecting data from 4 different tables with outer joins.
    When I open one of this Oracle-views inside my Access-Frontend (in Access-Table-Register), I get the result after a few seconds. that's ok.
    But if I want to scroll through the result set in Access, it seems that every time I push the 'page down' key, the view is executed again on my Oracle database server. I can see that in the performance log of the Win2000 server of my Oracle database server.
    I tested the Oracle ODBC parameter 'Prefetch count'. I set it to 1000, which should keep 1000 records in local memory (my view has approx. 350 records). Even when I relink my Oracle views in MS Access and check the parameter settings, this doesn't help.
    I would be very grateful for any help or suggestions.
    Thanks in advance,
    Zigi


  • Oracle character set confused

    Dear all:
    We installed the latest NW7.0 SR3 for EP & EP Core for my Portal application. After that, I found that our Oracle default character set is UTF-8. But some of our other Java code (iViews, pages, etc., which we developed in a Tomcat environment) is based on the Oracle character set ZHS16GBK. I am confused, since NW7.0 SR3 can only be installed on a 64-bit OS and as a Unicode system. Can I change the Oracle character set from UTF8 to ZHS16GBK, or how can I install the SAP system with its Oracle database based on character set ZHS16GBK?
    Thanks everyone.

    Hello Shao,
    OK, let's clarify some things at the beginning.
    A SAP Java system is not "only using" the database characterset, it is using the national characterset (column types NCHAR, NVARCHAR2, NCLOB). You can check this by reading sapnote #669902.
    You can also check sapnote #456968/#695899 for the supported super sets:
    => As of SAP Web AS Release 7.00 and Oracle client 10g, multibyte character sets are no longer supported
    With this information, you cannot use ZHS16GBK.
    > But some of other Java codes(Iviews,pages..which we developed on Tomcat envionment.)are based in the environment of Oracle character set ZHS16GBK
    Sorry but i don't understand this sentence.
    What is your problem with the java code? Do you have custom tables with column types CHAR,VARCHAR2 or CLOB?
    Regards
    Stefan

  • Spoil national characters while migration

    I ran into a bit of a problem: during migration from MS SQL to Oracle, all national symbols were transformed into the "?" character. Oracle was installed with two locales - American and Russian. Changing the current operating system localization had no effect.
    I tried to migrate the data by means of MTS, but an error occurred while migrating LOB objects (something like "OCI error when calling LOB function").
    As a result I can't migrate the majority of the data.
    How can I solve the problem?

    Check the list of fonts available. You can use the following code for the purpose:
    import java.awt.GraphicsEnvironment;

    public class FontList {
        public static void main(String[] args) {
            String[] fonts = getFontNames();
            for (int i = 0; i < fonts.length; i++)
                System.out.println(fonts[i]);
        }

        public static String[] getFontNames() {
            GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
            return ge.getAvailableFontFamilyNames();
        }
    }

  • Latin-1 Characterset Translation Issues

    I have an Oracle 9.2.0.5 database on OpenVMS 7.3-2. Currently, there are 101 incorrect Latin-1 to Latin-1 character set translations being loaded into my Oracle database (incorrect OS conversion tables when data is transferred from the source system).
    NLS DB parameters (nls parameters not listed are default values):
    nls_language string AMERICAN
    nls_length_semantics string BYTE
    nls_nchar_conv_excp string FALSE
    nls_territory string AMERICA
    example:
    Source Data : Résine de PolyPropylène
    Loaded in my database after OS translation: R©sine de PolyPropyl¬ne
    The invalid translations are happening external to the Oracle database at the OS level. My problem is that I need to correct all the invalid characters that are in my database. The database is currently 3.5 TB in size, so I have to do this in an efficient manner. I know what the before (invalid translation values in hex) and after (correct translations in hex) values are.
    Is there a PL/SQL program or Oracle tool that can help me to correct these values against millions of rows of data in Oracle (Basically a characterset translation program)?
    I have a C program that works to convert the character sets if they are in a CSV file. The problem is it takes too long to extract the data from Oracle into CSV files for tables with multi-millions of rows.
    Any help is appreciated.

    It looks like during the insertion from ASP the Latin 1 string has not been converted to UTF8. Hence you are storing Latin-1 encoding inside a UTF-8 database.
    > I thought it would automatically be handled by OO4O.
    True. Did you specify the character set of the NLS_LANG env variable for the OO4O client as WE8ISO8859P1? If it was set to UTF8 then Oracle will assume that the encoding coming through the ASP page is UTF-8, hence no conversion takes place.
    Also may be you should check the CODEPAGE directive and Charset property in your ASP ?
