NChar/NVarchar2/NClob support

We are currently using single-byte character sets (WE8ISO8859P1 and WE8MSWIN1252) within the db and want to keep it that way, so we are looking at using the N* data types for handling strings that contain special characters outside the current character set. The kicker here is that we would also like to be able to Text index the fields within these data types.
First off, anyone know why these data types are not supported within Oracle Text?
Secondly, am I way off here in thinking this can be done or do we have no choice but to make the character set within the server be a unicode set?
Finally, if there is a way to do this, could one point me in the direction of a document explaining how this can be done?
Thanks a ton for your help,
Dan

Similar Messages

  • NCLOB support in OCI8

I'm currently developing a multilingual website with PHP and Oracle and I'm facing a problem concerning NCLOBs. I know that "The Underground PHP and Oracle Manual" states that none of the NCHAR-related data types are supported. However, apparently NCHAR and NVARCHAR2 do work just fine, so I'm wondering how up to date this information is. Anyway, the problem is that I can't write to NCLOB columns via OCI-Lob::write. The call succeeds and returns the number of chars written, but the column stays empty. Is there any hope I can get this to work? If not, is there another library that I could use to connect PHP with the database that supports NCLOBs?
    I'm using the following code:
    $s = oci_parse($dbc, "INSERT INTO trouble (xnclob) values (EMPTY_CLOB()) returning xnclob into :lob");
    $lob = oci_new_descriptor($dbc);
    oci_bind_by_name($s, "lob", $lob, -1, SQLT_CLOB);
    $result = oci_execute($s, OCI_NO_AUTO_COMMIT);
    $lob->write($data); // Succeeds, but column stays empty!
    oci_commit($dbc);
    The code seems to work fine with BLOBs and normal CLOBs. Our test server is running the following software:
    CentOS 6.4 64-Bit
    PHP 5.3.3
    OCI8 1.4.9
    Oracle 11g Release 2 XE
    Thanks for the help in advance.
    Best regards
    David

    The choices are (1) use UTF8 as the base character set and don't use
    NCHAR/NVARCHAR/NCLOB or (2) submit patches to PHP OCI8 to add NCLOB
    support or (3) use another language.
    I wouldn't trust NCHAR/NVARCHAR to work consistently in PHP OCI8,
    despite (or because of) there being automatic mapping to non N* types
    in some cases.
    I know you might be using XE 11.2 just for testing. However, if you
    are targeting that edition for production use, the character set is
    AL32UTF8 so there isn't a direct need to use NCHAR/NVARCHAR/NCLOB.
    From Choosing a Character Set:
    "Oracle recommends using SQL CHAR, VARCHAR2, and CLOB data types in [a] AL32UTF8
    database to store Unicode character data. Use of SQL NCHAR, NVARCHAR2, and NCLOB
    should be considered only if you must use a database whose database character set
    is not AL32UTF8."
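    To illustrate the quoted recommendation, here is a minimal sketch (the table and data are invented, and an AL32UTF8 database is assumed):
    -- Confirm the database character set first:
    SELECT value FROM nls_database_parameters
     WHERE parameter = 'NLS_CHARACTERSET';
    -- A plain VARCHAR2 column then stores Unicode text directly;
    -- CHAR semantics sizes the column in characters, not bytes:
    CREATE TABLE greetings (msg VARCHAR2(100 CHAR));
    INSERT INTO greetings (msg) VALUES ('Grüße, Привет, 你好');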
    Regarding CentOS, I would recommend using Oracle Linux which is free
    to download & install, has free patches etc, has a faster kernel than
    RHEL available, and is the OS we test the DB on.  You can get it from
    http://public-yum.oracle.com/
    If possible target a newer version of PHP.  PHP 5.3 is in security
    fix-only mode, and this will end soon.  I would recommend building
    your own PHP, or using Zend Server (either the free or paid edition).
    For new development, use PHP 5.5.

  • NCLOB support-Urgent

    Hi,
    I have an Oracle 8.1.7 client and the server is Oracle 9.2.0.5. I have a function which returns an NCLOB datatype. I am able to execute the function successfully.
    When I use it in Business Objects, I get the error "ORA-03108: oranet: ORACLE does not support this interface version".
    I upgraded my Oracle client version to 9.2.0.1. But even now I am getting the same error in Business Objects.
    Can anyone tell me whether it is a problem with Oracle or Business Objects?
    Is it still a problem that my Oracle client version is different from my server's version?
    Thanks a lot for any help provided.
    Nachiketa Iyengar

  • NCHAR & NVARCHAR2

    I have recently heard that Oracle is no longer going to support the NCHAR and NVARCHAR2
    data types. Can anyone tell me if you know anything about this? I cannot find any documents supporting this claim.
    Thanks
    [email protected]

    There is no special external data type for NCHAR or NVARCHAR2 columns; you can use any compatible external data type of your choice.

  • Euro-sign (and Greek) doesn't work even with nchar/nvarchar2

    This is something that has been blocking me for a few days now, and I'm running out of ideas.
    Basically, the problem can be summarised as follows:
    declare
        text nvarchar2(100) := 'Make €€€ fast!';
    begin
      dbms_output.put_line( text );
    end;
    And the output (both in SQL Developer and Toad) is:
    Make ¿¿¿ fast!
    See, I was under the impression that by using nchar and nvarchar2, you avoid the problems you get with character sets. What I need this for is to check (in PL/SQL) what the length of a string is in 7-bit units when converted to the GSM 03.38 character set. In that character set, there are 128 characters: mostly Latin characters, a couple of Greek characters that differ from the Latin ones, and some Scandinavian glyphs.
    Some 10 other characters, including square brackets and the euro sign, are escaped and take two 7-bit units. So, the above message takes 17 7-bit spaces.
    However, if I make a PL/SQL function that defines an nvarchar2(128) with the 128 standard characters and another nvarchar2(10) for the extended characters like the euro sign (the ones that take two 7-bit units), and I do an instr() for each character in the source string, the euro sign gets converted to an upside-down question mark, and because the delta (the first Greek character in the GSM 03.38 character set) also becomes an upside-down question mark, the function thinks that the euro sign is in fact a delta, and so assigns a length of 1.
    To try to solve it, I created a table with an nchar(1) for the character and a smallint for the number of units it occupies. The characters are entered correctly, and show as euro signs and Greek letters, but as soon as I do a query, I get the same problem again. The code for the function is below:
      function get_gsm_0338_length(
        text_content in nvarchar2
      ) return integer
      as
        v_offset integer;
        v_length integer := 0;
        v_char nchar(1);
      begin
        for i in 1..length(text_content)
        loop
          v_char := substr( text_content, i, 1 );
          select l
          into v_offset
          from gsm_0338_charset
          where ch = v_char;
          v_length := v_length + v_offset;
        end loop;
        return v_length;
        exception
          when no_data_found then
            return length(text_content) * 2;
      end get_gsm_0338_length;
    Does anybody have any idea how I can get this to work properly?
    Thanks,
    - Peter

    Well, the person there used a varchar2, whereas I'm using an nvarchar2. I understand that you need the right codepage and such between the client and the database if you use varchar2, which is exactly the reason why I used the nvarchar2.
    However, if I call the function from /Java/, it does work (I found out just now). But this doesn't explain why SQL Developer and Toad are being difficult, and I'm afraid that, because this function is part of a much bigger application, I'll run into the same problem.
    - Peter
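
    A common workaround for literals that the client code page mangles is to build the characters with UNISTR, which yields national character set data independently of the client encoding (a hedged sketch, not a confirmed fix for this setup; U+20AC is the euro sign):
    declare
        text nvarchar2(100) := 'Make ' || unistr('\20AC\20AC\20AC') || ' fast!';
    begin
      dbms_output.put_line( text );
    end;
    Whether dbms_output renders the sign correctly still depends on the client NLS settings, but the variable itself now holds the intended characters.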

  • Oracle defaultNChar=true SLOW on NCHAR/NVARCHAR2

    Hi all,
    I am using a JDBC Prepared Statement with a bunch of parameters using setString(pos, value). The underlying columns on the tables are all NCHAR and NVARCHAR2. I have set the Oracle JDBC driver's "defaultNChar=true" so that Oracle DB would always treat my parameters as national language characters. The driver file is "ojdbc6.jar".
    My problem: My parametrized query is extremely slow with "defaultNChar=true". But as soon as I set "defaultNChar=false" the query is ultra fast (3 seconds).
    Query usage looks like this:
    String sql = "INSERT INTO MYTABLE_ERROR(MY_NAME,MY_FLAG,MY_VALUE) "
                            + "SELECT ? AS MY_NAME,"
                            + "? AS MY_FLAG,v.MY_VALUE"
                            + " FROM OTHER_TABLE v"
                            + " JOIN ( SELECT * FROM ... iv ... WHERE iv.MY_NAME = ? ) rule1 "
                            + " ON v.\"MY_NAME\"=rule1.\"MY_NAME\" AND v.\"MY_VALUE\"=rule1.\"MY_VALUE\""
                            + " WHERE rule1.\"MY_NAME\" = ? AND v.\"MY_VALUE\" = ?";
                preStatement = conn.prepareStatement (sql);
                int count = 1;
                for (String p : params)
                    // SLOW
                    //preStatement.setNString (count++, p);
                    // SLOW
                    //preStatement.setObject (count++, p, Types.NVARCHAR);
                    // SLOW
                    preStatement.setString (count++, p);
    I have been trying to find the root cause of why my prepared statements executed against an "Oracle Database 11g Release 11.2.0.3.0 - 64bit Production" DB are slow with a JDBC driver "Oracle JDBC driver, 11.2.0.3.0". I could not find any clue!
    I even pulled the DB NLS config hoping to find a clue, but I am not sure here either:
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_CHARACTERSET AL32UTF8
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_COMP BINARY
    Please help!
    Thanks,
    H.

    UPDATE: It looks like the query is stuck when using "defaultNChar=true" somehow. I am seeing this when using JConsole:
    Total blocked: 1  Total waited: 1
    Stack trace:
    java.net.SocketInputStream.socketRead0(Native Method)
    java.net.SocketInputStream.read(Unknown Source)
    java.net.SocketInputStream.read(Unknown Source)
    oracle.net.ns.Packet.receive(Packet.java:311)
    oracle.net.ns.DataPacket.receive(DataPacket.java:103)
    oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:312)
    oracle.net.ns.NetInputStream.read(NetInputStream.java:257)
    oracle.net.ns.NetInputStream.read(NetInputStream.java:182)
    oracle.net.ns.NetInputStream.read(NetInputStream.java:99)
    oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:121)
    oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:77)
    oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1173)
    oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:309)
    oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:200)
    oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:543)
    oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:238)
    oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1446)
    oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1757)
    oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4372)
    oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:4539)
       - locked oracle.jdbc.driver.T4CConnection@7f2315e5
    oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:5577)
    com.mycomp.test.DriverTest.fireStatement(DriverTest.java:253)
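
    One frequently reported cause of this pattern, offered here as a hedged guess rather than a confirmed diagnosis, is that with defaultNChar=true the binds arrive as national character data, so Oracle wraps compared VARCHAR2 columns in an implicit conversion (SYS_OP_C2C) that disables normal index access. One way to check is to look at the execution plan:
    -- Find the statement's SQL_ID in V$SQL, then:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
    -- A predicate such as  filter(SYS_OP_C2C("V"."MY_VALUE")=:1)
    -- indicates the implicit conversion; full scans replace index access.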

  • NCHAR, NVARCHAR2 with JDBC THICK drivers

    Hi,
    I am using weblogic Thick jdbc drivers. We have a requirement of storing data
    in multiple languages. So we have added two columns in the oracle 9i table with
    data type NCHAR and NVARCHAR2.
    I tried the code using the Oracle JDBC thin drivers. It's working fine.
    But when I tried with the weblogic thick drivers it's not able to read the data...
    It's reading null values.
    Any suggestions/links/guidelines would be of great help.
    Thanks & Regards,
    Purvesh Vora

    Purvesh Vora wrote:
    Hi,
    I am using weblogic Thick jdbc drivers. We have a requirement of storing data
    in multiple languages. So we have added two columns in the oracle 9i table with
    data type NCHAR and NVARCHAR2.
    I tried the code using the Oracle JDBC thin drivers. It's working fine.
    But when I tried with the weblogic thick drivers it's not able to read the data...
    It's reading null values.
    Any suggestions/links/guidelines would be of great help.
    Thanks & Regards,
    Purvesh Vora
    Hi. Our type-2 driver (weblogic.jdbc.oci.Driver) uses Oracle's OCI, which
    may need some OCI-specific charset properties and/or environment variables
    set for what you need. Please check our docs. I wouldn't expect nulls though,
    maybe corrupted data, but not nulls... What happens if you try Oracle's own
    thick driver? (all you would need to do is to change their URL).
    Joe

  • Problem storing Russian Characters in Oracle 10g

    We are facing an issue on one of our sites, which is in the Russian language. Whenever data is submitted with Russian characters it is saved as upside-down question marks in the database; the database is not supporting these characters. The character encoding is done in UTF-8 format on the front end.
    This code used to work fine with an Oracle 9i database, but after the upgrade to Oracle 10g this problem started occurring. We have not made any changes to the code since the database upgrade.
    How can we resolve this, and what settings can we change to make this work?

    What is your database character set and national character set?
    SELECT *
      FROM v$nls_parameters
     WHERE parameter like '%CHARACTERSET';
    Are you storing the data in CHAR/ VARCHAR2/ CLOB columns? Or NCHAR/ NVARCHAR2/ NCLOB?
    Justin

  • What table column size is needed to accommodate Unicode characters

    Hi guys,
    I have encountered something which I don't understand, and I hope the gurus here will shed some light on it.
    I am running a non-Unicode database and I decided to port the data over to a Unicode database.
    So
    1) I exported the schema --> data.dmp
    2) then I created the Unicode database + created a user
    3) then I imported the schema into the database
    During the imp I could see that character conversion would take place.
    While importing the data into the Unicode database
    I encountered an error
    saying a column size was too small,
    so I went to check the row that has the column value that is too large to fit in the table.
    I realised it has some [][][][] data, so I went to the live non-Unicode database and found the row. Indeed it has some [][][][] rubbish data, which makes me feel that someone has inserted a language other than English into the database.
    But regardless,
    I went to modify the column to a larger size, and now the row can be accommodated. However the data is still [][][].
    q1) Why so? Since my database is now Unicode, during the import this column data [][][] should have been converted to Unicode already, but I still have trouble seeing what language it is.
    q2) Why can the [][][] data fit into the table column at the non-Unicode database, while on the Unicode database the same table column size needs to be increased?
    q3) While doing more research on Unicode, I read that a Unicode character takes up 2 bytes per character. A lot of my table data is exactly the same size as the table column.
    E.g. Name VARCHAR2(5);
    value - 'Peter'
    Now if converting to Unicode, characters will take 2 bytes instead of 1, so isn't 'PETER' going to take up 10 bytes (2 bytes per character)?
    Why is it that I can still accommodate the data in the table column?
    q4) Now with the Unicode database up, I will be supporting different language characters from around the world. How big should I set my column size? The longest a name can get? Or?
    Thanks guys!

    /// does oracle automatically "look" at the each and individual characters in a word and determine how much byte it should take.
    Characters usually originate from a keyboard, which has an associated keyboard layout and an associated character set encoding (a.k.a. code page, a.k.a. encoding). This means the keyboard driver knows that when a key with a letter "á" on it is pressed on a French keyboard, and the associated character set encoding is MS Code Page 1252 (Oracle name WE8MSWIN1252), then one byte with the value 225 is generated. If the associated character set encoding is UTF-16LE (standard internal Windows encoding), two bytes 225 and 0 are generated. When the generated bytes travel through APIs, they may undergo character set conversions from one encoding to another encoding. The conversion algorithms use translation tables to find out how to translate a given byte sequence from one encoding to another encoding. In case of translation from WE8MSWIN1252 to AL32UTF8, Oracle will know that the byte sequence resulting from conversion of the code 225 should be 195 followed by 161. For a Chinese character, for example when converting from ZHS16GBK, Oracle knows the resulting sequence as well; this sequence is usually 3 bytes.
    This is how AL32UTF8 data gets into a database. Now, when Oracle processes a multibyte string, and needs to look at individual characters, for example to count them with LENGTH, or take a substring with SUBSTR, it uses information it has about the structure of the character set. Multibyte character sets are of two types: fixed-width and variable-width. Currently, Oracle supports only one fixed-width multibyte character set in the database: AL16UTF16, which is Oracle's name for the Unicode UTF-16BE encoding. It supports this character set for NCHAR/NVARCHAR2/NCLOB data types only. This character set uses two bytes for each character code. To find the next code, 2 is simply added to the string pointer.
    All other Oracle multibyte character sets are variable-width character sets, including AL32UTF8. In most cases, the length of each character code can be determined by looking at its first byte. In AL32UTF8, the number of 1-bits in the most significant positions in the first byte before the first 0-bit tells how many bytes a character has. 0 such bits means 1 byte (such codes are identical to 7-bit ASCII), 2 such bits mean two bytes, 3 bits mean 3 bytes, 4 bits mean four bytes. 1 bit (e.g. the bit sequence 10) starts each second, third or fourth byte of a code.
    In other ASCII-based multibyte character sets, the number of bytes is usually determined by the value range of the first byte. Bytes below 128 mean a one-byte code; bytes above 128 begin a two- or three-byte sequence, depending on the range.
    There are also EBCDIC-based (mainframe) multibyte character sets, a.k.a. shift-sensitive character sets, where a sequence of two-byte codes is introduced by inserting the SO character (code 14=0x0e) and ended by inserting the SI character (code 15=0x0f). There are also character sets, like ISO-2022-JP, which use more complicated byte sequences to define the length and meaning of byte sequences, but Oracle supports them only in a limited number of places.
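    A small worked example of this byte counting (sample strings only, run in an AL32UTF8 database):
    SELECT LENGTH('aé中')  AS char_count,   -- 3 characters
           LENGTHB('aé中') AS byte_count    -- 1 + 2 + 3 = 6 bytes
      FROM dual;
    -- DUMP exposes the stored bytes; in AL32UTF8, 'é' is c3 a9:
    SELECT DUMP('é', 16) FROM dual;
    -- Declaring VARCHAR2(5 CHAR) instead of VARCHAR2(5) sizes the column in
    -- characters rather than bytes, which is the usual answer to q3/q4 above.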
    /// E.g. I have a word with 4 characters; the 3rd character will be a Chinese character and the rest are ASCII characters.
    /// Will Oracle use 4 bytes per character regardless of whether it is ASCII (English) or Chinese?
    No.
    /// Or will it use 1 byte per English character and then 3 bytes for the Chinese character? E.g. total - 6 bytes taken.
    It will use 6 bytes.
    Thnx,
    Sergiusz

  • NATIONAL CHARACTERSET (AL16UTF16) IN ORACLE 9I

    Product: ORACLE SERVER
    Date written: 2002-05-27
    NATIONAL CHARACTERSET ("AL16UTF16") IN ORACLE 9I
    ===================================================
    Description
    ~~~~~~~~~~~
    Oracle 9i supports only UTF8 and AL16UTF16 as the NCHAR (National Character Set).
    AL16UTF16 is a character set newly introduced in Oracle 9i; it holds 16-bit Unicode data.
    Consequently, when an Oracle 8/8i client configured with one of the various other NCHAR
    character sets connects to Oracle 9i, the unsupported NCHAR can cause problems.
    The datatypes that support National Language Support characters are as follows.
    National Language Support character datatypes:
    NCHAR
    NVARCHAR2
    NCLOB.
    In Oracle 9i, the database is created with AL16UTF16 as the default NATIONAL CHARACTER SET.
    Because 8/8i does not support this character set, the NLS patch must be applied on the
    8/8i client side to keep the character set from being corrupted and to ensure that data
    is stored correctly.
    (This makes it possible to use NCHAR, NVARCHAR2, or NCLOB column data.)
    Problems occur in the following cases
    ~~~~~~~~~~~~~~~~~~~~~~~~
    - When 8/8i and 9i interoperate and the NCHAR, NVARCHAR2, or NCLOB datatypes are used
    - When an Oracle 8/8i client reads or modifies NCHAR, NVARCHAR2, or NCLOB data in 9i
    Possible Symptoms
    ~~~~~~~~~~~~~~~~~
    a. When a client on 8/8i connects to Oracle 9i and accesses the data in an NCHAR,
    NVARCHAR2, or NCLOB column, a query behaves as follows.
    eg: Assume a table NC defined as ( A VARCHAR2(10), B NVARCHAR2(100) )
    From an Oracle8 client running WE8DEC:
    SQL> insert into nc values('hello','hello');
    SQL> select * from nc;
    A B
    hello h e l l o
    ^^ Note the extra spaces between characters
    b. An ORA-24365 error occurs when accessing the data through a db link
    SQL> select * from [email protected];
    ERROR at line 1:
    ORA-02068: following severe error from V9.WORLD
    ORA-24365: error in character conversion
    Workarounds
    ~~~~~~~~~~~
    Either apply the patch so that the new character set is understood, or use
    UTF8 as the NATIONAL CHARACTER SET.
    The result of running the query after applying the patch:
    Eg:
    SQL> select convert(b,'WE8DEC') from nc;
    CONVERT(B,'WE8DEC')
    hello
    If you have to migrate Oracle 8/8i to Oracle 9i, follow the steps below.
    Migration Path
    ~~~~~~~~~~~
    If you are using NCHAR in Oracle8/8i, first migrate to Oracle server 8.2 as
    follows (except when the character set is UTF8):
    1) Export all tables that contain NCHAR, NVARCHAR, or NCLOB columns.
    2) Drop all tables that contain NCHAR, NVARCHAR, or NCLOB columns.
    3) Upgrade to Oracle 8.2 using the Oracle8 Migration Utility.
    4) Import all of the exported tables.
    The following is information about the patches.
    Patches
    ~~~~~~~
    Oracle intends to make patches available for 8.1.7 and 8.0.6
    on all platforms where these releases are available. These patches
    can be used with any patch set release on that platform.
    Eg: The patch for 8.0.6 on HPUX 32 bit can be used with 8.0.6.0, 8.0.6.1,
    8.0.6.2 or 8.0.6.3.
    The patches must be installed in the ORACLE_HOME of any tools being
    used to connect to the Oracle9i database, and in the ORACLE_HOME of
    any Oracle8/8i server which uses database links to an Oracle9i instance.
    Oracle does not intend to make patches available for any other releases
    unless explicitly requested to do so. Any such request would need to be
    made whilst the target release is still in its "Error Correction Support"
    period. (Note : 8.1.6 will be desupported from October 2001)
    References
    ~~~~~~~~~~
    Note. 140014.1
    Notification of desupport of other NCHAR character sets besides UTF8 and AL16UTF16 <Note:102633.1>
    The base bug for tracking patches for this issue is <Bug:1634613>

    This is embarrassing; I feel just like a fool. I was reading the Oracle 9i R2 documentation instead of the Oracle 9i R1 documentation; that's why I was using "EXTENT MANAGEMENT LOCAL". I definitely need to get some sleep.
    You are right. I removed "EXTENT MANAGEMENT LOCAL" and additionally changed "DATAFILE" to "TEMPFILE" in the specification of the default temporary tablespace, and the DB was created successfully.
    Thanks a lot!

  • Oracle character set confused

    Dear all:
    We installed the latest NW7.0 SR3 for EP & EP Core for my Portal application. After that, I found that our Oracle default character set is UTF-8. But some of our other Java code (iViews, pages, etc., which we developed in a Tomcat environment) is based on the Oracle character set ZHS16GBK. So I am confused, since NW7.0 SR3 can only be installed on a 64-bit OS and a Unicode system. Can I change the Oracle character set from UTF8 to ZHS16GBK? Or how can I install the SAP system with its Oracle based on the character set ZHS16GBK?
    Thanks everyone.

    Hello Shao,
    OK, let's clarify some things at the beginning.
    An SAP Java system is not "only using" the database character set; it is using the national character set (column types NCHAR, NVARCHAR2, NCLOB). You can check this by reading sapnote #669902.
    You can also check sapnote #456968/#695899 for the supported super sets:
    => As of SAP Web AS Release 7.00 and Oracle client 10g, multibyte character sets are no longer supported
    With this information, you cannot use ZHS16GBK.
    > But some of our other Java code (iViews, pages, etc., which we developed in a Tomcat environment) is based on the Oracle character set ZHS16GBK
    Sorry, but I don't understand this sentence.
    What is your problem with the Java code? Do you have custom tables with column types CHAR, VARCHAR2 or CLOB?
    Regards
    Stefan

  • Can you suggest a best way to store and read arabic from oracle database?

    Hi,
    can you suggest a best way to store and read arabic from oracle database?
    My oracle database is Oracle Database 10g Release 10.1.0.5.0 - 64bit Production on unix HP-UX ia64.
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CHARACTERSET WE8ISO8859P1
    I have presently stored the data in an NVARCHAR2 field, but I am not able to display it correctly.

    Using the national character set should work, but there are other factors that you have to consider when working with NCHAR/NVARCHAR2/NCLOB.
    If possible, changing the database character set is usually the best solution, if it's a possibility for you.
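    As a hedged illustration (the table and column are invented): with a WE8ISO8859P1 database character set, an Arabic literal in the SQL text is converted through the database character set and can be mangled, so build the NVARCHAR2 value in a way that bypasses that conversion, e.g. with UNISTR:
    -- U+0645 U+0631 U+062D U+0628 U+0627 spells 'marhaba' (hello)
    INSERT INTO messages (arabic_text)
    VALUES (UNISTR('\0645\0631\062D\0628\0627'));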
    For more info:
    Dear Gurus: Can u pls explain the difference between VARCHAR2 & NVARCHAR2??

  • How to filter French language spam in Mail?

    Is there a way of adding a rule in Mail 7.3 in OS 10.9.4 to automatically identify French-language e-mails as spam (as indeed all the French-language e-mails I receive are)?  Merely marking the messages as spam hasn't made Mail any smarter about recognizing them as spam.

    There are a whole bunch of things that must all be set up properly for this to work; some of them are not related specifically to TimesTen. For example:
    1.   The setting for 'DatabaseCharacterSet' determines how TimesTen will interpret character data (other than NCHAR/NVARCHAR2/NCLOB) stored in the database. So this must be set to a suitable and correct value based on your requirements.
    2.  The DSN setting for 'ConnectionCharacterSet' determines how the TimesTen driver (ODBC, OCI, JDBC etc.) will treat data presented to it by the application. This setting must match, or at least be compatible with, the actual data encoding used by the application. Note that the default value, if it is not set explicitly, is US7ASCII, which is certainly *not* suitable for your needs. What value do you have for ConnectionCharacterSet?
    3.  If you are using a shell based client (such as ttIsql) to insert or display the data then it is very important that both your shell and terminal NLS parameters are set properly. If they are not, then even if the data is being inserted and retrieved correctly it may not display correctly. I cannot provide detailed guidance on this as it depends on your OS etc., but for Mac OS X, for example, I work in UTF8 and my shell environment specifies NLS_LANG=ENGLISH_UNITED KINGDOM.UTF8.
    4.   Applications often have their own locale settings and those also need to be set properly.
    5.   Java always works in UTF16 so usually that is not an issue.
    So, you need to check all of the above very carefully; if any one of these is not set exactly right you will get unexpected (and possibly incorrect) results. For example, if ConnectionCharacterSet is not set properly at the time the data is inserted then the data in the database may not be correct.
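    For illustration, a minimal sys.odbc.ini DSN fragment covering points 1 and 2 might look like this (the values are examples, not a recommendation for any particular workload):
    [my_utf8_dsn]
    DataStore=/var/tt/my_utf8_dsn
    DatabaseCharacterSet=AL32UTF8
    ConnectionCharacterSet=AL32UTF8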
    Hope that helps,
    Chris

  • How will I know my existing DB is in Unicode or Non-Unicode

    I know you can get character set information with the following query:
    select PARAMETER, value from nls_database_parameters
    where parameter = 'NLS_CHARACTERSET';
    The result is: WE8MSWIN1252 (my DB is hosted on a Windows Server 2003 64-bit machine and was created using default parameters).
    My question is: how do I know whether WE8MSWIN1252 is Unicode or non-Unicode?
    Thank you for reading.
    Thanks

    Only the values 'AL32UTF8' and 'UTF8' indicate a Unicode encoding if returned by this query. All other character sets are non-Unicode. The query:
    select PARAMETER, value from nls_database_parameters
    where parameter = 'NLS_NCHAR_CHARACTERSET';
    can also return AL16UTF16, which is Unicode as well, but this character set is valid for NCHAR/NVARCHAR2/NCLOB data types only.
    -- Sergiusz

  • NATIONAL CHARACTERSET IN ORACLE 8.X

    Product: ORACLE SERVER
    Date written: 2002-05-02
    NATIONAL CHARACTERSET IN ORACLE 8.x
    =======================================
    PURPOSE
    This note explains the National Character Set, a new feature of Oracle8.
    Explanation
    1. Overview
    Oracle 8.0 adds the ability to define a National Character Set in addition to the Database Character Set.
    2. National Character Set
    The national character set is the character set used when storing data in NCHAR, NVARCHAR2,
    and NCLOB columns, and it can be defined separately from the database character set.
    The National Character Set value is specified at database creation time, and a character
    set identifier is stored with the column information.
    3. Data types that use the National Character Set
    1) NCHAR - a data type that stores fixed-length national character set data.
         The column length is defined as a number of characters if the national
         character set is fixed-width, and in bytes if it is a varying-width
         character set.
    2) NVARCHAR2 - a data type that stores variable-length national character set data.
         As with NCHAR, the column length is defined in bytes or as a number of
         characters depending on whether the character set is fixed-width or
         varying-width.
    3) NCLOB - a data type that can store up to 4 GB of national character set data.
         Only fixed-width character sets can be used.
    The NCHAR, NVARCHAR2, and NCLOB data types cannot be used as attributes of Oracle 8.0
    objects.
    4. Why use a National Character Set
    The reasons for using a national character set are as follows:
    1) A fixed-width multi-byte character set can be used even when a subset of the database
    character set must be ASCII or EBCDIC.
    2) It allows the use of a character set other than the one used by the database.
    3) NCHAR and the related data types comply with the SQL'92 standard.
    Fixed-width multi-byte character sets have a performance advantage over variable-width multi-byte character sets.
    Variable-width multi-byte character sets such as JA16SJIS or ZHT32TRIS contain thousands of characters, and characters within the same character set may be 1 byte, 2 bytes, or longer. This unavoidably adds the overhead of working out how many bytes each character occupies.
    Fixed-width character sets carry no such overhead, so when a national
    character set is used, choosing a fixed-width one allows more efficient
    processing.
    Some fixed-width multi-byte character sets are simply subsets of
    variable-width multi-byte character sets, and in that case they may not include the
    single-byte characters (7-bit ASCII, EBCDIC) that SQL and PL/SQL use for
    identifiers. This is why a national character set is needed in addition to the
    database character set.
    5. Considerations
    The national character set can be used to store, in NCHAR/NVARCHAR2/NCLOB columns,
    characters that cannot be represented in the character set used for
    CHAR/VARCHAR2/CLOB. However, the following restrictions must be considered:
    1) It may not be possible to store a mix of different languages in a single column.
    2) It may not be possible to express NCHAR/NVARCHAR2 literals in SQL statements.
    3) Only the database's default character set can be used in object types.
    4) Character sets of languages other than the database character set and the national
    character set cannot be represented.
    For multilingual applications, it is more convenient to use Unicode (UTF8) as the
    database character set.
    Using the national character set in SQL or PL/SQL literals is possible only when the
    national character set is a subset of the database's default character set.
    To write a national character literal, prefix it with 'N'.
    Example:
    WHERE nchar_column = N'<characters>';
    Here <characters> must consist of characters that belong to both the database character
    set and the national character set.
    In some cases you may need national character set characters that are not included in
    the database character set. In that case, use the CHR(n USING NCHAR_CS) function to
    produce an NVARCHAR2 value.
    Example:
    WHERE nchar_column = CHR(12345 USING NCHAR_CS) || CHR(23456 USING NCHAR_CS);
    6. Cautions
    When NCHAR/NVARCHAR2 is used from OCI or Pro*C, a variable-width character set cannot
    be used. The program will compile, but the following error occurs at run time:
    ORA-12704: character set mismatch
    Support for variable-width character sets with NCHAR/NVARCHAR2 in OCI/Pro*C is planned
    for Oracle 8.1.6.
    7. Fixed-Width Multi-Byte Character Sets
    The character sets newly supported in Oracle 8 are as follows:
    JA16SJISFIXED
    JA16EUCFIXED
    JA16DBCSFIXED
    ZHT32TRISFIXED
    KO16KSC5601FIXED
    KO16DBCSFIXED
    Reference Document
    <Note:62107.1>

    Chinese and other Asian Linguistic Sorts are available in Oracle9i only.
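
    As a hedged sketch of what that enables on 9i or later (the table is invented; SCHINESE_PINYIN_M is one of the Chinese linguistic sorts):
    ALTER SESSION SET NLS_SORT = SCHINESE_PINYIN_M;
    SELECT name FROM customers ORDER BY name;
    -- or per expression, without changing the session:
    SELECT name FROM customers
     ORDER BY NLSSORT(name, 'NLS_SORT = SCHINESE_PINYIN_M');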
