NCHAR & NVARCHAR2

I have recently heard that Oracle is no longer going to support the NCHAR and NVARCHAR2
data types. Can anyone tell me if you know anything about this? I cannot find any documentation for this claim.
Thanks
[email protected]

There is no special external data type for NCHAR or NVARCHAR2 columns. You can use any compatible external data type of your choice.

Similar Messages

  • NChar/NVarchar2/NClob support

    We are currently using single-byte character representation (WE8ISO8859P1 and WE8MSWIN1252) within the db and want to keep it that way, so we are looking at using the n* data types for handling strings that contain special characters outside the current character set. The kicker here is that we would also like to be able to Text-index the fields within these data types.
    First off, anyone know why these data types are not supported within Oracle Text?
    Secondly, am I way off here in thinking this can be done or do we have no choice but to make the character set within the server be a unicode set?
    Finally, if there is a way to do this, could one point me in the direction of a document explaining how this can be done?
    Thanks a ton for your help,
    Dan

    The choices are (1) use UTF8 as the base character set and don't use
    NCHAR/NVARCHAR/NCLOB or (2) submit patches to PHP OCI8 to add NCLOB
    support or (3) use another language.
    I wouldn't trust NCHAR/NVARCHAR to work consistently in PHP OCI8,
    despite (or because of) there being automatic mapping to non N* types
    in some cases.
    I know you might be using XE 11.2 just for testing. However, if you
    are targeting that edition for production use, the character set is
    AL32UTF8 so there isn't a direct need to use NCHAR/NVARCHAR/NCLOB.
    From Choosing a Character Set:
    "Oracle recommends using SQL CHAR, VARCHAR2, and CLOB data types in [a] AL32UTF8
    database to store Unicode character data. Use of SQL NCHAR, NVARCHAR2, and NCLOB
    should be considered only if you must use a database whose database character set
    is not AL32UTF8."
    Regarding CentOS, I would recommend using Oracle Linux which is free
    to download & install, has free patches etc, has a faster kernel than
    RHEL available, and is the OS we test the DB on.  You can get it from
    http://public-yum.oracle.com/
    If possible target a newer version of PHP.  PHP 5.3 is in security
    fix-only mode, and this will end soon.  I would recommend building
    your own PHP, or using Zend Server (either the free or paid edition).
    For new development, use PHP 5.5.

  • Euro-sign (and Greek) doesn't work even with nchar/nvarchar2

    This is something that has been blocking me for a few days now, and I'm running out of ideas.
    Basically, the problem can be summarised as follows:
    declare
        text nvarchar2(100) := 'Make €€€ fast!';
    begin
      dbms_output.put_line( text );
    end;

    And the output (both in SQL Developer and Toad) is:
    Make ¿¿¿ fast!
    See, I was under the impression that by using nchar and nvarchar2, you avoid the problems you get with character sets. What I need this for is to check (in PL/SQL) what the length of a string is in 7-bit units when converted to the GSM 03.38 character set. In that character set, there are 128 characters: mostly Latin characters, a couple of Greek characters that differ from the Latin ones, and some Scandinavian glyphs.
    Some 10 other characters, including square brackets and the euro sign, are escaped and take two 7-bit units. So, the above message takes 17 7-bit spaces.
    However, if I make a PL/SQL function that defines an nvarchar2(128) with the 128 standard characters and another nvarchar2(10) for the extended characters like the euro sign (the ones that take two 7-bit units), and I do an instr() for each character in the source string, the euro sign gets converted to an upside-down question mark, and because the delta (the first Greek character in the GSM 03.38 character set) also becomes an upside-down question mark, the function thinks that the euro sign is in fact a delta, and so assigns a length of 1.
    To try to solve it, I created a table with an nchar(1) for the character and a smallint for the number of units it occupies. The characters are entered correctly, and show as euro signs and Greek letters, but as soon as I do a query, I get the same problem again. The code for the function is below:
      function get_gsm_0338_length(
        text_content in nvarchar2
      ) return integer
      as
        v_offset integer;
        v_length integer := 0;
        v_char nchar(1);
      begin
        for i in 1..length(text_content)
        loop
          v_char := substr( text_content, i, 1 );
          select l
          into v_offset
          from gsm_0338_charset
          where ch = v_char;
          v_length := v_length + v_offset;
        end loop;
        return v_length;
        exception
          when no_data_found then
            return length(text_content) * 2;
      end get_gsm_0338_length;

    Does anybody have any idea how I can get this to work properly?
    Thanks,
    - Peter

    Well, the person there used a varchar2, whereas I'm using an nvarchar2. I understand that you need the right code page and such between the client and the database if you use varchar2, which is exactly the reason why I used the nvarchar2.
    However, if I call the function from Java, it does work (I found out just now). But this doesn't explain why SQL Developer and Toad are being difficult, and I'm afraid that, because this function is part of a much bigger application, I'll run into the same problem.
    - Peter
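    For what it's worth, the septet-counting logic itself can be sketched outside the database, e.g. in Java (where, per the thread, Unicode round-trips reliably). This is only a sketch with a hypothetical class name: it hard-codes just a few of the escaped GSM 03.38 characters, and a complete implementation would need the full 128-entry basic table plus handling for characters outside GSM 03.38 entirely.

```java
public class Gsm0338Length {
    // Characters that are escaped in GSM 03.38 and therefore cost two septets.
    // Partial list for illustration only.
    private static final String EXTENDED = "€[]{}\\^~|";

    public static int septetLength(String text) {
        int length = 0;
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            // Assumes every non-extended character is in the basic table (1 septet).
            length += EXTENDED.indexOf(c) >= 0 ? 2 : 1;
        }
        return length;
    }

    public static void main(String[] args) {
        // "Make €€€ fast!" = 11 basic chars + 3 escaped euro signs = 11 + 6
        System.out.println(septetLength("Make €€€ fast!")); // prints 17
    }
}
```

    This matches the 17 septets computed in the original post, without any dependence on client NLS settings.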

  • Oracle defaultNChar=true SLOW on NCHAR/NVARCHAR2

    Hi all,
    I am using a JDBC Prepared Statement with a bunch of parameters using setString(pos, value). The underlying columns on the tables are all NCHAR and NVARCHAR2. I have set the Oracle JDBC driver's "defaultNChar=true" so that Oracle DB would always treat my parameters as national language characters. The driver file is "ojdbc6.jar".
    My problem: My parametrized query is extremely slow with "defaultNChar=true". But as soon as I set "defaultNChar=false" the query is ultra fast (3 seconds).
    Query usage looks like that:
    String sql = "INSERT INTO MYTABLE_ERROR(MY_NAME,MY_FLAG,MY_VALUE) "
                            + "SELECT ? AS MY_NAME,"
                            + "? AS MY_FLAG,v.MY_VALUE"
                            + " FROM OTHER_TABLE v"
                            + " JOIN ( SELECT * FROM ... iv ... WHERE iv.MY_NAME = ? ) rule1 "
                            + " ON v.\"MY_NAME\"=rule1.\"MY_NAME\" AND v.\"MY_VALUE\"=rule1.\"MY_VALUE\""
                            + " WHERE rule1.\"MY_NAME\" = ? AND v.\"MY_VALUE\" = ?";
    preStatement = conn.prepareStatement(sql);
    int count = 1;
    for (String p : params) {
        // SLOW
        // preStatement.setNString(count++, p);
        // SLOW
        // preStatement.setObject(count++, p, Types.NVARCHAR);
        // SLOW
        preStatement.setString(count++, p);
    }
    I have been trying to find the root cause of why my prepared statements executed against an "Oracle Database 11g Release 11.2.0.3.0 - 64bit Production" DB are slow with a JDBC driver "Oracle JDBC driver, 11.2.0.3.0". I could not find any clue!
    I even got the DB NLS config hoping to find anything, but I am not sure here either:
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_CHARACTERSET AL32UTF8
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_COMP BINARY
    Please help!
    Thanks,
    H.

    UPDATE: It looks like the query is stuck when using "defaultNChar=true" somehow. I am seeing this when using JConsole:
    Total blocked: 1  Total waited: 1
    Stack trace:
    java.net.SocketInputStream.socketRead0(Native Method)
    java.net.SocketInputStream.read(Unknown Source)
    java.net.SocketInputStream.read(Unknown Source)
    oracle.net.ns.Packet.receive(Packet.java:311)
    oracle.net.ns.DataPacket.receive(DataPacket.java:103)
    oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:312)
    oracle.net.ns.NetInputStream.read(NetInputStream.java:257)
    oracle.net.ns.NetInputStream.read(NetInputStream.java:182)
    oracle.net.ns.NetInputStream.read(NetInputStream.java:99)
    oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:121)
    oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:77)
    oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1173)
    oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:309)
    oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:200)
    oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:543)
    oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:238)
    oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1446)
    oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1757)
    oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4372)
    oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:4539)
       - locked oracle.jdbc.driver.T4CConnection@7f2315e5
    oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:5577)
    com.mycomp.test.DriverTest.fireStatement(DriverTest.java:253)
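    One likely culprit, worth confirming against the execution plan: with defaultNChar=true the driver sends every string bind as national character data, and comparing an NVARCHAR2 bind against a plain VARCHAR2 column (e.g. in the joined tables) forces an implicit conversion that can prevent index use. A hedged sketch of the selective alternative - leave the driver-wide default false and mark only the national binds (class and credential names are placeholders):

```java
import java.util.Properties;

public class NCharBinding {
    // Connection properties keeping the driver-wide default: plain strings
    // bind as VARCHAR2, so VARCHAR2-column comparisons stay indexable.
    public static Properties connectionProps(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);          // hypothetical credentials
        props.setProperty("password", password);
        props.setProperty("oracle.jdbc.defaultNChar", "false");
        return props;
    }
}
// Then, per parameter, use setNString(...) only where the target column is
// NCHAR/NVARCHAR2, and setString(...) everywhere else:
//   preStatement.setNString(1, nationalValue);   // NVARCHAR2 column
//   preStatement.setString(2, plainValue);       // VARCHAR2 column
```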

  • NCHAR, NVARCHAR2 with JDBC THICK drivers

    Hi,
    I am using the WebLogic thick JDBC driver. We have a requirement of storing data
    in multiple languages, so we have added two columns to the Oracle 9i table with
    data types NCHAR and NVARCHAR2.
    I tried the code using the Oracle JDBC thin driver. It works fine.
    But when I tried with the WebLogic thick driver, it is not able to read the data...
    It reads null values.
    Any suggestions/links/guidelines would be of great help.
    Thanks & Regards,
    Purvesh Vora

    Purvesh Vora wrote:
    Hi,
    I am using the WebLogic thick JDBC driver. We have a requirement of storing data
    in multiple languages, so we have added two columns to the Oracle 9i table with
    data types NCHAR and NVARCHAR2.
    I tried the code using the Oracle JDBC thin driver. It works fine.
    But when I tried with the WebLogic thick driver, it is not able to read the data...
    It reads null values.
    Any suggestions/links/guidelines would be of great help.
    Thanks & Regards,
    Purvesh Vora

    Hi. Our type-2 driver (weblogic.jdbc.oci.Driver) uses Oracle's OCI, which
    may need some OCI-specific charset properties and/or environment variables
    set for what you need. Please check our docs. I wouldn't expect nulls though,
    maybe corrupted data, but not nulls... What happens if you try Oracle's own
    thick driver? (all you would need to do is to change their URL).
    Joe
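    Joe's suggestion - trying Oracle's own drivers by changing only the URL - could look like this sketch (host, port, service name, and TNS alias are placeholders to substitute):

```java
public class DriverUrls {
    // Oracle thin driver (pure Java, no OCI client environment needed):
    public static String thinUrl(String host, int port, String service) {
        return "jdbc:oracle:thin:@//" + host + ":" + port + "/" + service;
    }

    // Oracle's own thick (OCI) driver, using a TNS alias; this one still
    // depends on the OCI client and its NLS environment:
    public static String ociUrl(String tnsAlias) {
        return "jdbc:oracle:oci:@" + tnsAlias;
    }
}
```

    If the thin driver reads the NCHAR columns correctly, that points at the OCI-side character set configuration rather than the table definitions.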

  • Oracle9i Export Problem with NCHAR

    Hi,
    I have a schema in database (Oracle 9.2.0.7 running on SuSeLinux 9) having following nls parameters
    NLS_CHARACTERSET WE8ISO8859P1
    NLS_NCHAR_CHARACTERSET AL16UTF16
    Now, I want to export the schema with NLS_CHARACTERSET set to AL32UTF8 and NLS_NCHAR_CHARACTERSET set to UTF8.
    I used the following commands on the database server. BTW, the shell is bash:
    $ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
    $ export NLS_NCHAR=UTF8
    $ exp username owner=test file=exp_test.dmp log=exp_test.log
    But it always exports with AL32UTF8 and NCHAR AL16UTF16, whereas I want it exported as UTF8:
    Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
    server uses WE8ISO8859P1 character set (possible charset conversion)
    Can someone help me do this?
    Thanks in advance for all your help.
    Regards,
    Murali

    Thanks for all the replies and helpful links.
    From note 15095.1, I see: "NCHAR/NVARCHAR2s are always exported in the database's national character set. This is something you can't influence by setting any parameters." So that means we can only export with what we have :-( and cannot change the NCHAR character set while exporting.

  • NChar support in ADF BC

    Our database uses a non-Unicode character set and we needed to support Unicode using NVARCHAR2 columns. ADF does not support this in the current version (10.1.3.0.4 or 10.1.3.1), so I developed a workaround. It uses a custom SqlBuilder, and I am publishing it here in case it helps someone.
    Steps to use it:
    * entities that should support NCHAR must extend MyEntityImpl (source below, just to be able to call protected methods).
    * you need to set the property jbo.SQLBuilder to my SQL builder implementation (due to a bug, you either have to set the option for all application modules in a JVM, or better, set it for the whole JVM using the -D option - AMs under some circumstances overwrite each other's settings).
    * manually set the Database Column Type on the entity attribute to NCHAR/NVARCHAR2 (synchronization will ask to convert it back to CHAR/VARCHAR2 - don't synchronize!)
    * if you set bind variables in where clause using setWhereClauseParams, convert the string values using getNCharized static method in the sql builder. SQL builder will later recognize it and bind correctly.
    Here are the sources:
    OracleNCharSqlBuilder.java
    package sk.transacty.bc4j.ncharsup;
    import com.sun.java.util.collections.*;
    import java.io.UnsupportedEncodingException;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import oracle.jbo.Transaction;
    import oracle.jbo.Version;
    import oracle.jbo.common.Diagnostic;
    import oracle.jbo.server.AttributeDefImpl;
    import oracle.jbo.server.DBTransaction;
    import oracle.jbo.server.DBTransactionImpl;
    import oracle.jbo.server.EntityDefImpl;
    import oracle.jbo.server.EntityImpl;
    import oracle.jbo.server.OracleSQLBuilderImpl;
    import oracle.jbo.server.SQLBuilder;
    import oracle.jdbc.OraclePreparedStatement;
    import org.apache.commons.codec.binary.Base64;
    /**
     * <p>Extends OracleSQLBuilderImpl with support for calling setFormOfUse(x, FORM_NCHAR) so that
     * Unicode parameters work correctly.
     * <p>Since it relies on the superclass implementation, which is undocumented and may change, it checks the
     * BC4J version and throws an error if the version is unsupported. In principle, for future versions one can compare the decompiled
     * OracleSQLBuilderImpl from the highest tested version with this one and, if nothing changed (which, I hope, is likely),
     * simply allow the higher version. Otherwise some adjustments may be needed.
     * <p>NChar columns will work if the JVM parameter -Doracle.jdbc.defaultNChar is false or absent (this is checked).
     * Only FORM_NCHAR is set for NChar columns; FORM_CHAR is not set (so that non-Unicode
     * applications are affected as little as possible should incompatible changes occur in ADF BC).
     * <p>NChar columns will NOT work correctly if:<ul>
     *   <li>the locking mode is Optimistic update
     *   <li>the NChar type is used in key columns (primary and foreign keys, detail row sets via view links, etc.)
     *   <li>it probably does not work for named bind variables of NChar type
     * </ul>
     * @author Viliam Durina, First Data SK, 8 Dec 2006
     */
    public class OracleNCharSqlBuilder extends OracleSQLBuilderImpl {
      /** BOM - the UTF-8 byte order mark, prepended to Strings to mark NCharized strings. */
      private static final String BOM;
      static {
        byte[] BOM_BYTES = new byte[] { (byte)0xef, (byte)0xbb, (byte)0xbf };
        try {
          BOM = new String(BOM_BYTES, "windows-1250");
        } catch (UnsupportedEncodingException e) {
          throw new RuntimeException(e);
        }
      }
      private OracleNCharSqlBuilder() {
        // verify the ADF BC version
        if ( ! (Version.major == 10 && Version.minor == 1 && Version.revision == 3 && Version.bldnum >= 3653 && Version.bldnum <= 3984)) {
          String msg = "Incorrect ADF BC version, tested with: 10.1.3.36.53 - 10.1.3.39.84";
          System.out.println(msg);
          throw new RuntimeException(msg);
        }
      }
      /** Singleton instance. */
      private static SQLBuilder mSQLBuilderInterface;

      /** Returns the singleton instance. */
      public static synchronized SQLBuilder getInterface() {
        if (mSQLBuilderInterface == null) {
          if (Diagnostic.isOn())
            Diagnostic.println("OracleNCharSqlBuilder reached getInterface");
          if ("true".equalsIgnoreCase(System.getProperty("oracle.jdbc.defaultNChar"))) {
            System.err.println("Parameter oracle.jdbc.defaultNChar is true, falling back to standard OracleSQLBuilderImpl");
            mSQLBuilderInterface = OracleSQLBuilderImpl.getInterface();
          } else {
            mSQLBuilderInterface = new OracleNCharSqlBuilder();
          }
        }
        return mSQLBuilderInterface;
      }
      private void setFormOfUse(PreparedStatement stmt, int idx, String typ) {
        /*if ("CHAR".equalsIgnoreCase(typ) || "VARCHAR2".equalsIgnoreCase(typ) || "CLOB".equalsIgnoreCase(typ))
          ((OraclePreparedStatement)stmt).setFormOfUse(idx, OraclePreparedStatement.FORM_CHAR);
        else */
        if ("NCHAR".equalsIgnoreCase(typ) || "NVARCHAR2".equalsIgnoreCase(typ) || "NCLOB".equalsIgnoreCase(typ))
          ((OraclePreparedStatement)stmt).setFormOfUse(idx, OraclePreparedStatement.FORM_NCHAR);
      }

      /**
       * retrList should be a map <Integer,AttributeDef> where the key is the bind index and the attrDef is
       * checked for being one of the NChar types. Derived from the decompiled OracleSQLBuilderImpl.
       */
      private void updateRetrList(HashMap retrList, PreparedStatement stmt, StringBuffer indices) {
        Iterator it = retrList.keySet().iterator();
        while (it.hasNext()) {
          Integer idx = (Integer)it.next();
          AttributeDefImpl adef = (AttributeDefImpl)retrList.get(idx);
          setFormOfUse(stmt, idx.intValue(), adef.getObjectType());
          if (Diagnostic.isOn()) {
            indices.append(idx);
            indices.append(',');
          }
        }
      }
      /**
       * Sets setFormOfUse for NChar columns.
       * Since it works by bind index, it must exactly mirror the logic in the superclass.
       * Derived from the decompiled OracleSQLBuilderImpl.
       */
      public int bindUpdateStatement(EntityImpl _entityContext,
                                   PreparedStatement stmt, AttributeDefImpl[] cols,
                                   AttributeDefImpl[] retrCols, AttributeDefImpl[] retrKeyCols,
                                   HashMap retrList,
                                   boolean batchMode)
      throws SQLException {
        MyEntityImpl entityContext = null;
        StringBuffer indices = null;
        if (_entityContext instanceof MyEntityImpl) entityContext = (MyEntityImpl)_entityContext;
        if (entityContext != null) {
          int i = 1;
          if (Diagnostic.isOn())
            indices = new StringBuffer(256);
          EntityDefImpl entitydefimpl = entityContext.getEntityDef();
          DBTransaction dbtransactionimpl = entityContext.getDBTransaction();
          boolean isLockOptUpdate = dbtransactionimpl.getLockingMode() == Transaction.LOCK_OPTUPDATE;
          boolean onlyChanged = isLockOptUpdate || entitydefimpl.isUpdateChangedColumns();
          for (int j1 = 0; j1 < cols.length; j1++) {
            if (batchMode || (onlyChanged ? entityContext.isAttributeChanged(cols[j1].getIndex())
                : entityContext.isAttributePopulated(cols[j1].getIndex()))) {
              setFormOfUse(stmt, i, cols[j1].getObjectType());
              if (Diagnostic.isOn()) {
                indices.append(i);
                indices.append(',');
              }
              i++;
            }
          }
        }
        int res = super.bindUpdateStatement(_entityContext, stmt, cols, retrCols, retrKeyCols, retrList, batchMode);
        if (entityContext != null) {
          updateRetrList(retrList, stmt, indices);
        }
        if (Diagnostic.isOn() && indices != null) {
          // remove the trailing comma
          if (indices.length() > 1) indices.setLength(indices.length() - 1);
          Diagnostic.println("OracleNCharSqlBuilder: FORM_NCHAR set for indices: " + indices);
        }
        return res;
      }
      /**
       * Sets setFormOfUse for NChar columns.
       * Since it works by bind index, it must exactly mirror the logic in the superclass.
       * Derived from the decompiled OracleSQLBuilderImpl.
       */
      public int bindInsertStatement(EntityImpl _entityContext,
                                   PreparedStatement stmt, AttributeDefImpl[] cols,
                                   AttributeDefImpl[] retrCols, AttributeDefImpl[] retrKeyCols,
                                   com.sun.java.util.collections.HashMap retrList,
                                   boolean batchMode)
      throws SQLException {
        MyEntityImpl entityContext = null;
        StringBuffer indices = null;
        if (_entityContext instanceof MyEntityImpl) entityContext = (MyEntityImpl)_entityContext;
        if (entityContext != null) {
          int i = 1;
          if (Diagnostic.isOn())
            indices = new StringBuffer(256);
          EntityDefImpl entitydefimpl = entityContext.getEntityDef();
          boolean onlyChanged = entitydefimpl.isUpdateChangedColumns();
          for (int j1 = 0; j1 < cols.length; j1++) {
            if (batchMode || ! onlyChanged || entityContext.isAttributeChanged(cols[j1].getIndex())) {
              setFormOfUse(stmt, i, cols[j1].getObjectType());
              if (Diagnostic.isOn()) {
                indices.append(i);
                indices.append(',');
              }
              i++;
            }
          }
        }
        int res = super.bindInsertStatement(_entityContext, stmt, cols, retrCols, retrKeyCols, retrList, batchMode);
        if (entityContext != null) {
          updateRetrList(retrList, stmt, indices);
        }
        if (Diagnostic.isOn() && indices != null) {
          // remove the trailing comma
          if (indices.length() > 1) indices.setLength(indices.length() - 1);
          Diagnostic.println("OracleNCharSqlBuilder: FORM_NCHAR set for indices: " + indices);
        }
        return res;
      }
      /**
       * Sets setFormOfUse for NChar columns.
       * Derived from the decompiled OracleSQLBuilderImpl. Called internally from other methods in the
       * superclass; every object set on a ViewObject via setWhereClauseParams passes through here.
       */
      protected int bindParamValue(int bindingStyle,
                                 java.lang.Object value, DBTransactionImpl trans,
                                 PreparedStatement stmt, AttributeDefImpl attrDef,
                                 int bindIndex,
                                 boolean skipNull)
      throws SQLException {
        if (value != null && value instanceof String) {
          String s = unNCharize((String)value);
          if (s != value) {
            ((OraclePreparedStatement)stmt).setFormOfUse(bindIndex, OraclePreparedStatement.FORM_NCHAR);
            if (Diagnostic.isOn())
              Diagnostic.println("Setting FORM_NCHAR for index " + bindIndex);
            value = s;
          }
        }
        return super.bindParamValue(bindingStyle, value, trans, stmt, attrDef, bindIndex, skipNull);
      }
      /**
       * A trick to fool the DBSerializer by masking Unicode strings. Strings masked this way
       * are restored to their originals by unNCharize.
       * Masking works by taking the bytes in UTF-8 encoding, encoding them with the Base64 algorithm and
       * prepending the BOM. The resulting String will be mangled, but it will not get mangled
       * further after deserialization from PS_TXN.
       * Strings mangled this way are detected in unNCharize before binding and restored to proper
       * Unicode strings.
       * There is a small chance that ordinary strings will look mangled (i.e. start with the bytes
       * EF BB BF in win1250 encoding). I hope not...
       * @see #unNCharize(String)
       */
      public static String getNCharized(String in) {
        if (in == null) return null;
        try {
          byte[] b = in.getBytes("utf-8");
          b = Base64.encodeBase64(b);
          return BOM + new String(b);
        } catch (UnsupportedEncodingException e) {
          throw new RuntimeException(e);
        }
      }

      /**
       * Detects and, where applicable, restores Strings mangled into storable form by getNCharized.
       * @see #getNCharized(String)
       */
      public static String unNCharize(String in) {
        if (in == null) return null;
        if (in.startsWith(BOM)) {
          try {
            byte[] b = in.substring(3).getBytes();
            b = Base64.decodeBase64(b);
            return new String(b, "utf-8");
          } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e);
          } catch (RuntimeException e) {
            return in;
          }
        } else {
          return in;
        }
      }
    }

    MyEntityImpl.java
    package sk.transacty.bc4j.ncharsup;
    import oracle.jbo.server.EntityDefImpl;
    import oracle.jbo.server.EntityImpl;
    /** Entity implementation that "exposes" some protected methods from EntityImpl to our package. */
    public class MyEntityImpl extends EntityImpl {
      protected EntityDefImpl getEntityDef() {
        return super.getEntityDef();
      }
      protected boolean isAttributeChanged(int i) {
        return super.isAttributeChanged(i);
      }
      protected boolean isAttributePopulated(int i) {
        return super.isAttributePopulated(i);
      }
    }

    Notes:
    * NChar will not work if:
    ** "Optimistic update" locking mode is used
    ** NChar columns are in key columns (primary or foreign keys, detail rowsets etc.)
    ** named bind variables probably don't work
    * if jbo.debugoutput is turned on, it will print messages about setting setFormOfUse
    * the implementation overrides methods marked as "subject to change" - a version check takes place in the constructor and an exception is thrown for untested versions. This will need to be rechecked in future ADF versions (in case this functionality is not implemented by default, I filed an enhancement request).
    Viliam

    Hi,
    You can get the user name using EL - adf.context.securityContext.userName -
    and also using getUserPrincipalName().
    Thanks,
    Vijay

  • Nls_lang and nvarchar2

    I set up several tables in a database as nvarchar2... The default charset and nls_charset are both Japanese (I forget the actual name, SJIS or something)... At any rate... whenever I try to insert 20 Japanese characters into an nvarchar2(20) field, it blows up with the ORA-01401 error (data too long for column)... If nvarchar2(20) gives me the same thing as varchar2(20), what's the point??? Am I missing something???

    Well... Even when I used US7ASCII for the charset and JA16SJIS for the nls_charset, I still got 20 bytes instead of 20 chars for an nvarchar2(20) field... Upon reading some Oracle docs I see... (quoting the 8i NLS guide)
    "When using the NCHAR, NVARCHAR2, and NCLOB data types, the width specification can be in terms of bytes or characters depending on the encoding scheme used. If the NCHAR character set uses a variable-width multibyte encoding scheme, the width specification refers to bytes. If the NCHAR character set uses a fixed-width multibyte encoding scheme, the width specification will be in characters. For example, NCHAR(20), using the variable-width multibyte character set JA16EUC, will allocate 20 bytes while NCHAR(20) using the fixed-width multibyte character set JA16EUCFIXED will allocate 40 bytes."
    So that says to me that if I use a variable-width MULTI-byte charset, I still only get single-byte storage (i.e. 20 bytes instead of 20 chars)... That makes no sense to me...
    Basically what I'm trying to do is decide on a database schema for a product that will end up in clients' databases. I'm not so sure that I can guarantee that a Japanese client will use the JA16SJISFIXED charset versus the JA16SJIS charset... I want to allow for any supported language, but I also don't want to waste space by making all my fields triple their current size (to allow for triple-byte charsets)...
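    The byte-versus-character behavior quoted from the NLS guide is easy to demonstrate outside the database: 20 Japanese characters occupy 40 bytes in a variable-width encoding like Shift-JIS, which is exactly what overflows an NVARCHAR2(20) declared under byte semantics. A small Java sketch, no database required:

```java
import java.nio.charset.Charset;

public class WidthCheck {
    // Byte length of a string in the named encoding.
    public static int byteLength(String s, String charsetName) {
        return s.getBytes(Charset.forName(charsetName)).length;
    }

    public static void main(String[] args) {
        // 20 full-width katakana characters: 20 "characters" but 40 Shift-JIS
        // bytes, so they cannot fit a 20-byte NVARCHAR2(20).
        String twentyKana = "\u30a2".repeat(20); // katakana A, repeated
        System.out.println(byteLength(twentyKana, "Shift_JIS")); // prints 40
    }
}
```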

  • DatabaseMetaData.getColumns(...) returns an invalid size of NVARCHAR2

    Hi,
    I have the following problem.
    I'm using ojdbc14.jar version 10.2.0.2.0
    I'm trying to read table metadata from database using DatabaseMetaData.getColumns(...). And, when I read the size of a NVARCHAR2 column (using either COLUMN_SIZE or CHAR_OCTET_LENGTH) it returns the double of the maximum length (as if it were expressed in bytes!)
    Javadocs says: COLUMN_SIZE int => column size. For char or date types this is the maximum number of characters, for numeric or decimal types this is precision.
    Does anyone have the same problem?
    Does anyone know if there is an open bug on this topic?
    I will really appreciate your help
    Thanks
    ArielUBA

    Hi Ashok,
    Thanks for your answer.
    I tried changing NLS_LENGTH_SEMANTICS parameter, and unfortunately it was unsuccessful :-(
    I'm using NVARCHAR2 column type and in http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams127.htm says the following:
    NCHAR, NVARCHAR2, CLOB, and NCLOB columns are always character-based
    I also tried to use (in a simple test case) the newest available driver 11.1.0.6.0. (but, for production I need ojdbc14.jar) and I got the same result.
    PreparedStatement smt = connection.prepareStatement("ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR");
    smt.execute();
    connection.commit();
    DatabaseMetaData metaData = connection.getMetaData();
    ResultSet columns = metaData.getColumns(null, "ENG_AAM_572_STD_1E", "PPROCINSTANCE", "V_LOAN_NUMBER");
    dispVerticalResultSet(columns);

    The column length of V_LOAN_NUMBER is 31 characters and the result was:
    TABLE_CAT=null
    TABLE_SCHEM=ENG_AAM_572_STD_1E
    TABLE_NAME=PPROCINSTANCE
    COLUMN_NAME=V_LOAN_NUMBER
    DATA_TYPE=1111
    TYPE_NAME=NVARCHAR2
    COLUMN_SIZE=62
    BUFFER_LENGTH=0
    DECIMAL_DIGITS=null
    NUM_PREC_RADIX=10
    NULLABLE=1
    REMARKS=null
    COLUMN_DEF=NULL
    SQL_DATA_TYPE=0
    SQL_DATETIME_SUB=0
    CHAR_OCTET_LENGTH=62
    ORDINAL_POSITION=32
    IS_NULLABLE=YES
    Are you sure that I'm dealing with the bug 4485954?
    Do you know if there is a workaround?
    Thanks in advance for your time
    ArielUBA
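    Until the driver behavior is resolved, a pragmatic workaround (assuming the national character set is AL16UTF16, i.e. two bytes per UTF-16 code unit, as in this thread's NLS config) is to halve CHAR_OCTET_LENGTH for the N* types - a sketch with a hypothetical helper name:

```java
public class NVarchar2Size {
    // Workaround sketch, assuming NLS_NCHAR_CHARACTERSET = AL16UTF16:
    // getColumns() reports the byte width for NVARCHAR2/NCHAR, so halve it
    // to recover the declared character length; pass other types through.
    public static int declaredCharLength(String typeName, int charOctetLength) {
        if ("NVARCHAR2".equals(typeName) || "NCHAR".equals(typeName)) {
            return charOctetLength / 2;
        }
        return charOctetLength;
    }
}
```

    For the V_LOAN_NUMBER example above, this maps the reported 62 back to the declared 31 characters.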

  • What merits of NCHAR as well as NVARCHAR for national language?

    dear all,
    While I set a field's data type to NCHAR or NVARCHAR2 so as to record Simplified Chinese, I found it does not work and prompts "character set does not match", although both client and server character sets have been set to Simplified Chinese.
    Yet once I set them back to the normal CHAR or VARCHAR2 data type, it records Simplified Chinese very well.
    So my question is: what merits are there actually to NCHAR or NVARCHAR2, and if there are any, how can I use these two data types?
    Thanks for your answer.
    Regards,
    fredreick

    In Oracle8i, specifying an NCHAR character set allows you to specify an alternate character set from the database character set for use in NCHAR, NVARCHAR2, and NCLOB columns. This is particularly useful for customers using a variable-width multibyte database character set because NCHAR has the capability to support Asian fixed-width multibyte encoding schemes, whereas the database character set cannot. The benefits in using a fixed-width multibyte encoding over a variable-width one are:
    • optimized string processing performance on NCHAR, NVARCHAR2, and NCLOB columns
    • ease of programming with a fixed-width multibyte character set as opposed to a variable-width multibyte character set.
    The NCHAR datatype has been redefined in Oracle9i to be a Unicode datatype exclusively. You can specify one of the following two Oracle character sets as the national character set:
    · AL16UTF16
    · UTF8
    The recommendation would be to use CHAR and VARCHAR2 for Simplified Chinese support, given the coming change of NCHAR to an exclusively Unicode data type.

  • Precautions i need to take when changing the Character set

    Hi,
    ORACLE VERSION: 10G Release 1 (10.1.0.3.0)
    I am going to change my database's character set from AL32UTF8 to the WE8MSWIN1252 character set and the AL16UTF16 NCHAR character set. So I have a few questions for you.
    1. What is the difference between the character set and the national character set? Do I have to set both?
    2. What precautions do I need to take while changing the character set?
    3. What are the JOB_QUEUE_PROCESSES and AQ_TM_PROCESSES parameters, in plain English? Why do I have to set these parameters to 0, as mentioned in the post below?
    Storing Chinese in Oracle Database

    1) The database character set controls (and specifies) the character set of CHAR & VARCHAR2 columns. The national character set controls the character set of NCHAR & NVARCHAR2 columns.
    2) Please make sure that you read the section of the Globalization manual that discusses character set migration. In particular, going from UTF-8 to Windows-1252 is going to require a bit more work since the latter is a subset (and not a strict binary subset) of the former.
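    Before a migration like this, it is worth scanning the data for characters that cannot survive the conversion. A minimal sketch in pure Python (cp1252 standing in for WE8MSWIN1252; this is an illustration, not a replacement for Oracle's Character Set Scanner):

```python
def unconvertible(value: str, target: str = "cp1252") -> list[str]:
    """Return the characters of value that cannot be encoded in target."""
    bad = []
    for ch in value:
        try:
            ch.encode(target)
        except UnicodeEncodeError:
            bad.append(ch)
    return bad

print(unconvertible("café"))  # [] - fully representable in cp1252
print(unconvertible("ναι"))   # Greek letters would be lost
```

    Any row for which this returns a non-empty list would be corrupted by the UTF-8 to Windows-1252 conversion and needs special handling first.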
    Justin

  • Oracle 10g db character set issue

    I have a database 10g with the Western European database character set
    "WE8ISO8859P1", and we are receiving data from a source database whose
    database character set is "UTF8". During the data load for one of the
    tables we receive the following error: "ORA-29275: partial multibyte
    character". I understand this might be due to the fact that the Western
    European character set is not a subset/superset of UTF8. Am I right?
    What would be the way around this?

    It is certainly possible that the issue is that your database characterset is a subset of UTF8.
    How are you getting the data? Are we talking about a flat file? A query over a database link? Something else?
    Does the data you're getting contain characters that cannot be represented in the ISO-8859-1 character set? It is quite common to send UTF-8 encoded files even when the underlying data is representable in other 8-bit character sets (like ISO-8859-1).
    What are you trying to do with the data? Are you trying to load it into a CHAR/ VARCHAR2 column? A CLOB? A BLOB? An NCHAR/ NVARCHAR2? Something else?
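    ORA-29275 usually means the byte stream was cut off in the middle of a multibyte sequence (or was never valid UTF-8 to begin with). A quick way to check a suspect payload outside the database — a plain-Python sketch, not an Oracle tool:

```python
def check_utf8(data: bytes) -> str:
    """Classify a byte string the way a UTF-8 loader would see it."""
    try:
        data.decode("utf-8")
        return "valid UTF-8"
    except UnicodeDecodeError as e:
        return f"broken at byte {e.start}: {e.reason}"

whole = "résumé".encode("utf-8")
print(check_utf8(whole))       # valid UTF-8
print(check_utf8(whole[:-1]))  # truncated mid-character: a partial multibyte
```

    Truncating a file or buffer on a byte boundary rather than a character boundary produces exactly this kind of partial multibyte character.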
    Justin

  • A record selection problem with a string field when UNICODE database

    We used report files made by Crystal Reports 9 which access string fields
    (char / varchar2 type) of NON-UNICODE database tables.
    Now, our new product needs to deal with UNICODE database, therefore,
    we created another database schema changing table definition as below.
    (The table name and column name are not changed.)
        char type -> nchar type
        varchar2 type -> nvarchar2 type
    When we tried to access the above table, and output a report,
    the SQL statement created from the report seemed to be wrong.
    We confirmed the SQL statement using Oracle trace function.
        SELECT (abbr.) WHERE "XXXVIEW"."YYY"='123'.
    We think the above '123' should be N'123', because Unicode strings
    are stored in nchar / nvarchar2 type fields.
    Question:
    How can we obtain the correct SQL statement in this case?
    Is there any option setting?
    FYI:
    The environment are as follows.
        Oracle version: 11.2.0
        ODBC version: 11.2.0.1
        National character set: AL16UTF16
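    For reference, the difference the poster describes is only in how the literal is generated: comparisons against NCHAR/NVARCHAR2 columns want N'...' literals. A hypothetical helper that builds such a literal (illustration only; in application code, bind variables typed as NVARCHAR are the safer route):

```python
def oracle_literal(value: str, national: bool = False) -> str:
    """Quote a string as an Oracle literal, doubling embedded quotes.
    With national=True, emit an N'...' literal for NCHAR/NVARCHAR2 columns."""
    quoted = value.replace("'", "''")
    prefix = "N" if national else ""
    return f"{prefix}'{quoted}'"

print(oracle_literal("123"))                    # '123'
print(oracle_literal("123", national=True))     # N'123'
print(oracle_literal("O'Hara", national=True))  # N'O''Hara'
```

    A report tool that emits the first form against an NVARCHAR2 column forces an implicit conversion, which is the behavior observed in the trace above.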

    With further investigation, we found patterns that worked well.
    Patterns that worked:
        Oracle version: 11.2.0
        ODBC version: 11.2.0.1
        National character set: AL16UTF16
        Report file made by Crystal Reports 2011
        Crystal Reports XI
    Patterns that did not work:
        Oracle version: 11.2.0 (same above)
        ODBC version: 11.2.0.1 (same above)
        National character set: AL16UTF16 (same above)
        Report file made by Crystal Reports 2011 (same above)
        Crystal Reports 2008 / 2011
    We think this phenomenon is a regression in Crystal Reports 2008 / 2011.
    But we have to use the patterns that did not work.
    Are we doing anything wrong? Please help.
    -Nobuhiko

  • Tables DBMSHP_FUNCTION_INFO, DBMSHP_PARENT_CHILD_INFO, DBMSHP_RUNS missing

    Background:
    In sqldeveloper 3.1.07 I have a number of views that concatenate the data from several fields to reduce multiple rows to a single row of comma-delimited numbers. For example, 10 records for Item #1 might contain a list of 10 different assessment files. I reduce them down to A12345, A22637, A32597, A25896, etc. I do the same with other fields, such as mineralization or whatever.
    This system is up on our mainframe and has run perfectly for several years and there has not been a problem generating this report till now. The powers that be have decided to lock down the system and now our report tables cannot be deleted or created as necessary. (The views gather the data, the final result is a table) To modify anything has become a tiresome political issue. However, since I have access to the data and MY code (I was the author and maintainer till now), I exported all the data and code and installed it on my local machine - where it was originally developed and where sqldeveloper was used to generate all the views, etc.
    The Problem:
    To cut to the chase: my concatenation views work perfectly on the mainframe but when I try to run them on my local machine the views are blank. They create properly - no errors, the concatenation function and types compile perfectly - no errors thrown - no messages - no visible warnings - and NO DATA IN THE VIEWS.
    When I finally clicked on Profiles I found the following WARNING
    *"Required tables DBMSHP_FUNCTION_INFO,DBMSHP_PARENT_CHILD_INFO,DBMSHP_RUNS missing"*
    So my question is -
    1) how do I get these tables installed on my local stand alone computer and
    2) why does this new version of SQLDEVELOPER need them - or to put it in a different way - why is this the first time in 4 years that I need these tables?
    3) why wasn't there an appropriate error message or visible warning?
    Any help would be appreciated. Thank you
    More Background:
    The code for the function and type were borrowed from "somewhere" 4 years ago - the rest is pretty simple. No changes are necessary for any of this as it works perfectly - it is only presented for further information. Database is 11gR2
    The code for the view:
    CREATE OR REPLACE FORCE VIEW "MDD"."Q_AFILES" ("MDD_NO", "AFILE")
    AS
    ( SELECT mdd_no,
             CONCAT_ALL(CONCAT_EXPR(afile_no, ', ')) AFILE
        FROM mdd_afmddcr
       GROUP BY mdd_no );
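    What the view (via the custom CONCAT_ALL aggregate) computes — one comma-delimited string per mdd_no — is easy to model outside the database. A sketch with made-up rows, equivalent in spirit to LISTAGG:

```python
from collections import defaultdict

# (mdd_no, afile_no) rows, as the mdd_afmddcr table might return them
rows = [
    (1, "A12345"), (1, "A22637"), (1, "A32597"),
    (2, "A25896"),
]

# Group the afile_no values by mdd_no
groups = defaultdict(list)
for mdd_no, afile_no in rows:
    groups[mdd_no].append(afile_no)

# One row per mdd_no with a comma-delimited AFILE string
result = {k: ", ".join(v) for k, v in groups.items()}
print(result)  # {1: 'A12345, A22637, A32597', 2: 'A25896'}
```

    On 11gR2 the built-in LISTAGG aggregate performs this collapse directly in SQL, which is what the reply further down alludes to.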
    The concat function:
    create or replace
    FUNCTION "CONCAT_ALL"(
    ctx IN concat_expr)
    RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE AGGREGATE USING concat_all_ot
    The type:
    create or replace
    TYPE "CONCAT_ALL_OT" AS OBJECT (
      str VARCHAR2 (4000),
      del VARCHAR2 (4000),
      STATIC FUNCTION odciaggregateinitialize (
        sctx IN OUT concat_all_ot)
        RETURN NUMBER,
      MEMBER FUNCTION odciaggregateiterate (
        SELF IN OUT concat_all_ot,
        ctx IN concat_expr)
        RETURN NUMBER,
      MEMBER FUNCTION odciaggregateterminate (
        SELF IN concat_all_ot,
        returnvalue OUT VARCHAR2,
        flags IN NUMBER)
        RETURN NUMBER,
      MEMBER FUNCTION odciaggregatemerge (
        SELF IN OUT concat_all_ot,
        ctx2 concat_all_ot)
        RETURN NUMBER);
    the Type expression:
    create or replace
    TYPE "CONCAT_EXPR" AS OBJECT (
      str VARCHAR2 (4000),
      del VARCHAR2 (4000),
      MAP MEMBER FUNCTION f_map RETURN VARCHAR2);

    >
    The real answer to my question should have simply been - the concat function is not allowed in 11gR2 - LISTAGG is the appropriate replacement. Seems our server is only 10g and that's why it still works there. My local testbed is 11gR2
    >
    Those comments are so far off base I can't even imagine how you arrived at those conclusions.
    1. You never mentioned a 10g system - you talked about a mainframe
    >
    This system is up on our mainframe and has run perfectly for several years
    >
    2. The Oracle concat function is still alive and well in 11g - nothing you posted shows you are using that function
    http://docs.oracle.com/cd/E11882_01/server.112/e10592/functions033.htm#i77004
    >
    Purpose
    CONCAT returns char1 concatenated with char2. Both char1 and char2 can be any of the data types CHAR, VARCHAR2, NCHAR, NVARCHAR2, CLOB, or NCLOB. The string returned is in the same character set as char1. Its data type depends on the data types of the arguments.
    >
    3. The CONCAT_ALL function that you posted is YOUR function, not Oracle's. It is a custom aggregate function written using the Data Cartridge extensions. That function will still work in 11g.
    The concat function:
    create or replace
    FUNCTION "CONCAT_ALL"(
    ctx IN concat_expr)
    RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE AGGREGATE USING concat_all_ot;
    Your questions were these
    >
    So my question is -
    1) how do I get these tables installed on my local stand alone computer and
    2) why does this new version of SQLDEVELOPER need them - or to put it in a different way - why is this the first time in 4 years that I need these tables?
    3) why wasnt there an appropriate error message or visible warning.
    >
    Providing an answer of 'the concat function is not allowed in 11gR2' to those questions would have been absurd! The concat function has NOTHING to do with your missing tables.
    I told you the reason your tables are missing. They are only created when you run the script I referred you to. Someone ran them on your current system and created hierarchical profiles that used those tables. You DID NOT run that script on 11g so the tables were not created but you tried to reference the profiles that used them and got the error message about the tables being missing.
    All of your questions were answered and the answers have NOTHING to do with concatenation of any type.

  • Problem storing Russian Characters in Oracle 10g

    We are facing an issue on one of our sites, which is in the Russian language. Whenever data is submitted with Russian characters, it is saved as upside-down question marks in the database; the database is not storing these characters. The character encoding is done in UTF-8 format on the front end.
    This code used to work fine with an Oracle 9i database, but after the upgrade to Oracle 10g this problem started occurring. We have not made any changes to the code since the database upgrade.
    How can we resolve this, and what settings can we change to make this work?

    What is your database character set and national character set?
    SELECT *
      FROM v$nls_parameters
     WHERE parameter LIKE '%CHARACTERSET';
    Are you storing the data in CHAR/ VARCHAR2/ CLOB columns? Or NCHAR/ NVARCHAR2/ NCLOB?
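    The question-mark symptom is the classic sign of characters being replaced during conversion into a character set that cannot hold them. The effect is easy to reproduce outside Oracle (cp1252 standing in for a Western European database character set; Oracle typically substitutes '¿' where Python substitutes '?'):

```python
russian = "Привет"  # Cyrillic, not representable in cp1252

# Conversion into a too-small character set substitutes a
# replacement character, and the original data is irreversibly lost:
stored = russian.encode("cp1252", errors="replace").decode("cp1252")
print(stored)  # ??????

# In a Unicode column (AL32UTF8, or NCHAR with AL16UTF16) the
# round trip is lossless:
assert russian.encode("utf-8").decode("utf-8") == russian
```

    So if the SELECT above shows a non-Unicode database character set and the data lives in CHAR/VARCHAR2 columns, the question marks are expected: the characters were discarded on the way in, not on the way out.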
    Justin
