Querying CHAR columns with character length semantics unreliable

Hi again,
It appears that there is a bug in the JDBC drivers whereby the values of CHAR columns that use character length semantics cannot be reliably queried using ResultSet.getString(). Instead, the drivers return the value padded with space (0x20) characters out to a number of bytes equal to the number of characters multiplied by 4. The amount of padding returned therefore varies depending on the number and size of any non-ascii characters stored in the column.
For instance, if I have a CHAR(1) column (using character semantics), a value of 'a' is returned as 'a' followed by three spaces (4 characters / 4 bytes), a value of '\u00E0' is returned with two trailing spaces (3 characters / 4 bytes), and a value of '\uE000' is returned with one trailing space (2 characters / 4 bytes).
I'm currently using version 9.2.0.3 of the standalone drivers (ojdbc.jar) with JDK 1.4.1_04 on Red Hat Linux 9, connecting to Oracle 9.2.0.2.0 running on Solaris.
The following sample code demonstrates the problem (the DDL in the comment at the top of the file must be executed first):
import java.sql.*;
import java.util.*;

/*
 * This sample demonstrates another bug in the Oracle JDBC drivers where it is
 * not possible to accurately query the values of CHAR columns that use
 * character length semantics and are not full of non-ascii characters.
 * The VARCHAR2 column is included just as a control.
 *
 * The following DDL must be executed before running the sample:
 *
 * CREATE TABLE TMP2 (
 *     TMP_ID    NUMBER(10) NOT NULL PRIMARY KEY,
 *     TMP_CHAR  CHAR(10 CHAR),
 *     TMP_VCHAR VARCHAR2(10 CHAR)
 * );
 */
public class ClsCharSelection
{
    private static String createString(char character, int length)
    {
        char characters[] = new char[length];
        Arrays.fill(characters, character);
        return new String(characters);
    } // private static String createString(char, int)

    private static void insertRow(PreparedStatement ps, int key, char character)
        throws SQLException
    {
        ps.setInt(1, key);
        ps.setString(2, createString(character, 10));
        ps.setString(3, createString(character, 10));
        ps.executeUpdate();
    } // private static void insertRow(PreparedStatement, int, char)

    private static void analyseResults(PreparedStatement ps, int key)
        throws SQLException
    {
        ps.setInt(1, key);
        ResultSet results = ps.executeQuery();
        results.next();
        String tmpChar = results.getString(1);
        String tmpVChar = results.getString(2);
        System.out.println(key + ", " + tmpChar.length() + ", '" + tmpChar + "'");
        System.out.println(key + ", " + tmpVChar.length() + ", '" + tmpVChar + "'");
        results.close();
    } // private static void analyseResults(PreparedStatement, int)

    public static void main(String argv[])
        throws Exception
    {
        Driver driver = (Driver) Class.forName(
            "oracle.jdbc.driver.OracleDriver").newInstance();
        DriverManager.registerDriver(driver);
        Connection connection = DriverManager.getConnection(
            argv[0], argv[1], argv[2]);
        PreparedStatement ps = null;
        try
        {
            ps = connection.prepareStatement("DELETE FROM tmp2");
            ps.executeUpdate();
            ps.close();

            ps = connection.prepareStatement(
                "INSERT INTO tmp2 ( tmp_id, tmp_char, tmp_vchar " +
                ") VALUES ( ?, ?, ? )");
            insertRow(ps, 1, 'a');
            insertRow(ps, 2, '\u00E0');
            insertRow(ps, 3, '\uE000');
            ps.close();

            ps = connection.prepareStatement(
                "SELECT tmp_char, tmp_vchar FROM tmp2 WHERE tmp_id = ?");
            analyseResults(ps, 1);
            analyseResults(ps, 2);
            analyseResults(ps, 3);
            ps.close();

            connection.commit();
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }
        connection.close();
    } // public static void main(String[])
} // public class ClsCharSelection

FYI, this has been mentioned as early as November last year:
String with length 1 became 4 when nls_lang_semantics=CHAR
and was also brought up in February:
JDBC thin driver pads CHAR col to byte size when NLS_LENGTH_SEMANTICS=CHAR
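In the meantime, a possible client-side workaround is to strip the trailing padding after fetching. Below is a sketch of a helper that could be dropped into the sample class above; getTrimmedString is just an illustrative name, and it assumes trailing spaces are never significant data in the affected CHAR columns.
// Workaround sketch only (not a driver fix): strip the spurious trailing
// padding that getString() returns for CHAR columns with character length
// semantics. Assumes trailing spaces are never significant data.
private static String getTrimmedString(ResultSet results, int columnIndex)
    throws SQLException
{
    String value = results.getString(columnIndex);
    if (value == null)
    {
        return null;
    }
    int end = value.length();
    while (end > 0 && value.charAt(end - 1) == ' ')
    {
        end--;
    }
    return value.substring(0, end);
}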

Similar Messages

  • Convert all VARCHAR2 data types to character length semantics

    Hi,
    I am wondering if there is an easy way to convert all columns of data type VARCHAR2(x BYTE) in the database to VARCHAR2(x CHAR)?
    Regards
    Håkan

    The DMU (Database Migration Assistant for Unicode) does not allow character length semantics migration for the following types of objects:
    - Columns already in character length semantics
    - Data dictionary columns
    - Columns under Oracle-supplied application schemas
    - CHAR attribute columns of ADT
    - Columns in clusters
    - Columns on which partition keys are defined
    Please check if the disabled nodes you observed in the wizard fall under one of these categories.
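    For columns that fall outside those categories, one do-it-yourself alternative is to generate the ALTER statements from the data dictionary and review them before running anything. The sketch below is only illustrative (the class name and filtering are made up); it relies on the USER_TAB_COLUMNS columns CHAR_USED ('B' for byte semantics) and CHAR_LENGTH:
    import java.sql.*;

    // Illustrative sketch only: print ALTER TABLE statements that switch the
    // current user's VARCHAR2(x BYTE) columns to VARCHAR2(x CHAR). Review the
    // output before running it.
    public class GenerateCharSemanticsDdl
    {
        public static void main(String[] args) throws Exception
        {
            Connection connection =
                DriverManager.getConnection(args[0], args[1], args[2]);
            Statement statement = connection.createStatement();
            ResultSet columns = statement.executeQuery(
                "SELECT table_name, column_name, char_length " +
                "FROM user_tab_columns " +
                "WHERE data_type = 'VARCHAR2' AND char_used = 'B'");
            while (columns.next())
            {
                System.out.println("ALTER TABLE " + columns.getString(1) +
                    " MODIFY (" + columns.getString(2) +
                    " VARCHAR2(" + columns.getInt(3) + " CHAR));");
            }
            columns.close();
            statement.close();
            connection.close();
        }
    }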

  • Export with data length semantics

    Hello,
    I have the following problem.
    I have a table abcd which contains two VARCHAR2 columns with different data length semantics (one with BYTE, one with CHAR). The character set is single-byte, let's say WE8MSWIN1252, so data length semantics should not be a problem. Should not; details later.
    So this would be:
    create table abcd (a_char VARCHAR2(2 CHAR), a_byte VARCHAR2(2 BYTE));
    After that I export the table via exp. I'm not setting the NLS_LENGTH_SEMANTICS environment variable, so BYTE is used.
    In the dump file the data length semantics for the byte column is omitted, as I exported it with BYTE:
    create table abcd (a_char VARCHAR2(2 CHAR), a_byte VARCHAR2(2));
    After that, I "accidentally" import it with data length semantics set to CHAR, and the table now looks like this:
    abcd
    a_char VARCHAR2(2 CHAR)
    a_byte VARCHAR2(2 CHAR)
    The same happens vice versa when using CHAR for export and BYTE for import...
    In single-byte character sets this might not be so much of a problem, as one CHAR is equal to one BYTE, but...
    If I compile PL/SQL against the original table and run it against the resulting table after export/import, I get an ORA-4062 and have to recompile...
    It would not be a problem if the PL/SQL I compile were in the database... The big problem is that the ORA-4062 occurs in Forms, where it's difficult for me to recompile (I would have to transfer all the sources to the customer and compile there).
    Is there any possibility to export data length semantics regardless of which environment variable is set?
    The database version is 9.2.0.6, but if a solution exists in higher versions I would also be happy to hear about it...
    many thanks,
    regards

    I can't reproduce your problem:
    SQL> show parameter nls_length_semantics
    NAME                                 TYPE        VALUE
    nls_length_semantics                 string      BYTE
    SQL> create table scott.demo( col1 varchar2(10 byte), col2 varchar2(10 char) );
    SQL> describe scott.demo
    Name                                      Null?    Type
    COL1                                               VARCHAR2(10)
    COL2                                               VARCHAR2(10 CHAR)
    $ export NLS_LENGTH_SEMANTICS=BYTE
    $ exp scott/tiger file=scott.dmp tables=demo
    SQL> drop table scott.demo;
    $ export NLS_LENGTH_SEMANTICS=CHAR
    $ imp scott/tiger file=scott.dmp
    SQL> describe scott.demo
    Name                                      Null?    Type
    COL1                                               VARCHAR2(10 BYTE)
    COL2                                               VARCHAR2(10)
    SQL> alter session set nls_length_semantics=byte;
    SQL> describe scott.demo
    Name                                      Null?    Type
    COL1                                               VARCHAR2(10)
    COL2                                               VARCHAR2(10 CHAR)
    Can you post a test like mine?
    Enrique
    PS If you have access to Metalink, read Note:144808.1 Examples and limits of BYTE and CHAR semantics usage. From 9i and up, imp doesn't read nls_length_semantics from the environment.

  • Character length semantics

    Hi,
    In 10g R2, how do I enable character length semantics?
    Thank you.

    You cannot just enable character length semantics directly.
    You need to export the schema and re-import it after setting the parameter
    NLS_LENGTH_SEMANTICS=CHAR
    The following link would be helpful:
    [Character semantics |http://www.oracle.com/technology/oramag/oracle/03-mar/o23sql.html]
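    Note that NLS_LENGTH_SEMANTICS can also be set at session level, but it only affects DDL issued after that point; existing columns keep the semantics they were created with, which is why the export/import route is needed for an existing schema. A minimal JDBC sketch of the session-level approach (connection is assumed to be an already-open java.sql.Connection):
    // Sketch only: the session-level setting changes the default for new DDL,
    // it does not convert existing columns.
    Statement statement = connection.createStatement();
    statement.execute("ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR");
    // A length with no qualifier now defaults to CHAR semantics, so this
    // column is created as VARCHAR2(10 CHAR):
    statement.execute("CREATE TABLE demo_char (txt VARCHAR2(10))");
    statement.close();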

  • ORA-01401 error on char column with oracle oci driver

    Hello,
    We found a potential bug in the kodo.jdbc.sql.OracleDictionary class
    shipped as source with Kodo:
    In newer Kodo versions (at least in 3.3.4), the method
    public void setString (PreparedStatement stmnt, int idx, String val, Column col)
    has the following code block:
    // call setFixedCHAR for fixed width character columns to get padding
    // semantics
    if (col != null && col.getType () == Types.CHAR
    && val != null && val.length () != col.getSize ())
    ((OraclePreparedStatement) inner).setFixedCHAR (idx, val);
    This block seems to be intended for select statements but is called on
    inserts/updates also. The latter causes a known problem with the Oracle
    OCI driver when setting CHAR columns as FixedCHAR, which reports an
    ORA-01401 error (inserted value too large for column) even though no
    column value is actually too long. This does not happen with the thin driver.
    We reproduced this with 8.1.7 and 9.2.0 drivers.
    For us we solved the problem by subclassing OracleDictionary and removing
    the new code block.
    Regards,
    Rainer Meyer
    ELAXY Financial Software & Solutions GmbH & Co. KG

    Rainer-
    I read at
    re:'ORA-01401 inserted value too large for column' - 9i that:
    "This is fixed in Oracle9i Release 2"
    Can you try that version of the driver? Also, does it fail in the Oracle
    10 OCI driver?
    Rainer Meyer wrote: (original message quoted above)
    Marc Prud'hommeaux
    SolarMetric Inc.
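    For reference, here is a minimal outline of the subclassing workaround described above. The class name is made up, imports and registration of the custom dictionary are omitted, and the stock setString() may contain logic that a real subclass needs to preserve, so treat this as a sketch rather than a drop-in patch:
    // Outline only: override setString() so the setFixedCHAR() special case
    // quoted above is skipped and a plain setString() bind is used instead.
    public class NoFixedCharOracleDictionary extends OracleDictionary
    {
        public void setString(PreparedStatement stmnt, int idx, String val, Column col)
            throws SQLException
        {
            stmnt.setString(idx, val);
        }
    }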

  • Forms 6.0 how to query clob column with oracle 9.2 DB

    hi every body,
    i made install for oracle 9.2 oracle DB every thing goes ok but when i made query in my form version 6.0 which have CLOB column the form closed automatically without any message?
    and just for know when i run the same form with oracle 8.1.7 DB the form made query normally without any problem.
    i want your help please.

    I know there was a problem in 6i where you would get a crash if your query returned more than {Max Length} characters of the field representing the CLOB column.

  • How to retrieve 'long' column with 32K length in Java stored procedure

    For various reasons we are not using CLOB, BLOB, or BFILE to store large objects, and I have to live with LONG. So I wrote a Java stored procedure to insert, select and manipulate a LONG column by retrieving the LONG into a java.lang.String (which happens to be the Java class mapped to the LONG SQL datatype). It all works fine as long as the length of the value being retrieved is less than the magic figure of 32767 bytes (which is the restriction on the LONG and VARCHAR2 datatypes in PL/SQL as well). So it looks like Oracle's implementation of the JVM limits String values to a maximum of 32767 bytes. Any suggestions on how to overcome this limitation (other classes that you suggest, or do I have to move to files)?
    Thanks,
    Jeet

    The JVM has nothing to do with it...
    This is a PL/SQL limitation on parameters in stored procedures,
    and Java stored procedures require a call spec that makes the Java stored procedure look like a PL/SQL stored procedure.
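    Whatever the exact source of the 32K ceiling, the Java side itself can hold strings far larger than 32767 characters, so one approach worth trying is to stream the LONG column inside the Java stored procedure and keep the oversized value out of any PL/SQL call-spec parameters. A rough sketch (long_table, long_col and readLong are made-up names):
    import java.io.Reader;
    import java.sql.*;

    public class LongReader
    {
        // Sketch: read a LONG column as a character stream inside a Java
        // stored procedure instead of fetching it as a single String bind.
        public static String readLong(int id) throws Exception
        {
            Connection connection =
                DriverManager.getConnection("jdbc:default:connection:");
            PreparedStatement ps = connection.prepareStatement(
                "SELECT long_col FROM long_table WHERE id = ?");
            ps.setInt(1, id);
            ResultSet rs = ps.executeQuery();
            StringBuffer buffer = new StringBuffer();
            if (rs.next())
            {
                Reader reader = rs.getCharacterStream(1); // streams the LONG value
                char[] chunk = new char[8192];
                int n;
                while ((n = reader.read(chunk)) > 0)
                {
                    buffer.append(chunk, 0, n); // can grow well past 32K in Java
                }
                reader.close();
            }
            rs.close();
            ps.close();
            return buffer.toString();
        }
    }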

  • Query on column with comma separated values

    I have a proposed table with unnormalized data like the following:
    ID COLA COLB REFLIST
    21 xxx  zzz  24,25,78,412
    22 xxx  xxx  21
    24 yyy  xxx  912,22
    25 zzz  fff  433,555,22
    .. ...  ...  ...
    There are 200 million rows. There is a maximum of about 10 IDs in the REFLIST, though typically two or three. How could I efficiently query this data on the REFLIST column? e.g. something like:
    SELECT id FROM mytable WHERE :myval in reflist
    Logically there is a many to many relationship between rows in this table. The REFLIST column contains pointers to ID values elsewhere in the table. The data could be normalized so that the relationship keys are in a separate table (in fact this is the current solution that we want to change).
    ID  REF
    21  24
    21  25
    21  78
    21  412
    22  21
    24  912
    ... ...
    The comma-separated list instinctively seems like a bad idea; however, there are various reasons for proposing it. The main reason is that the source for this data has it structured like the REFLIST example. It is an OLTP-like system rather than a data warehouse. The source code (and edit performance) would benefit greatly from not having to maintain the relationship table as the data changes.
    Going back to querying the REFLIST column, the problem seems to be building an appropriate index for the data. The ideas proposed so far are:
    - Make a materialized view that presents the relationships as normalized (e.g. as in the example with ID, REF columns above), then index the plain column - the various methods of writing the view SQL have been widely posted.
    - Use an Oracle Text index (not something I have ever had call to use before).
    Any other ideas? It's Oracle 10.2, though 11g could be possible.
    Thanks
    Jim

    Something like this ?
    This is a test demo on my 11.2.0.1 Windows XP.
    SQL> create table test (id number,reflist varchar2(30));
    Table created.
    SQL> insert into test values (21,'24,25,78,412');
    1 row created.
    SQL> insert into test values (22,'21');
    1 row created.
    SQL> insert into test values (24,'912,22');
    1 row created.
    SQL> insert into test values (25,'433,555,22');
    1 row created.
    SQL> select * from test
      2  where
      3  ',' || reflist || ',' like '%,22,%';
            ID REFLIST
            24 912,22
            25 433,555,22
    SQL>
    Source: http://stackoverflow.com/questions/7212282/is-it-possible-to-query-a-comma-separated-column-for-a-specific-value
    Regards
    Girish Sharma

  • How to tokenize a column with variable length string

    I have a question regarding tokenizing a string in a column. I have a table like:
    id list
    1 i love dogs
    2 i like cats and dogs
    and so on
    it should be converted to
    id list
    1 i
    1 love
    1 dogs
    2 i
    2 like
    2 cats
    2 and
    2 dogs
    How do I tokenize this? I tried using this code inside a cursor and procedure:
    SELECT id, regexp_substr(str, '[^ ]+', 1, level) TOKEN from test CONNECT by level <= length(regexp_replace (str, '[^ ]+')) + 1;
    but this is very slow when called from Java. Is there any other alternative?
    Thanks

    69edfef1-01fd-49c3-b83d-ece76195d26c wrote: (question quoted above)
    The reason yours is slow is because you don't have sufficient connect by conditions to restrict the iterations properly...
    e.g. What you are doing...
    SQL> ed
    Wrote file afiedt.buf
      1  with t as (select 1 as id, 'i love dogs' list from dual union all
      2             select 2, 'i like cats and dogs' from dual union all
      3             select 3,'precipitevolissimevolmente' from dual)
      4  --
      5  --
      6  --
      7  SELECT id
      8        ,regexp_substr(list, '[^ ]+', 1, level) TOKEN
      9  from t
    10* CONNECT by level <= length(regexp_replace (list, '[^ ]+')) + 1
    SQL> /
            ID TOKEN
             1 i
             1 love
             1 dogs
             2 and
             2 dogs
             2 cats
             2 and
             2 dogs
             2 like
             1 dogs
             2 and
             2 dogs
             2 cats
             2 and
             2 dogs
             2 i
             1 love
             1 dogs
             2 and
             2 dogs
             2 cats
             2 and
             2 dogs
             2 like
             1 dogs
             2 and
             2 dogs
             2 cats
             2 and
             2 dogs
             3 precipitevolissimevolmente
             1 love
             1 dogs
             2 and
             2 dogs
             2 cats
             2 and
             2 dogs
             2 like
             1 dogs
             2 and
             2 dogs
             2 cats
             2 and
             2 dogs
    45 rows selected.
    And with additional connect by criteria...
    SQL> ed
    Wrote file afiedt.buf
      1  with t as (select 1 as id, 'i love dogs' list from dual union all
      2             select 2, 'i like cats and dogs' from dual union all
      3             select 3,'precipitevolissimevolmente' from dual)
      4  --
      5  --
      6  --
      7  SELECT id
      8        ,regexp_substr(list, '[^ ]+', 1, level) TOKEN
      9  from t
    10  CONNECT by level <= length(regexp_replace (list, '[^ ]+')) + 1
    11         and id = prior id
    12*        and prior sys_guid() is not null
    SQL> /
            ID TOKEN
             1 i
             1 love
             1 dogs
             2 i
             2 like
             2 cats
             2 and
             2 dogs
             3 precipitevolissimevolmente
    9 rows selected.

  • Incorrect data_length for columns with char semantics in 10g

    Hi,
    I was going through a few databases at my work place and I noticed something unusual.
    Database Server - Oracle 10g R2
    Database Client - Oracle 11g R1 (11.1.0.6.0 EE)
    Client OS - Win XP
    SQL>
    SQL> @ver
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    5 rows selected.
    SQL> --
    SQL> drop table t;
    Table dropped.
    SQL> create table t (
      2    a    char(3 char),
      3    b    char(3 byte),
      4    c    char(3),
      5    d    varchar2(3 char),
      6    e    varchar2(3 byte),
      7    f    varchar2(3)
      8  );
    Table created.
    SQL> --
    SQL> desc t
    Name                                      Null?    Type
    A                                                  CHAR(3 CHAR)
    B                                                  CHAR(3)
    C                                                  CHAR(3 CHAR)      <= why does it show "CHAR" ? isn't "BYTE" semantics the default i.e. CHAR(3) = CHAR(3 BYTE) ?
    D                                                  VARCHAR2(3 CHAR)
    E                                                  VARCHAR2(3)
    F                                                  VARCHAR2(3 CHAR)  <= same here; this should be VARCHAR2(3)
    SQL> --
    SQL> select table_name,
      2         column_name,
      3         data_type,
      4         data_length,
      5         data_precision,
      6         data_scale
      7    from user_tab_columns
      8   where table_name = 'T';
    TABLE_NAME   COLUMN_NAME  DATA_TYPE  DATA_LENGTH DATA_PRECISION DATA_SCALE
    T            A            CHAR                12                               <= why 12 and not 3 ? why multiply by 4 ?
    T            B            CHAR                 3
    T            C            CHAR                12                               <= same here
    T            D            VARCHAR2            12                               <= and here
    T            E            VARCHAR2             3
    T            F            VARCHAR2            12                               <= and here
    6 rows selected.
    SQL>
    SQL>
    I believe it multiplies the size by 4, because it shows 16 in user_tab_columns when the size is changed to 4.
    When I try this on 11g R1 server, it looks good -
    Database Server - Oracle 11g R1
    Database Client - Oracle 11g R1 (11.1.0.6.0 EE)
    Client OS - Win XP
    SQL>
    SQL> @ver
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    5 rows selected.
    SQL> --
    SQL> drop table t;
    Table dropped.
    SQL> create table t (
      2    a    char(3 char),
      3    b    char(3 byte),
      4    c    char(3),
      5    d    varchar2(3 char),
      6    e    varchar2(3 byte),
      7    f    varchar2(3)
      8  );
    Table created.
    SQL> --
    SQL> desc t
    Name                                      Null?    Type
    A                                                  CHAR(3 CHAR)
    B                                                  CHAR(3)
    C                                                  CHAR(3)
    D                                                  VARCHAR2(3 CHAR)
    E                                                  VARCHAR2(3)
    F                                                  VARCHAR2(3)
    SQL> --
    SQL> select table_name,
      2         column_name,
      3         data_type,
      4         data_length,
      5         data_precision,
      6         data_scale
      7    from user_tab_columns
      8   where table_name = 'T';
    TABLE_NAME   COLUMN_NAME  DATA_TYPE    DATA_LENGTH DATA_PRECISION DATA_SCALE
    T            A            CHAR                   3
    T            B            CHAR                   3
    T            C            CHAR                   3
    T            D            VARCHAR2               3
    T            E            VARCHAR2               3
    T            F            VARCHAR2               3
    6 rows selected.
    SQL>
    SQL>
    Is it a known bug? Unfortunately, I do not have access to Metalink.
    Thanks,
    isotope

    Anurag Tibrewal wrote:
    It is just because you have different NLS_LENGTH_SEMANTICS in v$nls_parameters for the two databases. It is BYTE in R10 and CHAR in R11.
    I cannot query v$nls_parameters in the 10g database. I tried this testcase with the ALTER SESSION and checked nls_session_parameters in both 10g and 11g. The client is 11g in each case.
    The DESCRIBE output looks OK, but user_tab_columns shows size*4.
    Testcase -
    cl scr
    select * from v$version;
    -- Try CHAR semantics
    alter session set nls_length_semantics=char;
    select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
    drop table t;
    create table t (
      a    char(3 char),
      b    char(3 byte),
      c    char(3),
      d    varchar2(3 char),
      e    varchar2(3 byte),
    f    varchar2(3)
    );
    desc t
    select table_name,
           column_name,
           data_type,
           data_length,
           data_precision,
           data_scale
      from user_tab_columns
    where table_name = 'T';
    -- Try BYTE semantics
    alter session set nls_length_semantics=byte;
    select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
    drop table t;
    create table t (
      a    char(3 char),
      b    char(3 byte),
      c    char(3),
      d    varchar2(3 char),
      e    varchar2(3 byte),
    f    varchar2(3)
    );
    desc t
    select table_name,
           column_name,
           data_type,
           data_length,
           data_precision,
           data_scale
      from user_tab_columns
    where table_name = 'T';
    In 10g R2 server -
    SQL>
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL>
    SQL> -- Try CHAR semantics
    SQL> alter session set nls_length_semantics=char;
    Session altered.
    SQL> select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
    PARAMETER            VALUE
    NLS_LENGTH_SEMANTICS CHAR
    SQL> --
    SQL> drop table t;
    Table dropped.
    SQL> create table t (
      2    a    char(3 char),
      3    b    char(3 byte),
      4    c    char(3),
      5    d    varchar2(3 char),
      6    e    varchar2(3 byte),
      7    f    varchar2(3)
      8  );
    Table created.
    SQL> --
    SQL> desc t
    Name                                            Null?    Type
    A                                                        CHAR(3)
    B                                                        CHAR(3 BYTE)
    C                                                        CHAR(3)
    D                                                        VARCHAR2(3)
    E                                                        VARCHAR2(3 BYTE)
    F                                                        VARCHAR2(3)
    SQL> --
    SQL> select table_name,
      2         column_name,
      3         data_type,
      4         data_length,
      5         data_precision,
      6         data_scale
      7    from user_tab_columns
      8   where table_name = 'T';
    TABLE_NAME        COLUMN_NAME  DATA_TYPE    DATA_LENGTH DATA_PRECISION DATA_SCALE
    T                 A            CHAR                  12      <==
    T                 B            CHAR                   3
    T                 C            CHAR                  12      <==
    T                 D            VARCHAR2              12      <==
    T                 E            VARCHAR2               3
    T                 F            VARCHAR2              12      <==
    6 rows selected.
    SQL>
    SQL> -- Try BYTE semantics
    SQL> alter session set nls_length_semantics=byte;
    Session altered.
    SQL> select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
    PARAMETER            VALUE
    NLS_LENGTH_SEMANTICS BYTE
    SQL> --
    SQL> drop table t;
    Table dropped.
    SQL> create table t (
      2    a    char(3 char),
      3    b    char(3 byte),
      4    c    char(3),
      5    d    varchar2(3 char),
      6    e    varchar2(3 byte),
      7    f    varchar2(3)
      8  );
    Table created.
    SQL> --
    SQL> desc t
    Name                                            Null?    Type
    A                                                        CHAR(3 CHAR)
    B                                                        CHAR(3)
    C                                                        CHAR(3)
    D                                                        VARCHAR2(3 CHAR)
    E                                                        VARCHAR2(3)
    F                                                        VARCHAR2(3)
    SQL> --
    SQL> select table_name,
      2         column_name,
      3         data_type,
      4         data_length,
      5         data_precision,
      6         data_scale
      7    from user_tab_columns
      8   where table_name = 'T';
    TABLE_NAME        COLUMN_NAME  DATA_TYPE    DATA_LENGTH DATA_PRECISION DATA_SCALE
    T                 A            CHAR                  12    <==
    T                 B            CHAR                   3
    T                 C            CHAR                   3
    T                 D            VARCHAR2              12   <==
    T                 E            VARCHAR2               3
    T                 F            VARCHAR2               3
    6 rows selected.
    SQL>
    SQL>
    In 11g R1 server -
    SQL>
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    5 rows selected.
    SQL>
    SQL> -- Try CHAR semantics
    SQL> alter session set nls_length_semantics=char;
    Session altered.
    SQL> select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
    PARAMETER                      VALUE
    NLS_LENGTH_SEMANTICS           CHAR
    1 row selected.
    SQL> --
    SQL> drop table t;
    Table dropped.
    SQL> create table t (
      2    a    char(3 char),
      3    b    char(3 byte),
      4    c    char(3),
      5    d    varchar2(3 char),
      6    e    varchar2(3 byte),
      7    f    varchar2(3)
      8  );
    Table created.
    SQL> --
    SQL> desc t
    Name                                      Null?    Type
    A                                                  CHAR(3)
    B                                                  CHAR(3 BYTE)
    C                                                  CHAR(3)
    D                                                  VARCHAR2(3)
    E                                                  VARCHAR2(3 BYTE)
    F                                                  VARCHAR2(3)
    SQL> --
    SQL> select table_name,
      2         column_name,
      3         data_type,
      4         data_length,
      5         data_precision,
      6         data_scale
      7    from user_tab_columns
      8   where table_name = 'T';
    TABLE_NAME   COLUMN_NAME  DATA_TYPE    DATA_LENGTH DATA_PRECISION DATA_SCALE
    T            A            CHAR                   3
    T            B            CHAR                   3
    T            C            CHAR                   3
    T            D            VARCHAR2               3
    T            E            VARCHAR2               3
    T            F            VARCHAR2               3
    6 rows selected.
    SQL>
    SQL> -- Try BYTE semantics
    SQL> alter session set nls_length_semantics=byte;
    Session altered.
    SQL> select * from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
    PARAMETER                      VALUE
    NLS_LENGTH_SEMANTICS           BYTE
    1 row selected.
    SQL> --
    SQL> drop table t;
    Table dropped.
    SQL> create table t (
      2    a    char(3 char),
      3    b    char(3 byte),
      4    c    char(3),
      5    d    varchar2(3 char),
      6    e    varchar2(3 byte),
      7    f    varchar2(3)
      8  );
    Table created.
    SQL> --
    SQL> desc t
    Name                                      Null?    Type
    A                                                  CHAR(3 CHAR)
    B                                                  CHAR(3)
    C                                                  CHAR(3)
    D                                                  VARCHAR2(3 CHAR)
    E                                                  VARCHAR2(3)
    F                                                  VARCHAR2(3)
    SQL> --
    SQL> select table_name,
      2         column_name,
      3         data_type,
      4         data_length,
      5         data_precision,
      6         data_scale
      7    from user_tab_columns
      8   where table_name = 'T';
    TABLE_NAME   COLUMN_NAME  DATA_TYPE    DATA_LENGTH DATA_PRECISION DATA_SCALE
    T            A            CHAR                   3
    T            B            CHAR                   3
    T            C            CHAR                   3
    T            D            VARCHAR2               3
    T            E            VARCHAR2               3
    T            F            VARCHAR2               3
    6 rows selected.
    SQL>
    SQL>
    isotope
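    For what it's worth, the data dictionary can show both pieces of information at once: DATA_LENGTH is documented as the column width in bytes (for a character-semantics column that is the declared length times the maximum bytes per character of the database character set, e.g. 4 for AL32UTF8, which would explain the size*4 above), while CHAR_LENGTH and CHAR_USED ('B' or 'C') report the declared length and semantics directly. A small JDBC check along these lines (connection is assumed to be an open java.sql.Connection):
    // Sketch: CHAR_LENGTH/CHAR_USED give the declared length and semantics,
    // independent of the byte width reported in DATA_LENGTH.
    Statement statement = connection.createStatement();
    ResultSet rs = statement.executeQuery(
        "SELECT column_name, data_type, data_length, char_length, char_used " +
        "FROM user_tab_columns WHERE table_name = 'T'");
    while (rs.next())
    {
        System.out.println(rs.getString(1) + " " + rs.getString(2) +
            "  data_length=" + rs.getInt(3) +
            "  char_length=" + rs.getInt(4) +
            "  char_used=" + rs.getString(5));
    }
    rs.close();
    statement.close();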

  • What does this error means? "Line 20 column 57: character content of element "language" invalid; must be a string with length equal to 3 (actual length was 7) at XPath /package/book/metadata/languages/language" "

    Hi there.
    I am about to publish a book in English and Chinese.
    What does this error means?
    Line 20 column 57: character content of element "language" invalid; must be a string with length equal to 3 (actual length was 7) at XPath /package/book/metadata/languages/language"
    And where is line 20, column 57?
    Thanks folks!

    Go into iTunes Producer and select from the dropdown, don't type.
    cs
    iBooks Author Guide

  • SQL Loader Multibyte character error, LENGTH SEMANTICS CHARACTER

    Hi,
    I started the thread "SQL Loader Multibyte character error"
    {thread:id=2340726}
    Some mod locked the thread, why?
    the solution for others:
    add LENGTH SEMANTICS CHARACTER to the control file:
    LOAD DATA characterset UTF8 LENGTH SEMANTICS CHARACTER
    TRUNCATE
    INTO TABLE utf8file_to_we8mswin1252
    (
      ID    CHAR(1)
    , TEXT  CHAR(40)
    )
    Regards
    Michael

    Hi Werner,
    on my linux desktop:
    $ file test.dat
    test.dat: UTF-8 Unicode text, with very long lines
    my colleague is working on a windows system.
    On both systems exact the same error from SQL Loader.
    Btw, I tried with different numbers of special characters (German umlauts and euro) and there is no chance to load without the error
    when there are too many (?) special characters, or when the data is as long as the column length and special characters are included.
    Regards
    Michael

  • JDeveloper, JPA named query String parameter with length of 1

    Hi,
    I use JDeveloper 11.1.1.2.0. and have the following table:
    CREATE SEQUENCE COUNTRY_SEQ;
    CREATE TABLE COUNTRY (
      COUNTRY_ID NUMBER NOT NULL,
      COUNTRY_NAME VARCHAR2(40),
      COUNTRY_CODE CHAR(2) NOT NULL,
      CONSTRAINT COUNTRY_ID_PK PRIMARY KEY (COUNTRY_ID)
    );
    INSERT INTO COUNTRY VALUES (COUNTRY_SEQ.NEXTVAL, 'Belgium', 'B');
    INSERT INTO COUNTRY VALUES (COUNTRY_SEQ.NEXTVAL, 'Netherlands', 'NL');
    COMMIT;
    I made a JPA Entity with two queries:
    @NamedQueries({
      @NamedQuery(name = "Country.findAll", query = "select o from Country o"),
      @NamedQuery(name = "Country.findByCountryCode", query = "select o from Country o where o.countryCode = 'B' or o.countryCode ='NL'")
    })
    The first works fine, gives back both B and NL. But the second gives back only NL. I have no clue why.
    If I change 'B' to 'BE' both in the DB and the code, it works. If I change to 'b', it doesn't. So it seems that the problem comes with 1 char long Strings.
    Is it an EclipseLink bug?
    I use Oracle Database 11g Enterprise Edition Release 11.1.0.7.0. But
    SELECT COUNTRY_ID, COUNTRY_NAME, COUNTRY_CODE FROM COUNTRY WHERE COUNTRY_CODE = 'B' OR COUNTRY_CODE = 'NL';
    works in SQL*Plus.
    Thx, Donat

    Hi,
    That's true, it uses value bindings while the values are hard coded. I use the default EclipseLink embedded into JDeveloper 11.1.1.2, with the default settings. The JPA bean runs in the default embedded WebLogic of JDeveloper. This might be a default setting of EclipseLink to use bindings for hard coded values.
    Here is a fragment of Country.java:
    @Entity
    @NamedQueries({
        //@NamedQuery(name = "Country.findAll", query = "select o from Country o")
        @NamedQuery(name = "Country.findAll", query = "select o from Country o where o.countryCode ='B' or o.countryCode ='NL'")
    })
    public class Country implements Serializable {
        @Id
        @Column(name="COUNTRY_ID", nullable = false)
        private Long countryId;
        @Column(name="COUNTRY_CODE", nullable = false)
        private String countryCode;
        @Column(name="COUNTRY_NAME", length = 40)
        private String countryName;
        ...
    And I have a JavaServiceFacade.java:
    public class JavaServiceFacade {
        private EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("EjbModel-1-Outside");

        public JavaServiceFacade() {
        }

        public static void main(String[] args) {
            final JavaServiceFacade javaServiceFacade = new JavaServiceFacade();
            List<Country> countries = javaServiceFacade.getCountryFindAll();
            for (Country country : countries) {
                System.out.println(country.getCountryCode());
            }
        }

        private EntityManager getEntityManager() {
            return emf.createEntityManager();
        }

        /** <code>select o from Country o</code> */
        public List<Country> getCountryFindAll() {
            return getEntityManager().createNamedQuery("Country.findAll").getResultList();
        }
    }
    BR, Donat
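    A likely explanation, given that the literals are being bound as parameters as noted above, is the CHAR(2) column: 'B' is stored blank-padded as 'B ', and once the value is bound as a VARCHAR parameter the comparison is no longer done with blank-padded semantics, so only 'NL', which fills the column exactly, still matches. One hedged workaround is to trim the column in the JPQL query rather than rely on the CHAR padding, for example (this annotation would replace the findByCountryCode query inside @NamedQueries):
    // Sketch: compare against the trimmed column value so the blank padding of
    // one-character codes in the CHAR(2) column no longer matters.
    @NamedQuery(
        name  = "Country.findByCountryCode",
        query = "select o from Country o " +
                "where trim(trailing from o.countryCode) = 'B' " +
                "   or trim(trailing from o.countryCode) = 'NL'")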

  • How to determine column length semantics through ANSI Dynamic SQL ?

    I am looking for a way to determine the length semantics used for a column through ANSI Dynamic SQL.
    I have a database with NLS_CHARACTERSET=AL32UTF8.
    In this database I have the following table:
    T1(C1 varchar2(10 char), C2 varchar2(40 byte))
    When I describe this table in SQL*Plus, I get:
    C1 VARCHAR2(10 CHAR)
    C2 VARCHAR2(40)
    In my Pro*C program (mode=ansi), I get the select statement on input, use the PREPARE method to prepare it and then use the GET DESCRIPTOR method to obtain column information for output:
    GET DESCRIPTOR 'output_descriptor' VALUE :col_num
    :name = NAME, :type = TYPE,
    :length = LENGTH, :octet_length = OCTET_LENGTH
    For both C1 and C2 I get the following:
    :type=12
    :length=40
    :octet_length=40
    So, even if I know that my database is AL32UTF8, there doesn't seem to be a way for me to determine whether char or byte length semantics were used in C1 and C2 column definitions.
    Does anybody know how I can obtain this information through ANSI Dynamic SQL?
    Note: the use of system views such as ALL_TAB_COLUMNS is not an option, since we wish to obtain this information even for columns in a complex select statements which may involve multiple tables.
    Note: I believe OCI provides the information that we need through OCI_ATTR_DATA_SIZE (which is in bytes) and OCI_ATTR_CHAR_SIZE (which is in chars). However, switching to OCI is something we would like to avoid at this point.

    Yes, I was wondering which forum would be the best for my question. I see similar questions in various forums, Call Interface, SQL and PL/SQL and Database - General. Unfortunately there is no Pro*C or Dynamic SQL forum which would be my first choice for posting this question.
    Anyway I now posted the same question (same subject) in the Call Interface forum, so hopefully I'll get some answers there.
    Thank you for the suggestion.

  • Sql query slowness due to rank and columns with null values:

        
    I have the following table in the database with around 10 million records:
    Declaration:
    create table PropertyOwners (
      [Key] int not null primary key,
      PropertyKey int not null,
      BoughtDate DateTime,
      OwnerKey int null,
      GroupKey int null
    )
    go
    [Key] is the primary key and the combination of PropertyKey, BoughtDate, OwnerKey and GroupKey is unique.
    With the following index:
    CREATE NONCLUSTERED INDEX [IX_PropertyOwners] ON [dbo].[PropertyOwners]
    (
      [PropertyKey] ASC,
      [BoughtDate] DESC,
      [OwnerKey] DESC,
      [GroupKey] DESC
    )
    go
    Description of the case:
    For a single BoughtDate one property can belong to multiple owners or a single group; for a single record there can be either an OwnerKey or a GroupKey but not both, so one of them will be null in each record. I am trying to retrieve the data from the table using the
    following query for the OwnerKey. If there are rows for the same property for both owners and a group at the same time, then the rows having an OwnerKey are preferred, which is why I am using "OwnerKey desc" in the RANK function.
    declare @ownerKey int = 40000   
    select PropertyKey, BoughtDate, OwnerKey, GroupKey   
    from (    
    select PropertyKey, BoughtDate, OwnerKey, GroupKey,       
    RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]   
    from PropertyOwners   
    ) as result   
    where result.[Rank]=1 and result.[OwnerKey]=@ownerKey
    It is taking 2-3 seconds to get the records, which is too slow; it takes a similar time when I try to get the records using the GroupKey. But when I tried to get the records for the PropertyKey with the same query, it executed in 10 milliseconds.
    Maybe the slowness is because OwnerKey/GroupKey in the table can be null and SQL Server is unable to index them. I have also tried to use an indexed view to pre-rank them, but I can't use it in my query as the RANK function is not supported in indexed views.
    Please note this table is updated once a day and we are using SQL Server 2008 R2. Any help will be greatly appreciated.

    create table #result (PropertyKey int not null, BoughtDate datetime, OwnerKey int null, GroupKey int null, [Rank] int not null)
    create index idx ON #result(OwnerKey, [Rank])
    insert into #result(PropertyKey, BoughtDate, OwnerKey, GroupKey, [Rank])
    select PropertyKey, BoughtDate, OwnerKey, GroupKey,
    RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
    from PropertyOwners
    go
    declare @ownerKey int = 1
    select PropertyKey, BoughtDate, OwnerKey, GroupKey
    from #result as result
    where result.[Rank]=1
    and result.[OwnerKey]=@ownerKey
    go
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
