Bug: nls_length_semantics & getColumns()

Hello all,
When setting nls_length_semantics to CHAR on a UTF8 instance, getColumns() reports the column size in bytes rather than in characters. According to the JDBC documentation, getColumns() should report the size in characters:
"For char or date types this is the maximum number of characters, for numeric or decimal types this is precision."
(from http://java.sun.com/j2se/1.4.2/docs/api/java/sql/DatabaseMetaData.html#getColumns(java.lang.String,%20java.lang.String,%20java.lang.String,%20java.lang.String))
From that I would expect to receive the value 10 for a VARCHAR2(10) column in a UTF8 database with nls_length_semantics = CHAR, but I actually receive the value 30!
This is with Oracle 9iR2 and the current OJDBC14.jar
Any plans to fix this?
Kind regards
Thomas

With SQL*Plus, you can also try:
SQL> show parameter nls_le
NAME                                 TYPE        VALUE
nls_length_semantics                 string      BYTE
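A possible cross-check and workaround is to read ALL_TAB_COLUMNS.CHAR_LENGTH from the data dictionary, which is always expressed in characters, instead of trusting getColumns(). The sketch below is mine, not an official fix: the class and method names are made up, the divisor of 3 assumes the UTF8 character set's maximum byte width per character (which matches the 10 → 30 expansion above), and try-with-resources is used for brevity even though ojdbc14-era code would need explicit close() calls.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CharSemanticsWorkaround {

    // Derive a character length from a byte-based COLUMN_SIZE, given the
    // maximum byte width of the database character set (3 for Oracle's
    // UTF8 character set, hence 30 / 3 = 10 for a VARCHAR2(10 CHAR)).
    public static int charLength(int byteSize, int maxBytesPerChar) {
        return byteSize / maxBytesPerChar;
    }

    // Workaround: query CHAR_LENGTH from the data dictionary directly
    // instead of relying on DatabaseMetaData.getColumns().
    public static int charLengthFromDictionary(Connection conn, String owner,
            String table, String column) throws SQLException {
        String sql = "SELECT char_length FROM all_tab_columns"
                   + " WHERE owner = ? AND table_name = ? AND column_name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, owner);
            ps.setString(2, table);
            ps.setString(3, column);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}
```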

Similar Messages

  • DatabaseMetaData.getColumns(...) returns an invalid size of NVARCHAR2

    Hi,
    I have the following problem.
    I'm using ojdbc14.jar version 10.2.0.2.0
    I'm trying to read table metadata from the database using DatabaseMetaData.getColumns(...). When I read the size of an NVARCHAR2 column (using either COLUMN_SIZE or CHAR_OCTET_LENGTH), it returns double the maximum length, as if it were expressed in bytes!
    The Javadoc says: COLUMN_SIZE int => column size. For char or date types this is the maximum number of characters, for numeric or decimal types this is precision.
    Does anyone have the same problem?
    Does anyone know if there is an open bug on this topic?
    I will really appreciate your help
    Thanks
    ArielUBA

    Hi Ashok,
    Thanks for your answer.
    I tried changing the NLS_LENGTH_SEMANTICS parameter, and unfortunately it was unsuccessful :-(
    I'm using the NVARCHAR2 column type, and http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams127.htm says the following:
    NCHAR, NVARCHAR2, CLOB, and NCLOB columns are always character-based
    I also tried (in a simple test case) the newest available driver, 11.1.0.6.0 (for production I need ojdbc14.jar), and I got the same result.
    PreparedStatement smt = connection.prepareStatement("ALTER SESSION SET NLS_LENGTH_SEMANTICS=CHAR");
    smt.execute();
    connection.commit();
    DatabaseMetaData metaData = connection.getMetaData();
    ResultSet columns = metaData.getColumns(null, "ENG_AAM_572_STD_1E", "PPROCINSTANCE", "V_LOAN_NUMBER");
    dispVerticalResultSet(columns);
    The column length of V_LOAN_NUMBER is 31 characters, and the result was:
    TABLE_CAT=null
    TABLE_SCHEM=ENG_AAM_572_STD_1E
    TABLE_NAME=PPROCINSTANCE
    COLUMN_NAME=V_LOAN_NUMBER
    DATA_TYPE=1111
    TYPE_NAME=NVARCHAR2
    COLUMN_SIZE=62
    BUFFER_LENGTH=0
    DECIMAL_DIGITS=null
    NUM_PREC_RADIX=10
    NULLABLE=1
    REMARKS=null
    COLUMN_DEF=NULL
    SQL_DATA_TYPE=0
    SQL_DATETIME_SUB=0
    CHAR_OCTET_LENGTH=62
    ORDINAL_POSITION=32
    IS_NULLABLE=YES
    Are you sure that I'm dealing with the bug 4485954?
    Do you know if there is a workaround?
    Thanks in advance for your time
    ArielUBA
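    For what it's worth, the factor of two matches the national character set: AL16UTF16 stores every BMP character in exactly 2 bytes, so a 31-character NVARCHAR2 column is reported as 62 when the driver returns bytes. A quick sketch of that arithmetic in plain Java (nothing Oracle-specific; note String.repeat needs Java 11+, unlike the environments in this thread):

    ```java
    import java.nio.charset.StandardCharsets;

    public class NcharWidthDemo {
        public static void main(String[] args) {
            // AL16UTF16 behaves like UTF-16BE for BMP characters:
            // every character occupies exactly 2 bytes.
            String sample = "L".repeat(31); // stand-in for a 31-char loan number
            int bytes = sample.getBytes(StandardCharsets.UTF_16BE).length;
            System.out.println(bytes); // prints 62
        }
    }
    ```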

  • NLS_LENGTH_SEMANTICS and work tables in KMs

    Hi everybody,
    I'm working on an interface that uses an 11g-based work area. A quick query on v$nls_parameters returns:
    NLS_CHARACTERSET: AL32UTF8
    NLS_LENGTH_SEMANTICS: BYTE
    Because of this, every time a KM creates a "$" work table, CHAR and VARCHAR2 column lengths are implicitly defined in bytes.
    Obviously, when a 30-character string (the source datastores are on DB2) exceeds that 30-byte target length, the execution fails.
    Is there any way to avoid this? I've tried adding an "ALTER SESSION ..." step to the "LKM SQL to Oracle" KM I'm using, but from what I understand, every action it performs on the database lives in its own session.
    I'd prefer to avoid ALTER SYSTEM, as the 11g instance has other apps/products installed.
    Sorry for my bad English, and thanks in advance for any suggestion...

    Sutirtha, your post was sort of inspiring!
    I've found a partial solution to my problem by changing in Topology Manager the CHAR and VARCHAR2 datatypes implementation for the Oracle technology:
    !http://www.abload.de/img/odi-datatype-implementu55n.png!
    Now only a little annoyance remains, caused by a known bug (4485954) in the Oracle JDBC driver: when I reverse-engineer tables from a UTF8 physical schema, ALL_TAB_COLUMNS.COLUMN_LENGTH is used instead of ALL_TAB_COLUMNS.CHAR_LENGTH.
    I should try a newer JDBC .jar, but IIRC that bug is resolved only in the db 11g release, which ships JDK 1.5/1.6 drivers that ODI (being 1.4-based) cannot use...
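    Since NLS_LENGTH_SEMANTICS is a session-level setting, an ALTER SESSION only helps if it runs in the very session/connection that creates the work table; declaring the semantics explicitly in the DDL sidesteps the session entirely. A sketch (the table and column names are made-up examples, not ODI-generated names):

    ```sql
    -- Only affects DDL issued later in this same session/connection:
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;

    -- Or make the semantics explicit per column, independent of any session setting:
    CREATE TABLE C$_SAMPLE (
      col1 VARCHAR2(30 CHAR)  -- 30 characters, not 30 bytes
    );
    ```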

  • JPA Metadata issue/ Weird Column error or bug in openjpa ...?

    Hi All,
    I am getting the following exception when using openjpa in my project.
    The line of code that throws the error is also mentioned
    The latter error shows a column-mismatch error, which I am not sure is correct, since the DB I am using is Oracle 10g and the datatype of the column is VARCHAR2.
             BcsPort bcsPort=em.find(BcsPort .class, port);   //Error after this call.
    EJB Exception: : <openjpa-1.1.1-SNAPSHOT-r422266:965591 fatal user error> org.apache.openjpa.persistence.ArgumentException: Errors encountered while resolving metadata.  See nested exceptions for details.
            at org.apache.openjpa.meta.MetaDataRepository.resolve(MetaDataRepository.java:567)
            at org.apache.openjpa.meta.MetaDataRepository.getMetaData(MetaDataRepository.java:308)
            at org.apache.openjpa.kernel.BrokerImpl.newObjectId(BrokerImpl.java:1121)
            at org.apache.openjpa.kernel.DelegatingBroker.newObjectId(DelegatingBroker.java:268)
            at org.apache.openjpa.persistence.EntityManagerImpl.find(EntityManagerImpl.java:451)
            at sun.reflect.GeneratedMethodAccessor472.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at weblogic.deployment.BasePersistenceContextProxyImpl.invoke(BasePersistenceContextProxyImpl.java:93)
            at weblogic.deployment.TransactionalEntityManagerProxyImpl.invoke(TransactionalEntityManagerProxyImpl.java:91)
            at weblogic.deployment.BasePersistenceContextProxyImpl.invoke(BasePersistenceContextProxyImpl.java:80)
            at weblogic.deployment.TransactionalEntityManagerProxyImpl.invoke(TransactionalEntityManagerProxyImpl.java:26)
            at $Proxy76.find(Unknown Source)
           ... and so on. Followed by this as the cause, from what I can make out:
    Caused by: <openjpa-1.1.1-SNAPSHOT-r422266:965591 fatal user error> org.apache.openjpa.persistence.ArgumentException: "
    com.test.domain.BcsNe.ne" declares a column that is not compatible with the expected type "varchar".  Column details:
    Full Name: bcsne.ne
    Type: blob
    Size: 0
    Default: null
    Not Null: false
            at org.apache.openjpa.jdbc.meta.MappingInfo.mergeColumn(MappingInfo.java:660)
            at org.apache.openjpa.jdbc.meta.MappingInfo.createColumns(MappingInfo.java:518)
            at org.apache.openjpa.jdbc.meta.ValueMappingInfo.getColumns(ValueMappingInfo.java:143)
            at org.apache.openjpa.jdbc.meta.strats.StringFieldStrategy.map(StringFieldStrategy.java:79)
            at org.apache.openjpa.jdbc.meta.FieldMapping.setStrategy(FieldMapping.java:120)
            at org.apache.openjpa.jdbc.meta.RuntimeStrategyInstaller.installStrategy(RuntimeStrategyInstaller.java:80)
            at org.apache.openjpa.jdbc.meta.FieldMapping.resolveMapping(FieldMapping.java:438)
            at org.apache.openjpa.jdbc.meta.FieldMapping.resolve(FieldMapping.java:403)
            at org.apache.openjpa.jdbc.meta.ClassMapping.resolveNonRelationMappings(ClassMapping.java:834)
            at org.apache.openjpa.jdbc.meta.MappingRepository.prepareMapping(MappingRepository.java:324)
            at org.apache.openjpa.meta.MetaDataRepository.preMapping(MetaDataRepository.java:667)
            at org.apache.openjpa.meta.MetaDataRepository.resolve(MetaDataRepository.java:549)
            ... 78 more
    I have seen this link: https://issues.apache.org/jira/browse/OPENJPA-1481
    Can anyone help me out on this? I cannot make out whether this is a problem in OpenJPA or in the WebLogic Server 10.0 instance that I am using to make the call.
    Any input on this is highly appreciated.

    gimbal2 wrote:
    That link seems to deal with a bug relating to a one to many mapping. I don't see anywhere in your post that you are dealing with the same thing.
    Actually, I am using a many-to-one mapping, as you can see from the code excerpt for BcsPort:
    @ManyToOne(optional=false, cascade=CascadeType.ALL, fetch=FetchType.EAGER)
         @JoinColumn(name="ne",referencedColumnName="ne")
         private BcsNe bcsNe;
    Why I posted the link is that it seems to throw the same weird column exception that I got, and more googling revealed that it was indeed some issue in the way the Oracle VARCHAR2 field is handled by OpenJPA.
    So I just needed to confirm whether this is a widely faced issue, and whether there is a fix for it.
    Is there a particular reason why you are using OpenJPA?
    Actually, it was added to the system recently; before that it was using plain JDBC code...
    Instead of the persistence provider shipped with your JEE container anyway?
    Do you mean that I should use the Oracle WebLogic 10.3 persistence provider...?
    Kindly suggest...
    thanks

  • JDBC BUG WITH SYNONYM TABLES AND DB-LINK

    I have created a database link within my schema.
    Then I created a synonym for a table in the linked database.
    When using metadata.getColumns() on the synonym table, no columns are returned.
    When using metadata.getTables(), the synonym table is included in the list.
    Driver: Oracle 8.1.5 Thin
    Is this a known bug? Are any workarounds available?
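    Later Oracle thin drivers expose a connection property, includeSynonyms, that makes DatabaseMetaData.getColumns() resolve synonyms; whether the 8.1.5 driver honors it I can't say, so treat the following as a sketch (the helper class and method names are mine):

    ```java
    import java.util.Properties;

    public class SynonymMetadataProps {
        // Builds connection properties carrying the Oracle JDBC flag that
        // makes getColumns() report columns for synonyms as well.
        public static Properties withSynonyms(String user, String password) {
            Properties props = new Properties();
            props.setProperty("user", user);
            props.setProperty("password", password);
            // Without this, getColumns() can return no rows for a synonym
            // even though getTables() lists it.
            props.setProperty("includeSynonyms", "true");
            return props;
        }
    }
    ```

    Pass the resulting Properties to DriverManager.getConnection(url, props).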

    Hi together,
    sorry for my delayed answer, I was ill.
    I created my scenario, but it doesn't work.
    This is my input:
    <?xml version="1.0" encoding="utf-8" ?>
    <ns:SAKRJOIN.resultSet xmlns:ns="urn:sap.com:jdbcAdapter">
    <row>
    <MANDT>100</MANDT>
    <LIFNR>146128</LIFNR>
    <BUKRS>41</BUKRS>
    </row>
    <row>
    <MANDT>100</MANDT>
    <LIFNR>146128</LIFNR>
    <BUKRS>42</BUKRS>
    </row>
    <row>
    <MANDT>100</MANDT>
    <LIFNR>146129</LIFNR>
    <BUKRS>42</BUKRS>
    </row>
    </ns:SAKRJOIN.resultSet>
    This is my target:
    IDOC Element CREMAS.CREMAS05 1..unbounded
    BEGIN Attribut xsd:string required
    EDI_DC40 Element EDI_DC40.CREMAS.CREMAS05 1
    E1LFA1M Element CREMAS05.E1LFA1M 1
    SEGMENT Attribut xsd:string required
    MSGFN Element xsd:string 0..1 maxLength="3"
    LIFNR Element xsd:string 0..1 maxLength="10"
    ANRED Element xsd:string 0..1 maxLength="15"
    If I map the field row to the target field IDOC, I create three IDOCs, but I want only one IDOC per <LIFNR>. In this example I want to see two IDOCs: one for 146128 and one for 146129.
    If I map <LIFNR> -> splitByValue (valueChanged) into the field IDOC, I get only one IDOC.
    What's wrong?
    I tried
    lifnr -> sort -> splitByValue(valueChanged) -> collapseContext -> IDOC
    Now I get three once again.
    Kind regards
    Wolfgang

  • Alter system set nls_length_semantics

    Hi all,
    my question concerns the scope in which a change of NLS_LENGTH_SEMANTICS can be performed.
    The 10gR2 documentation states only:
    "Modifiable      ALTER SESSION"
    But what about altering the system to make your own setting the default for all sessions? With which scope?
    I tried
    alter system set nls_length_semantics='CHAR';
    alter system set nls_length_semantics='CHAR' scope=spfile;
    alter system set nls_length_semantics='CHAR' scope=both;
    None of them had any real effect. Do I have to bounce the database?

    Hello,
    Do I have to bounce the database?
    Yes, you have to shut down and start up the database; otherwise the NLS_LENGTH_SEMANTICS change won't take effect.
    You may have more details on the following thread:
    nls_database_parameters->nls_length_semantics Help!
    There's also an interesting Note from MOS:
    Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
    It gives a lot of information about NLS_LENGTH_SEMANTICS and the following bug:
    Bug 1488174
    Problem: ALTER SYSTEM does not change the setting of NLS_LENGTH_SEMANTICS for the current and new (!) sessions.
    Workaround: Don't use ALTER SYSTEM SET NLS_LENGTH_SEMANTICS scope=both; instead, set NLS_LENGTH_SEMANTICS as an init.ora parameter, or issue ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR scope=spfile; and bounce the database.
    Hope this helps.
    Best regards,
    Jean-Valentin
    Edited by: Lubiez Jean-Valentin on May 27, 2010 2:06 PM
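    In SQL*Plus terms, the spfile route described above might look like this sketch:

    ```sql
    -- Change only the stored parameter file, then bounce the instance:
    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = CHAR SCOPE = SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP
    -- Verify the new default:
    SHOW PARAMETER nls_length_semantics
    ```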

  • Nls_database_parameters- nls_length_semantics Help!

    Hi all!
    I have to create a database with the parameter nls_length_semantics set to CHAR, but I don't know how to do it!
    On a new machine, where only the Oracle client (10) was installed, I installed the database software (10g); during the installation process I was unable to select the character set.
    Then I created the first instance, selected AL32UTF8 as the character set, and selected length semantics = CHAR.
    But select * from nls_database_parameters shows nls_length_semantics = BYTE (not OK),
    while
    select * from nls_instance_parameters shows nls_length_semantics = CHAR (OK).
    What can I do to set both to nls_length_semantics = CHAR?
    Yes, I will reinstall the database, but I do not understand what went wrong.
    Thank you for help
    Mario

    Execute:
    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR scope=both;
    And restart the database: although the scope is BOTH, the change takes effect only after a restart.
    But there is Bug 1488174: depending on the version, this ALTER SYSTEM can have no effect even after a restart. The only way to modify the parameter is then to change init.ora (create a pfile from the spfile, change the parameter, start up, create the spfile from the pfile, restart the database once more, and start with the spfile).
    And per Oracle's recommendation:
    +Do NOT set NLS_LENGTH_SEMANTICS=CHAR during database creation; create the database with NLS_LENGTH_SEMANTICS=BYTE (or unset) and set NLS_LENGTH_SEMANTICS=CHAR after the database creation.+
    For more details, refer to MetaLink note 144808.1.
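    The init.ora route described above, sketched in SQL*Plus (the pfile path is environment-specific and omitted here):

    ```sql
    CREATE PFILE FROM SPFILE;
    -- edit the generated init.ora and set: NLS_LENGTH_SEMANTICS = CHAR
    SHUTDOWN IMMEDIATE
    STARTUP PFILE = '...'       -- path to the edited init.ora
    CREATE SPFILE FROM PFILE;
    SHUTDOWN IMMEDIATE
    STARTUP                     -- restart once more, now on the spfile
    ```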

  • NLS_LENGTH_SEMANTICS settings

    Hi,
    Need a suggestion on below scenario.
    We are going to create a new database (11g) with AL32UTF8 as the database character set and AL16UTF16 as the national character set. Some table columns (fewer than 25) might need to store Chinese or Japanese characters. What approach should we follow for the NLS_LENGTH_SEMANTICS setting?
    Option 1: set NLS_LENGTH_SEMANTICS to CHAR at the DB level.
    Option 2: leave NLS_LENGTH_SEMANTICS at the Oracle default (i.e. BYTE) and, in the DDL for those tables, declare the respective columns with CHAR semantics, i.e. VARCHAR2(20 CHAR), etc.
    Which option is better?
    Regards,
    vara.

    See the MOS doc, please:
    Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
    Do not change this parameter at the DB level, even though NLS_LENGTH_SEMANTICS does not apply to tables in the SYS and SYSTEM schemas. (The data dictionary always uses byte semantics.)
    It is always easier to include an ALTER SESSION command before table creation in scripts if you don't want to use the varchar2(20 CHAR) syntax...
    But IMHO the best approach is to use the (20 CHAR) precision in scripts, to highlight that you are using char semantics.
    also check the following MOS doc for open bugs/issues about this parameter
    Init.ora Parameter "NLS_LENGTH_SEMANTICS" Reference Note [ID 153365.1]
    Edited by: Kecskemethy on Sep 16, 2011 5:21 AM
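    Option 2 with the explicit per-column syntax recommended above might look like this (table and column names are examples only):

    ```sql
    CREATE TABLE customer_names (
      name_local VARCHAR2(20 CHAR),  -- up to 20 characters, whatever their byte width
      name_ascii VARCHAR2(20 BYTE)   -- up to 20 bytes
    );
    ```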

  • NLS_LENGTH_SEMANTICS doesn't work ?

    Hi,
    (Thanks for people who helped me previously!)
    Oracle 8.1.7.3 to 9.2 migration through exp/imp.
    CSScanner shows a lot of truncation warnings. I've learned
    from the Oracle docs that I have to use
    CHAR semantics during import to avoid errors.
    My actions:
    1. Oracle 9i has NLS_LENGTH_SEMANTICS=BYTE.
    I begin the import and get a lot of errors:
    ORA-01401: inserted value too large for column
    OK!
    2. I change the semantics using
    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=CHAR
    3. Importing again brings a lot of the same errors:
    ORA-01401: inserted value too large for column
    Does anybody know what this means?
    Thanks in advance
    Viacheslav

    Hi Viacheslav,
    This is the current behaviour of export/import: they preserve the length semantics of the original data. The workaround is to pre-create the objects before the import.
    There is an enhancement request Bug 3026420 on this issue.
    Nat
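    Nat's workaround of pre-creating the objects with CHAR semantics before running the import might look like this sketch (table and column are examples; ignore=y then makes imp load data into the existing table instead of failing on CREATE):

    ```sql
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
    CREATE TABLE emp (
      ename VARCHAR2(10)  -- created as 10 CHAR under the session setting above
    );
    ```

    followed by something like: imp user/password file=export.dmp ignore=y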

  • Problem with getColumns() over Informix

    Hi All.
    I am using the getColumns() method to obtain all the columns in a table. It works fine except with one table, which has over 60 fields; I think that could be the trouble.
    Does anybody know if there is a limit to the getColumns() method?
    I'm using Informix JDBC driver 1.1.

    I have exactly the same problem using Middlegen, which makes a call to getColumns(...).
    Did you find a workaround for this bug? It would be very kind of you if you could post some information via this forum.
    We use :
    Database Product Name : INFORMIX-OnLine
    Database Product Version : 7.31.FD6
    Driver Name : Informix JDBC Driver for
    Informix Dynamic Server
    Driver Version : 2.20.JC1
    Best regards

  • BUG; jdk1.3; Gridcontrol; Attention JDeveloper Team

    I have found yet another bug in the implementation of gridcontrol and jdk1.3.
    I have the following renderer for my text fields.
    import javax.swing.table.*;
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;

    /**
     * A renderer class.
     * <P>
     * @author Linda B. Rawson
     */
    public class DefaultTextRenderer extends DefaultTableCellRenderer {

        /** Constructor */
        public DefaultTextRenderer() {
        }

        public Component getTableCellRendererComponent(JTable table, Object value,
                boolean isSelected, boolean hasFocus, int row, int column) {
            Component comp = super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);
            String myFlag = "";
            try {
                // this column index must correspond to the run flag column in the gridcontrol
                myFlag = table.getValueAt(row, 15).toString();
                myFlag = (myFlag == null) ? "" : myFlag;
                if (myFlag.equals("P")) {
                    comp.setBackground(Color.lightGray);
                } else if (myFlag.equals("S")) {
                    comp.setBackground(Color.yellow);
                } else if (myFlag.equals("F")) {
                    comp.setBackground(Color.red);
                } else {
                    comp.setBackground(Color.white);
                }
            } catch (NullPointerException e) {
                // This is to catch the null row in gridcontrol from the JDev bug
            }
            return comp;
        }
    }
    I implement the renderer like this:
    DefaultTextRenderer textRenderer = new DefaultTextRenderer();
    TableCellRenderer renderer;
    renderer = textRenderer;
    m_table = gridControl.getTable();
    m_table.getColumnModel().getColumn(1).setCellRenderer(renderer);
    In JDK 1.2 it colors the appropriate row just like I want.
    In JDK 1.3 it will not color the first occurrence of the row. In other words, if I want the first row in the grid colored, it fails.
    You need to test this in your upcoming version.
    Linda

    Hi,
    we have faced similar compatibility problems
    when testing our application with JDev 3.2
    and JDK 1.3 (the browser crashed).
    Since we moved to JDK 1.3.1 (it has already been released by Sun), this severe problem has vanished.
    I really share your concerns about the grid
    control in JDev 3.2.3, and I hope that we find
    a way to track the successful treatment of
    BUG 1806180!
    Have fun
    @i

  • Querying external tables: ERROR on non-English version of 10gR2 (BUG 5172459?)

    Hello
    I have a serious problem when trying to view the content of external tables under Oracle 10gR2 in Spanish.
    Steps to reproduce:
    1. Make a directory on the file system (on the Oracle server side).
    2. Copy a data file into this directory.
    3. Log in (sqlplus) as "sys as sysdba".
    4. Create an Oracle directory object.
    5. Grant read/write permissions to a user 'simple_user'.
    6. Log out as sys, and log in as 'simple_user'.
    7. Create an external table which uses the directory and data file.
    8. Run the query 'select * from myExtTable' to check it.
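    For reference, steps 4 through 8 might look like the following sketch (the directory path, file name, and column list are placeholders, not the poster's actual code):

    ```sql
    -- Step 4 (as SYS):
    CREATE OR REPLACE DIRECTORY ext_dir AS '/some/server/path';
    -- Step 5:
    GRANT READ, WRITE ON DIRECTORY ext_dir TO simple_user;
    -- Step 7 (as simple_user):
    CREATE TABLE myExtTable (
      col1 VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('data.txt')
    );
    -- Step 8:
    SELECT * FROM myExtTable;
    ```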
    I have repeated these steps on Oracle 9i Enterprise, Oracle 10gR2 Enterprise, and Oracle 10g XE, and it always, always worked perfectly (no problems).
    The problem occurs in the client's DB (Oracle Database 10g Enterprise Edition Release 10.2.0.2.0, SPANISH): everything works fine until step 8 (querying the external table), where the following error systematically occurs:
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-00552: internal XAD package failed to load
    ORA-06512: at "SYS.ORACLE_LOADER", line 19
    I have run many tests, such as assigning a wrong directory to the external table, removing the data file, and removing access permissions, and it always, always gives the same error, never a "file not found" error, etc.
    I have concluded that the failure occurs before Oracle even tries to access the file system, but I do not know what the cause may be.
    Searching the Internet, I found the following links:
    http://www.dba-oracle.com/t_ora_29913_external_table_error.htm
    http://zalbb.itpub.net/post/980/249423
    These mention BUG 5172459 (MetaLink Note 373168.1), but after following the directions, it still does not work.
    Can anyone help me with this problem?
    Thanks!
    Full details of the DB which gives the error:
    OS: Windows 2003 Server Standard SP1.
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Prod
    PL/SQL Release 10.2.0.2.0 - Production
    CORE 10.2.0.2.0 Production
    TNS for 32-bit Windows: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production
    show parameter nls;
    NAME TYPE VALUE
    nls_calendar string
    nls_comp string
    nls_currency string
    nls_date_format string
    nls_date_language string
    nls_dual_currency string
    nls_iso_currency string
    nls_language string SPANISH
    nls_length_semantics string BYTE
    nls_nchar_conv_excp string FALSE
    nls_numeric_characters string
    nls_sort string
    nls_territory string SPAIN
    nls_time_format string
    nls_timestamp_format string
    nls_timestamp_tz_format string
    nls_time_tz_format string
    -- NLS_SESSION_PARAMETERS
    select * from NLS_SESSION_PARAMETERS order by parameter;
    PARAMETER VALUE
    NLS_CALENDAR GREGORIAN
    NLS_COMP BINARY
    NLS_CURRENCY €
    NLS_DATE_FORMAT DD/MM/RR
    NLS_DATE_LANGUAGE SPANISH
    NLS_DUAL_CURRENCY €
    NLS_ISO_CURRENCY SPAIN
    NLS_LANGUAGE SPANISH
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NUMERIC_CHARACTERS ,.
    NLS_SORT SPANISH
    NLS_TERRITORY SPAIN
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT DD/MM/RR HH24:MI:SSXFF
    NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH24:MI:SSXFF TZR
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    -- NLS_INSTANCE_PARAMETERS
    select * from NLS_INSTANCE_PARAMETERS order by parameter;
    PARAMETER VALUE
    NLS_CALENDAR
    NLS_COMP
    NLS_CURRENCY
    NLS_DATE_FORMAT
    NLS_DATE_LANGUAGE
    NLS_DUAL_CURRENCY
    NLS_ISO_CURRENCY
    NLS_LANGUAGE SPANISH
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NUMERIC_CHARACTERS
    NLS_SORT
    NLS_TERRITORY SPAIN
    NLS_TIME_FORMAT
    NLS_TIMESTAMP_FORMAT
    NLS_TIMESTAMP_TZ_FORMAT
    NLS_TIME_TZ_FORMAT
    -- NLS_DATABASE_PARAMETERS
    select * from NLS_DATABASE_PARAMETERS order by parameter;
    PARAMETER VALUE
    NLS_CALENDAR GREGORIAN
    NLS_CHARACTERSET WE8MSWIN1252
    NLS_COMP BINARY
    NLS_CURRENCY ?
    NLS_DATE_FORMAT DD/MM/RR
    NLS_DATE_LANGUAGE SPANISH
    NLS_DUAL_CURRENCY ?
    NLS_ISO_CURRENCY SPAIN
    NLS_LANGUAGE SPANISH
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NUMERIC_CHARACTERS ,.
    NLS_RDBMS_VERSION 10.2.0.2.0
    NLS_SORT SPANISH
    NLS_TERRITORY SPAIN
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT DD/MM/RR HH24:MI:SSXFF
    NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH24:MI:SSXFF TZR
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    END.

    jpadron_uy wrote:
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-00552: internal XAD package failed to load
    ORA-06512: at "SYS.ORACLE_LOADER", line 19
    ¡Hola!
    Let's go through the errors you posted:
    The first error (ORA-29913) indicates that the error occurred when Oracle tried to access the external table.
    Then ORA-29400 says that the error occurred in a data cartridge external procedure.
    And finally KUP-00552: an error was encountered while attempting to initialize the XAD package.
    So, did you check the state of the XAD package in that database?
    Also, please post the code you used in steps 4 and 7.
    HTH

  • ORA-600 with index corruption - Any Bug ?

    Hi All,
    We have Oracle Database 10.2.0.1 on Linux Fedora Core 6
    AL32UTF8 Characterset and NLS_LENGTH_SEMANTICS = CHAR
    For the past couple of days, we have been experiencing a strange corruption problem. The indexes of many tables have become corrupted; deleting or updating table rows throws the dreaded ORA-600 error. We tried to trace the error with the stack-trace tool on MetaLink, but in vain.
    We seek help to find out why the indexes are getting corrupted so often. Is there any parameter that is misbehaving, or is it the CHAR semantics playing games?
    Any help pls ?
    Thanks a lot :)

    ORA-00600 is not the kind of error to be solved in a forum. If you have access to MetaLink and have already performed a search with the ORA-600 search tool, then raise a Service Request in MetaLink.
    On the other hand, I see you are on the first release of 10gR2, without any patchset applied. I suggest you consider applying the latest patchset and CPU available for your platform, just to avoid hitting a known bug. Most probably this will be the initial Oracle Support recommendation.
    ~ Madrid
    http://hrivera99.blogspot.com/

  • DatabaseMetaData.getColumns().next() goes on forever

    Hi,
    I am using the following code to get all the column names via DatabaseMetaData.getColumns().next().
    The loop never stops. I am using Java 1.4 and downloaded Oracle 9i's (9.2.0.8) ojdbc.jar from
    http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc9201.html
    My table "GA_TABLE" has only 22 columns, but this loop keeps showing the same column names again and again; next() never returns false.
    I am running the Java program from Eclipse 3.2
    Could you please help me?
    I greatly appreciate your help.
    Thank you.
    void listColumnNames(String tableName) throws SQLException {
        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
        Connection connOracle = DriverManager.getConnection(strConnectURL, props);
        DatabaseMetaData dbmeta = connOracle.getMetaData();
        ResultSet dbColumns = dbmeta.getColumns(null, null, tableName, null);
        for (int i = 0; dbColumns.next(); i++) {
            log.info(dbColumns.getString("COLUMN_NAME"));
        }
    }

    I don't know about it... If there is a fix, it will be in the latest driver
    available on the Oracle download page now, so that's the version of ojdbc14.jar
    that you should try.
    Also, please make a new table, just to test this bug. It can have the same
    DDL as your original table, but name it something unique to the DBMS such as:
    CREATE TABLE EVERYONELOVESJOE(....)
    then run your little metadata query on that table, and tell us if the loop
    never ends... In your original test, just how high does i get before you quit?
    thanks,
    Joe

  • DatabaseMetaData.getColumns COLUMN_DEF

    I want the descriptions of table columns using DatabaseMetaData.getColumns().
    When I loop through the ResultSet, I get an exception each time I try to read the column's default value (COLUMN_DEF).
    The debug traces look like this:
    DRVR OPER OracleResultSetImpl.getString(columnIndex=13)
    DRVR WARN DBError.findMessage(errNum, obj): returned Stream wurde schon geschlossen
    java.sql.SQLException: Stream wurde schon geschlossen ("stream has already been closed")
    Everything else is just fine.

