Replicat error: ORA-12899: value too large for column ...

Hi,
In our system the Source and Target are on the same physical server and in the same Oracle instance, just in different schemas.
Tables on the target were created with 'create table ... as select * from ... source_table', so they have the same structure. Table names are also the same.
I started the replicat and it worked fine for several hours, but when I inserted Chinese characters into the source table I got an error:
WARNING OGG-00869 Oracle GoldenGate Delivery for Oracle, OGGEX1.prm: OCI Error ORA-12899: value too large for column "MY_TARGET_SCHEMA"."TABLE1"."FIRSTNAME" (actual: 93, maximum: 40) (status = 12899), SQL <INSERT INTO "MY_TARGET_SCHEMA"."TABLE1" ("USERID","USERNAME","FIRSTNAME","LASTNAME",....>.
FIRSTNAME is a Varchar2(40 char) field.
I suspect the problem is that our database is running with NLS_LENGTH_SEMANTICS='CHAR'.
I've double-checked the table structure on the target - it's identical to the source.
I also tried to manually insert this record into the target table using an 'insert into ... select * from ...' statement - it works. The problem seems to be in the replicat.
How can I fix this error?
Thanks in advance!
Oracle GoldenGate version: 11.1.1.1
Oracle Database version: 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
NLS_LANG: AMERICAN_AMERICA.AL32UTF8
NLS_LENGTH_SEMANTICS='CHAR'
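For reference, this is how the length semantics of the column can be checked on both schemas (a sketch using the names from the error above):
select owner, column_name, data_length, char_length, char_used
  from all_tab_columns
 where table_name  = 'TABLE1'
   and column_name = 'FIRSTNAME'
   and owner in ('MY_SOURCE_SCHEMA', 'MY_TARGET_SCHEMA');
-- CHAR_USED = 'C' means character semantics; DATA_LENGTH is the byte limit
-- (40 char * 4 bytes for AL32UTF8 = 160), CHAR_LENGTH is 40.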

I've created the definition files and compared them. They are absolutely identical apart from the source and target schema names:
Source definition file:
Definition for table MY_SOURCE_SCHEMA.TABLE1
Record length: 1632
Syskey: 0
Columns: 30
USERID 134 11 0 0 0 1 0 8 8 8 0 0 0 0 1 0 1 3
USERNAME 64 80 12 0 0 1 0 80 80 0 0 0 0 0 1 0 0 0
FIRSTNAME 64 160 98 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
LASTNAME 64 160 264 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
PASSWORD 64 160 430 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
TITLE 64 160 596 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
Target definition file:
Definition for table MY_TARGET_SCHEMA.TABLE1
Record length: 1632
Syskey: 0
Columns: 30
USERID 134 11 0 0 0 1 0 8 8 8 0 0 0 0 1 0 1 3
USERNAME 64 80 12 0 0 1 0 80 80 0 0 0 0 0 1 0 0 0
FIRSTNAME 64 160 98 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
LASTNAME 64 160 264 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
PASSWORD 64 160 430 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
TITLE 64 160 596 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0

Similar Messages

  • SQL Error: ORA-12899: value too large for column

    Hi,
I'm trying to understand the above error. It occurs when we are migrating data from one Oracle database to another:
    Error report:
    SQL Error: ORA-12899: value too large for column "USER_XYZ"."TAB_XYZ"."COL_XYZ" (actual: 10, maximum: 8)
    12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
    *Cause:    An attempt was made to insert or update a column with a value
    which is too wide for the width of the destination column.
    The name of the column is given, along with the actual width
    of the value, and the maximum allowed width of the column.
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    *Action:   Examine the SQL statement for correctness.  Check source
    and destination column data types.
    Either make the destination column wider, or use a subset
    of the source column (i.e. use substring).
    The source database runs - Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    The target database runs - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
The source and target tables are identical and the column definitions are exactly the same. The column we get the error on is CHAR(8). To migrate the data we use either a dblink or Oracle Data Pump; both result in the same error. The data in the column is a fixed-length string of 8 characters.
    To resolve the error the column "COL_XYZ" gets widened by:
    alter table TAB_XYZ modify (COL_XYZ varchar2(10));
    -alter table TAB_XYZ succeeded.
    We now move the data from the source into the target table without problem and then run:
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    -Error report:
    SQL Error: ORA-01441: cannot decrease column length because some value is too big
    01441. 00000 - "cannot decrease column length because some value is too big"
    *Cause:   
    *Action:
So we leave the column width at 10. But the curious thing is: once we have the data in the target table, we can truncate the same table at the source (i.e. get rid of all the data) and move the data back into the original table (with COL_XYZ set at CHAR(8)) without any issue.
My guess is the error has something to do with the storage on the target database, but I would like to understand why. If anybody has an idea or suggestion what to look for - much appreciated.
    Cheers.

843217 wrote:
Note that widths are reported in characters if character length
semantics are in effect for the column, otherwise widths are
reported in bytes.

You are looking at character lengths vs. byte lengths.

The data in the column is a fixed-length string of 8 characters:
select max(length(COL_XYZ)) from TAB_XYZ;
-8
That shows the maximum length is 8 characters, but LENGTH() counts characters, not bytes; with a multibyte character set those 8 characters can occupy more than 8 bytes (LENGTHB() shows the byte length).

alter table TAB_XYZ modify (COL_XYZ varchar2(8));
Was that varchar2(8 byte) or varchar2(8 char)?
Use the SQL Reference for datatype specifications, the length functions, etc.
For more info, see the {forum:id=50} forum on the topic. And of course, the Globalization Support Guide.
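A small sketch of the check and fix being discussed (untested; table and column names from the post):
-- See whether COL_XYZ currently uses byte or character semantics:
select column_name, data_type, data_length, char_length, char_used
  from user_tab_columns
 where table_name = 'TAB_XYZ' and column_name = 'COL_XYZ';
-- CHAR_USED = 'B' means byte semantics. With a multibyte character set,
-- 8 characters can need more than 8 bytes, hence the ORA-12899.
-- Character semantics let 8 characters fit regardless of byte length:
alter table TAB_XYZ modify (COL_XYZ varchar2(8 char));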

  • I am getting error "ORA-12899: value too large for column".

I am getting the error "ORA-12899: value too large for column" after upgrading to 10.2.0.4.0.
The field is updated only through a trigger, with a hard-coded value.
This happens randomly, not every time.
    select * from v$version
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Table Structure
    desc customer
Name            Null?    Type
CTRY_CODE       NOT NULL CHAR(3 Byte)
CO_CODE         NOT NULL CHAR(3 Byte)
CUST_NBR        NOT NULL NUMBER(10)
CUST_NAME                CHAR(40 Byte)
RECORD_STATUS            CHAR(1 Byte)
    Trigger on the table
CREATE OR REPLACE TRIGGER CUST_INSUPD
BEFORE INSERT OR UPDATE
ON CUSTOMER FOR EACH ROW
BEGIN
  IF INSERTING THEN
    :NEW.RECORD_STATUS := 'I';
  ELSIF UPDATING THEN
    :NEW.RECORD_STATUS := 'U';
  END IF;
END;
    ERROR at line 1:
    ORA-01001: invalid cursor
    ORA-06512: at "UPDATE_CUSTOMER", line 1320
    ORA-12899: value too large for column "CUSTOMER"."RECORD_STATUS" (actual: 3,
    maximum: 1)
    ORA-06512: at line 1

    SQL> create table customer(
      2  CTRY_CODE  CHAR(3 Byte) not null,
      3  CO_CODE  CHAR(3 Byte) not null,
      4  CUST_NBR NUMBER(10) not null,
      5  CUST_NAME CHAR(40 Byte) ,
      6  RECORD_STATUS CHAR(1 Byte)
      7  );
    Table created.
    SQL> CREATE OR REPLACE TRIGGER CUST_INSUPD
      2  BEFORE INSERT OR UPDATE
      3  ON CUSTOMER FOR EACH ROW
      4  BEGIN
      5  IF INSERTING THEN
      6  :NEW.RECORD_STATUS := 'I';
      7  ELSIF UPDATING THEN
      8  :NEW.RECORD_STATUS := 'U';
      9  END IF;
    10  END;
    11  /
    Trigger created.
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME,RECORD_STATUS)
      2                values('12','13','1','Mahesh Kaila','UPD');
                  values('12','13','1','Mahesh Kaila','UPD')
    ERROR at line 2:
    ORA-12899: value too large for column "HPVPPM"."CUSTOMER"."RECORD_STATUS"
    (actual: 3, maximum: 1)
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME)
      2                values('12','13','1','Mahesh Kaila');
    1 row created.
    SQL> set linesize 200
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
    12  13           1 Mahesh Kaila                             I
    SQL> update customer set cust_name='tst';
    1 row updated.
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
12  13           1 tst                                      U

Recheck your code once again - somewhere you are using the record_status column in an insert or update.
    Ravi Kumar
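For reference, a quick way to find every piece of stored code that references RECORD_STATUS (a sketch; run it as the owning schema):
select name, type, line, text
  from user_source
 where upper(text) like '%RECORD_STATUS%'
 order by name, line;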

  • I am trying to send data from textfield to database but it is showing error "ORA-12899: value too large for column "HR"."DOCTORS"."NAME" (actual: 658, maximum: 20)"

    Although i am entering only one character into the textfield then too it is reflecting the same error.
private void initEvents() {
  okButton.addActionListener(new ActionListener() {
      public void actionPerformed(ActionEvent e) {
      Connection conn = null;
        Statement stmt = null;
        try{
           //STEP 2: Register JDBC driver
           Class.forName("oracle.jdbc.driver.OracleDriver");
           //STEP 3: Open a connection
           System.out.println("Connecting to database...");
           conn = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:xe","hr","hr");
           //STEP 4: Execute a query
           System.out.println("Creating statement...");
           stmt = conn.createStatement();
           System.out.println(txtField);
           String sql = "INSERT INTO DOCTORS VALUES ('"+txtField+"','"+textField_1+"','"+textField_1+"','"+textField_2+"','"+textField_3+"')";
           JOptionPane.showMessageDialog(null,"Inserted Successfully!");
           ResultSet rs = stmt.executeQuery(sql);
           rs.close();
           stmt.close();
           conn.close();
        }catch(SQLException se){
           //Handle errors for JDBC
           se.printStackTrace();
        }catch(Exception e1){
           //Handle errors for Class.forName
           e1.printStackTrace();
        }finally{
           //finally block used to close resources
           try{
              if(stmt!=null)
                 stmt.close();
           }catch(SQLException se2){
           }// nothing we can do
           try{
              if(conn!=null)
                 conn.close();
           }catch(SQLException se){
              se.printStackTrace();
           }//end finally try
    }//end try
    }//end actionPerformed
  });
}//end initEvents

What is the size of the text you are trying to insert?
Have you tried increasing the size of the 'Name' column to 658?
Most likely the size of the Name column is 20. This isn't enough to fit the value you are trying to insert.
You can increase the size of the Name column with: alter table hr.doctors modify (name varchar2(658));
    Let me know if this helps.
    Regards,
    Suntrupth
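For reference, the column's current definition can be checked before altering it (a sketch using the names from the error message):
select column_name, data_type, data_length, char_length, char_used
  from all_tab_columns
 where owner = 'HR' and table_name = 'DOCTORS' and column_name = 'NAME';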

• File_To_RT data truncation ODI error ORA-12899: value too large for column

    Hi,
Could you please give me some idea of how to truncate source data greater than the max length before inserting it into the target table?
Problem details:
In my scenario I read data from a source .txt file and insert it into a target table. Suppose the source file data length exceeds the max column length of the target table. How do I truncate the data so that the migration succeeds and the ODI error "ORA-12899: value too large for column" is avoided?
    Thanks
    Anindya

Bhabani wrote:
In which step are you getting this error? If it's the loading step, then try increasing the length for that column in the datastore and use substr in the mapping expression.

Hi Bhabani,
You are right - it is the Loading SrcSet0 Load data step. I have increased the column length for the target table datastore
and then applied the substring function, but it gives the same result.
If you meant increasing the length in the source file datastore, then please tell me which length: the physical length or the logical length?
    Thanks
    Anindya
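For reference, a sketch of the two truncation variants for the mapping expression (SRC_COL and the length 40 are placeholders): SUBSTR truncates by characters, SUBSTRB by bytes, which matters when the target column uses byte semantics.
SUBSTR(SRC_COL, 1, 40)   -- character-based truncation
SUBSTRB(SRC_COL, 1, 40)  -- byte-based truncation, safe for a VARCHAR2(40 BYTE) target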

  • Adding virtual column: ORA-12899: value too large for column

    I'm using Oracle 11g, Win7 OS, SQL Developer
I'm trying to add a virtual column to my test table, but I am getting the ORA-12899: value too large for column error. Below are the details.
    Can someone help me in this?
    CREATE TABLE test_reg_exp
    (col1 VARCHAR2(100));
    INSERT INTO test_reg_exp (col1) VALUES ('ABCD_EFGH');
    INSERT INTO test_reg_exp (col1) VALUES ('ABCDE_ABC');
    INSERT INTO test_reg_exp (col1) VALUES ('WXYZ_ABCD');
    INSERT INTO test_reg_exp (col1) VALUES ('ABCDE_PQRS');
    INSERT INTO test_reg_exp (col1) VALUES ('ABCD_WXYZ');
    ALTER TABLE test_reg_exp
    ADD (col2 VARCHAR2(100) GENERATED ALWAYS AS (REGEXP_REPLACE (col1, '^ABCD[A-Z]*_')));
    SQL Error: ORA-12899: value too large for column "COL2" (actual: 100, maximum: 400)
    12899. 00000 -  "value too large for column %s (actual: %s, maximum: %s)"
    *Cause:    An attempt was made to insert or update a column with a value
               which is too wide for the width of the destination column.
               The name of the column is given, along with the actual width
               of the value, and the maximum allowed width of the column.
               Note that widths are reported in characters if character length
               semantics are in effect for the column, otherwise widths are
               reported in bytes.
    *Action:   Examine the SQL statement for correctness.  Check source
               and destination column data types.
               Either make the destination column wider, or use a subset
of the source column (i.e. use substring).
When I try to select, I'm getting correct results:
SELECT col1, (REGEXP_REPLACE (col1, '^ABCD[A-Z]*_'))
FROM test_reg_exp;
Thanks.

Yes RP, it works if you give col2 a size >= 400.
@Northwest - Could you please test the same without the regex clause in col2?
I suspect the usage of a REGEX in this dynamic column case.
    Refer this (might help) -- http://www.oracle-base.com/articles/11g/virtual-columns-11gr1.php
Below is a snippet from the above link; see if this helps:
    >
Notes and restrictions on virtual columns include:
- Indexes defined against virtual columns are equivalent to function-based indexes.
- Virtual columns can be referenced in the WHERE clause of updates and deletes, but they cannot be manipulated by DML.
- Tables containing virtual columns can still be eligible for result caching.
- Functions in expressions must be deterministic at the time of table creation, but can subsequently be recompiled and made non-deterministic without invalidating the virtual column. In such cases the following steps must be taken after the function is recompiled:
  - Constraints on the virtual column must be disabled and re-enabled.
  - Indexes on the virtual column must be rebuilt.
  - Materialized views that access the virtual column must be fully refreshed.
  - The result cache must be flushed if cached queries have accessed the virtual column.
  - Table statistics must be regathered.
- Virtual columns are not supported for index-organized, external, object, cluster, or temporary tables.
- The expression used in the virtual column definition has the following restrictions:
  - It cannot refer to another virtual column by name.
  - It can only refer to columns defined in the same table.
  - If it refers to a deterministic user-defined function, it cannot be used as a partitioning key column.
  - The output of the expression must be a scalar value. It cannot return an Oracle-supplied datatype, a user-defined type, or LOB or LONG RAW.
    >
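One workaround worth trying (a sketch, untested here): wrap the expression in a CAST so the virtual column gets an explicit declared length instead of the default derived one:
ALTER TABLE test_reg_exp
ADD (col2 VARCHAR2(100) GENERATED ALWAYS AS
    (CAST(REGEXP_REPLACE(col1, '^ABCD[A-Z]*_') AS VARCHAR2(100))));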

  • Oracle : ORA-12899: value too large for column

    Hi Experts,
I am loading multibyte data from a fixed-width flat file into an Oracle database (UTF8 character set) via Informatica. I have set UTF8 as the character set in both the source and target definitions.
Source flat file data: Münchener (this flat file data was loaded from an external Oracle database where the data looks like Münchener)
When I load the data I get the below error:
ORA-12899: value too large for column "schema_name"."table"."column" (actual: 513, maximum: 512)
I know we can declare the data type as varchar2(512 char) instead of varchar2(512 byte). Please let me know another solution for loading multibyte data into a target UTF8 database.

You answered your own question and there isn't another solution. You need to widen that column:
alter table "schema_name"."table" modify ("column" varchar2(513));
Though you should increase it to the maximum length that column will ever need. If you don't know it, pad it high; Oracle is very good at handling the unused space in the varchar2 datatype.
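For completeness, the CHAR-semantics variant the poster mentioned would look like this (a sketch with the placeholder names from the error):
alter table "schema_name"."table" modify ("column" varchar2(512 char));
-- 512 characters instead of 512 bytes; in an AL32UTF8 database this allows
-- up to 4 bytes per character for multibyte data.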

  • ORA-12899: value too large for column "SYS"."SYS_TEMP_0FD9D6677_2AA50E11"."

    Oracle Database:11.2.0.1
    OS: AIX6.1
    The database is getting the below error in alert log.Can anyone tell me what is the cause of these errors and how they can be rectified.
    ORA-12899: value too large for column "SYS"."SYS_TEMP_0FD9D6677_2AA50E11"."C1" (actual: 15, maximum: 13)
    ORA-12012: error on auto execute of job 724116
    ORA-12899: value too large for column "SYS"."SYS_TEMP_0FD9D6677_2AA50E11"."C1" (actual: 15, maximum: 13)
    ORA-06512: at "SYS.PRVT_ADVISOR", line 2693
    ORA-06512: at "SYS.DBMS_ADVISOR", line 241
    ORA-06512: at "SYS.DBMS_SQLTUNE", line 772
    ORA-06512: at line 4
    Oracle Database:11.2.0.1
    OS: AIX6.1

Kuljeet Pal Singh wrote:
ORA-12012: error on auto execute of job 724116
ORA-06512: at "SYS.DBMS_ADVISOR", line 241

The error appears to come from the auto task job, which by default runs in the database from 10 PM. This could be due to a bug; check MOS or raise an SR with Oracle.

I have checked MOS but did not find any reference to the error ORA-12899: value too large for column "SYS". Does this cause any harm to my database?

  • Fdpstp failed due to ora-12899 value too large for column

    Hi All,
A user is facing this problem while running a concurrent program.
The program completed, but with this error:
fdpstp failed due to ora-12899 value too large for column
Can anyone tell me the exact solution for this?
    RDBMS : 10.2.0.3.0
    Oracle Applications : 11.5.10.2

Is this a seeded or custom concurrent program?
Was this working before? If yes, have any changes been made recently?
Can other users run the same concurrent program with no issues?
Please post the contents of the concurrent request log file here.
Please ask your developer to open the file using Reports Builder, compile the report, and run it (if possible) with the same parameters.
OERR: ORA-12899 value too large for column %s (actual: %s, maximum: %s) [ID 287754.1]
    Thanks,
    Hussein

  • Install fails due to ORA-12899: value too large for column

    Hi,
    Our WCS 11g installation on Tomcat 7 fails giving a "ORA-12899: value too large for column".
    As per the solution ticket https://support.oracle.com/epmos/faces/DocumentDisplay?id=1539055.1 we have to set "-Dfile.encoding=UTF-8" in tomcat.
We had already done this by setting the variable in catalina.bat in the Tomcat 7 bin directory, as shown below.
But we still get the same error during installation.
If anybody has faced this, let us know how you resolved it.

We were unable to install WCS on Tomcat 7, but on Tomcat 6 it was successful after specifying "-Dfile.encoding=UTF-8" in the Java options using "Tomcat Configure".
An alternative we found was to increase the size of the column itself, using the command:
ALTER TABLE csuser.systemlocalestring
MODIFY value varchar2(4000);

  • Rawtohex  - How to Insert ? ORA-12899: value too large for column

    Hi,
    Can any one please help me to resolve the following issue ?
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    Name Null? Type
    ABC_OID NOT NULL RAW(8)
    ABC_NAME NOT NULL VARCHAR2(30 CHAR)
    UPDATE_TIME NOT NULL DATE
    UPDATE_BY_WORKER_NO NOT NULL NUMBER
I'm able to insert the first two records, but when inserting the third one I get an error:
    insert into caps.ABC_LOOKUP values( rawtohex('SERIES'), 'SERIES','19-FEB-09','1065449')
    insert into caps.ABC_LOOKUP values(rawtohex('FAMILY'),'FAMILY','19-FEB-09','1065449')
    Insert into caps.ABC_LOOKUP values(rawtohex('CONNECTOR'),'CONNECTOR','19-FEB-09','1065449')
    ERROR at line 1:
    ORA-12899: value too large for column
    "XYZ"."ABC_LOOKUP"."ABC_OID" (actual: 9, maximum: 8)
    Thanks in Advance.....

Yes, done...
Actually I suggested the same thing to them (the application team), but they did not agree with me, so I got confused :-)
Now the same thing worked well. Thanks a lot for your time.
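For reference, the failure is easy to see: 'CONNECTOR' is 9 characters, so its implicit RAW representation needs 9 bytes, one more than RAW(8) allows. A quick check (sketch):
select utl_raw.length(utl_raw.cast_to_raw('CONNECTOR')) as raw_bytes from dual; -- 9
select utl_raw.length(utl_raw.cast_to_raw('FAMILY'))    as raw_bytes from dual; -- 6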

  • Java.sql.BatchUpdateException: ORA-12899[ value too large for column.......

    Hi All,
I am using SOA 11g (11.1.1.3). I am trying to insert data coming from a file into a table. I have encountered the following error:
Exception occured when binding was invoked.
Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'insert' failed due to: DBWriteInteractionSpec Execute Failed Exception.
insert failed. Descriptor name: [UploadStgTbl.XXXXStgTbl].
Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "XXXX"."XXXX_STG_TBL"."XXXXXX_XXXXX_TYPE" (actual: 20, maximum: 15)
The invoked JCA adapter raised a resource exception.
Please examine the above error message carefully to determine a resolution.
The data type of the column that errored out is VARCHAR2(25). I found a related issue on Metalink: java.sql.BatchUpdateException (ORA-12899) Reported When DB Adapter Reads a Row From a Table it is Polling For Added Rows [ID 1113215.1].
But the solution seems not applicable in my case...
Has anyone encountered the same issue? Is this a bug? If it is, do we have a patch for it?
Please help me out...
Thank you all...
    Thank you all...

    It didn't work.
After I changed the length of that column in the source datastore (from 15 to 16), ODI created the temporary tables (C$ and I$) with larger columns (16 instead of 15), but I got the same error message.
I'm wondering why I have to extend the length of the source datastore in the source model if there are no values in the source table longer than 15...
Any other idea? Thanks!

  • Rs.updateBoolean SQLException: ORA-12899: value too large for column

    Complete error is SQLException: ORA-12899: value too large for column "SMSUSER"."PRUEBA"."VLOGIC" (actual: 4, maximum: 1)
    Let's see the code:
PreparedStatement ps = null;
ResultSet rs = null;
try {
    ps = conn.prepareStatement("create table prueba(name varchar2(32), vlogic char(1) not null check (vlogic in (0,1)))");
    ps.execute();
    logger.info("Table created.");
    ps = conn.prepareStatement("insert into prueba (name, vlogic) values ('user01', ?)");
    ps.setBoolean(1, true);
    ps.executeUpdate();
    logger.info("Data Inserted.");
    ps = conn.prepareStatement("update prueba set vlogic=? where name=?");
    ps.setBoolean(1, false);
    ps.setString(2, "user01");
    ps.executeUpdate();
    logger.info("Data Updated.");
    // Till here all runs ok, but if we try to modify via ResultSet...
    ps = conn.prepareStatement("select vlogic from prueba where name=? for update",
            ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE,
            ResultSet.CLOSE_CURSORS_AT_COMMIT);
    ps.setString(1, "user01");
    rs = ps.executeQuery();
    if (rs.next()) {
        logger.info("Got record.");
        rs.updateBoolean("vlogic", true);
        rs.updateRow();
        logger.info("Column updated.");
    }
} catch (SQLException e) {
    logger.info("SQLException: " + e.getMessage());
} finally {
    closeResultSet(rs);
    closePreparedStatement(ps);
}
The trouble is that when updating via the ResultSet, what gets sent is "true" or "false", and not "0" or "1" as with inserts and updates via PreparedStatements.
So the system returns the error: SQLException: ORA-12899: value too large for column "SMSUSER"."PRUEBA"."VLOGIC" (actual: 4, maximum: 1)
because it is trying to insert "true".
Can somebody tell me what's happening here?
    Thanks in advance.
    Francisco Javier Ascanio Suárez.
    E-mail: [email protected]

Ok, but why is this behaviour different for ResultSet updates than for PreparedStatements?
As you can see in my example, a prepared statement with setBoolean runs ok.
I like your "proper way", and it resolves my trouble, but it doesn't tell me why I have to program a field update in different ways depending on whether I use PreparedStatements or updatable ResultSets.
    Thanks in advance.

  • Imp/exp ORA-12899: value too large for column

    source :
    os: linux as 4 update4
    .bash_profile NLS_LANG=AMERICAN_AMERICA.US7ASCII
the export was run with: exp bill/admin001 file=bill0518.dmp bill rows=y
    oracle: 10.2.1
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CHARACTERSET US7ASCII
    target :
    os: linux as 4 update4
    .bash_profile NLS_LANG=AMERICAN_AMERICA.AL32UTF8
the import was run with:
imp bill/admin001 file=bill0518.dmp
    oracle: 10.2.1
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CHARACTERSET AL32UTF8
    imp log
    Import: Release 10.2.0.1.0 - Production on Wed May 16 14:57:59 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in US7ASCII character set and AL16UTF16 NCHAR character set
    import server uses AL32UTF8 character set (possible charset conversion)
    export client uses AL32UTF8 character set (possible charset conversion)
    . importing BILL's objects into BILL
    . . importing table "MY_SESSION" 44 rows imported
    . . importing table "T1"
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "BILL"."T1"."NAME" (actual: 62, maximum: 5 0)
    Column 1 1
    Column 2 ÖйúÈË. 0 rows impo rted
    Import terminated successfully with warnings.

Yes, it's probably due to the different character sets.
A way around it is to change the setup of the new database to use CHAR as the default length semantics for varchar2 columns, and then use Data Pump for your import/export: Data Pump creates varchar2 columns using the target database's default semantics, whereas exp/imp uses the varchar2 type from the original database.
    Best regards
    /Klaus
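A sketch of that approach (assuming the tables are pre-created on the target before importing; names as in the post):
-- make CHAR the default length semantics for new columns in this session:
alter session set nls_length_semantics = char;
-- pre-create the tables with this setting in effect, then import rows only:
-- imp bill/admin001 file=bill0518.dmp ignore=y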

  • ORA-12899: value too large for column

    Hi Experts,
I am getting data from ERP systems in the form of feeds; in particular, one column's length in the feed is only 3.
In the target table the corresponding column is also varchar2(3),
but when I try to load it into the DB it shows an error like:
ORA-12899: value too large for column
emp_name (actual: 4, maximum: 3)
I am using database version:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
This is resolved by increasing the target column length from varchar2(3) to varchar2(5), but I checked and the length of that column in the feed is only 3...
My question is: why do we need to increase the target column length?
    Thanks,
    Surya

    >
    my question is why we need to increase the target column length?
    >
That can be caused when the two systems use different character sets - for example, if one uses a single-byte character set like ASCII and the other uses a multibyte one like UTF16.
Three BYTES is three bytes, but three CHAR is three bytes in ASCII and six bytes in UTF16.
    Do you know what character sets are being used?
    See the Database Concepts doc
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm
    >
    Length Semantics for Character Datatypes
    Globalization support allows the use of various character sets for the character datatypes. Globalization support lets you process single-byte and multibyte character data and convert between character sets. Client sessions can use client character sets that are different from the database character set.
    Consider the size of characters when you specify the column length for character datatypes. You must consider this issue when estimating space for tables with columns that contain character data.
    The length semantics of character datatypes can be measured in bytes or characters.
    •Byte semantics treat strings as a sequence of bytes. This is the default for character datatypes.
    •Character semantics treat strings as a sequence of characters. A character is technically a codepoint of the database character set.
    For single byte character sets, columns defined in character semantics are basically the same as those defined in byte semantics. Character semantics are useful for defining varying-width multibyte strings; it reduces the complexity when defining the actual length requirements for data storage. For example, in a Unicode database (UTF8), you must define a VARCHAR2 column that can store up to five Chinese characters together with five English characters. In byte semantics, this would require (5*3 bytes) + (1*5 bytes) = 20 bytes; in character semantics, the column would require 10 characters.
    VARCHAR2(20 BYTE) and SUBSTRB(<string>, 1, 20) use byte semantics. VARCHAR2(10 CHAR) and SUBSTR(<string>, 1, 10) use character semantics.
    The parameter NLS_LENGTH_SEMANTICS decides whether a new column of character datatype uses byte or character semantics. The default length semantic is byte. If all character datatype columns in a database use byte semantics (or all use character semantics) then users do not have to worry about which columns use which semantics. The BYTE and CHAR qualifiers shown earlier should be avoided when possible, because they lead to mixed-semantics databases. Instead, the NLS_LENGTH_SEMANTICS initialization parameter should be set appropriately in the server parameter file (SPFILE) or initialization parameter file, and columns should use the default semantics.
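For reference, comparing character length with byte length of the loaded data shows exactly which rows overflow (a sketch; emp_name is the column from the post and target_table is a placeholder):
select emp_name, length(emp_name) as char_len, lengthb(emp_name) as byte_len
  from target_table
 where lengthb(emp_name) > 3;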
