Error: Value too large for DEF_VALUE of SNP_REV_COL

I am getting the following error while importing my work repository:
Error: Value too large for DEF_VALUE of SNP_REV_COL
Can anyone please let me know the root cause of this issue?
I found the following workaround: changing the length of the DEF_VALUE column in both the SNP_REV_COL and SNP_COL tables.
I am using ODI version 10.1.3.5.
alter table SNP_REV_COL modify DEF_VALUE VARCHAR2(400);
alter table SNP_COL modify DEF_VALUE VARCHAR2(400);
I am able to import the work repository without any issues after changing the above columns.
I am looking for the reason why this issue occurs.
Thanks,
Yellanki
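To see which defaults trip the limit before importing, a hedged diagnostic sketch (assumptions: the stock repository defines DEF_VALUE as VARCHAR2(250), and SNP_COL carries the column name in COL_NAME; adjust both to your repository):

select COL_NAME, length(DEF_VALUE) as def_len
from SNP_COL
where length(DEF_VALUE) > 250
order by def_len desc;

Any rows returned are column defaults longer than the repository column that stores them, which is exactly what the import trips over.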

Ankit,
I am trying to move my Dev work repository to Test. My source technologies are SQL Server and Oracle;
the target is Oracle. I have a workaround for this issue, and I am looking for its root cause.
Any help is greatly appreciated.
Thanks,
Yellanki

Similar Messages

  • Error: value too large for column ?!?

    I use LKM SQL to Oracle and IKM Oracle Incremental Update. When I execute the interface I get an error message:
    12899 : 72000 : java.sql.BatchUpdateException: ORA-12899: value too large for column "SAMPLE"."C$_0EMP"."C4_EMP_NM" (actual: 17, maximum: 15)
    I checked the source, and the maximum length is 15; I don't understand where ODI can find 17 characters. Both columns (source and target) are VARCHAR2(15).
    Then I replaced the mapping with SUBSTR(EMP_NM,1,15) to make sure ODI would get only 15 characters into the staging area, but it didn't work. I even tried SUBSTR(EMP_NM,1,10), with no luck.
    Why does this ODI error occur? It doesn't make any sense.

    It didn't work.
    After I increased the length of that column in the source datastore (from 15 to 16), ODI created the temporary tables (C$ and I$) with larger columns (16 instead of 15), but I got the same error message.
    I'm wondering why I have to extend the column length in the source model when no value in the source table is longer than 15 characters.
    Any other ideas? Thanks!
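    When the character count fits but the byte count does not, the usual culprit is byte length semantics on the staging column combined with multibyte source data: 15 characters can occupy 17 bytes. A hedged sketch reusing the names from the error message (note ODI normally drops and recreates C$ tables on each run, so the durable fix is character semantics or a larger length on the datastore definition itself):
    alter table SAMPLE.C$_0EMP modify (C4_EMP_NM VARCHAR2(15 CHAR));
    With CHAR semantics the column accepts 15 characters regardless of how many bytes they need.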

  • SQL Error: ORA-12899: value too large for column

    Hi,
    I'm trying to understand the above error. It occurs when we are migrating data from one Oracle database to another:
    Error report:
    SQL Error: ORA-12899: value too large for column "USER_XYZ"."TAB_XYZ"."COL_XYZ" (actual: 10, maximum: 8)
    12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
    *Cause:    An attempt was made to insert or update a column with a value
    which is too wide for the width of the destination column.
    The name of the column is given, along with the actual width
    of the value, and the maximum allowed width of the column.
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    *Action:   Examine the SQL statement for correctness.  Check source
    and destination column data types.
    Either make the destination column wider, or use a subset
    of the source column (i.e. use substring).
    The source database runs - Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    The target database runs - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    The source and target tables are identical and the column definitions are exactly the same. The column we get the error on is CHAR(8). To migrate the data we use either a dblink or Oracle Data Pump; both result in the same error. The data in the column is a fixed-length string of 8 characters.
    To resolve the error the column "COL_XYZ" gets widened by:
    alter table TAB_XYZ modify (COL_XYZ varchar2(10));
    -alter table TAB_XYZ succeeded.
    We now move the data from the source into the target table without problem and then run:
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximum string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    -Error report:
    SQL Error: ORA-01441: cannot decrease column length because some value is too big
    01441. 00000 - "cannot decrease column length because some value is too big"
    *Cause:   
    *Action:
    So we leave the column width at 10. The curious thing is: once we have the data in the target table, we can truncate the same table at the source (i.e. get rid of all the data) and move the data back into the original table (with COL_XYZ set at CHAR(8)) without any issue.
    My guess is that the error has something to do with the storage on the target database, but I would like to understand why. If anybody has an idea or a suggestion on what to look for, it would be much appreciated.
    Cheers.

    843217 wrote:
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    You are looking at character lengths vs byte lengths.
    The data in the column is a fixed length string of 8 characters.
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximum string length for this column is 8 characters.
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    Is that varchar2(8 BYTE) or varchar2(8 CHAR)? LENGTH() counts characters, while a byte-defined column is limited in bytes.
    Use the SQL Reference for datatype specification, the length function, etc.
    For more info, see the Globalization Support forum on the topic. And of course, the Globalization Support Guide.
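    A minimal illustration of the byte-vs-character trap described above, assuming an AL32UTF8 database (the table name and value are hypothetical):
    create table t_semantics (
      c_byte varchar2(8 byte),
      c_char varchar2(8 char)
    );
    -- 'müllerab' is 8 characters but 9 bytes in UTF-8 (ü needs two bytes):
    insert into t_semantics (c_char) values ('müllerab');  -- succeeds
    insert into t_semantics (c_byte) values ('müllerab');  -- ORA-12899 (actual: 9, maximum: 8)
    This also explains the ORA-01441 above: after migration, the column holds values whose byte length exceeds 8, so it cannot be shrunk back to a byte-defined width of 8.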

  • Inserted value too large for column error while scheduling a job

    Hi Everyone,
    I am trying to schedule a PL/SQL script as a job in Oracle 10g, installed and running on Windows XP.
    While trying to submit the job I get the error "Inserted value too large for column:" followed by my entire code. The code is correct; it compiles and runs in Oracle APEX's SQL Workshop.
    My code is 4,136 characters (4,348 bytes) and 107 lines long. It sends an e-mail and contains a utl_smtp.write_data call with lots of HTML.
    There is no INSERT statement in the code whatsoever; the code only queries the database.
    Any idea as to why I might be getting this error?
    Thanks in advance
    Sid

    The size of my code is 4136 characters, 4348 bytes and 107 lines long.
    The job text you submit is bound through a SQL VARCHAR2, which has a maximum size of 4,000 bytes; your 4,348-byte block exceeds that limit.
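    A hedged workaround sketch: keep the long PL/SQL in a stored procedure so the submitted job text stays far below 4,000 bytes (the procedure name send_report_mail is hypothetical):
    create or replace procedure send_report_mail as
    begin
      -- the 107-line utl_smtp mailing code goes here
      null;
    end;
    /
    declare
      l_job number;
    begin
      -- the job text is now a one-line call instead of the whole script
      dbms_job.submit(l_job, 'begin send_report_mail; end;', sysdate, 'sysdate + 1');
      commit;
    end;
    /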

  • I am getting error "ORA-12899: value too large for column".

    I am getting error "ORA-12899: value too large for column" after upgrading to 10.2.0.4.0
    Field is updating only through trigger with hard coded value.
    This happens randomly not everytime.
    select * from v$version
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Table Structure
    desc customer
    Name Null? Type
    CTRY_CODE NOT NULL CHAR(3 Byte)
    CO_CODE NOT NULL CHAR(3 Byte)
    CUST_NBR NOT NULL NUMBER(10)
    CUST_NAME CHAR(40 Byte)
    RECORD_STATUS CHAR(1 Byte)
    Trigger on the table
    CREATE OR REPLACE TRIGGER CUST_INSUPD
    BEFORE INSERT OR UPDATE
    ON CUSTOMER FOR EACH ROW
    BEGIN
    IF INSERTING THEN
    :NEW.RECORD_STATUS := 'I';
    ELSIF UPDATING THEN
    :NEW.RECORD_STATUS := 'U';
    END IF;
    END;
    ERROR at line 1:
    ORA-01001: invalid cursor
    ORA-06512: at "UPDATE_CUSTOMER", line 1320
    ORA-12899: value too large for column "CUSTOMER"."RECORD_STATUS" (actual: 3,
    maximum: 1)
    ORA-06512: at line 1

    SQL> create table customer(
      2  CTRY_CODE  CHAR(3 Byte) not null,
      3  CO_CODE  CHAR(3 Byte) not null,
      4  CUST_NBR NUMBER(10) not null,
      5  CUST_NAME CHAR(40 Byte) ,
      6  RECORD_STATUS CHAR(1 Byte)
      7  );
    Table created.
    SQL> CREATE OR REPLACE TRIGGER CUST_INSUPD
      2  BEFORE INSERT OR UPDATE
      3  ON CUSTOMER FOR EACH ROW
      4  BEGIN
      5  IF INSERTING THEN
      6  :NEW.RECORD_STATUS := 'I';
      7  ELSIF UPDATING THEN
      8  :NEW.RECORD_STATUS := 'U';
      9  END IF;
    10  END;
    11  /
    Trigger created.
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME,RECORD_STATUS)
      2                values('12','13','1','Mahesh Kaila','UPD');
                  values('12','13','1','Mahesh Kaila','UPD')
    ERROR at line 2:
    ORA-12899: value too large for column "HPVPPM"."CUSTOMER"."RECORD_STATUS"
    (actual: 3, maximum: 1)
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME)
      2                values('12','13','1','Mahesh Kaila');
    1 row created.
    SQL> set linesize 200
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
    12  13           1 Mahesh Kaila                             I
    SQL> update customer set cust_name='tst';
    1 row updated.
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
    12  13           1 tst                                      U
    Recheck your code once again; somewhere you are using the RECORD_STATUS column in an insert or update.
    Ravi Kumar
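    A hedged way to hunt for the code that writes RECORD_STATUS explicitly, assuming it lives in stored PL/SQL in the same schema (USER_SOURCE is a standard dictionary view):
    select name, type, line, text
    from user_source
    where upper(text) like '%RECORD_STATUS%'
    order by name, type, line;
    The ORA-06512 frame at "UPDATE_CUSTOMER", line 1320 in the original stack is the natural first place to look.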

  • Error on reverse on XML: value too large for column

    Hi All,
    I am trying to reverse-engineer while creating a data model on the XML technology.
    My JDBC URL on data server reads this:
    jdbc:snps:xml?d=../demo/abc/CustomerPartyEBO.xsd&s=MYEBO
    I get an error while doing the reverse.
    java.sql.SQLException: ORA-12899: value too large for column "PINW"."SNP_REV_KEY_COL"."KEY_NAME" (actual: 102, maximum: 100)
    After some checking via selective reverse-engineering, I found that this happens only for a few tables whose names are quite long.
    I tried setting "maximum column name length" and "maximum table name length" to 120 and even higher values on the XML technology in Topology Manager. No luck there.
    Thanks in advance for any help here.

    That is not the place to change it.
    The error states that SNP_REV_KEY_COL.KEY_NAME in the work repository schema PINW has a maximum defined length of 100.
    I do not know whether Oracle supports this change, but as a workaround you will have to alter the work repository table SNP_REV_KEY_COL and extend the column length.
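    A hedged sketch of that workaround (the new length of 128 is an arbitrary assumption, and resizing repository tables is, as noted, not an officially supported change):
    alter table PINW.SNP_REV_KEY_COL modify (KEY_NAME VARCHAR2(128));
    The error reported a key name of 102 characters, so anything above that should let the reverse-engineering pass.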

  • Data Profiling - Value too large for column error

    I am running a data profile which completes with errors. The error reported is ORA-12899: value too large for column (actual: 41, maximum: 40).
    I have checked the actual data in the table, and the maximum length is only 40 characters.
    Any ideas on how to solve this? Even though the job completes, no actual profiling of the data is done, due to the error.
    OWB version 11.2.0.1
    Log file below.
    Job     Rows Selected     Rows Inserted     Rows Updated     Rows Deleted     Errors     Warnings     Start Time     Elapsed Time     
    Profile_1306385940099                                   2011-05-26 14:59:00.0     106     
    Data profiling operations complete.                                             
    Redundant column analysis for objects complete in 0 s.                                              
    Redundant column analysis for objects.                                              
    Referential analysis for objects complete in 0.405 s.                                              
    Referential analysis for objects.                                              
    Referential analysis initialization complete in 8.128 s.                                             
    Referential analysis initialization.                                             
    Data rule analysis for object TABLE_NAME complete in 0 s.                                             
    Data rule analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.858 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.202 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.236 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.842 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.187 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.501 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.717 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.156 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.906 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.827 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.187 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.172 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.889 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.202 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.313 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 9.267 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 10.187 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 8.019 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 5.507 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 10.857 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Parameters                                             
    O82647310CF4D425C8AED9AAE_MAP_ProfileLoader                              1     2011-05-26 14:59:00.0     11     
    ORA-12899: value too large for column "SCHEMA"."O90239B0C1105447EB6495C903678"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    O68A16A57F2054A13B8761BDC_MAP_ProfileLoader                              1     2011-05-26 14:59:11.0     5     
    ORA-12899: value too large for column "SCHEMA"."O0D9332A164E649F3B4D05D045521"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    O78AD6B482FC44D8BB7AF8357_MAP_ProfileLoader                              1     2011-05-26 14:59:16.0     9     
    ORA-12899: value too large for column "SCHEMA"."OBF77A8BA8E6847B8AAE4522F98D6"."ITEM_NAME_2" (actual: 41, maximum: 40)                                             
    Parameters                                             
    OA79DF482D74847CF8EA05807_MAP_ProfileLoader                              1     2011-05-26 14:59:25.0     10     
    ORA-12899: value too large for column "SCHEMA"."OB0052CBCA5784DAD935F9FCF2E28"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    OFFE486BBDB884307B668F670_MAP_ProfileLoader                              1     2011-05-26 14:59:35.0     9     
    ORA-12899: value too large for column "SCHEMA"."O9943284818BB413E867F8DB57A5B"."ITEM_NAME_1" (actual: 42, maximum: 40)                                             
    Parameters

    Found the answer: it was the database character set. With a multibyte character set, 40 characters can occupy more than 40 bytes.
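    A hedged check for this situation (both parameter names are standard):
    select parameter, value
    from nls_database_parameters
    where parameter in ('NLS_CHARACTERSET', 'NLS_LENGTH_SEMANTICS');
    A multibyte NLS_CHARACTERSET (e.g. AL32UTF8) combined with BYTE length semantics is the usual recipe for "actual: 41, maximum: 40" on data that appears to fit.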

  • 'Value too large for column' error in msql

    In my 9i Lite database I have a table with a LONG column.
    The documentation says a LONG column can hold up to 2 GB of data.
    However, when I try to insert more than 4,097 characters into the column, I get the error 'Value too large for column' on insert.
    Is this a bug, is the documentation wrong, or am I doing something wrong?
    Any help would be much appreciated.
    Any help would be much appreciated

    You have run into a bug in the handling of intermediate results in Oracle 9i Lite. You can bypass the bug in Java as follows:
    import java.io.IOException;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public static oracle.lite.poljdbc.BLOB createBlob(Connection conn, byte[] data)
            throws SQLException, IOException {
        // Create the BLOB against the Lite connection and stream the data in.
        oracle.lite.poljdbc.BLOB blob = new oracle.lite.poljdbc.BLOB(
                (oracle.lite.poljdbc.OracleConnection) conn);
        OutputStream writer = blob.getBinaryOutputStream();
        writer.write(data);
        writer.flush();
        writer.close();
        return blob;
    }

    public static void insertRow(Connection conn, int num, oracle.lite.poljdbc.BLOB blob)
            throws SQLException {
        // Bind the BLOB instead of passing the large value inline.
        PreparedStatement ps = conn.prepareStatement(
                "insert into TEST_BLOB values (?, ?)");
        ps.setInt(1, num);
        ps.setBlob(2, blob);
        ps.execute();
    }

  • Replicat error: ORA-12899: value too large for column ...

    Hi,
    In our system, source and target are on the same physical server and in the same Oracle instance, just in different schemas.
    The tables on the target were created with 'create table ... as select * from ... source_table', so they have the same structure, and the table names are also similar.
    I started the replicat and it worked fine for several hours, but when I inserted Chinese characters into the source table I got an error:
    WARNING OGG-00869 Oracle GoldenGate Delivery for Oracle, OGGEX1.prm: OCI Error ORA-12899: value too large for column "MY_TARGET_SCHEMA"."TABLE1"."*FIRSTNAME*" (actual: 93, maximum: 40) (status = 12899), SQL <INSERT INTO "MY_TARGET_SCHEMA"."TABLE1" ("USERID","USERNAME","FIRSTNAME","LASTNAME",....>.
    FIRSTNAME is a Varchar2(40 char) field.
    I suspect the problem is that our database is running with NLS_LENGTH_SEMANTICS='CHAR'.
    I've double-checked the table structure on the target; it is identical to the source.
    I also tried to manually insert this record into the target table using an 'insert into ... select * from ...' statement, and it works. The problem seems to be in the replicat.
    How can I fix this error?
    Thanks in advance!
    Oracle GoldenGate version: 11.1.1.1
    Oracle Database version: 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    NLS_LANG: AMERICAN_AMERICA.AL32UTF8
    NLS_LENGTH_SEMANTICS='CHAR'
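    A hedged check that the length semantics are what you think on the apply side (ALL_TAB_COLUMNS exposes this as CHAR_USED: 'C' for CHAR, 'B' for BYTE):
    select column_name, data_type, data_length, char_length, char_used
    from all_tab_columns
    where owner = 'MY_TARGET_SCHEMA'
      and table_name = 'TABLE1'
      and column_name = 'FIRSTNAME';
    If CHAR_USED is 'B', 40 bytes is exactly the limit the error reports once multibyte data expands; if it is 'C', the byte limit would be far higher, which would point at the replicat's character-set conversion rather than the table definition.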

    I've created the definition files and compared them. They are absolutely identical apart from the source and target schema names:
    Source definition file:
    Definition for table MY_SOURCE_SCHEMA.TABLE1
    Record length: 1632
    Syskey: 0
    Columns: 30
    USERID 134 11 0 0 0 1 0 8 8 8 0 0 0 0 1 0 1 3
    USERNAME 64 80 12 0 0 1 0 80 80 0 0 0 0 0 1 0 0 0
    FIRSTNAME 64 160 98 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    LASTNAME 64 160 264 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    PASSWORD 64 160 430 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    TITLE 64 160 596 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    Target definition file:
    Definition for table MY_TARGET_SCHEMA.TABLE1
    Record length: 1632
    Syskey: 0
    Columns: 30
    USERID 134 11 0 0 0 1 0 8 8 8 0 0 0 0 1 0 1 3
    USERNAME 64 80 12 0 0 1 0 80 80 0 0 0 0 0 1 0 0 0
    FIRSTNAME 64 160 98 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    LASTNAME 64 160 264 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    PASSWORD 64 160 430 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0
    TITLE 64 160 596 0 0 1 0 160 160 0 0 0 0 0 1 0 0 0

  • Update trigger fails with value too large for column error on timestamp

    Hello there,
    I've got a problem with several update triggers that monitor a set of tables.
    On each update, the incoming data is compared with the current values in the table columns.
    If different values are detected, the update timestamp is set to current_timestamp. That way we have a timestamp that reflects real changes in the relevant data. An example of that kind of trigger is attached below; the triggers on the monitored tables differ only in the columns that are compared.
    CREATE OR REPLACE TRIGGER T_ava01_obj_cont
    BEFORE UPDATE on ava01_obj_cont
    FOR EACH ROW
    DECLARE
      v_changed  boolean := false;
    BEGIN
      IF NOT v_changed THEN
        v_changed := (:old.cr_adv_id IS NULL AND :new.cr_adv_id IS NOT NULL) OR
                     (:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NULL) OR
                     (:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NOT NULL AND :old.cr_adv_id != :new.cr_adv_id);
      END IF;
      IF NOT v_changed THEN
        v_changed := (:old.is_euzins_relevant IS NULL AND :new.is_euzins_relevant IS NOT NULL) OR
                     (:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NULL) OR
                     (:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NOT NULL AND :old.is_euzins_relevant != :new.is_euzins_relevant);
      END IF;
    [.. more values being compared ..]
      IF v_changed THEN
        :new.update_ts := current_timestamp;
      END IF;
    END T_ava01_obj_cont;
    The really relevant statement is
    :new.update_ts := current_timestamp;
    So far so good. The problem is, it works most of the time, but sometimes it fails with the following error:
    SQL state [72000]; error code [12899]; ORA-12899: value too large for column "LGT_CLASS_AVALOQ"."AVA01_OBJ_CONT"."UPDATE_TS"
    (actual: 28, maximum: 11)
    I can't see how the value of systimestamp or current_timestamp (I tried both) could be too large for a column defined as TIMESTAMP(6). We've got tables where more updates occur than elsewhere, and that's where most of the errors pop up. Other tables with fewer updates show the error only sporadically, or never. I can't see any pattern; it's as if roughly every 10,000th update fails.
    I was desperate enough to try a language-dependent transformation like
    -- l_update_date and l_timestamp_format are declared as VARCHAR2 in the trigger's DECLARE section
    IF v_changed THEN
      l_update_date := systimestamp || '';
      SELECT value INTO l_timestamp_format
        FROM nls_database_parameters
       WHERE parameter = 'NLS_TIMESTAMP_TZ_FORMAT';
      :new.update_ts := to_timestamp_tz(l_update_date, l_timestamp_format);
    END IF;
    to be sure the format is right. It didn't change a thing.
    We are using Oracle version 10.2.0.4.0 Production.
    Has anyone encountered this kind of behaviour and solved it? I'm now fairly certain that it has to be an Oracle bug. What is the forum's opinion? Would you suggest filing a bug report?
    Thanks in advance for your help.
    Kind regards
    Jan
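    As an aside, a hedged refactor sketch for the comparison boilerplate in the trigger above; the helper is NULL-safe and equivalent to each IF block (the function name is hypothetical):
    create or replace function is_changed(p_old in varchar2, p_new in varchar2)
      return boolean
    is
    begin
      -- changed if exactly one side is NULL, or both are non-NULL and differ
      return (p_old is null and p_new is not null)
          or (p_old is not null and p_new is null)
          or (p_old != p_new);
    end is_changed;
    /
    Each block then collapses to v_changed := v_changed OR is_changed(:old.cr_adv_id, :new.cr_adv_id); with overloads for NUMBER and DATE columns to avoid implicit conversions.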

    Could you please edit your post and use formatting and tags? This is pretty much unreadable, and the forum boogered up some of your code.
    Instructions are here: http://forums.oracle.com/forums/help.jspa

  • EXEC SQL Error: ORA-01401: Inserted value too large for column

    I am getting the SQL error ORA-01401: Inserted value too large for column, on an EXEC SQL insert statement when writing to an external Oracle DB from SAP.
    On further analysis, it appears this happens to fields where the string length exactly matches the field length.
    Example: the Plant field is defined as char(4) on the SAP side and as Varchar2(4) on the Oracle side.
    When a value like '1015' is passed through a variable in the insert statement, the ORA-01401 error pops up.
    There is no error:
       - if the value '1015' is passed directly in the insert statement to the external table, or
       - when a value of 3 characters or fewer (like the first three characters, 101) is passed through a variable defined as 'Plant(4) type c',
       - when using EXEC SQL within SAP,
       - when reading from the external DB table.
    This was working fine until Oracle patch P9147110 was installed recently.
    Any suggestions?

    Hello Dvas,
    What is the character set of your external database?
    What is the column definition in your external database (byte- or character-based)?
    If you use a character set like UTF8, it is possible that one character needs more than one byte, and then you will run into this kind of issue if the definition is too small.
    Regards
    Stefan
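    A hedged illustration of Stefan's point: LENGTH counts characters while LENGTHB counts bytes, and in a UTF8 database the two can differ even for a 4-character value (the sample string is hypothetical):
    select length('10ß5')  as char_count,   -- 4
           lengthb('10ß5') as byte_count    -- 5 in AL32UTF8, since ß needs two bytes
    from dual;
    A Varchar2(4) column with byte semantics rejects such a value even though it is only 4 characters long.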

  • Snapshot refresh error:  ora-01401 inserted value too large for column

    I get the error ORA-01401 "inserted value too large for column" when I try to refresh a group at the materialized view site.
    My model is one master replicating to a read-only materialized view site. I have two refresh groups for separate sets of tables. One refresh group works fine; with the other I get the above error.
    I have doubled the rbs and system tablespaces without any improvement, thinking that I must be running out of default rollback segment space.
    Has anyone seen this before?

    The error is related to a field, not to any tablespace. This normally happens to me when I change the length or precision of a field in the base tables. The structure changes don't "flow" to the materialized view! I must "regenerate" it, normally by dropping and creating it again so it picks up the new length of that field.
    Sometimes, when the changed field is not part of any primary key, I have changed the field directly in the materialized view, as if it were a normal table.
    Hope this helps
    Luis
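    A hedged sketch of that workaround (all object and database-link names are hypothetical):
    -- Recreate the materialized view so it picks up the new base-table column length:
    drop materialized view mv_orders;
    create materialized view mv_orders
      refresh fast with primary key
      as select * from orders@master_site;
    Dropping the MV leaves it empty until the first refresh completes, so schedule this in a quiet window.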

  • Getting error ORA-01401: inserted value too large for column

    Hello,
    I have configured an IDoc-to-JDBC scenario. In SXMB_MONI I get the success message, but in the Adapter Monitor I get the error message
    ORA-01401: inserted value too large for column, and the entries are not inserted into the table. I suspect this is because of the date format: in the Oracle table the date field is defined in the format '01-JAN-2005', and I am passing the INVOICE_DATE and INVOICE_DUE_DATE fields in the same format. Please see the target structure.
    <?xml version="1.0" encoding="UTF-8" ?>
    - <ns:INVOICE_INFO_MT xmlns:ns="http://sap.com/xi/InvoiceIDoc_Test">
    - <Statement>
    - <INVOICE_INFO action="INSERT">
    - <access>
      <INVOICE_ID>0090000303</INVOICE_ID>
      <INVOICE_DATE>01-Dec-2005</INVOICE_DATE>
      <INVOICE_DUE_DATE>01-Jan-2005</INVOICE_DUE_DATE>
      <ORDER_ID>0000000000011852</ORDER_ID>
      <ORDER_LINE_NUM>000010</ORDER_LINE_NUM>
      <INVOICE_TYPE>LR</INVOICE_TYPE>
      <INVOICE_ORGINAL_AMT>10000</INVOICE_ORGINAL_AMT>
      <INVOICE_OUTSTANDING_AMT>1000</INVOICE_OUTSTANDING_AMT>
      <INTERNAL_USE_FLG>X</INTERNAL_USE_FLG>
      <BILLTO>0004000012</BILLTO>
      <SHIPTO>40000006</SHIPTO>
      <STATUS_ID>O</STATUS_ID>
      </access>
      </INVOICE_INFO>
      </Statement>
      </ns:INVOICE_INFO_MT>
    Please let me know the possible solutions for fixing the error so the entries can be inserted into the table.
    Thanks in Advance!

    Hi Muthu,
    // inserted value too large for column
    When your Oracle insert throws this error, it implies that some value you are trying to insert into the table is larger than the allocated size.
    Just check the layout of your table and the respective size of each field on your Oracle client by using the command
    DESCRIBE <tablename>;
    and then verify it against the input. I don't think the problem is with the DATE format, because an invalid date format would have produced an error like
    "literal does not match format string"
    Hope this helps,
    Regards,
    Bhavesh
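    A hedged way to isolate the offending field: run the same values from the XML payload directly in SQL*Plus and trim one value at a time (the table and column names are taken from the target structure above, but the exact column list is an assumption):
    insert into INVOICE_INFO (INVOICE_ID, INVOICE_DATE, INVOICE_DUE_DATE, ORDER_ID, INVOICE_TYPE)
    values ('0090000303',
            to_date('01-Dec-2005', 'DD-Mon-YYYY'),
            to_date('01-Jan-2005', 'DD-Mon-YYYY'),
            '0000000000011852',
            'LR');
    If this reproduces ORA-01401, compare each literal's length against the DESCRIBE output; the 16-character ORDER_ID is a likely overflow candidate if its column is shorter.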

  • File_To_RT data truncation ODI error ORA-12899: value too large for column

    Hi,
    Could you please give me some idea of how to truncate source data that is longer than the maximum length before inserting it into the target table?
    Problem details:
    In my scenario I read data from a source .txt file and insert it into a target table. Suppose the source file data length exceeds the maximum column length of the target table. How can I truncate the data so that the migration succeeds and the ODI error "ORA-12899: value too large for column" is avoided?
    Thanks
    Anindya

    Bhabani wrote:
    In which step are you getting this error? If it is the loading step, then try increasing the length of that column in the datastore and use SUBSTR in the mapping expression.
    Hi Bhabani,
    You are right, it is the Loading SrcSet0 Load data step. I have increased the column length in the target table datastore and then applied the substring function, but the result is the same.
    If you meant increasing the length in the source file datastore, then please tell me which length: physical or logical?
    Thanks
    Anindya
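    A hedged sketch of the mapping expression Bhabani suggests, executed on the staging area (the datastore and column names are hypothetical, and 40 stands in for the target column's actual length):
    SUBSTR(SRC_FILE.ITEM_NAME, 1, 40)
    For this to help, the file datastore's column (both physical and logical length) and the C$/I$ staging columns must be long enough to hold the untruncated value, which is why the datastore length is increased first.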

  • Alter mount database failing: Intel SVR4 UNIX Error: 79: Value too large for defined data type

    Hi there,
    I am having a kind of weird issue with my Oracle Enterprise DB, which had been working perfectly since 2009. After some trouble with my network switch (the switch was replaced), the whole network came back and all subnet devices are functioning perfectly.
    The database is backed up to an NFS share, and Oracle is not starting in mount/alter etc.
    Here the details of my server:
    - SunOS 5.10 Generic_141445-09 i86pc i386 i86pc
    - Oracle Database 10g Enterprise Edition Release 10.2.0.2.0
    - 38TB disk space (plenty free)
    - 4GB RAM
    And when I attempt to start the db, here are the logs:
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
      processes                = 150
      shared_pool_size         = 209715200
      control_files            = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
      db_cache_size            = 104857600
      compatible               = 10.2.0
      log_archive_dest         = /opt/oracle/oradata/CATL/archive
      log_buffer               = 2867200
      db_files                 = 80
      db_file_multiblock_read_count= 32
      undo_management          = AUTO
      global_names             = TRUE
      instance_name            = CATL
      parallel_max_servers     = 5
      background_dump_dest     = /opt/oracle/admin/CATL/bdump
      user_dump_dest           = /opt/oracle/admin/CATL/udump
      max_dump_file_size       = 10240
      core_dump_dest           = /opt/oracle/admin/CATL/cdump
      db_name                  = CATL
      open_cursors             = 300
    PMON started with pid=2, OS id=10751
    PSP0 started with pid=3, OS id=10753
    MMAN started with pid=4, OS id=10755
    DBW0 started with pid=5, OS id=10757
    LGWR started with pid=6, OS id=10759
    CKPT started with pid=7, OS id=10761
    SMON started with pid=8, OS id=10763
    RECO started with pid=9, OS id=10765
    MMON started with pid=10, OS id=10767
    MMNL started with pid=11, OS id=10769
    Thu Nov 28 05:49:02 2013
    ALTER DATABASE   MOUNT
    Thu Nov 28 05:49:02 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Trying to start the db without mounting, it starts without issues:
    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area  343932928 bytes
    Fixed Size                  1280132 bytes
    Variable Size             234882940 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                2912256 bytes
    SQL>
    But when I try to mount or alter db:
    SQL> alter database mount;
    alter database mount
    ERROR at line 1:
    ORA-00205: error in identifying control file, check alert log for more info
    SQL>
    From the logs again:
    alter database mount
    Thu Nov 28 06:00:20 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Thu Nov 28 06:00:20 2013
    ORA-205 signalled during: alter database mount
    We have already checked everywhere in the system and engaged Oracle Support, without success. The control files are in place and were checked with strings; they are correct.
    Can somebody give me a clue, please?
    Maybe somebody has had a similar issue here.
    Thanks in advance.

    I did the touch to update the file dates, but no joy either.
    Here are further logs, in case they give a clue:
    Wed Nov 20 05:58:27 2013
    Errors in file /opt/oracle/admin/CATL/bdump/catl_j000_7304.trc:
    ORA-12012: error on auto execute of job 5324
    ORA-27468: "SYS.PURGE_LOG" is locked by another process
    Sun Nov 24 20:13:40 2013
    Starting ORACLE instance (normal)
    control_files = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
    Sun Nov 24 20:15:42 2013
    alter database mount
    Sun Nov 24 20:15:42 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Sun Nov 24 20:15:42 2013
    ORA-205 signalled during: alter database mount
