OPMN Failed to start: Value too large for defined data type

Hello,
I just restarted OPMN and it failed to start, with the following errors in opmn.log:
OPMN worker process exited with status 4. Restarting
/opt/oracle/product/IAS10g/opmn/logs/OC4J~home~default_island~1: Value too large for defined data type
Does anyone have an idea about the cause of this error? The server had been working normally for more than six months with periodic restarts...

Hi,
You could get error messages like that if you try to access a file larger than 2GB on a 32-bit OS. Do you have HUGE log files?
Regards,
Mathias
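
A quick way to check that is to look for log files that have crossed the 2 GB mark. A minimal sketch, using the path from the error message above (adjust it to your ORACLE_HOME):

    # list any OPMN/OC4J log larger than 2 GB (2147483648 bytes)
    ls -l /opt/oracle/product/IAS10g/opmn/logs | awk '$5 > 2147483648 {print $5, $NF}'

If one of the logs has grown past that size, rotating it away while OPMN is stopped and then restarting is the usual fix for this symptom.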

Similar Messages

  • Alter database mount failing: Intel SVR4 UNIX Error: 79: Value too large for defined data type

    Hi there,
    I am having a weird issue with my Oracle Enterprise database, which had been working perfectly since 2009. After some trouble with my network switch (the switch was replaced), the network came back and all subnet devices are functioning perfectly.
    This is an NFS share used for the Oracle database backup, and the database will not start in mount mode (alter database mount fails, etc.).
    Here the details of my server:
    - SunOS 5.10 Generic_141445-09 i86pc i386 i86pc
    - Oracle Database 10g Enterprise Edition Release 10.2.0.2.0
    - 38TB disk space (plenty free)
    - 4GB RAM
    When I attempt to start the database, here are the logs:
    Starting up ORACLE RDBMS Version: 10.2.0.2.0.
    System parameters with non-default values:
      processes                = 150
      shared_pool_size         = 209715200
      control_files            = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
      db_cache_size            = 104857600
      compatible               = 10.2.0
      log_archive_dest         = /opt/oracle/oradata/CATL/archive
      log_buffer               = 2867200
      db_files                 = 80
      db_file_multiblock_read_count= 32
      undo_management          = AUTO
      global_names             = TRUE
      instance_name            = CATL
      parallel_max_servers     = 5
      background_dump_dest     = /opt/oracle/admin/CATL/bdump
      user_dump_dest           = /opt/oracle/admin/CATL/udump
      max_dump_file_size       = 10240
      core_dump_dest           = /opt/oracle/admin/CATL/cdump
      db_name                  = CATL
      open_cursors             = 300
    PMON started with pid=2, OS id=10751
    PSP0 started with pid=3, OS id=10753
    MMAN started with pid=4, OS id=10755
    DBW0 started with pid=5, OS id=10757
    LGWR started with pid=6, OS id=10759
    CKPT started with pid=7, OS id=10761
    SMON started with pid=8, OS id=10763
    RECO started with pid=9, OS id=10765
    MMON started with pid=10, OS id=10767
    MMNL started with pid=11, OS id=10769
    Thu Nov 28 05:49:02 2013
    ALTER DATABASE   MOUNT
    Thu Nov 28 05:49:02 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Trying to start the database without mounting, it starts without issues:
    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area  343932928 bytes
    Fixed Size                  1280132 bytes
    Variable Size             234882940 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                2912256 bytes
    SQL>
    But when I try to mount or alter the database:
    SQL> alter database mount;
    alter database mount
    ERROR at line 1:
    ORA-00205: error in identifying control file, check alert log for more info
    SQL>
    From the logs again:
    alter database mount
    Thu Nov 28 06:00:20 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Thu Nov 28 06:00:20 2013
    ORA-205 signalled during: alter database mount
    We have already checked everywhere in the system and engaged Oracle Support as well, without success. The control files are in place and were checked with strings; they look correct.
    Can somebody give me a clue, please?
    Maybe somebody has had a similar issue here....
    Thanks in advance.

    I did the touch to update the file dates, but no joy either....
    Here are some further logs, which may give a clue:
    Wed Nov 20 05:58:27 2013
    Errors in file /opt/oracle/admin/CATL/bdump/catl_j000_7304.trc:
    ORA-12012: error on auto execute of job 5324
    ORA-27468: "SYS.PURGE_LOG" is locked by another process
    Sun Nov 24 20:13:40 2013
    Starting ORACLE instance (normal)
    control_files = /opt/oracle/oradata/CATL/control01.ctl, /opt/oracle/oradata/CATL/control02.ctl, /opt/oracle/oradata/CATL/control03.ctl
    Sun Nov 24 20:15:42 2013
    alter database mount
    Sun Nov 24 20:15:42 2013
    ORA-00202: control file: '/opt/oracle/oradata/CATL/control01.ctl'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    Sun Nov 24 20:15:42 2013
    ORA-205 signalled during: alter database mount
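
    Since "Error: 79" is EOVERFLOW on Solaris (a stat() result that does not fit into a 32-bit field), a few hedged checks on the NFS mount holding the control files may help narrow this down; the paths below come from the alert log above:

        ls -lL /opt/oracle/oradata/CATL/control01.ctl   # confirm the file is visible and note its size
        df -k /opt/oracle/oradata                       # identify which NFS mount serves the path
        nfsstat -m                                      # review the mount options negotiated after the switch was replaced

    If the mount options changed when the network was rebuilt (a different NFS version, for example), that would be consistent with stat() suddenly returning attributes (such as very large file IDs) that no longer fit the fields the caller expects.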

  • [SOLVED] Value too large for defined data type in Geany over Samba

    Some months ago Geany started to output an error with every attempt to open a file mounted over smbfs/cifs.
    The error was:
    Value too large for defined data type
    Now the error is solved thanks to a French user, Pierre, on Ubuntu's Launchpad:
    https://bugs.launchpad.net/ubuntu/+bug/ … comments/5
    The solution is to add these options to your smbfs/cifs mount options (in /etc/fstab, for example):
    ,nounix,noserverino
    It works on an up-to-date Arch Linux (2009-12-02).
    I've written it up on the ArchWiki too: http://wiki.archlinux.org/index.php/Sam … leshooting
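
    For reference, a minimal sketch of what such an fstab entry might look like (server, share, mount point and credentials file are placeholders):

        //server/share  /mnt/share  cifs  credentials=/etc/samba/credentials,nounix,noserverino  0  0

    The noserverino option makes the client generate its own inode numbers rather than use the server's, which avoids the 64-bit inode values that a 32-bit application like Geany cannot handle.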

    An update on the original bug. This is the direct link to launchpad bug 455122:
    https://bugs.launchpad.net/ubuntu/+sour … bug/455122

  • 'Value too large for defined data type' error while running flexanlg

    While trying to run flexanlg to analyze my access log file I have received the following error:
    Could not open specified log file 'access': Value too large for defined data type
    The command I was running is
    ${iPLANET_HOME}/extras/flexanlg/flexanlg -F -x -n "Web Server" -i ${TMP_WEB_FILE} -o ${OUT_WEB_FILE} -c hnrfeuok -t s5m5h5 -l h30c+5 -p ctl
    which should generate an HTML report of the web statistics.
    The file has approximately 7 million entries and is 2.3 GB in size.
    Ideas?

    I've concatenated several files together from my web servers because I wanted a single report; several separate reports based on individual web servers are of no use.
    I'm running iWS 6.1 SP6 on Solaris 10, on a zoned T2000
    SunOS 10 Generic_118833-23 sun4v sparc SUNW,Sun-Fire-T200
    Cheers
    Chris
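
    If flexanlg is a 32-bit binary without large-file support, anything over 2 GB will fail to open. A hedged workaround sketch (it assumes flexanlg will accept any file handed to it via -i, which is worth verifying first) is to check the binary and split the concatenated log below the limit:

        file ${iPLANET_HOME}/extras/flexanlg/flexanlg    # shows whether the binary is 32-bit
        split -b 1900m access access_part_              # each piece stays below the 2 GB limit

    Each piece can then be analysed in turn; whether the separate outputs can be merged back into the single report you want would still need testing.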

  • Mkisofs: Value too large for defined data type (file too large)

    Hi:
    Has anyone met this problem when using the mkisofs command?
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    Warning: creating filesystem that does not conform to ISO-9660.
    mkisofs 2.01 (sparc-sun-solaris2.10)
    Scanning iso
    Scanning iso/rac_stage1
    mkisofs: Value too large for defined data type. File iso/rac_stage3/Server.tar.gz is too large - ignoring
    Using RAC_S000 for /rac_stage3 (rac_stage2)
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Thanks!
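
    mkisofs is saying that Server.tar.gz is larger than the chosen ISO-9660 level can describe, so it skips the file. One workaround sketch (paths taken from the output above) is to split the archive below the limit and reassemble it after copying it off the media:

        split -b 1900m iso/rac_stage3/Server.tar.gz iso/rac_stage3/Server.tar.gz.part_
        # later, on the target system:
        # cat Server.tar.gz.part_* > Server.tar.gz

    Depending on the mkisofs build, a higher ISO level or UDF may also lift the limit, but that would need checking against the man page for version 2.01.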

  • Value too large for defined data type

    Hi,
    I have a Sun Netra t1 105. Sometimes when I try to start top, I get the error message in $SUBJECT.
    Does someone have a hint?
    Thanks in advance
    Tosh42
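
    This usually means some file a 32-bit utility stats or opens has grown past 2 GB. A hedged first check (these are just the usual suspects, not a confirmed cause here):

        ls -l /var/adm/utmpx /var/adm/wtmpx    # accounting files that grow without bound
        truss top 2>&1 | grep 'Err#79'         # shows which system call actually returns EOVERFLOW

    Whatever file the failing call points at is the one to rotate or truncate.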

  • (gdb)procfs: target_wait (wait_for_stop) : Value too large for defined ...

    When trying to debug a C++ program with gdb on Solaris 9, I get the following error.
    How can I fix this error? Please give me some help.
    Thanks.
    [UC]gdb nreUC100
    GNU gdb 6.0
    Copyright 2003 Free Software Foundation, Inc.
    GDB is free software, covered by the GNU General Public License, and you are
    welcome to change it and/or distribute copies of it under certain conditions.
    Type "show copying" to see the conditions.
    There is absolutely no warranty for GDB. Type "show warranty" for details.
    This GDB was configured as "sparc-sun-solaris2.9"...
    (gdb) l
    130 * 1 -
    131 * 99 -
    132 * DB Table : N/A
    133 ******************************************************************************/
    134 int
    135 main(int argc, char* argv[])
    136 {
    137 struct sigaction stSig;
    138
    139 stSig.sa_handler = sigHandler;
    (gdb)
    140 stSig.sa_flags = 0;
    141 (void) sigemptyset(&stSig.sa_mask);
    142
    143 sigaction(SIGSEGV, &stSig, 0);
    144
    145 if ((argc < 5) ||
    146 (strlen(argv[1]) != NRATER_PKG_ID_LEN) ||
    147 (strlen(argv[2]) != NRATER_SVC_ID_LEN) ||
    148 (strlen(argv[3]) != NRATER_PROC_ID_LEN) ||
    149 (isNumber(argv[4])))
    (gdb)
    150 {
    151 Usage(argv[0]);
    152
    153 return NRATER_EXCEPT;
    154 }
    155
    156 ST_PFNM_ARG stArg;
    157 memset(&stArg, 0x00, sizeof(stArg));
    158
    159 memcpy(stArg.strPkgID_, argv[1], NRATER_PKG_ID_LEN);
    (gdb) b 157
    Breakpoint 1 at 0x1a668: file nreUC100.cpp, line 157.
    (gdb) r 02 000001 000001 1
    Starting program: /UC/nreUC100 02 000001 000001 1
    couldn't set locale correctly
    procfs: target_wait (wait_for_stop) line 3931, /proc/19793: Value too large for defined data type.
    (gdb)

    Sorry, there are not too many gdb experts that monitor
    this forum. Assuming you are on Solaris, you can
    use the truss command to see what gdb is doing.
    First start gdb
    % gdb
    (gdb)
    Then in another window, attach truss to it.
    % pgrep gdb
    12345
    % truss -p 12345
    Then go back to gdb and run the program.
    Is the line number in the gdb error a line number
    in the gdb source code? Or is gdb complaining
    about a location in your application source code?
    If it's in your app, then looking at that line might
    help you figure out what's going on.
    Otherwise, you can always download the gdb source
    and grep for that error message and see what
    makes it happen.
    I found a similar problem where a user couldn't
    debug a setuid program.
    http://sources.redhat.com/ml/gdb-prs/2004-q1/msg00129.html
    Here is another similar warning that I found with google.
    http://www.omniorb-support.com/pipermail/omniorb-list/2005-May/026757.html
    Perhaps you are debugging a 32-bit program with a 64-bit gdb or vice versa?
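
    A quick way to check the 32-bit versus 64-bit theory mentioned above (the gdb path is an assumption; use which gdb to locate yours):

        file `which gdb`     # word size of the debugger build
        file /UC/nreUC100    # word size of the program being debugged
        isainfo -v           # instruction sets the running kernel supports

    If the two word sizes differ, installing or building a matching gdb is the first thing to try.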

  • /var/adm/utmpx: value too large for defined data type

    Hi,
    On a Solaris 10 machine I cannot use the last command to view login history, etc. It reports something like "/var/adm/utmpx: value too large for defined data type".
    The size of /var/adm/utmpx is about 2GB.
    I tried renaming the file to utmpx.0 and creating a new file using head utmpx.0 > utmpx, but after that the last command does not show any output. The new utmpx file does seem to be updating with new info, though, judging from its last-modified time.
    Is there a standard procedure to recreate the utmpx file once it grows too large? I couldn't find much in the man pages.
    Thanks in advance for any help

    The easiest way is to cat /dev/null to utmpx - this will clear out the file to 0 bytes but leave it intact.
    from the /var/adm/ directory:
    cat /dev/null > /var/adm/utmpx
    Some docs suggest going to single user mode to do this, or stopping the utmp service daemon first, but I'm not positive this is necessary. Perhaps someone has input on that aspect. I've always just sent /dev/null to utmpx and wtmpx without a problem.
    BTW - I believe "last" works with wtmpx, and "who" works with utmpx.
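
    A slightly more cautious variant of the same idea, for anyone who prefers to stop the utmpx daemon while truncating (service name as on Solaris 10; whether stopping it is strictly necessary is the open question noted above):

        svcadm disable svc:/system/utmp:default
        cat /dev/null > /var/adm/utmpx
        cat /dev/null > /var/adm/wtmpx          # optional: also resets the history that last(1) reads
        svcadm enable svc:/system/utmp:default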

  • FDPSTP failed due to ORA-12899 value too large for column

    Hi All,
    A user is facing this problem while running a concurrent program.
    The program completes, but with this error:
    fdpstp failed due to ora-12899 value too large for column
    Can anyone tell me the exact solution for this?
    RDBMS : 10.2.0.3.0
    Oracle Applications : 11.5.10.2

    User facing this problem while running the concurrent program; the program completes but with this error.
    Is this a seeded or custom concurrent program?
    fdpstp failed due to ora-12899 value too large for column. Can anyone tell me the exact solution for this?
    Was this working before? If yes, have any changes been done recently?
    Can other users run the same concurrent program with no issues?
    Please post the contents of the concurrent request log file here.
    Please ask your developer to open the file using Reports Builder and compile the report and run it (if possible) with the same parameters.
    OERR: ORA-12899 value too large for column %s (actual: %s, maximum: %s) [ID 287754.1]
    Thanks,
    Hussein

  • Install fails due to ORA-12899: value too large for column

    Hi,
    Our WCS 11g installation on Tomcat 7 fails with an "ORA-12899: value too large for column" error.
    As per the solution ticket https://support.oracle.com/epmos/faces/DocumentDisplay?id=1539055.1 we have to set "-Dfile.encoding=UTF-8" in Tomcat.
    We had already done this by setting the variable in catalina.bat in the Tomcat 7 bin directory, as shown below.
    But we still get the same error during installation.
    If anybody has faced this, let us know how you resolved it.
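
    For reference, the setting being described is a JAVA_OPTS entry; a minimal sketch, assuming a Unix-style Tomcat layout (on Windows the equivalent line in catalina.bat is set JAVA_OPTS=%JAVA_OPTS% -Dfile.encoding=UTF-8):

        # bin/setenv.sh
        JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8"
        export JAVA_OPTS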

    We were unable to install WCS on Tomcat 7, but on Tomcat 6, by specifying "-Dfile.encoding=UTF-8" in the Java options using "Tomcat Configure", it was successful.
    An alternative we found was to increase the value of the column itself.
    Using command
    ALTER TABLE csuser.systemlocalestring
    MODIFY value varchar2 (4000)

  • Update trigger fails with value too large for column error on timestamp

    Hello there,
    I've got a problem with several update triggers. I have several triggers monitoring a set of tables.
    Upon each update the updated data is compared with the current values in the table columns.
    If different values are detected the update timestamp is set with the current_timestamp. That
    way we have a timestamp that reflects real changes in relevant data. I attached an example for
    that kind of trigger below. The triggers on each monitored table only differ in the columns that
    are compared.
    CREATE OR REPLACE TRIGGER T_ava01_obj_cont
    BEFORE UPDATE on ava01_obj_cont
    FOR EACH ROW
    DECLARE
      v_changed  boolean := false;
    BEGIN
      IF NOT v_changed THEN
        v_changed := (:old.cr_adv_id IS NULL AND :new.cr_adv_id IS NOT NULL) OR
                     (:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NULL)OR
                     (:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NOT NULL AND :old.cr_adv_id != :new.cr_adv_id);
      END IF;
      IF NOT v_changed THEN
        v_changed := (:old.is_euzins_relevant IS NULL AND :new.is_euzins_relevant IS NOT NULL) OR
                     (:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NULL)OR
                     (:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NOT NULL AND :old.is_euzins_relevant != :new.is_euzins_relevant);
      END IF;
    [.. more values being compared ..]
        IF v_changed THEN
        :new.update_ts := current_timestamp;
      END IF;
    END T_ava01_obj_cont;
    Really relevant is the statement
        :new.update_ts := current_timestamp;
    So far so good. The problem is, it works most of the time. Only sometimes does it fail with the following error:
    SQL state [72000]; error code [12899]; ORA-12899: value too large for column "LGT_CLASS_AVALOQ"."AVA01_OBJ_CONT"."UPDATE_TS"
    (actual: 28, maximum: 11)
    I can't see how the value systimestamp or current_timestamp (I tried both) should be too large for
    a column defined as TIMESTAMP(6). We've got tables where more updates occur than elsewhere.
    That's where most of the errors pop up. Other tables with fewer updates show errors only
    sporadically or even never. I can't see any kind of error pattern. It's as if every 10,000th update
    or so fails.
    I was desperate enough to try a language-dependent transformation like
    IF v_changed THEN
        l_update_date := systimestamp || '';
        select value into l_timestamp_format from nls_database_parameters where parameter = 'NLS_TIMESTAMP_TZ_FORMAT';
        :new.update_ts := to_timestamp_tz(l_update_date, l_timestamp_format);
    END IF;
    to be sure the format is right. It didn't change a thing.
    We are using Oracle Version 10.2.0.4.0 Production.
    Did anyone encounter that kind of behaviour and solve it? I'm now pretty certain that it has to
    be an Oracle bug. What is the forum's opinion on that? Would you suggest filing a bug report?
    Thanks in advance for your help.
    Kind regards
    Jan
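
    One hedged check worth running first (owner, table and column names are taken from the error message above): confirm how UPDATE_TS is actually declared, since a plain TIMESTAMP has an internal length of 11 bytes, which matches the "maximum: 11" in the error:

        # run as the oracle OS user (or adapt the connect string); shows how the column is really declared
        echo "SELECT column_name, data_type, data_length FROM dba_tab_columns WHERE owner = 'LGT_CLASS_AVALOQ' AND table_name = 'AVA01_OBJ_CONT' AND column_name = 'UPDATE_TS';" | sqlplus -s / as sysdba

    If the column is a plain TIMESTAMP, it may be worth ruling out the implicit conversion from the TIMESTAMP WITH TIME ZONE value that current_timestamp and systimestamp return before filing a bug report.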

    Could you please edit your post and use formatting and tags? This is pretty much unreadable, and the forum mangled some of your code.
    Instructions are here: http://forums.oracle.com/forums/help.jspa

  • Exporting Page fails with ORA-1401 inserted value too large for column

    Hi Everyone,
    I have a client who is getting the following error when
    attempting to export a page using pageexp.cmd. A simple page
    works for them, but their main page does not. Here is the error:
    Extracting Portal Page Data for Export...
    begin
    ERROR at line 1:
    ORA-01401: inserted value too large for column
    ORA-06512: at "PORTAL30.WWUTL_POB_EXPORT", line 660
    ORA-06512: at "PORTAL30.WWUTL_POB_EXPORT", line 889
    ORA-06512: at line 5
    Has anyone seen this before?
    Is there any way we can narrow down why this occurs?
    There is no logging on this export option and the stored
    procedures used are wrapped.
    Any ideas?
    Thanks
    Oracle Portal Version: 3.0.9.8.0

    We had this problem.
    We talked to an Oracle person who said some portlets on a page had trouble exporting.
    Sure enough, after we deleted all the portlets (one at a time, to determine which one was giving us the problem; it turned out none of ours worked), the page exported and imported just fine.
    Hopefully this is being worked on...

  • Value too large for column "OIMDB"."UPA_FIELDS"."FIELD_NEW_VALUE"

    I am running OIM 9.1.0.1849.0 build 1849.0 on Windows Server 2003
    I see the following stack trace repeatedly in c:\jboss-4.0.3SP1\server\default\log\server.log
    I am hoping someone might be able to help me resolve this issue.
    Thanks in advance
    ...Lyall
    java.sql.SQLException: ORA-12899: value too large for column "OIMDB"."UPA_FIELDS"."FIELD_NEW_VALUE" (actual: 2461, maximum: 2000)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:745)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:966)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1170)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3339)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3423)
         at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeUpdate(WrappedPreparedStatement.java:227)
         at com.thortech.xl.dataaccess.tcDataBase.writePreparedStatement(Unknown Source)
         at com.thortech.xl.dataobj.PreparedStatementUtil.executeUpdate(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.insertUserProfileChangedAttributes(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.processUserProfileChanges(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.processAuditData(Unknown Source)
         at com.thortech.xl.audit.genericauditor.GenericAuditor.processAuditMessage(Unknown Source)
         at com.thortech.xl.audit.engine.AuditEngine.processSingleAudJmsEntry(Unknown Source)
         at com.thortech.xl.audit.engine.AuditEngine.processOfflineNew(Unknown Source)
         at com.thortech.xl.audit.engine.jms.XLAuditMessageHandler.execute(Unknown Source)
         at com.thortech.xl.schedule.jms.messagehandler.MessageProcessUtil.processMessage(Unknown Source)
         at com.thortech.xl.schedule.jms.messagehandler.AuditMessageHandlerMDB.onMessage(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor127.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at org.jboss.invocation.Invocation.performCall(Invocation.java:345)
         at org.jboss.ejb.MessageDrivenContainer$ContainerInterceptor.invoke(MessageDrivenContainer.java:475)
         at org.jboss.resource.connectionmanager.CachedConnectionInterceptor.invoke(CachedConnectionInterceptor.java:149)
         at org.jboss.ejb.plugins.MessageDrivenInstanceInterceptor.invoke(MessageDrivenInstanceInterceptor.java:101)
         at org.jboss.ejb.plugins.CallValidationInterceptor.invoke(CallValidationInterceptor.java:48)
         at org.jboss.ejb.plugins.AbstractTxInterceptor.invokeNext(AbstractTxInterceptor.java:106)
         at org.jboss.ejb.plugins.TxInterceptorCMT.runWithTransactions(TxInterceptorCMT.java:335)
         at org.jboss.ejb.plugins.TxInterceptorCMT.invoke(TxInterceptorCMT.java:166)
         at org.jboss.ejb.plugins.RunAsSecurityInterceptor.invoke(RunAsSecurityInterceptor.java:94)
         at org.jboss.ejb.plugins.LogInterceptor.invoke(LogInterceptor.java:192)
         at org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor.invoke(ProxyFactoryFinderInterceptor.java:122)
         at org.jboss.ejb.MessageDrivenContainer.internalInvoke(MessageDrivenContainer.java:389)
         at org.jboss.ejb.Container.invoke(Container.java:873)
         at org.jboss.ejb.plugins.jms.JMSContainerInvoker.invoke(JMSContainerInvoker.java:1077)
         at org.jboss.ejb.plugins.jms.JMSContainerInvoker$MessageListenerImpl.onMessage(JMSContainerInvoker.java:1379)
         at org.jboss.jms.asf.StdServerSession.onMessage(StdServerSession.java:256)
         at org.jboss.mq.SpyMessageConsumer.sessionConsumerProcessMessage(SpyMessageConsumer.java:904)
         at org.jboss.mq.SpyMessageConsumer.addMessage(SpyMessageConsumer.java:160)
         at org.jboss.mq.SpySession.run(SpySession.java:333)
         at org.jboss.jms.asf.StdServerSession.run(StdServerSession.java:180)
         at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(PooledExecutor.java:748)
         at java.lang.Thread.run(Thread.java:534)
    2008-09-03 14:32:43,281 ERROR [XELLERATE.AUDITOR] Class/Method: UserProfileRDGenerator/insertUserProfileChangedAttributes encounter some problems: Failed to insert change record in table UPA_FIELDS

    Thank you.
    Being the OIM noob that I am, I had no idea where to look.
    We do indeed have some user-defined fields of 4000 characters.
    I am now wondering if I can disable auditing, or maybe increase the size of the auditing database column?
    Also, I guess I should raise a defect against OIM, as the user interface should not allow the creation of a user field that auditing is unable to cope with.
    I also wonder whether the audit failures (other than causing lots of stack traces) cause any transaction failures due to transaction rollbacks.
    Edited by: lyallp on Sep 3, 2008 4:01 PM
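
    If widening the audit column is the route taken, the statement would be along these lines (a sketch only; the schema and column names come from the error message above, and altering OIM schema objects is something to clear with Oracle support first):

        # run as a DBA; widens the audit column to match the largest user-defined field
        echo "ALTER TABLE oimdb.upa_fields MODIFY (field_new_value VARCHAR2(4000));" | sqlplus -s / as sysdba

    Disabling auditing, as mentioned above, avoids touching the schema at all.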

  • Data Profiling - Value too large for column error

    I am running a data profile which completes with errors. The error being reported is ORA-12899: value too large for column (actual: 41, maximum: 40).
    I have checked the actual data in the table, and the maximum length is only 40 characters.
    Any ideas on how to solve this? Even though the job completes, no actual profile is done on the data because of the error.
    OWB version 11.2.0.1
    Log file below.
    Job     Rows Selected     Rows Inserted     Rows Updated     Rows Deleted     Errors     Warnings     Start Time     Elapsed Time     
    Profile_1306385940099                                   2011-05-26 14:59:00.0     106     
    Data profiling operations complete.                                             
    Redundant column analysis for objects complete in 0 s.                                              
    Redundant column analysis for objects.                                              
    Referential analysis for objects complete in 0.405 s.                                              
    Referential analysis for objects.                                              
    Referential analysis initialization complete in 8.128 s.                                             
    Referential analysis initialization.                                             
    Data rule analysis for object TABLE_NAME complete in 0 s.                                             
    Data rule analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.858 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.202 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.236 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.842 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.187 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.501 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.717 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.156 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.906 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.827 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.187 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.172 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.889 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.202 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.313 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 9.267 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 10.187 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 8.019 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 5.507 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 10.857 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Parameters                                             
    O82647310CF4D425C8AED9AAE_MAP_ProfileLoader                              1     2011-05-26 14:59:00.0     11     
    ORA-12899: value too large for column "SCHEMA"."O90239B0C1105447EB6495C903678"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    O68A16A57F2054A13B8761BDC_MAP_ProfileLoader                              1     2011-05-26 14:59:11.0     5     
    ORA-12899: value too large for column "SCHEMA"."O0D9332A164E649F3B4D05D045521"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    O78AD6B482FC44D8BB7AF8357_MAP_ProfileLoader                              1     2011-05-26 14:59:16.0     9     
    ORA-12899: value too large for column "SCHEMA"."OBF77A8BA8E6847B8AAE4522F98D6"."ITEM_NAME_2" (actual: 41, maximum: 40)                                             
    Parameters                                             
    OA79DF482D74847CF8EA05807_MAP_ProfileLoader                              1     2011-05-26 14:59:25.0     10     
    ORA-12899: value too large for column "SCHEMA"."OB0052CBCA5784DAD935F9FCF2E28"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    OFFE486BBDB884307B668F670_MAP_ProfileLoader                              1     2011-05-26 14:59:35.0     9     
    ORA-12899: value too large for column "SCHEMA"."O9943284818BB413E867F8DB57A5B"."ITEM_NAME_1" (actual: 42, maximum: 40)                                             
    Parameters

    Found the answer: it was the database character set. The database uses a multi-byte character set.
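
    To check the same thing on another database, the two settings to compare are the character set and the default length semantics; with a multi-byte character set, a 40-character string can occupy more than 40 bytes, which matches the "actual: 41, maximum: 40" in the log. A minimal sketch:

        # run as a user with access to the dictionary views
        echo "SELECT parameter, value FROM nls_database_parameters WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_LENGTH_SEMANTICS');" | sqlplus -s / as sysdba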

  • Java.sql.BatchUpdateException: ORA-12899: value too large for column.......

    Hi All,
    I am using SOA 11g (11.1.1.3). I am trying to insert data coming from a file into a table. I have encountered the following error.
    Exception occured when binding was invoked.
    Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'insert' failed due to: DBWriteInteractionSpec Execute Failed Exception.
    *insert failed. Descriptor name: [UploadStgTbl.XXXXStgTbl].*
    Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "XXXX"."XXXX_STG_TBL"."XXXXXX_XXXXX_TYPE" (actual: 20, maximum: 15)
    *The invoked JCA adapter raised a resource exception.*
    *Please examine the above error message carefully to determine a resolution.*
    The data type of the column that errored out is VARCHAR2(25). I found a related issue on Metalink: java.sql.BatchUpdateException (ORA-12899) Reported When DB Adapter Reads a Row From a Table it is Polling For Added Rows [ID 1113215.1].
    But the solution does not seem applicable in my case...
    Has anyone encountered the same issue? Is this a bug? If it is a bug, is there a patch for it?
    Please help me out...
    Thank you all...
    Edited by: 806364 on Dec 18, 2010 12:01 PM

    It didn't work.
    After I changed the length of that column in the source datastore (from 15 to 16), ODI created temporary tables (C$ and I$) with larger columns (16 instead of 15), but I got the same error message.
    I'm wondering why I have to extend the length of the source datastore in the source model if there are no values in the source table with a length greater than 15....
    Any other ideas? Thanks!
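
    Since the error complains about a value of 20 not fitting into 15 while no source value is longer than 15 characters, one hedged check is whether multi-byte data is inflating the byte length (the table and column names below are placeholders for the masked names in the error message):

        # compare character length with byte length on the staging column
        echo "SELECT MAX(LENGTH(some_type_col)) AS max_chars, MAX(LENGTHB(some_type_col)) AS max_bytes FROM some_stg_tbl;" | sqlplus -s / as sysdba

    If max_bytes exceeds the column's byte limit while max_chars does not, declaring the target column with CHAR length semantics (for example VARCHAR2(25 CHAR)) is the usual fix.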
