JDBC: Complete IDOC too large for Oracle DB

Dear all,
I receive IDOCs from an SAP system that should be stored in an Oracle DB.
The IDOC should be saved as a complete XML structure in a single DB field.
I have two problems:
1) The IDOC contains all fields. If a field is empty, it is transmitted as <ERSDA>/</ERSDA>.
     This makes the IDOC very large. This is caused by a filtering process in BD53 that
     has to be kept.
2) We have to save the complete IDOC in one DB field. So I used an XSL mapping to copy the
    complete source into one target field. The DB can only store 4000 characters in one field, but
    our IDOC has more characters than that.
So I have to either downsize the source IDOC (my preferred option) or split the target. But that would make it
very complex, I think.
Can someone give me a hint on filtering out the unnecessary fields in PI?
As an alternative, we could store every single segment of the IDOC in a separate row of the DB.
That would be easy if we read the IDOC data directly from the SAP tables, but with the IDOC as XML
it hardly seems feasible, does it?
Any hints?
Thanks
Chris
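The fallback mentioned above, splitting the payload across several rows, is at least mechanically simple. A minimal Python sketch of the idea (the 4000-character limit is from the post; the function name and keying scheme are hypothetical):

```python
def split_for_varchar2(xml_string, max_chars=4000):
    """Split one long IDOC XML string into chunks that each fit a
    VARCHAR2(4000) column; store each chunk as one row keyed by
    (IDOC number, sequence) so the document can be reassembled."""
    return [xml_string[i:i + max_chars]
            for i in range(0, len(xml_string), max_chars)]

# A 9500-character payload becomes 3 rows (4000 + 4000 + 1500 chars).
chunks = split_for_varchar2("x" * 9500)
```

Reassembly is then just concatenating the chunks in sequence order.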

Thanks for the replies.
@all:
Is there an option to get the IDOC the way it is saved in the SAP system in
table EDID4? There the IDOC is split into segments.
@Baskar
The target field is already of CLOB data type. But the admin told me that the maximum
number of characters is still limited to 4000.
@Raja
>Well, if you trigger IDoc from ECC, when IDoc reaches PI, it will not have empty tags; if it
>is coming then you have to work on why it is behaving like that.
It gets this / for fields that should not be changed. Our target system does not need this info,
but the SAP system will not change this behaviour.
>After IDoc reaches PI, convert the entire IDoc into one string; if you are on PI 7.1 then
>it is very easy, use the option Return As XML, it will convert the IDoc into one XML string.
Where do I use this option? I could not find it in the IDOC sender channel.
>IDoc(segment)-->anystring function-->DB Field(Target).
What does this mean? Do you filter the segments of an IDOC?
If I could split a complete IDOC into single segments, that would help.
Can you please explain in detail what you mean?
Thanks
Chris
Edited by: Christian Riekenberg on Mar 21, 2011 1:30 PM
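For the filtering question: one option is to drop the "/" placeholder elements in a mapping step before the message reaches the JDBC receiver. This is only a sketch of the idea in Python, not PI mapping code; the segment and field names are the ones from the post, and the function name is made up:

```python
import xml.etree.ElementTree as ET

def strip_empty_fields(idoc_xml):
    """Remove leaf elements whose only content is '/', the marker the
    SAP system sends for fields that should not be changed."""
    root = ET.fromstring(idoc_xml)

    def prune(parent):
        for child in list(parent):
            prune(child)
            # Drop the element if it has no children and its text is '/'.
            if len(child) == 0 and (child.text or "").strip() == "/":
                parent.remove(child)

    prune(root)
    return ET.tostring(root, encoding="unicode")

sample = "<E1MARAM><MATNR>4711</MATNR><ERSDA>/</ERSDA></E1MARAM>"
print(strip_empty_fields(sample))  # <E1MARAM><MATNR>4711</MATNR></E1MARAM>
```

The same recursive prune can be written as an XSLT identity transform with one extra template that matches `*[. = '/' and not(*)]` and emits nothing.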

Similar Messages

  • Oracle Error 1801 - "Date format is too large for internal format"

    We have an application deployed under WebLogic and are using the Oracle JDBC drivers to talk to the DB. It appears that when under heavy load and trying to hit a stored procedure we get "Action:EnrollParticipant,Error
    type:Application error,doing:writeEmpUpdate,dbcode:-1801,ssn:xxxxxxxxxx". The dbcode of 1801 is a "Date format is too large for internal format". Has anyone had this happen, or know what the solution might be?

  • Error while posting Invoice IDOC (The difference is too large for clearing)

    Hi All,
    While posting an Invoice IDOC, the Remittance Advice IDOC fails with status 51 and the message 'The difference is too large for clearing'.
    Please suggest a solution or a reason why the IDOC fails.
    Thanks & Regards,
    Ajay
    Moderator message: please search for information and try to find out yourself before asking, this will be a functional problem anyway that should be asked in the appropriate forum, e.g. ERP Financials.
    locked by: Thomas Zloch on Aug 20, 2010 1:59 PM

    Please check the tolerance amount limits:
    SPRO -> Financial Accounting -> AR/AP ->
    Business Transactions -> Open Item Clearing -> Clearing Differences -> Define/Assign Tolerance Groups for Employees
    There you need to change the maximum amount limits and the percentages.
    Let me know if you need any more info.
    Regards
    Suresh

  • Oracle : ORA-12899: value too large for column

    Hi Experts,
    I am loading multibyte data from a fixed-width flat file into an Oracle database (with a UTF8 characterset) via Informatica. I have set UTF8 as the characterset in both source and target definitions.
    Source flat file data: Münchener (this flat file data was loaded from an external Oracle database where the data looks like Münchener)
    When I load the data I am getting below error
    ORA-12899: value too large for column "schema_name"."table"."column" (actual: 513, maximum: 512)
    I know we can declare the data type as varchar2(512 char) instead of varchar2(512 byte). Please let me know the other solution to load multibyte data into target utf8 database.

    You answered your own question, and there isn't another solution. You need to widen that column:
    alter table "schema_name"."table" modify ("column" varchar2(513)); ---Though you should increase it to the maximum length that column will ever need. If you don't know, pad it high. Oracle is very good at handling the space with the varchar2 datatype.
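The byte-versus-character distinction behind this error is easy to check outside the database; a quick Python illustration, assuming UTF-8 encoding as in the setup above:

```python
# In UTF-8 the 'ü' takes 2 bytes, so the byte length of the string
# exceeds its character length; VARCHAR2(512 BYTE) counts bytes,
# VARCHAR2(512 CHAR) counts characters.
word = "Münchener"
print(len(word))                  # 9 characters
print(len(word.encode("utf-8")))  # 10 bytes
```

With enough multibyte characters, a value well under 512 characters can still exceed a 512-byte column, which is exactly what "actual: 513, maximum: 512" is reporting.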

  • IDOC status: 51, The difference is too large for clearing

    All,
    IDOC status: 51, the difference is too large for clearing. Can somebody throw some light on this?
    -Rajani Sateesh

    Hi,
    Thanks for your response.
    External transaction type 165(+), posting rule 0001, interpretation algorithm 020
                                             508(-), posting rule 0002, interpretation algorithm 020
    Waiting for your reply
    Regards,
    Durgasankar

  • Oracle 11g 64 bit - "Value too large for column" when setting Varchar2

    Hello guys,
    I have a machine running Oracle 11g, 64-bit, and a table that contains a VARCHAR2(2000) field.
    When I try to set the value of this field to a string that contains double-byte characters, I get this error:
    ORA-12899: value too large for column "QAPBG1220_11"."MYTABLE"."MYFIELD" (actual: 2433, maximum: 2000).
    Although the value I'm setting is only 811 characters (€ signs).
    The weird thing is that when I run the same query on another PC with Oracle 11g 32-bit, it runs normally and the values are updated!
    Anyone has any idea about this? It's driving me crazy
    Thanks in advance
    Zahraa

    create table MYTABLE (
    MYTABLEID NUMBER(10) not null,
    MYFIELD VARCHAR2(2000)
    );
    alter table MYTABLE
    add constraint PK_MYTABLE primary key (MYTABLEID);
    INSERT INTO MYTABLE (Mytableid, Myfield) VALUES(1, '€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€fds€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€
€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€€')
    COMMIT;
    On the 32-bit machine this works fine: I get the record with the values 1 and 2000 euro signs.
    On 64-bit, one machine (Oracle 11.2.0.1.0) adds the row, but when I view it the value shows as "????",
    and another machine (Oracle 11.1.0.7.0) throws an error:
    - "String literal is too long": if there are more than 1333 euro characters
    - "Value too large for column ...": if there are fewer than 1333 but more than 666 characters.
    Any ideas?
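The numbers in the error message line up exactly with UTF-8 byte lengths, which suggests the 64-bit client is sending UTF-8 against a column with byte semantics; a quick check (the encoding is an assumption on my part):

```python
# '€' (U+20AC) encodes to 3 bytes in UTF-8, so 811 euro signs need
# 811 * 3 = 2433 bytes -- matching "actual: 2433, maximum: 2000".
euro = "\u20ac"
assert len(euro.encode("utf-8")) == 3
print(len((euro * 811).encode("utf-8")))  # 2433
```

The 666/1333 thresholds fit the same arithmetic: 666 * 3 = 1998 bytes is the last count that fits in 2000 bytes.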

  • Oracle JSP Exception code too large for try block

    My JSP is exceeding the method size limit (64 KB for the Oracle 10g App Server; I think it's a JVM limit),
    so it is throwing the exception "code too large for try block". How can I overcome this? I can't minimize the use of logic tags because of a business requirement. Please help me out.

    I think you need to give a value to the attribute buffer in the <%@ page%> directive to solve the problem.
    There will not be any need of going for pagination then.
    buffer="none | 8kb | sizekb": The buffer size in kilobytes used by the out object to handle output sent from the compiled JSP page to the client Web browser. The default value is 8kb. If you specify a buffer size, the output is buffered with at least the size you specified.
    check the documentation at
    http://java.sun.com/products/jsp/tags/11/syntaxref11.fm7.html
    Uday

  • JDBC THEME-MAPVIEWER-05517 Request string is too long for Oracle Maps' non-AJAX remoting

    hi,
    if I need a quite complex query to be added to dynamic JDBC theme I get this error:
    [MAPVIEWER-05517] Request string is too long for Oracle Maps' non-AJAX remoting.
    -why? I am using Oracle Maps JS API so it is AJAX remoting, or not?
    -what is the limit of a JDBC theme definition?
    regards,
    Brano

    hi,
    yes, having look at MVMapView.enableXMLHTTP(true) in doc explains a lot...
    thanks,
    Brano

  • The message I get is: Time Machine could not complete the backup. This backup is too large for the backup disk. The backup requires 111.27 GB but only 42.1 GB are available.

    I have a problem with my Time Capsule.  The message I get is: "Time Machine could not complete the backup. This backup is too large for the backup disk. The backup requires 111.27 GB but only 42.1 GB are available." As a result, my backups are no longer running. My understanding was that the Time Capsule would automatically delete old backups to make space. Can anyone help me figure out how to get my backups to run again?

    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Enter the word "Starting" (without the quotes) in the String Matching text field. You should now see log messages with the words "Starting * backup," where * represents any of the words "automatic," "manual," or "standard." Note the timestamp of the last such message. Clear the text field and scroll back in the log to that time. Select the messages timestamped from then until the end of the backup, or the end of the log if that's not clear. Copy them (command-C) to the Clipboard. Paste (command-V) into a reply to this message.
    If there are runs of repeated messages, post only one example of each. Don't post many repetitions of the same message.
    When posting a log extract, be selective. Don't post more than is requested.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Some personal information, such as the names of your files, may be included — anonymize before posting.

  • Time Machine completes once, then says "too large for backup disk"

    Hello all,
    I'm a very experienced, certified Apple tech, with over 15 years' experience working on Macs, having used them since 1984.
    Client's iMac (27-inch, Late 2009)
    Mac OS X 10.6.8, fully up to date
    12 GB RAM
    History: Internal 1 TB hard drive was failing. Mac out of warranty, so HDD was replaced with a new 1 TB hard drive.
    Data was cloned from old drive using SuperDuper!
    Now, Time Machine will not back up more than once.
    The error reads:
    "Time Machine could not complete the backup.
    This backup is too large for the backup disk.
    The backup requires 728.32 GB but only 568.57 GB are available."
    The backup drive is 1.5x the size of the internal drive. There are no other hard drives attached that would be included in the backup.
    Internal hard drive: 1 TB
    Available: 392 GB
    Used 608 GB
    External backup drive 1.5 TB
    Available 568.57 GB
    Used 931.39 GB
    I have tried many steps from the pondini.org site and other blogs and internet forums, including this one.
    External hard drive has been reformatted/erased several times.
    We have disabled  indexing, deleted the Spotlight index, and re-enabled indexing.
    I can't determine why the Time Machine backup is 1.5 times the size of the data on the internal boot drive.
    This was never a problem before the internal hard drive was replaced, so I think it's related to that.
    I had the client install BackupLoupe. Her computer is not at our office, but I can get more info from her if needed.
    If there's something obvious I'm missing, let me know.
    Thanks in advance,
    Dave

    That's been done, and here are the results:
    July 13 6:22am
    Starting standard backup
    Backing up to: /Volumes/a***r/Backups.backupdb
    Node requires deep traversal:/ reason:must scan subdirs|
    No pre-backup thinning needed: 678.09 GB requested (including padding),
    831.97 GB available
    Copied 122.0 GB of 564.1 GB, 246633 of 523955 items
    Copied 294.4 GB of 564.1 GB, 320260 of 523955 items
    CoreEndianFlipData: error -4940 returned for rsrc type open (id 128, length
    12, native = no)
    CoreEndianFlipData: error -4940 returned for rsrc type open (id 128, length
    12, native = no)
    Copied 444.4 GB of 564.1 GB, 453652 of 523955 items
    Copied 523956 files (554.8 GB) from volume Macintosh HD.
    Starting post-backup thinning
    No post-back up thinning needed: no expired backups exist
    Backup completed successfully.
    July 13 6:42am
    Starting standard backup
    Backing up to: /Volumes/a***r/Backups.backupdb
    Node requires deep traversal:/ reason:must scan subdirs|
    Starting pre-backup thinning: 678.15 GB requested (including padding),
    268.78 GB available
    No expired backups exist - deleting oldest backups to make room
    Deleted backup /Volumes/a***r/Backups.backupdb/iMac/2013-07-13-021422: 831.05 GB now available
    Pre-backup thinning completed successfully: 1 backups were deleted
    Backup date range was shortened: oldest backup is now Jul 13, 2013
    Copied 69.2 GB of 564.1 GB, 228921 of 523955 items
    Copied 231.8 GB of 564.1 GB, 294537 of 523955 items
    CoreEndianFlipData: error -4940 returned for rsrc type open (id 128, length
    12, native = no)
    CoreEndianFlipData: error -4940 returned for rsrc type open (id 128, length
    12, native = no)
    Copied 379.7 GB of 564.1 GB, 409687 of 523955 items
    Copied 543.7 GB of 564.1 GB, 517846 of 523955 items
    Copied 523967 files (554.8 GB) from volume Macintosh HD.
    July 13 11:02am
    Starting standard backup
    Backing up to: /Volumes/a***r/Backups.backupdb
    Node requires deep traversal:/ reason:must scan subdirs|
    Starting pre-backup thinning: 678.15 GB requested (including padding),
    268.78 GB available
    No expired backups exist - deleting oldest backups to make room
    Deleted backup /Volumes/a***r/Backups.backupdb/iMac/2013-07-13-021422: 831.05 GB now available
    Pre-backup thinning completed successfully: 1 backups were deleted
    Backup date range was shortened: oldest backup is now Jul 13, 2013
    Copied 69.2 GB of 564.1 GB, 228921 of 523955 items
    Copied 231.8 GB of 564.1 GB, 294537 of 523955 items
    CoreEndianFlipData: error -4940 returned for rsrc type open (id 128, length
    12, native = no)
    CoreEndianFlipData: error -4940 returned for rsrc type open (id 128, length
    12, native = no)
    Copied 379.7 GB of 564.1 GB, 409687 of 523955 items
    Copied 543.7 GB of 564.1 GB, 517846 of 523955 items
    Copied 523967 files (554.8 GB) from volume Macintosh HD.
    Backup completed successfully.
    July 13 3:07pm
    Starting standard backup
    Backing up to: /Volumes/a***r/Backups.backupdb
    Node requires deep traversal:/ reason:must scan subdirs|
    Starting pre-backup thinning: 679.38 GB requested (including padding),
    267.58 GB available
    No expired backups exist - deleting oldest backups to make room
    Deleted backup /Volumes/a***r/Backups.backupdb/iMac/2013-07-13-062230: 829.87 GB now available
    Pre-backup thinning completed successfully: 1 backups were deleted
    Backup date range was shortened: oldest backup is now Jul 13, 2013
    Copied 80.4 GB of 564.1 GB, 235855 of 523977 items
    Copied 218.1 GB of 564.1 GB, 289940 of 523977 items
    CoreEndianFlipData: error -4940 returned for rsrc type open (id 128, length
    12, native = no)
    CoreEndianFlipData: error -4940 returned for rsrc type open (id 128, length
    12, native = no)
    Copied 347.4 GB of 564.1 GB, 395287 of 523977 items
    Stopping backupd to allow ejection of backup destination disk!
    Copied 395788 files (353.7 GB) from volume Macintosh HD.
    Backup canceled.
    July 14 11:02am
    Starting standard backup
    Backing up to: /Volumes/a***r/Backups.backupdb
    Node requires deep traversal:/ reason:must scan subdirs|
    Starting pre-backup thinning: 680.07 GB requested (including padding),
    470.83 GB available
    No expired backups exist - deleting oldest backups to make room
    Error: backup disk is full - all 0 possible backups were removed, but space
    is still needed.
    Backup Failed: unable to free 680.07 GB needed space
    Backup failed with error: Not enough available disk space on the target
    volume.
    July 14 12:48pm
    Starting standard backup
    Backing up to: /Volumes/a***r/Backups.backupdb
    Node requires deep traversal:/ reason:must scan subdirs|
    Starting pre-backup thinning: 680.07 GB requested (including padding),
    470.83 GB available
    No expired backups exist - deleting oldest backups to make room
    Error: backup disk is full - all 0 possible backups were removed, but space
    is still needed.
    Backup Failed: unable to free 680.07 GB needed space
    Backup failed with error: Not enough available disk space on the target
    volume.

  • My time Machine keeps saying, "Time Machine could not complete the backup. This backup is too large for the backup disk. The backup requires 345.74 GB but only 289.80 are available." I have already excluded files. I have a 1tb external drive. HELP!!!

    For over two weeks now I have been frustrated by my Time Machine not backing up to my 1 TB external drive. I don't understand why it's a problem now. It keeps saying:
    "This backup is too large for the backup disk. The backup requires 345.74 GB but only 289.80 GB are available. Time Machine needs work space on the backup disk, in addition to the space required to store backups. Open Time Machine preferences to select a larger backup disk or make the backup smaller by excluding files." So I have already excluded almost all of my files and even deleted the backup disk, yet that message keeps popping up. I am truly at a wall with this. I have Mac OS X version 10.7.5. Can someone help me, please?

    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Enter the word "Starting" (without the quotes) in the String Matching text field. You should now see log messages with the words "Starting * backup," where * represents any of the words "automatic," "manual," or "standard." Note the timestamp of the last such message. Clear the text field and scroll back in the log to that time. Select the messages timestamped from then until the end of the backup, or the end of the log if that's not clear. Copy them (command-C) to the Clipboard. Paste (command-V) into a reply to this message.
    If there are runs of repeated messages, post only one example of each. Don't post many repetitions of the same message.
    When posting a log extract, be selective. Don't post more than is requested.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Some personal information, such as the names of your files, may be included — anonymize before posting.

  • Getting error ORA-01401: inserted value too large for column

    Hello,
    I have configured an IDOC-to-JDBC scenario. In SXMB_MONI I am getting the success message, but in the Adapter Monitor I am getting the error
    ORA-01401: inserted value too large for column, and the entries are not inserted into the table. I suspect this is because of the date format. In the Oracle table the date field is defined in the format '01-JAN-2005', and I am passing the date fields in the same format for INVOICE_DATE and INVOICE_DUE_DATE. Please see the target structure.
    <?xml version="1.0" encoding="UTF-8" ?>
    - <ns:INVOICE_INFO_MT xmlns:ns="http://sap.com/xi/InvoiceIDoc_Test">
    - <Statement>
    - <INVOICE_INFO action="INSERT">
    - <access>
      <INVOICE_ID>0090000303</INVOICE_ID>
      <INVOICE_DATE>01-Dec-2005</INVOICE_DATE>
      <INVOICE_DUE_DATE>01-Jan-2005</INVOICE_DUE_DATE>
      <ORDER_ID>0000000000011852</ORDER_ID>
      <ORDER_LINE_NUM>000010</ORDER_LINE_NUM>
      <INVOICE_TYPE>LR</INVOICE_TYPE>
      <INVOICE_ORGINAL_AMT>10000</INVOICE_ORGINAL_AMT>
      <INVOICE_OUTSTANDING_AMT>1000</INVOICE_OUTSTANDING_AMT>
      <INTERNAL_USE_FLG>X</INTERNAL_USE_FLG>
      <BILLTO>0004000012</BILLTO>
      <SHIPTO>40000006</SHIPTO>
      <STATUS_ID>O</STATUS_ID>
      </access>
      </INVOICE_INFO>
      </Statement>
      </ns:INVOICE_INFO_MT>
    Please let me know what are all the possible solution to fix the error and to insert the entries in the table.
    Thanks in Advance!

    Hi Muthu,
    // inserted value too large for column
    When your Oracle insert throws this error, it implies that some value you are trying to insert into the table is larger than the allocated column size.
    Just check the format of your table and the size of each field on your Oracle client by using the command
    DESCRIBE <tablename>
    and then verify it against the input. I don't think the problem is the DATE format, because if it were not a valid date format you would have got an error like
    String literal does not match type
    Hope this helps,
    Regards,
    Bhavesh
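The check Bhavesh describes (compare the DESCRIBE output with the input) can also be scripted on the sending side before the insert is attempted. A hedged Python sketch; the column widths below are made-up examples and must be replaced with the real ones from DESCRIBE:

```python
# Hypothetical column widths; take the real ones from
# DESCRIBE INVOICE_INFO on the Oracle client.
MAX_WIDTHS = {"INVOICE_ID": 10, "ORDER_ID": 16, "INVOICE_TYPE": 2}

def oversized_fields(record, widths=MAX_WIDTHS):
    """Return the fields whose values would not fit their columns,
    i.e. the candidates for an ORA-01401/ORA-12899 error."""
    return [name for name, value in record.items()
            if name in widths and len(str(value)) > widths[name]]

record = {"INVOICE_ID": "0090000303", "ORDER_ID": "0000000000011852",
          "INVOICE_TYPE": "LRX"}   # one value deliberately too long
print(oversized_fields(record))    # ['INVOICE_TYPE']
```

Running this over a failing payload narrows the problem to specific fields instead of guessing at the date format.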

  • Value too large for column "OIMDB"."UPA_FIELDS"."FIELD_NEW_VALUE"

    I am running OIM 9.1.0.1849.0 build 1849.0 on Windows Server 2003
    I see the following stack trace repeatedly in c:\jboss-4.0.3SP1\server\default\log\server.log
    I am hoping someone might be able help me resolve this issue.
    Thanks in advance
    ...Lyall
    java.sql.SQLException: ORA-12899: value too large for column "OIMDB"."UPA_FIELDS"."FIELD_NEW_VALUE" (actual: 2461, maximum: 2000)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:745)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:966)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1170)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3339)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3423)
         at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeUpdate(WrappedPreparedStatement.java:227)
         at com.thortech.xl.dataaccess.tcDataBase.writePreparedStatement(Unknown Source)
         at com.thortech.xl.dataobj.PreparedStatementUtil.executeUpdate(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.insertUserProfileChangedAttributes(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.processUserProfileChanges(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.processAuditData(Unknown Source)
         at com.thortech.xl.audit.genericauditor.GenericAuditor.processAuditMessage(Unknown Source)
         at com.thortech.xl.audit.engine.AuditEngine.processSingleAudJmsEntry(Unknown Source)
         at com.thortech.xl.audit.engine.AuditEngine.processOfflineNew(Unknown Source)
         at com.thortech.xl.audit.engine.jms.XLAuditMessageHandler.execute(Unknown Source)
         at com.thortech.xl.schedule.jms.messagehandler.MessageProcessUtil.processMessage(Unknown Source)
         at com.thortech.xl.schedule.jms.messagehandler.AuditMessageHandlerMDB.onMessage(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor127.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at org.jboss.invocation.Invocation.performCall(Invocation.java:345)
         at org.jboss.ejb.MessageDrivenContainer$ContainerInterceptor.invoke(MessageDrivenContainer.java:475)
         at org.jboss.resource.connectionmanager.CachedConnectionInterceptor.invoke(CachedConnectionInterceptor.java:149)
         at org.jboss.ejb.plugins.MessageDrivenInstanceInterceptor.invoke(MessageDrivenInstanceInterceptor.java:101)
         at org.jboss.ejb.plugins.CallValidationInterceptor.invoke(CallValidationInterceptor.java:48)
         at org.jboss.ejb.plugins.AbstractTxInterceptor.invokeNext(AbstractTxInterceptor.java:106)
         at org.jboss.ejb.plugins.TxInterceptorCMT.runWithTransactions(TxInterceptorCMT.java:335)
         at org.jboss.ejb.plugins.TxInterceptorCMT.invoke(TxInterceptorCMT.java:166)
         at org.jboss.ejb.plugins.RunAsSecurityInterceptor.invoke(RunAsSecurityInterceptor.java:94)
         at org.jboss.ejb.plugins.LogInterceptor.invoke(LogInterceptor.java:192)
         at org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor.invoke(ProxyFactoryFinderInterceptor.java:122)
         at org.jboss.ejb.MessageDrivenContainer.internalInvoke(MessageDrivenContainer.java:389)
         at org.jboss.ejb.Container.invoke(Container.java:873)
         at org.jboss.ejb.plugins.jms.JMSContainerInvoker.invoke(JMSContainerInvoker.java:1077)
         at org.jboss.ejb.plugins.jms.JMSContainerInvoker$MessageListenerImpl.onMessage(JMSContainerInvoker.java:1379)
         at org.jboss.jms.asf.StdServerSession.onMessage(StdServerSession.java:256)
         at org.jboss.mq.SpyMessageConsumer.sessionConsumerProcessMessage(SpyMessageConsumer.java:904)
         at org.jboss.mq.SpyMessageConsumer.addMessage(SpyMessageConsumer.java:160)
         at org.jboss.mq.SpySession.run(SpySession.java:333)
         at org.jboss.jms.asf.StdServerSession.run(StdServerSession.java:180)
         at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(PooledExecutor.java:748)
         at java.lang.Thread.run(Thread.java:534)
    2008-09-03 14:32:43,281 ERROR [XELLERATE.AUDITOR] Class/Method: UserProfileRDGenerator/insertUserProfileChangedAttributes encounter some problems: Failed to insert change record in table UPA_FIELDS

    Thank you,
    Being the OIM noob that I am, I had no idea where to look.
    We do indeed have some user-defined fields of 4000 characters.
    I am now wondering if I can disable auditing, or maybe increase the size of the audit table column?
    Also, I guess I should raise a defect against OIM, as the user interface should not allow the creation of a user field that auditing cannot cope with.
    I also wonder if the audit failures (other than causing lots of stack traces) cause any transaction failures due to transaction rollbacks?
    Edited by: lyallp on Sep 3, 2008 4:01 PM

  • Error on reverse on XML: value too large for column

    Hi All,
    I am trying to reverse engineer while creating the data model on XML technology.
    My JDBC URL on data server reads this:
    jdbc:snps:xml?d=../demo/abc/CustomerPartyEBO.xsd&s=MYEBO
    I get an error while doing the reverse.
    java.sql.SQLException: ORA-12899: value too large for column "PINW"."SNP_REV_KEY_COL"."KEY_NAME" (actual: 102, maximum: 100)
    After some checking through selective reverse, I found that this is happening only for a few tables whose names are quite long.
    I tried setting the "maximum column name length" and "maximum table name length" to 120 and even higher values on the XML technology in Topology Manager. No luck there.
    Thanks in advance for any help here.

    That is not the place to change.
    The error states that SNP_REV_KEY_COL.KEY_NAME in the work repository schema PINW has a maximum length of 100.
    I do not know if Oracle will support this change, but as a workaround you will have to change the column lengths in the work repository table SNP_REV_KEY_COL.

  • Imp-00020 long column too large for column buffer size (22)

    Hi friends,
    I have exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform),
    then transferred the export file from the Unix server to a laptop (Windows platform)
    and tried to import the file into Oracle 10.2 on Windows XP.
    (Database Configuration of Oracle 10g is
    User tablespace 2 GB
    Temp tablespace 30 Mb
    The rollback segment of 15 mb each
    undo tablespace of 200 MB
    SGA 160MB
    PGA 16MB)
    All the tables imported successfully except 3 tables which have around 1 million rows each.
    The error messages that come up during import for these 3 tables are:
    imp-00020 long column too large for column buffer size (22)
    imp-00020 long column too large for column buffer size (7)
    The main point here is that none of the 3 tables has a LONG or timestamp column (only varchar/number columns).
    To solve the problem I tried the following options:
    1. Increased the buffer size up to 20480000/30720000.
    2. Commit=Y Indexes=N (in this case the tables do not import completely).
    3. First exported the table structures only, and then the data.
    4. Created the tables manually and tried to import them.
    But all efforts failed; I am still getting the same errors.
    Can someone help me with this issue?
    Can some one help me on this issue ?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both the export and the import were made on an 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, some of which do not support Data Pump. By the way, shouldn't EXP/IMP work anyway?
