ORA-28606: block too fragmented to build bitmap index

I'm getting a spurious error on 9.2.0.2.0 running on an HP-UX machine. I'm running a PL/SQL package produced by OWB. The package works successfully in other environments; here it falls over at a multi-table insert statement.
I call the error message spurious because it refers to bitmap indexes, yet there are no bitmap indexes on the target tables - in fact, there are now none in the DB at all.
Any suggestions?

There may not be any bitmap indexes on the target tables, but indexes may still exist implicitly because the tables have primary key columns. Check whether any of those primary key constraints are being violated.
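A quick sanity check (assumes access to the dictionary views) to confirm whether any bitmap indexes exist anywhere in the database:

select owner, index_name, table_name, index_type
from   all_indexes
where  index_type like '%BITMAP%';

If this returns no rows, the ORA-28606 really isn't coming from a bitmap index you created yourself.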

Similar Messages

  • ORA-01555: snapshot too old while creating large index

    Dear All,
    I have a newly created partitioned table of size 300GB.
    On it I am creating a composite unique index with 5 columns (locally partitioned).
    The size of the index may be around 250GB (estimated from a similar table).
    My DB version is Oracle 9i Release 2.
    Size of my undo tablespace is 12GB.
    Undo related parameters are as below:
    undo_management AUTO
    undo_retention 18000
    undo_suppress_errors FALSE
    At 10:40:36 AM I fired the "create index .... local;" query.
    All day long I monitored it using "select used_ublk, addr from v$transaction" and saw the same output below every time:
    USED_UBLK ADDR
    1 C000000185ECF458
    [Using v$session (column TADDR) I found that "C000000185ECF458" is the ADDR of my transaction (if I am not wrong).]
    But at 11:09:27 PM (about 12 hours later) I got: ORA-01555: snapshot too old
    I am sure that at that time my undo tablespace usage was below 30%.
    At that time some inserts and selects were going on, but those were not on this table.
    Why did this happen? As far as I know, in AUM Oracle doesn't overwrite existing undo data as long as there is free space available.
    While searching on the web I found one post, quoted below:

    Thanks guys for your prompt replies.
    Yes, I've monitored V$TEMPSEG_USAGE and DBA_SEGMENTS for the progress of the index creation, but these are not related to my question; and yes, I could use NOLOGGING or PARALLEL (in fact I'm going to try all these options for faster index creation). In my original post I missed the part below, please go through it:
    While searching on web i found one post as below:
    metalinknote #396863.1 which describes it
    => 2) When are UNDO segments OFFLINED?
    Answer:
    SMON decides on the # of undo segs to offline and drop based on the max transaction concurrency over a 12 hour period in 9i. This behavior is altered in 10g where the max concurrency is maintained over a 7-day period. Moreover, in 10g SMON doesn't drop the extra undo segs, but simply offlines them. SMON uses the same values with "fast ramp up" to adjust the number of undo segments online.
    Link:[http://kr.forums.oracle.com/forums/thread.jspa?threadID=620489]
    It says "SMON decides on the # of undo segs to offline and drop based on the max transaction concurrency over a 12 hour period in 9i."
    What does this mean? Is this the cause in my case, since my query was 12 hours old and my undo_retention is set to 5 hours?
    I'll be very grateful if someone explains the cause.
    BR
    Obaid
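    A monitoring sketch along these lines can show how committed undo ages out while a long-running statement is active (assumes access to the DBA views):
    -- counts undo extents by status; UNEXPIRED extents still protect read
    -- consistency, EXPIRED ones may already have been reused by other work
    select tablespace_name, status, count(*) extents,
           round(sum(bytes)/1024/1024) mb
    from   dba_undo_extents
    group  by tablespace_name, status;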

  • Trying to make use of bitmap indexes

    Hello!
    I have a table that contains about 16 million rows, and each night about
    60,000-70,000 rows are processed against the table, so that part of the rows
    is updated and another part is inserted.
    The table contains three IDEAL columns for bitmap indexes, the first of which
    may have only two, the second three and the third four distinct values.
    I was planning to change the index type on these columns to BITMAP, but
    Oracle doesn't recommend building BITMAP indexes on heavily updated or inserted
    columns.
    So the only use of bitmap indexes turns out to be read-only tables.
    On the other hand, a solution might be dropping the indexes before the load and rebuilding them after the load has completed, which can lead to frequent tablespace fragmentation.
    So, the question is: how can I use bitmap indexes in a case like this one?
    What are the ways out?
    Thank you very much for the reply.

    >
    The table contains three IDEAL columns for bitmap indexes the first of which
    may have only two, the second three and the third four distinct values.
    Contrary to popular legend, and possibly contrary even to the manuals and Metalink, these columns are NOT necessarily ideal for bitmap indexes. Consider a query with:
        col1 = '1_of_2'
    and col2 = '1_of_3'
    and col3 = '1_of_4'
    You have a total of 24 possible combinations. Given your 16M rows, this means that on average the optimizer will expect to collect about 670,000 rows spread across something like 100,000 to 130,000 blocks. Under these circumstances you may find that Oracle doesn't use the indexes anyway (unless you fool it by fiddling with parameters like optimizer_index_cost_adj, and that's generally a bad idea) - and if the model is a reasonable description of the actual data it probably shouldn't use the indexes.
    There are various special circumstances that might make the indexes effective for querying, though. (Note - at this point I'm not considering the impact on inserts, updates and deletes.) The most obvious example is where all three columns each have at least one very repetitive value and all your queries are trying to find data for the remaining "rare" values. If this is the case then you need to index the columns and collect histograms on the columns so that the optimiser can model the data correctly; and then you may also need to modify your SQL to ensure that your queries against these columns always use literal values, not bind variables.
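    As a sketch of that approach (illustrative object names; adjust to your schema):
    -- bitmap indexes on the three low-cardinality columns
    create bitmap index t_col1_bix on t(col1);
    create bitmap index t_col2_bix on t(col2);
    create bitmap index t_col3_bix on t(col3);
    -- histograms so the optimizer can see the skew between the repetitive
    -- values and the rare ones
    begin
      dbms_stats.gather_table_stats(
        ownname    => user,
        tabname    => 'T',
        method_opt => 'FOR COLUMNS SIZE 254 col1 col2 col3',
        cascade    => true);
    end;
    /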
    If some of your queries are supposed to return small amounts of data, there are various mechanisms you could use to do this efficiently. If your queries are always going to return large amounts of data, then there are other strategies that are likely to be more appropriate.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • FDPSTP failed due to ORA-12899 value too large for column

    Hi All,
    A user is facing this problem while running a concurrent program.
    The program completes, but with this error:
    fdpstp failed due to ora-12899 value too large for column
    Can anyone tell me the exact solution for this?
    RDBMS : 10.2.0.3.0
    Oracle Applications : 11.5.10.2

    User facing this problem while running the concurrent program. The program is completed but with this error.
    Is this a seeded or custom concurrent program?
    fdpstp failed due to ora-12899 value too large for column
    Can anyone tell me the exact solution for this?
    Was this working before? If yes, have any changes been made recently?
    Can other users run the same concurrent program with no issues?
    Please post the contents of the concurrent request log file here.
    Please ask your developer to open the file using Reports Builder and compile the report and run it (if possible) with the same parameters.
    OERR: ORA-12899 value too large for column %s (actual: %s, maximum: %s) [ID 287754.1]
    Thanks,
    Hussein

  • ORA-01555: snapshot too old error

    While running the following anonymous block to analyze all the tables and indexes in my schema, it ran for approx. 5 hours and then ended with
    ORA-01555: snapshot too old
    Can anybody explain why this happened?
    SQL> DECLARE
    2 CURSOR tab_cur
    3 IS
    4 SELECT table_name
    5 FROM user_tables;
    6
    7 CURSOR indx_cur
    8 IS
    9 SELECT index_name
    10 FROM user_indexes;
    11 BEGIN
    12 FOR rec IN tab_cur
    13 LOOP
    14 EXECUTE IMMEDIATE 'ANALYZE TABLE '
    15 || rec.table_name
    16 || ' COMPUTE STATISTICS';
    17 END LOOP;
    18
    19 FOR rec IN indx_cur
    20 LOOP
    21 EXECUTE IMMEDIATE 'ANALYZE INDEX '
    22 || rec.index_name
    23 || ' COMPUTE STATISTICS';
    24 END LOOP;
    25 END;
    26 /
    DECLARE
    ERROR at line 1:
    ORA-01555: snapshot too old: rollback segment number 13 with name "_SYSSMU13$"
    too small
    ORA-06512: at line 12
    Elapsed: 05:01:26.08
    Thanks and Regards
    --DKar

    Your cursor loop uses the database catalog.
    The analyze updates the database catalog -- including some of the same tables required by the cursor loop.
    The undo retention was not sufficient to hold all of the undo necessary to maintain read consistency of the catalog.
    Try using something like this, instead.
    -- for 9i
    BEGIN
       DBMS_STATS.GATHER_SCHEMA_STATS (
          ownname          => '<YOUR SCHEMA>',
          estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
          block_sample     => TRUE,
          method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
          degree           => 6,
          granularity      => 'ALL',
          cascade          => TRUE,
          options          => 'GATHER');
    END;
    /

  • ORA-22835: Buffer too small for CLOB to CHAR  on Solaris but not Windows

    Hi,
    I get the following error on Solaris, but not on an equivalent Windows 10.2 installation:
    ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 4575, maximum: 4000)
    ORA-06512: at line 65
    the code causing the problem is:
    SELECT A.TABLE_NAME, A.DEFAULT_DIRECTORY_NAME, B.LOCATION,
           CASE
             WHEN INSTR(A.ACCESS_PARAMETERS, 'RECORDS FIXED') <> 0 THEN
               SUBSTR(A.ACCESS_PARAMETERS, INSTR(A.ACCESS_PARAMETERS, 'RECORDS FIXED') + 14, 3)
             ELSE NULL
           END RSZ
    FROM USER_EXTERNAL_TABLES A, USER_EXTERNAL_LOCATIONS B
    WHERE A.TABLE_NAME = B.TABLE_NAME(+);
    the entire code being executed is:
    DECLARE
      EX BOOLEAN;
      CNT NUMBER;
      SQL1 VARCHAR2(4000);
      FLEN NUMBER;
      BSIZE NUMBER;
      TABNAME VARCHAR2(4000);
      DEFDIR VARCHAR2(4000);
      RSZ VARCHAR2(4000);
      ECODE NUMBER(38);
      ERRMSG VARCHAR2(4000);
      CURSOR C1 IS
        SELECT A.TABLE_NAME, A.DEFAULT_DIRECTORY_NAME, B.LOCATION,
               CASE
                 WHEN INSTR(A.ACCESS_PARAMETERS, 'RECORDS FIXED') <> 0 THEN
                   SUBSTR(A.ACCESS_PARAMETERS, INSTR(A.ACCESS_PARAMETERS, 'RECORDS FIXED') + 14, 3)
                 ELSE NULL
               END RSZ
        FROM USER_EXTERNAL_TABLES A, USER_EXTERNAL_LOCATIONS B
        WHERE A.TABLE_NAME = B.TABLE_NAME(+);
      TYPE C1_TYPE IS TABLE OF C1%ROWTYPE;
      REC1 C1_TYPE;
    BEGIN
      OPEN C1;
      FETCH C1 BULK COLLECT INTO REC1;
      FOR I IN REC1.FIRST .. REC1.LAST LOOP
        UTL_FILE.FGETATTR(NVL(REC1(I).DEFAULT_DIRECTORY_NAME, 'CARDSLOAD'),
                          REC1(I).LOCATION, EX, FLEN, BSIZE);
        IF EX THEN
          IF INSTR(TO_CHAR(REC1(I).RSZ), '\.') <> 0 THEN
            DBMS_OUTPUT.PUT_LINE('INVALID RECORDSIZE OR CORRUPTED FILE');
          END IF;
          DBMS_OUTPUT.PUT_LINE('Table Name: ' || TO_CHAR(REC1(I).TABLE_NAME));
          DBMS_OUTPUT.PUT_LINE('File Exists ' || REC1(I).LOCATION);
          DBMS_OUTPUT.PUT_LINE('File Length: ' || TO_CHAR(FLEN));
          DBMS_OUTPUT.PUT_LINE('Record Size: ' || TO_CHAR(REC1(I).RSZ));
          DBMS_OUTPUT.PUT_LINE('Block Size: ' || TO_CHAR(BSIZE));
          DBMS_OUTPUT.PUT_LINE('# RECORDS: ' || FLEN / TO_NUMBER(REC1(I).RSZ));
          BEGIN
            CNT := '';
            SQL1 := 'SELECT COUNT(*) FROM ' || REC1(I).TABLE_NAME;
            EXECUTE IMMEDIATE SQL1 INTO CNT;
            DBMS_OUTPUT.PUT_LINE('SELECT COUNT FOR: ' || REC1(I).TABLE_NAME || ' = ' || CNT);
          EXCEPTION
            WHEN OTHERS THEN
              DBMS_OUTPUT.PUT_LINE('SELECT COUNT FAILED FOR: ' || REC1(I).TABLE_NAME);
              ECODE := SQLCODE;
              ERRMSG := SQLERRM;
              DBMS_OUTPUT.PUT_LINE(ECODE || ' ' || ERRMSG);
          END;
        ELSE
          DBMS_OUTPUT.PUT_LINE(REC1(I).TABLE_NAME || ' ' || REC1(I).LOCATION || ' File Does Not Exist');
        END IF;
        DBMS_OUTPUT.PUT_LINE('-----------------------------------------------------');
      END LOOP;
      CLOSE C1;
    EXCEPTION
      WHEN OTHERS THEN
        RAISE;
    END;
    Any ideas why Solaris but not Windows would have this problem?
    Thanks,
    Victor

    Check out Bug 4715104 - ORA-22835 / truncated data from USER/ALL/DBA_EXTERNAL_TABLES
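    If patching isn't immediately possible, one hedged workaround is to read the parameters through DBMS_METADATA, which returns a CLOB and so avoids the dictionary view's implicit CLOB-to-CHAR conversion (a sketch, not a verified fix for this bug):
    -- pulls the full external-table DDL, including the access parameters
    SELECT DBMS_METADATA.GET_DDL('TABLE', T.TABLE_NAME) DDL_TEXT
    FROM USER_EXTERNAL_TABLES T;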

  • Why ORA-01555 (Snapshot too old) occurs on an unchanged table

    Hello,
    we have to add a not null column to a large table. A simple update is not feasible due to UNDO space:
    alter table T add F INTEGER default 0 not null;
    ORA-30036: unable to extend segment by  in undo tablespace ''
    That is OK. (We cannot use create + insert + drop + rename because there is not enough free space. The table is currently not partitioned.) So we try this script:
    --create the column without default and not null
    alter table T add F INTEGER;
    --create temp table with the rowids of T (nologging -> minimal log)
    create table TMP (row_id rowid) nologging;
    --save the rowid-s. (direct -> minimal undo usage)
    insert /*+APPEND*/ into TMP(row_id) select rowid from T;
    commit;
    --the "insert method"
    --set the column to "0" with frequently commit
    declare
      i integer := 0;
      commit_interval integer := 10000;
    begin
      for c in (select * from TMP) loop
        update T set F=0 where rowid=c.row_id;
        i := i + 1;
        if( mod(i,commit_interval)=0) then commit;
        end if;
      end loop;
      commit;
    end;
    --set to not-null
    alter table T modify F default 0 not null;
    --drop the temp table
    drop table TMP;
    The "insert method" raises
    ORA-01555: Snapshot too old (in row 5)
    Row 5 is the cursor "select * from TMP" in the insert method. The undo usage of this method is below 2 MB.
    My question is:
    Why does Oracle need a snapshot when the TMP table does not change during the insert method? The TMP table was populated and committed before the insert method. Why does Oracle try to read the rows of the TMP table from the undo tablespace?
    Thx: lados.

    Thanks, I have read this article, but there is something I don't understand.
    As I see it, the DML does not clear all the modified data block headers; it clears only the rollback segment slots. Only the next SQL (even if it is a query) cleans out the data block header, and it can do so only while the info is still available in the rollback segment; otherwise ORA-01555 occurs.
    My question is:
    What happens when the next SQL touches the block only a year later? I don't believe Oracle doesn't handle this situation, but I couldn't find how it is handled.
    Thx: lados.
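    For what it's worth, the usual defence against this effect (delayed block cleanout) is to force the cleanout immediately after the bulk load, while the loading transaction's undo is still available. A sketch using the tables from the script above:
    insert /*+ APPEND */ into TMP(row_id) select rowid from T;
    commit;
    -- a full scan touches every block of TMP and performs the cleanout now,
    -- so a later long-running cursor doesn't need the overwritten undo
    select /*+ FULL(TMP) */ count(*) from TMP;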

  • SQL Error: ORA-12899: value too large for column

    Hi,
    I'm trying to understand the above error. It occurs when we are migrating data from one oracle database to another:
    Error report:
    SQL Error: ORA-12899: value too large for column "USER_XYZ"."TAB_XYZ"."COL_XYZ" (actual: 10, maximum: 8)
    12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
    *Cause:    An attempt was made to insert or update a column with a value
    which is too wide for the width of the destination column.
    The name of the column is given, along with the actual width
    of the value, and the maximum allowed width of the column.
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    *Action:   Examine the SQL statement for correctness.  Check source
    and destination column data types.
    Either make the destination column wider, or use a subset
    of the source column (i.e. use substring).
    The source database runs - Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    The target database runs - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    The source and target table are identical and the column definitions are exactly the same. The column we get the error on is of CHAR(8). To migrate the data we use either a dblink or oracle datapump, both result in the same error. The data in the column is a fixed length string of 8 characters.
    To resolve the error the column "COL_XYZ" gets widened by:
    alter table TAB_XYZ modify (COL_XYZ varchar2(10));
    -alter table TAB_XYZ succeeded.
    We now move the data from the source into the target table without problem and then run:
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    -Error report:
    SQL Error: ORA-01441: cannot decrease column length because some value is too big
    01441. 00000 - "cannot decrease column length because some value is too big"
    *Cause:   
    *Action:
    So we leave the column width at 10. But the curious thing is: once we have the data in the target table, we can truncate the same table at source (i.e. get rid of all the data) and move the data back into the original table (with COL_XYZ set at CHAR(8)) without any issue.
    My guess is that the error has something to do with the storage on the target database, but I would like to understand why. If anybody has an idea or suggestion what to look for - much appreciated.
    Cheers.

    843217 wrote:
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    You are looking at character lengths vs byte lengths.
    The data in the column is a fixed length string of 8 characters.
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    Is that varchar2(8 byte) or varchar2(8 char)?
    Use SQL Reference for datatype specification, length function, etc.
    For more info, see the Globalization Support forum on this topic, and of course the Globalization Support Guide.
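    A hedged illustration of the difference, using the names from this thread (in a multi-byte character set such as AL32UTF8, 8 characters can occupy more than 8 bytes):
    -- what character set is the target database using?
    select value from nls_database_parameters
    where parameter = 'NLS_CHARACTERSET';
    -- compare character vs byte lengths of the column data
    select max(length(COL_XYZ)) chars, max(lengthb(COL_XYZ)) bytes
    from TAB_XYZ;
    -- declaring the column with character semantics sidesteps the issue
    alter table TAB_XYZ modify (COL_XYZ varchar2(8 char));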

  • I am getting error "ORA-12899: value too large for column".

    I am getting the error "ORA-12899: value too large for column" after upgrading to 10.2.0.4.0.
    The field is updated only through a trigger, with a hard-coded value.
    This happens randomly, not every time.
    select * from v$version
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Table structure:
    desc customer
    Name           Null?    Type
    CTRY_CODE      NOT NULL CHAR(3 Byte)
    CO_CODE        NOT NULL CHAR(3 Byte)
    CUST_NBR       NOT NULL NUMBER(10)
    CUST_NAME               CHAR(40 Byte)
    RECORD_STATUS           CHAR(1 Byte)
    Trigger on the table
    CREATE OR REPLACE TRIGGER CUST_INSUPD
    BEFORE INSERT OR UPDATE
    ON CUSTOMER FOR EACH ROW
    BEGIN
    IF INSERTING THEN
    :NEW.RECORD_STATUS := 'I';
    ELSIF UPDATING THEN
    :NEW.RECORD_STATUS := 'U';
    END IF;
    END;
    ERROR at line 1:
    ORA-01001: invalid cursor
    ORA-06512: at "UPDATE_CUSTOMER", line 1320
    ORA-12899: value too large for column "CUSTOMER"."RECORD_STATUS" (actual: 3,
    maximum: 1)
    ORA-06512: at line 1

    SQL> create table customer(
      2  CTRY_CODE  CHAR(3 Byte) not null,
      3  CO_CODE  CHAR(3 Byte) not null,
      4  CUST_NBR NUMBER(10) not null,
      5  CUST_NAME CHAR(40 Byte) ,
      6  RECORD_STATUS CHAR(1 Byte)
      7  );
    Table created.
    SQL> CREATE OR REPLACE TRIGGER CUST_INSUPD
      2  BEFORE INSERT OR UPDATE
      3  ON CUSTOMER FOR EACH ROW
      4  BEGIN
      5  IF INSERTING THEN
      6  :NEW.RECORD_STATUS := 'I';
      7  ELSIF UPDATING THEN
      8  :NEW.RECORD_STATUS := 'U';
      9  END IF;
    10  END;
    11  /
    Trigger created.
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME,RECORD_STATUS)
      2                values('12','13','1','Mahesh Kaila','UPD');
                  values('12','13','1','Mahesh Kaila','UPD')
    ERROR at line 2:
    ORA-12899: value too large for column "HPVPPM"."CUSTOMER"."RECORD_STATUS"
    (actual: 3, maximum: 1)
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME)
      2                values('12','13','1','Mahesh Kaila');
    1 row created.
    SQL> set linesize 200
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
    12  13           1 Mahesh Kaila                             I
    SQL> update customer set cust_name='tst';
    1 row updated.
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
    12  13           1 tst                                      U
    Recheck your code once again; somewhere you are using the RECORD_STATUS column in an insert or update.
    Ravi Kumar
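    A hedged way to hunt for the code that sets the column directly (assumes the writes come from stored PL/SQL in the same schema):
    select name, type, line, text
    from user_source
    where upper(text) like '%RECORD_STATUS%'
    order by name, line;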

  • ORA-22835: buffer too small when trying to save pdf file in LONG RAW column

    Hi,
    I get "ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (real : 125695, maximum : 2000)" when i trying to save a 120k pdf file in an Oracle Long Raw column using dotnet 4.0 and Entity FrameWork.
    Dim db As New OracleEntities
    Try
        Dim myEntity = (From e In db.TEXTE _
                        Where e.TEXT_ID = txtTextId.Text _
                        Select e).Single
        With myEntity
            If txtTextypeId.Text <> "" Then
                .TEXTYPE_ID = txtTextypeId.Text
            Else
                .TEXTYPE_ID = Nothing
            End If
            .TEXT_NUM = txtTextNum.Text
            .TEXT_NAME = txtTextName.Text
            .TEXT_DATE = dtTextDate.SelectedDate
            If DocAdded Then
                .TEXT_DOC = Document
            ElseIf DocDeleted Then
                .TEXT_DOC = Nothing
            End If
        End With
        db.SaveChanges()
    Document is an array of Byte, and TEXT_DOC is too (mapped to a LONG RAW column).
    Is it possible to increase the size of the buffer? How may I do it?
    Thx in advance.
    Regards.

    Using a custom UPDATE or INSERT stored procedure for a LONG RAW column with data exceeding the limit may still raise the following error:
    "ORA-01460: unimplemented or unreasonable conversion requested"
    One option is to use BLOB instead of LONG RAW in your table and regenerate your
    data model from the table. Then using the default UPDATE or INSERT statement
    (created by EF) should work.
    The following will modify your LONG RAW column to BLOB column.
    Disclaimers:
    1. It's irreversible--you cannot modify BLOB back to LONG RAW.
    2. I have not tried that when there are huge data in LONG RAW column.
    So be careful.
    alter table <your_table_name> modify <your_long_raw_type_column> blob;
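    A more cautious alternative (illustrative table name; it leaves the original table untouched) is to copy the data out via TO_LOB, which is allowed over a LONG RAW column in a CTAS or INSERT ... SELECT:
    -- TO_LOB converts LONG RAW to BLOB during the copy
    CREATE TABLE texte_copy AS
      SELECT text_id, TO_LOB(text_doc) text_doc
      FROM texte;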

  • Install fails due to ORA-12899: value too large for column

    Hi,
    Our WCS 11g installation on Tomcat 7 fails with "ORA-12899: value too large for column".
    As per solution ticket https://support.oracle.com/epmos/faces/DocumentDisplay?id=1539055.1 we have to set "-Dfile.encoding=UTF-8" in Tomcat.
    We had already done this by setting the variable in catalina.bat in the Tomcat 7 bin directory, as shown below, but we still get the same error during installation.
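    For reference, a typical catalina.bat line for this (illustrative; the exact line the poster used is not shown in the thread):
    set JAVA_OPTS=%JAVA_OPTS% -Dfile.encoding=UTF-8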
    If anybody has faced this , let us know how you resolved it

    We were unable to install WCS on Tomcat 7, but on Tomcat 6, specifying "-Dfile.encoding=UTF-8" in the Java options via "Tomcat Configure", it was successful.
    An alternative we found was to increase the width of the column itself, using:
    ALTER TABLE csuser.systemlocalestring
    MODIFY value varchar2 (4000);

  • ORA-01426: Numeric Overflow During Cube Build (Doc ID 1494869.1)

    Our cube builds started failing with this error after the cube build log table sequence reached 32787.
    After recreating the sequence, the builds run successfully. This seems an unacceptable bug for an enterprise-level product.
    I have been unable to find a patch on the Oracle support site for it. Is a patch available?
    COMP_NAME                 VERSION
    OLAP Analytic Workspace   11.2.0.4.0
    Oracle OLAP API           11.2.0.4.0
    OLAP Catalog              11.2.0.4.0

    I found what appears to be a related bug: Bug 14627371 - ORA-01426: NUMERIC OVERFLOW DURING CUBE BUILD. This bug is marked as fixed in release 12.1 of the database.
    I think you may want to pursue this issue with Support to see whether the fix can be back-ported to 11.2.0.4.0.
    --Ken Chin
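    If you need the stopgap the poster describes, recreating the sequence looks something like this (the sequence name is illustrative; check which sequence actually feeds your cube build log table):
    drop sequence cube_build_log_seq;
    create sequence cube_build_log_seq start with 1;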

  • Rawtohex - how to insert? ORA-12899: value too large for column

    Hi,
    Can anyone please help me resolve the following issue?
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    Name                 Null?    Type
    ABC_OID              NOT NULL RAW(8)
    ABC_NAME             NOT NULL VARCHAR2(30 CHAR)
    UPDATE_TIME          NOT NULL DATE
    UPDATE_BY_WORKER_NO  NOT NULL NUMBER
    I'm able to insert the first 2 records, but when I insert the 3rd one I get an error:
    insert into caps.ABC_LOOKUP values( rawtohex('SERIES'), 'SERIES','19-FEB-09','1065449')
    insert into caps.ABC_LOOKUP values(rawtohex('FAMILY'),'FAMILY','19-FEB-09','1065449')
    Insert into caps.ABC_LOOKUP values(rawtohex('CONNECTOR'),'CONNECTOR','19-FEB-09','1065449')
    ERROR at line 1:
    ORA-12899: value too large for column
    "XYZ"."ABC_LOOKUP"."ABC_OID" (actual: 9, maximum: 8)
    Thanks in Advance.....

    Yes, done.
    Actually I suggested the same thing to them (the application team), but they did not agree with me, and then I got confused. :-)
    Now the same thing worked well. Thanks a lot for your time.
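    For anyone hitting this later: the failure is purely a length issue. 'CONNECTOR' is nine characters, so its RAW representation is nine bytes, which cannot fit in RAW(8) - matching the "(actual: 9, maximum: 8)" in the error. A quick hedged check:
    select utl_raw.length(utl_raw.cast_to_raw('CONNECTOR')) bytes
    from dual;  -- returns 9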

  • java.sql.BatchUpdateException: ORA-12899: value too large for column...

    Hi All,
    I am using SOA 11g (11.1.1.3). I am trying to insert data coming from a file into a table. I encountered the following error.
    Exception occured when binding was invoked.
    Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'insert' failed due to: DBWriteInteractionSpec Execute Failed Exception.
    insert failed. Descriptor name: [UploadStgTbl.XXXXStgTbl].
    Caused by java.sql.BatchUpdateException: ORA-12899: value too large for column "XXXX"."XXXX_STG_TBL"."XXXXXX_XXXXX_TYPE" (actual: 20, maximum: 15)
    The invoked JCA adapter raised a resource exception.
    Please examine the above error message carefully to determine a resolution.
    The data type of the column that errored out is VARCHAR2(25). I found a related issue on Metalink: java.sql.BatchUpdateException (ORA-12899) Reported When DB Adapter Reads a Row From a Table it is Polling For Added Rows [ID 1113215.1].
    But the solution seems not applicable in my case...
    Has anyone encountered the same issue? Is this a bug? If it is a bug, do we have a patch for it?
    Please help me out...
    Thank you all...

    It didn't work.
    After I changed the length of that column of the source datastore (from 15 to 16), ODI created temporary tables (C$ and I$) with larger columns (16 instead of 15), but I got the same error message.
    I'm wondering why I have to extend the length of the source datastore in the source model if there are no values in the source table with a length greater than 15...
    Any other ideas? Thanks!
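    One hedged thing to check (using the masked names from the thread): whether the source data is multi-byte, since 15 characters can exceed 15 bytes under byte-length semantics:
    select max(length(XXXXXX_XXXXX_TYPE)) chars,
           max(lengthb(XXXXXX_XXXXX_TYPE)) bytes
    from XXXX_STG_TBL;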

  • Adding virtual column: ORA-12899: value too large for column

    I'm using Oracle 11g, Win7 OS, SQL Developer
    I'm trying to add a virtual column to my test table, but I am getting an "ORA-12899: value too large for column" error. Details below.
    Can someone help me with this?
    CREATE TABLE test_reg_exp
    (col1 VARCHAR2(100));
    INSERT INTO test_reg_exp (col1) VALUES ('ABCD_EFGH');
    INSERT INTO test_reg_exp (col1) VALUES ('ABCDE_ABC');
    INSERT INTO test_reg_exp (col1) VALUES ('WXYZ_ABCD');
    INSERT INTO test_reg_exp (col1) VALUES ('ABCDE_PQRS');
    INSERT INTO test_reg_exp (col1) VALUES ('ABCD_WXYZ');
    ALTER TABLE test_reg_exp
    ADD (col2 VARCHAR2(100) GENERATED ALWAYS AS (REGEXP_REPLACE (col1, '^ABCD[A-Z]*_')));
    SQL Error: ORA-12899: value too large for column "COL2" (actual: 100, maximum: 400)
    12899. 00000 -  "value too large for column %s (actual: %s, maximum: %s)"
    *Cause:    An attempt was made to insert or update a column with a value
               which is too wide for the width of the destination column.
               The name of the column is given, along with the actual width
               of the value, and the maximum allowed width of the column.
               Note that widths are reported in characters if character length
               semantics are in effect for the column, otherwise widths are
               reported in bytes.
    *Action:   Examine the SQL statement for correctness.  Check source
               and destination column data types.
               Either make the destination column wider, or use a subset
               of the source column (i.e. use substring).
    When I try to select, I'm getting correct results:
    SELECT col1, (REGEXP_REPLACE (col1, '^ABCD[A-Z]*_'))
    FROM test_reg_exp;
    Thanks.

    Yes RP, it's working if you give col2 a size >= 400.
    @Northwest - Could you please test the same without the regex clause in col2?
    I suspect the usage of a REGEX in this virtual column case.
    Refer to this (might help): http://www.oracle-base.com/articles/11g/virtual-columns-11gr1.php
    Below is a snippet from the above link... see if this helps...
    >
    Notes and restrictions on virtual columns include:
    Indexes defined against virtual columns are equivalent to function-based indexes.
    Virtual columns can be referenced in the WHERE clause of updates and deletes, but they cannot be manipulated by DML.
    Tables containing virtual columns can still be eligible for result caching.
    Functions in expressions must be deterministic at the time of table creation, but can subsequently be recompiled and made non-deterministic without invalidating the virtual column. In such cases the following steps must be taken after the function is recompiled:
    Constraint on the virtual column must be disabled and re-enabled.
    Indexes on the virtual column must be rebuilt.
    Materialized views that access the virtual column must be fully refreshed.
    The result cache must be flushed if cached queries have accessed the virtual column.
    Table statistics must be regathered.
    Virtual columns are not supported for index-organized, external, object, cluster, or temporary tables.
    The expression used in the virtual column definition has the following restrictions:
    It cannot refer to another virtual column by name.
    It can only refer to columns defined in the same table.
    If it refers to a deterministic user-defined function, it cannot be used as a partitioning key column.
    The output of the expression must be a scalar value. It cannot return an Oracle supplied datatype, a user-defined type, or LOB or LONG RAW.
    >
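    A hedged way around the sizing problem without widening the column to 400 is to wrap the expression in a CAST so its declared width matches the column (a sketch against the test table above):
    ALTER TABLE test_reg_exp
      ADD (col2 VARCHAR2(100) GENERATED ALWAYS AS
            (CAST(REGEXP_REPLACE(col1, '^ABCD[A-Z]*_') AS VARCHAR2(100))));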
