ORA-06502 during bulk load

I am using v11.2 with the new Jena adapter.
I am trying to upload data from a bunch of N-Triple files to the triple store via the bulk load interface in the Jena adapter (a.k.a. bulk append). The code does something like this:
while (moreFiles exist)
    readFilesToMemory;
    bulkLoadToDatabase using the options "MBV_JOIN_HINT=USE_HASH PARALLEL=4"
Loading the first set of triples goes well. But when I try to load the second set of triples, I get the exception below.
Some thoughts:
1) I don't think this is a data problem, because I uploaded all the data during an earlier test, and when I upload the same data into an empty database it works fine.
2) I saw some earlier posts with a similar error, but none of them seem to be using the Jena adapter.
3) The model also has an OWL Prime entailment in incremental mode.
4) I am not sure if this is relevant but... before I ran the current test, I mistakenly launched multiple Java processes that bulk loaded the data. Of course I killed all the processes and dropped the sem_models and the backing RDF tables they were uploading to.
EXCEPTION
java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "MDSYS.SDO_RDF_INTERNAL", line 3164
ORA-06512: at "MDSYS.SDO_RDF_INTERNAL", line 4244
ORA-06512: at "MDSYS.SDO_RDF", line 276
ORA-06512: at "MDSYS.RDF_APIS", line 693
ORA-06512: at line 1
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:204)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:191)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:950)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3488)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:3840)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1086)
at oracle.spatial.rdf.client.jena.Oracle.executeCall(Oracle.java:689)
at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.addInBulk(OracleBulkUpdateHandler.java:740)
at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.addInBulk(OracleBulkUpdateHandler.java:463)
at oracleuploadtest.OracleUploader.loadModelToDatabase(OracleUploader.java:84)
at oracleuploadtest.RunOracleUploadTest.main(RunOracleUploadTest.java:81)
thanks!
Ram.

The addInBulk method needs to be called twice to trigger the bug. Here is a test case that passes only while the bug is present! (It is there to remind me to remove the workaround code when the fix gets through to my code.)
@Test
public void testThatOracleBulkBugIsNotYetFixed() throws SQLException {
    char nm[] = new char[22 - TestDataUtils.getUserID().length() - TestOracleHelper.ORACLE_USER.length()];
    Arrays.fill(nm, 'A');
    TestOracleHelper helper = new TestOracleHelper(new String(nm)); // actual name is TestDataUtils.getUserID() + "_" + nm
    GraphOracleSem og = helper.createGraph();
    Node n = RDF.value.asNode();
    Triple triples[] = new Triple[] { new Triple(n, n, n) };
    try {
        og.getBulkUpdateHandler().addInBulk(triples, null);
        // Oracle bug hits on the second call:
        og.getBulkUpdateHandler().addInBulk(triples, null);
    } catch (SQLException e) {
        if (e.getErrorCode() == 6502) {
            return; // we have a work-around for this expected error
        }
        throw e; // some other problem
    }
    Assert.fail("It seems that an Oracle update (has the ora jar been updated?) resolves a silly bug - please modify BulkLoaderExportMode");
}
Jeremy

Similar Messages

  • ORA-06502 during a procedure which uses Bulk collect feature and nested table

    Hello Friends,
    I have created a procedure which uses BULK COLLECT and a nested table to hold the bulk data. The procedure uses one cursor and a nested table with the same rowtype as the cursor to hold the data fetched from it. But it is giving an ORA-06502 error.
    I reduced the procedure's code to the following to trace the error point, but the error still comes up. Please help us find the cause and solve it.
    The script which gives the error:
    declare
        v_Errorflag BINARY_INTEGER;
        v_flag number := 1;
        CURSOR cur_terminal_info IS
            SELECT DISTINCT 'a' SettlementType
            FROM dual;
        TYPE typ_cur_terminal IS TABLE OF cur_terminal_info%ROWTYPE;
        Tab_Terminal_info typ_cur_terminal;
    BEGIN
        v_Errorflag := 2;
        OPEN cur_terminal_info;
        LOOP
            v_Errorflag := 4;
            FETCH cur_terminal_info BULK COLLECT INTO tab_terminal_info LIMIT 300;
            EXIT WHEN cur_terminal_info%rowcount <= 0;
            v_Errorflag := 5;
            FOR Y IN Tab_Terminal_Info.FIRST..tab_terminal_info.LAST
            LOOP
                dbms_output.put_line(v_flag);
                v_flag := v_flag + 1;
            END LOOP;
        END LOOP;
        v_Errorflag := 13;
        COMMIT;
    END;
    I updated the script as follows to change the nested table's datatype to varchar2, but the same error still comes up:
    declare
        v_Errorflag BINARY_INTEGER;
        v_flag number := 1;
        CURSOR cur_terminal_info IS
            SELECT DISTINCT 'a' SettlementType
            FROM dual;
        TYPE typ_cur_terminal IS TABLE OF varchar2(50);
        Tab_Terminal_info typ_cur_terminal;
    BEGIN
        v_Errorflag := 2;
        OPEN cur_terminal_info;
        LOOP
            v_Errorflag := 4;
            FETCH cur_terminal_info BULK COLLECT INTO tab_terminal_info LIMIT 300;
            EXIT WHEN cur_terminal_info%rowcount <= 0;
            v_Errorflag := 5;
            FOR Y IN Tab_Terminal_Info.FIRST..tab_terminal_info.LAST
            LOOP
                dbms_output.put_line(v_flag);
                v_flag := v_flag + 1;
            END LOOP;
        END LOOP;
        v_Errorflag := 13;
        COMMIT;
    END;
    I could not find the exact reason for the error.
    Please help us solve it.
    Thanks and Regards..
    Dipali..

    Hello Friends,
    I got the solution.. :)
    I made one mistake in the procedure, at the point where the loop should end.
    I used the statement: EXIT WHEN cur_terminal_info%rowcount <= 0;
    But it should be: EXIT WHEN Tab_Terminal_Info.COUNT <= 0;
    Now my script is working fine.. :)
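    For completeness, here is a minimal corrected sketch of the loop. The original exit test could never fire, because %ROWCOUNT reports the running total of rows fetched so far; once a fetch returns an empty collection, FIRST and LAST are NULL and the FOR range itself raises ORA-06502:
    declare
        CURSOR cur_terminal_info IS
            SELECT DISTINCT 'a' SettlementType FROM dual;
        TYPE typ_cur_terminal IS TABLE OF cur_terminal_info%ROWTYPE;
        tab_terminal_info typ_cur_terminal;
        v_flag number := 1;
    begin
        OPEN cur_terminal_info;
        LOOP
            FETCH cur_terminal_info BULK COLLECT INTO tab_terminal_info LIMIT 300;
            EXIT WHEN tab_terminal_info.COUNT = 0;  -- test the collection, not %ROWCOUNT
            FOR y IN 1 .. tab_terminal_info.COUNT LOOP
                dbms_output.put_line(v_flag);
                v_flag := v_flag + 1;
            END LOOP;
        END LOOP;
        CLOSE cur_terminal_info;
    end;
    /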
    Thanks and Regards,
    Dipali..

  • ORA-06502 trying to load a long raw into a variable.

    Hi. In my table "banco_imagem" the bim_src column is of LONG RAW type.
    I'm using Oracle Forms 6 (not 6i), so I can't use the BLOB type to save my images.
    Now I'm trying to load the LONG RAW column into a variable in a package that runs on 10g.
    I'm trying to execute the following code in SQL*Plus:
    declare
    wbim   long raw;
    begin
    select bim_src into wbim from banco_imagem where rownum=1;
    end;
    The column is not null. It has a value.
    I got the following error:
    ORA-06502: PL/SQL: numeric or value error
    ORA-06512: at line 4
    My goal is to load this column and convert it to a BLOB so I can manipulate it with my other (already running) functions.
    Can anyone help me?
    Thanks!

    Hi Mcardia,
    not sure where you're going wrong, but perhaps if you compare what you've done up to now to the following code snippet, you may figure it out eventually!
    SQL> drop table test_raw
      2  /
    Table dropped.
    SQL>
    SQL> create table test_raw (col_a long raw, col_b blob)
      2  /
    Table created.
    SQL> set serveroutput on
    SQL> declare
      2 
      3    l1 long raw;
      4    l2 long raw;
      5   
      6    b1 blob;
      7   
      8  begin
      9 
    10    l1:= utl_raw.cast_to_raw('This is a test');
    11   
    12    insert into test_raw (col_a) values (l1);
    13 
    14       
    15    select col_a
    16    into   l2
    17    from    test_raw
    18    where   rownum < 2;
    19   
    20    dbms_lob.createtemporary (b1, false);
    21   
    22    dbms_output.put_line(utl_raw.cast_to_varchar2(l2));
    23    b1 := l2;
    24 
    25    update  test_raw set col_b = b1;
    26   
    27    commit;
    28   
    29    dbms_output.put_line('Done ');
    30   
    31    exception
    32      when others then
    33        dbms_output.put_line('Error ' || sqlerrm);
    34  end;
    35  /
    This is a test
    Done
    PL/SQL procedure successfully completed.
    Bear in mind that I'm running on the following:
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production

  • ORA-30036: During Bulk Insert

    Hi gurus,
    While loading data from a staging table to a dimension table, we are facing the error below:
    ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOGTAMAC'
    We are using the script below:
    INSERT /*+ APPEND */ INTO table_dim (SELECT * FROM TEMP_tab);
    COMMIT;
    where the TEMP_tab table contains around 2,00,00,000 (20 million) rows.
    Can we use SQL*Loader to achieve this?
    Please advise.
    Thanks in advance
    Edited by: user12084499 on Oct 4, 2010 12:14 AM

    user12084499 wrote:
    While loading data from a staging table to a dimension table, we are facing the error below:
    ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOGTAMAC'
    We are using the script below:
    INSERT /*+ APPEND */ INTO table_dim (SELECT * FROM TEMP_tab);
    COMMIT;
    where the TEMP_tab table contains around 2,00,00,000 (20 million) rows
    You can do this as a custom parallel-enabled process. You can create a procedure as follows:
    create or replace procedure InsertRowRange( fromRow varchar2, toRow varchar2 ) is
    begin
        insert /*+ append */ into table_dm
        select * from temp_tab where rowid between fromRow and toRow;
    end;
    You now need to split TEMP_TAB into multiple rowid ranges - let's say 20 ranges. Instead of starting 20 parallel copies of the procedure, each with a unique rowid range, you schedule them as 20 serialised processes.
    The only requirement is that TEMP_TAB remains unchanged for the duration of the serialised processing, as adding, deleting, or updating rows in it will cause problems for the rowid range approach.
    You can also potentially use the primary key (e.g. date based pk) of the source table to govern what ranges of rows to insert per processing step (e.g. only a single day's rows).
    The main thing to stay away from (because of poor design and poor performance) is fetching from a cursor loop, inserting rows, and then committing every x number of rows. This is a horrible and non-scalable approach.
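    One way to derive those rowid ranges (a sketch; splitting with NTILE over the rowids is one option, not something the advice above prescribes):
    -- Split TEMP_TAB into 20 roughly equal rowid ranges; feed each
    -- (from_row, to_row) pair to InsertRowRange as one processing step.
    SELECT bucket,
           MIN(rid) AS from_row,
           MAX(rid) AS to_row
    FROM (SELECT rowid AS rid,
                 NTILE(20) OVER (ORDER BY rowid) AS bucket
          FROM temp_tab)
    GROUP BY bucket
    ORDER BY bucket;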

  • Critical performance problem upon bulk load of groups

    All (including product development),
    I think there are missing indexes in wwsec_flat$ and wwsec_sys_priv$. Anyway, I'd like assistance on properly fixing the critical performance problems I see. Read on...
    During and after a bulk load of a few (about 500) users and groups from an external database, it becomes evident that there's a performance problem somewhere. Many of the calls to wwsec_api.addGroupToList took several minutes to finish. Afterwards the machine went to 100% CPU just from logging in with the portal30 user (which happens to be the group owner for all the groups).
    Running SQL trace points in the direction of the following SQL statement:
    SELECT ID,PARENT_ID,NAME,TITLE_ID,TITLEIMAGE_ID,ROLLOVERIMAGE_ID,
    DESCRIPTION_ID,LAYOUT_ID,STYLE_ID,PAGE_TYPE,CREATED_BY,CREATED_ON,
    LAST_MODIFIED_BY,LAST_MODIFIED_ON,PUBLISHED_ON,HAS_BANNER,HAS_FOOTER,
    EXPOSURE,SHOW_CHILDREN,IS_PUBLIC,INHERIT_PRIV,IS_READY,EXECUTE_MODE,
    CACHE_MODE,CACHE_EXPIRES,TEMPLATE FROM
    WWPOB_PAGE$ WHERE ID = :b1
    I checked the existing indexes, and see that the following ones are missing (I'm about to test with these, but have not yet done so):
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_GROUP_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_FLAT_IX_PERSON_ID"
    ON "PORTAL30"."WWSEC_FLAT$"("PERSON_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 160K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    CREATE UNIQUE INDEX "PORTAL30"."WWSEC_SYS_PRIV_IX_PATCH1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OWNER", "GRANTEE_GROUP_ID",
    "GRANTEE_TYPE", "OWNER", "NAME", "OBJECT_TYPE_NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 80K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING
    Note that when I deleted the newly inserted groups, the CPU consumption immediately went down from 100% to some 2-3%.
    This behaviour has been observed on a Sun Solaris system, but I think it's the same on NT (I have observed it during the bulk load on my NT laptop, but so far have not had the time to test further).
    Also note: In the call to addGroupToList, I set owner to true for all groups.
    Also note: During loading of the groups, I logged a few errors, all of the same type ("PORTAL30.WWSEC_API", line 2075), as follows:
    Error: Problem calling addGroupToList for child group'Marketing' (8030), list 'NO_OSL_Usenet'(8017). Reason: java.sql.SQLException: ORA-06510: PL/SQL: unhandled user-defined exception ORA-06512: at "PORTAL30.WWSEC_API", line 2075
    Please help. If you like, I may supply the tables and the Java program that I use. It's fully reproducible.
    Thanks,
    Erik Hagen (you may call me on +47 90631013)
    null

    YES!
    I have now tested with insertion of the missing indexes. It seems the call to addGroupToList takes just as long as before, but the result is much better: WITH THE INDEXES DEFINED, THERE IS NO LONGER A PERFORMANCE PROBLEM!! The index definitions that I used are listed below (I added these to the ones that are there in Portal 3.0.8, but I guess some of those could have been deleted).
    About the info at http://technet.oracle.com:89/ubb/Forum70/HTML/000894.html: Yes! Thanks! Very interesting, and I guess you found the cause of the error messages and maybe also of the performance problem during bulk load (I'll look into it as soon as possible and report what I find).
    Note: I have made a pretty foolproof and automated installation script (or actually, it's part of my Java program), that will let anybody interested recreate the problem. Mail your interest to [email protected].
    ============================================
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_PERS_IX1"
    ON "PORTAL30"."WWSEC_PERSON$"("MANAGER")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_IX2
    ON PORTAL30.WWSEC_PERSON$("ORGANIZATION")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_PK
    ON PORTAL30.WWSEC_PERSON$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWSEC_PERS_UK
    ON PORTAL30.WWSEC_PERSON$("USER_NAME")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 32K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_UK
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID",
    "SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_PK
    ON PORTAL30.WWSEC_FLAT$("ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX5
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID", "PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX4
    ON PORTAL30.WWSEC_FLAT$("SPONSORING_MEMBER_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX3
    ON PORTAL30.WWSEC_FLAT$("GROUP_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX PORTAL30.LDAP_WWWSEC_FLAT_IX2
    ON PORTAL30.WWSEC_FLAT$("PERSON_ID")
    TABLESPACE PORTAL PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 256K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 0 FREELISTS 1);
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX1"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_GROUP_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX2"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_IX3"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME", "NAME")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_PK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 56K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    CREATE INDEX "PORTAL30"."LDAP_WWSEC_SYSP_UK"
    ON "PORTAL30"."WWSEC_SYS_PRIV$"("OBJECT_TYPE_NAME",
    "NAME", "OWNER", "GRANTEE_TYPE", "GRANTEE_GROUP_ID",
    "GRANTEE_USER_ID")
    TABLESPACE "PORTAL" PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE ( INITIAL 32K NEXT 88K MINEXTENTS 1 MAXEXTENTS 4096
    PCTINCREASE 1 FREELISTS 1)
    LOGGING;
    ==================================
    Thanks,
    Erik Hagen
    null

  • Issue with Bulk Load Post Process

    Hi,
    I ran the bulk load command line utility to create users in OIM. I had 5 records in my csv file, of which 2 users were successfully created in OIM; for the rest I got exceptions because the users already existed. After that, if I run the bulk load post process for LDAP sync, password generation, and notification, it does not work even for the successfully created users. Ideally it should sync the successfully created users. However, if there are no exceptions during the bulk load command line utility, the LDAP sync works fine through the bulk load post process. Any idea how to resolve this issue and sync the users in OID which were successfully created? Urgent help would be appreciated.

    The scheduled task carries out post-processing activities on the users imported through the bulk load utility.

  • Bulk loading in 11.1.0.6

    Hi,
    I'm using bulk load to load about 200 million triples into one model in 11.1.0.6. The data is split into about 60 files with around 3 million triples in each. I have a script file which has:
    host sqlldr ...FILE1;
    exec sem_apis.bulk_load_from_staging_table(...);
    host sqlldr ...FILE2;
    exec sem_apis.bulk_load_from_staging_table(...);
    for every file to load.
    When I run the script from the command line, it looks like the time needed for loading grows as more files are loaded. The first file took about 8 min to load, the second about 25 min... After completing 14 files it is now taking two and a half hours to load one file.
    Is index rebuild causing this behavior? If that's the case, is there any way to turn off the index during bulk loading? If the index rebuild is not the cause, what other parameters can we adjust to speed up the bulk loading?
    Thanks,
    Weihua

    Bulk-append is slower than bulk-load because of incremental index maintenance. The index that enforces the uniqueness constraint cannot be disabled. I'd suggest moving to 11.1.0.7 and then installing patch 7600122 to be able to make use of the enhanced bulk-append, which performs much better than in 11.1.0.6.
    The best way to load 200 million rows in 11.1.0.6 would be to load into an empty RDF model via a single bulk-load. You can do it as follows (assuming the filenames are f1.nt thru f60.nt):
    - [create a named pipe] mkfifo named_pipe.nt
    - cat f*.nt > named_pipe.nt
    on a different window:
    - run sqlldr with named_pipe.nt as the data file to load all 200 million rows into a staging table (you could create staging table with COMPRESS option to keep the size down)
    - next, run exec sem_apis.bulk_load_from_staging_table(...);
    (I'd also suggest use of COMPRESS for the application table.)
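    A sketch of that single-load sequence as a SQL*Plus script (the credentials, control file, staging table, and model names are placeholders, and the flags string is an assumption; check the SEM_APIS.BULK_LOAD_FROM_STAGING_TABLE signature against your version):
    -- In a shell, before running this script:
    --   mkfifo named_pipe.nt
    --   cat f*.nt > named_pipe.nt &
    -- One direct-path sqlldr run loads all rows into the staging table,
    -- then a single bulk load moves them into the empty model:
    host sqlldr scott/tiger control=stage_table.ctl data=named_pipe.nt direct=true
    exec sem_apis.bulk_load_from_staging_table('MY_MODEL', 'SCOTT', 'STAGE_TAB', flags => 'PARALLEL=4')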

  • Error Message - ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Trunc

    This is driving me nuts!
    I am getting this error from OWB during a mapping process.
    I have checked the input data and it looks fine.
    The runtime audit browser just lists all of the steps but does not make it clear which one failed. Is it the last one shown (the one that does not have HIDE as a selection link)?
    I also tried to determine which row was causing the problem and followed the instructions at http://www.nicholasgoodman.com/bt/blog/2005/07/, but no row_id was recorded in the views. In actual fact there wasn't very much audit info other than that the mapping ran and was complete (even though it failed).
    In the error message section it has, in order:
    Map Step: (blank)
    Rowkey: 35204435256
    Severity: X
    Error Message: ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind
    Object Name: N/A
    Object Column: *
    From the PL/SQL error I thought it might be trying to insert into a field that was too small, but all of the columns used hold data shorter than the corresponding target table columns.
    I have even tried running the cursors in the generated PL/SQL, but I don't get the error that way.
    Thanks in advance for any tips at all.

    Thanks for the response.
    I managed to work it out; it had to do with the selection criteria of one of the filters.
    For anyone else facing this error: check whether any of the rows being inserted has the same key/identifier as one already in the target table. If so, add an extra condition to the WHERE clause.
    This worked for me.

  • ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind

    I have a map which worked fine in 10.2.0.1. The same map in 11.2.0.2 is giving me the error
    'ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind.'
    I have one source and one target. This is a straight load, no transformations.
    While debugging the map I noticed the culprit is one column in the source which is varchar2(30).
    I have the target column with the same varchar2(30), and I tried increasing the size of
    the target column, but I keep getting the same error. While searching the forum, someone suggested
    changing the configuration of code generation options and runtime parameters to set based.
    But strangely that gave me an error, because the set based option is not available in the new 11.2.0.2.
    Should the set based option be available in this version? Please suggest how I could resolve the
    'ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind' error. Thank you.

    Hi there,
    Following is the description of the error, you are getting.
    ORA-06502: VALUE_ERROR
    An arithmetic, conversion, truncation, or size-constraint error occurs. For example, when your program selects a column value into a character variable, if the value is longer than the declared length of the variable, PL/SQL aborts the assignment and raises VALUE_ERROR. In procedural statements, VALUE_ERROR is raised if the conversion of a character string into a number fails. (In SQL statements, INVALID_NUMBER is raised.)
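    The size-constraint case described there reduces to a couple of lines (a minimal sketch):
    declare
        v varchar2(5);
    begin
        v := 'this will not fit';  -- raises ORA-06502: character string buffer too small
    end;
    /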
    Hopefully this will help.

  • ORA-29516: Bulk load of method failed; insufficient shm-object space

    Hello,
    Just installed 11.2.0.1.0 on CentOS 5.5 64-bit. All dependencies are satisfied; installation/linking went without a problem.
    The server has 32GB RAM, using AMM with the target set at 29GB; no swapping is occurring.
    No matter what I do when loading Java code (loadjava with JARs or "create and compile java source"), I keep getting the error:
    ORA-29516: Error in module Aurora: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    I checked the shm-related kernel params; all seem normal:
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maximum size of a message queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    kernel.shmall = 4294967296
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576
    Please help.

    Hi there,
    I've stumbled into exactly the same issue on 11g. After I started the database and ran loadjava on an externally compiled class (Hello.class in my instance), I got the following error:
    Error while testing for existence of dbms_java.handleMd5
    ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    ORA-06512: at "SYS.DBMS_JAVA", line 679
    Error while creating class Hello
    ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    ORA-06512: at line 1
    The following operations failed
    class Hello: creation (createFailed)
    exiting : Failures occurred during processing
    After this, I checked the trace file and saw the following error message:
    peshmmap_Create_Memory_Map:
    Map_Length = 4096
    Map_Protection = 7
    Flags = 1
    File_Offset = 0
    mmap failed with error 1
    error message:Operation not permitted
    ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
    peshmmap_Create_Memory_Map:
    Map_Length = 4096
    Map_Protection = 7
    Flags = 1
    File_Offset = 0
    mmap failed with error 1
    error message:Operation not permitted
    ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
    Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    It seems that "JOXSHM" of size "134217728" (which is 128MB) corresponds to the java_pool_size setting in my init.ora file:
    memory_target=1000M
    memory_max_target=2000M
    java_pool_size=128M
    shared_pool_size=256M
    Whenever I change that size it propagates to the trace file. I also picked up that only 592MB of shm memory gets used. My df -h dump:
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda7 39G 34G 4.6G 89% /
    udev 10M 288K 9.8M 3% /dev
    /dev/sda5 63M 43M 21M 69% /boot
    /dev/sda4 59G 45G 11G 81% /mnt/data
    shm 2.0G 592M 1.5G 29% /dev/shm
    The only way I could get loadjava to work was to remove Java from the database by calling the rmjvm.sql script.
    After this I installed Java again by calling the initjvm.sql script. I noticed that after these scripts my shm memory usage
    increased to about 624MB, which is 32MB larger than before:
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda7 39G 34G 4.6G 89% /
    udev 10M 288K 9.8M 3% /dev
    /dev/sda5 63M 43M 21M 69% /boot
    /dev/sda4 59G 45G 11G 81% /mnt/data
    shm 2.0G 624M 1.4G 31% /dev/shm
    However, after I stopped the database and started it again, my Java was broken again and calling loadjava produced
    the same error message as before. The shm memory usage would also return to 592MB. Is there something I
    need to do to persist the changes that initjvm and rmjvm make to the database? Or is there something else
    wrong that I'm overlooking, like the memory management settings?
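    For what it's worth, the java pool allocation that the JOXSHM cache appears to track can be watched from SQL while experimenting (a small sketch; v$sgastat is a standard view):
    -- How much of the SGA is currently given to the java pool:
    SELECT pool, name, bytes
    FROM   v$sgastat
    WHERE  pool = 'java pool';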
    Regards,
    Wiehann

  • ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind

    Hi
    I am getting the run-time error ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind in my PL/SQL. I tried everything, changing datatypes, etc., but this error still comes up. What can be the cause? Please help.
    declare
    svid xxpor_utility.p_svid@sppmig1%type;
    p_sv_id xxpor_utility.p_svid@sppmig1%type;
    tab xxpor_utility.xxpor_indextab@sppmig1;
    svid1 xxpor_utility.p_svid@sppmig1%type;
    p_sv_id1 xxpor_utility.p_svid@sppmig1%type;
    tab1 xxpor_utility.xxpor_indextab@sppmig1;
    svid2 xxpor_utility.p_svid@sppmig1%type;
    p_sv_id2 xxpor_utility.p_svid@sppmig1%type;
    tab2 xxpor_utility.xxpor_indextab@sppmig1;
    svid3 xxpor_utility.p_svid@sppmig1%type;
    p_sv_id3 xxpor_utility.p_svid@sppmig1%type;
    tab3 xxpor_utility.xxpor_indextab@sppmig1;
    v_index t2_error_table.id_value%type;
    v_code t2_error_table.error_code%type;
    p_error varchar2(600);
    k number(20):=0;
    v_msg varchar2(2000);
    v_commit_count number(10);
    v_at_gpid varchar2(512);
    v_at_oper varchar2(512);
    v_sch varchar2(512);
    v_vat varchar2(512);
    exp exception;
    exp1 exception;
    exp2 exception;
    exp3 exception;
    exp4 exception;
    v_pay varchar2(512);
    v_res varchar2(512);
    v_digit varchar2(512);
    v_agree varchar2(512);
    v_driver_licence PERSON_HISTORY.drivers_licence%TYPE;
    v_cus_gen1 number(10);
    v_cus_gen2 number(10);
    v_cus_gen3 number(10);
    svid_sr number(10);
    v_social PERSON_HISTORY.social_security_number%TYPE;
    CURSOR person_cur (p_person_id person_history.person_id%TYPE)
    IS
    SELECT drivers_licence ,social_security_number
    FROM PERSON_HISTORY@SPPMIG1
    WHERE PERSON_ID=p_person_id --p2(p).person_id
         AND EFFECTIVE_START_DATE = (SELECT MAX(EFFECTIVE_START_DATE)
         FROM PERSON_HISTORY@sppmig1
                                            WHERE PERSON_ID=p_person_id);--p2(p).person_id) ;
    --p number(20):=1;
    --j number(20);
    cursor c1 is
    select * from cus_node_his ;
    type temp_c1 is table of customer_node_history%rowtype
    index by binary_integer;
    t2 temp_c1;
    type temp_c2 is table of customer_node_history@slpmig1%rowtype
    index by binary_integer;
    p2 temp_c2;
    /*cursor c2(p_id customer_query.customer_node_id%type) is
    select general_1,general_2,general_3
    from customer_query@sppmig1 c where c.customer_query_type_id=10003 and
    c.customer_node_id(+) =p_id
    and c.open_date = (select
    max(open_date) from customer_query@sppmig1 where customer_node_id=p_id
    and customer_query_type_id=10003 and c.customer_query_id =(select max(customer_query_id) from customer_query@sppmig1
    where customer_node_id=p_id and customer_query_type_id=10003));*/
    procedure do_bulk_insert is
    bulk_errors EXCEPTION;
    PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
    begin
    forall j in 1..t2.count SAVE EXCEPTIONS
    insert into aaa values t2(j);
    commit;
    --t2.delete;
    k:=0;
    v_msg:=sqlerrm;
    EXCEPTION WHEN bulk_errors THEN
    FOR L IN 1..SQL%bulk_exceptions.count
    LOOP
    v_index := SQL%bulk_exceptions(L).ERROR_INDEX;
    v_code := sqlerrm(-1 * SQL%bulk_exceptions(L).ERROR_CODE);
    --v_index := SQL%bulk_exceptions(j).ERROR_INDEX;
    --v_code := sqlerrm(-1 * SQL%bulk_exceptions(j).ERROR_CODE);
    INSERT INTO t2_error_table
    VALUES('CUSTOMER_NODE_HISTORY',
    'CUSTOMER_NODE_ID',
    v_msg,
    t2(v_index).customer_node_id,
    null,
    'DO_BULK_INSERT',
    v_code
    );
    commit;
    END LOOP;
    end do_bulk_insert;
    begin
    select value into v_at_gpid from t2_system_parameter@sppmig1 where name='atlanta_group_id';
    select value into v_commit_count from t2_system_parameter@sppmig1 where name='batch_size';
    select value into v_sch from t2_system_parameter@sppmig1 where name='schedule_id';
    select value into v_pay from t2_system_parameter@sppmig1 where name='payment_location_code';
    select value into v_at_oper from t2_system_parameter@sppmig1 where name='atlanta_operator_id';
    select value into v_digit from t2_system_parameter@sppmig1 where name='digits_to_be_screened';
    select value into v_res from t2_system_parameter@sppmig1 where name='responsible_agent';
    select value into v_vat from t2_system_parameter@sppmig1 where name='vat_rate';
    select value into v_agree from t2_system_parameter@sppmig1 where name='bank_agreement_status';
    xxpor_utility.xxpor_loadmemory@sppmig1('CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_TYPE_ID',tab);
    xxpor_utility.xxpor_loadmemory@sppmig1('CUSTOMER_NODE_HISTORY','CREDIT_RATING_CODE',tab2);
    xxpor_utility.xxpor_loadmemory@sppmig1('CUSTOMER_NODE_HISTORY','PAYMENT_METHOD_CODE',tab3);
    xxpor_utility.xxpor_loadmemory@sppmig1('CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_STATUS_CODE',tab1);
    open c1;
    loop
    fetch c1 bulk collect into p2 limit v_commit_count;
    for p in 1..p2.count loop
    k:=K+1;
    begin
    xxpor_utility.xxpor_getsvid@sppmig1(p2(p).CUSTOMER_NODE_TYPE_ID,tab,svid);
    p_sv_id:=svid;
    xxpor_utility.xxpor_getsvid@sppmig1(p2(p).CUSTOMER_NODE_STATUS_CODE,tab1,svid1);
    p_sv_id1 :=svid1;
    xxpor_utility.xxpor_getsvid@sppmig1(p2(p).CREDIT_RATING_CODE,tab2,svid2);
    p_sv_id2:=svid2;
    xxpor_utility.xxpor_getsvid@sppmig1(p2(p).PAYMENT_METHOD_CODE,tab3,svid3);
    p_sv_id3:=svid3;
    OPEN person_cur (p2(p).person_id);
    FETCH person_cur INTO v_driver_licence, v_social;
    CLOSE person_cur;
    --select social_security_number  into v_social from person_history@sppmig1 where
    --PERSON_ID=p2(p).person_id AND EFFECTIVE_START_DATE = (SELECT MAX(EFFECTIVE_START_DATE) FROM
    --PERSON_HISTORY@sppmig1 WHERE PERSON_ID=p2(p).person_id) ;
    /*open c2(p2(p).customer_node_id);
    fetch c2 into v_cus_gen1, v_cus_gen2, v_cus_gen3;
    close c2;
    xxpor_utility.get_status_code@sppmig1(v_cus_gen1,v_cus_gen2,v_cus_gen3,svid_sr);*/
    svid_sr:=2600000;
    t2(k).CUSTOMER_NODE_ID     :=     p2(p).CUSTOMER_NODE_ID;
    t2(k).LAST_MODIFIED          :=     p2(p).LAST_MODIFIED;
    t2(k).EFFECTIVE_START_DATE     :=     p2(p).EFFECTIVE_START_DATE;
    t2(k).EFFECTIVE_END_DATE     :=     p2(p).EFFECTIVE_END_DATE;
    t2(k).CUSTOMER_NODE_TYPE_ID     := p_sv_id;
    if p_sv_id is null then
    raise exp1;
    end if;
    t2(k).PRIMARY_IDENTIFIER      :=     p2(p).PRIMARY_IDENTIFIER;
    t2(k).PRIMARY_IDENTIFIER2     :=     p2(p).PRIMARY_IDENTIFIER2;
    t2(k).NODE_NAME           :=     p2(p).NODE_NAME ;
    t2(k).NODE_NAME_UPPERCASE     :=     p2(p).NODE_NAME_UPPERCASE ;
    t2(k).NODE_NAME_SOUNDEX     :=     p2(p).NODE_NAME_SOUNDEX;
    t2(k).ATLANTA_GROUP_ID          := v_at_gpid ;
    t2(k).ATLANTA_OPERATOR_ID     :=     p2(p).ATLANTA_OPERATOR_ID;
    t2(k).GL_CODE_ID          :=     p2(p).GL_CODE_ID;
    t2(k).PARENT_CUSTOMER_NODE_ID     := p2(p).PARENT_CUSTOMER_NODE_ID ;
    t2(k).HIERARCHY_LEVEL          := p2(p).HIERARCHY_LEVEL ;
    t2(k).ROOT_CUSTOMER_NODE_ID      := p2(p).ROOT_CUSTOMER_NODE_ID ;
    t2(k).CUSTOMER_NODE_STATUS_CODE := p_sv_id1 ;
    if p_sv_id1 is null then
    raise exp2;
    end if;
    t2(k).CREATED_DATE     :=          p2(p).CREATED_DATE;
    t2(k).ACTIVE_DATE      :=          p2(p).ACTIVE_DATE ;
    t2(k).PERSON_ID     :=          p2(p).PERSON_ID ;
    t2(k).PRIME_ACCOUNT_ID :=          p2(p).PRIME_ACCOUNT_ID;
    t2(k).REPORT_LEVEL_CODE :=          p2(p).REPORT_LEVEL_CODE;
    t2(k).POSTAL_ADDRESS_ID     :=     p2(p).POSTAL_ADDRESS_ID;
    t2(k).SITE_ADDRESS_ID     :=     p2(p).SITE_ADDRESS_ID ;
    t2(k).CURRENCY_ID      :=          p2(p).CURRENCY_ID;
    t2(k).SCHEDULE_ID     :=          v_sch;
    t2(k).BILLING_PRIORITY     :=     p2(p).BILLING_PRIORITY ;
    t2(k).BILLING_COMPLEXITY:=          p2(p).BILLING_COMPLEXITY ;
    t2(k).BILLING_CONFIGURATION_CODE     := p2(p).BILLING_CONFIGURATION_CODE;
    t2(k).SUPPRESS_IND_CODE           := p2(p).SUPPRESS_IND_CODE ;
    t2(k).SUPPRESS_BILL_CYCLE_COUNT := p2(p).SUPPRESS_BILL_CYCLE_COUNT;
    t2(k).SUPPRESS_UNTIL_ISSUE_DATE := p2(p).SUPPRESS_UNTIL_ISSUE_DATE;
    t2(k).TURNOVER               := p2(p).TURNOVER;
    t2(k).TURNOVER_CURRENCY_ID      :=     p2(p).TURNOVER_CURRENCY_ID ;
    t2(k).CREDIT_LIMIT           :=     p2(p).CREDIT_LIMIT ;
    t2(k).CREDIT_LIMIT_CURRENCY_ID :=     p2(p).CREDIT_LIMIT_CURRENCY_ID;
    t2(k).EXPECTED_REVENUE      :=     p2(p).EXPECTED_REVENUE ;
    t2(k).EXPECTED_REVENUE_CURRENCY_ID     := p2(p).EXPECTED_REVENUE_CURRENCY_ID ;
    t2(k).CREDIT_RATING_CODE      :=     p_sv_id2 ;
    -- if p_sv_id2 is null then
    --raise exp3;
    -- end if;
    t2(k).CREDIT_COMMENTS           := p2(p).CREDIT_COMMENTS ;
    t2(k).TAX_CLASS_CODE          := 1     ;
    t2(k).PAYMENT_METHOD_CODE     :=     p_sv_id3;
    --if p_sv_id3 is null then
    --raise exp4;
    --end if;
    t2(k).PAYMENT_LOCATION_CODE      := v_pay ;
    t2(k).BANK_CODE           :=     NULL;
    t2(k).BRANCH_CODE           :=     NULL ;
    t2(k).BANK_ACCOUNT_NAME     :=     p2(p).NODE_NAME ;
    t2(k).BANK_ACCOUNT_NUMBER     :=     '1000000';
    t2(k).BANK_ACCOUNT_REF      :=     v_agree;
    t2(k).CARD_TYPE_CODE          := p2(p).CARD_TYPE_CODE     ;
    t2(k).CARD_NUMBER          :=     p2(p).CARD_NUMBER ;
    t2(k).CARD_EXPIRY_DATE          := NULL ;
    t2(k).ASSIGNED_OPERATOR_ID      :=     NULL ;
    t2(k).SALES_CHANNEL_CODE     :=     0;
    t2(k).COMPANY_NUMBER          := NULL;
    t2(k).INDUSTRY_CODE          :=     NULL;
    t2(k).REGION_CODE           :=     NULL;
    t2(k).GENERAL_1          :=     v_vat ;
    t2(k).GENERAL_2           :=     svid_sr ;
    if svid_sr is null then
    raise exp;
    end if;
    t2(k).GENERAL_3           :=     v_social ;
    t2(k).GENERAL_4           :=     v_driver_licence ;
    t2(k).GENERAL_5           :=     v_vat;
    t2(k).GENERAL_6           :=     v_res;
    t2(k).GENERAL_7           :=     null||':'||null||':'||'1000000'||':'||null||':'||null||':'||null||':';
    t2(k).GENERAL_8          :=     '2' ;
    t2(k).GENERAL_9           :=     v_digit;
    t2(k).GENERAL_10          :=     p2(p).CUSTOMER_NODE_ID;
    exception when exp then
    p_error:= sqlerrm;
    insert into t2_error_table values ( 'CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_ID',p_error,p2(p).customer_node_id
    ,null,null,null);
    commit;
    when exp1 then
    p_error:= sqlerrm;
    insert into t2_error_table values ( 'CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_ID',p_error,p2(p).customer_node_id
    ,null,null,'customer_node_type_id is null');
    commit;
    when exp2 then
    p_error:= sqlerrm;
    insert into t2_error_table values ( 'CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_ID',p_error,p2(p).customer_node_id
    ,null,null,'customer_node_status_code is null');
    commit;
    /*when exp3 then
    p_error:= sqlerrm;
    insert into t2_error_table values ( 'CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_ID',p_error,p2(p).customer_node_id
    ,null,null,'credit_rating_code is null');
    commit;
    when exp4 then
    p_error:= sqlerrm;
    insert into t2_error_table values ( 'CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_ID',p_error,p2(p).customer_node_id
    ,null,null,null);
    commit;*/
    when others then
    p_error:= sqlerrm;
    insert into t2_error_table values ( 'CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_ID',p_error,p2(p).customer_node_id
    ,null,null,null);
    commit;
    end;
    if mod(k,v_commit_count)=0 then
    do_bulk_insert;
    t2.delete;
    end if;
    end loop;
    do_bulk_insert;
    exit when c1%notfound;
    end loop;
    t2.delete;
    exception when others then
    p_error:= sqlerrm;
    insert into t2_error_table values ( 'CUSTOMER_NODE_HISTORY','CUSTOMER_NODE_ID',p_error,null
    ,null,null,null);
    commit;
    RAISE;
    end;
    /

    Hi there,
    Following is the description of the error, you are getting.
    ORA-06502:VALUE_ERROR
    An arithmetic, conversion, truncation, or size-constraint error occurs. For example, when your program selects a column value into a character variable, if the value is longer than the declared length of the variable, PL/SQL aborts the assignment and raises VALUE_ERROR. In procedural statements, VALUE_ERROR is raised if the conversion of a character string into a number fails. (In SQL statements, INVALID_NUMBER is raised.)
    Hopefully this will help.

  • ORA-06502: numeric or value error / ORA-04088: error during execution of trigger

    I received the following error message while entering a number within the maximum value (i.e. 9,999,999) allowed in a data entry form which is separate from the base form:
    ORA-06502: PL/SQL: numeric or value error: number precision too large
    ORA-06512: at "<Owner>.<Trigger>", line 194
    ORA-04088: error during execution of trigger "<Owner>.<Trigger>"
    Where <Owner> is the schema name and <Trigger> is the trigger name. The data block of the entry form is based on a database table, and the sum of all values entered (with a maximum value of 99,999,999) is displayed in the base form. A grand total of this sum, along with other totals on the base form, is also displayed, with a maximum value defined as 999,999,999. I only receive the above error message when the grand total is greater than 99,999,999. If the grand total is less than or equal to 99,999,999, the base form works fine. Why????
    I have verified all the attributes for the database columns and form fields and all seem to be okay, and I am running out of my wits. I am desperately in need of your help in resolving this issue soon, because the pressure is on...
    Thanks in advance for any/all the help.

    Orchid wrote:
    I received the following error message while entering a number within the maximum value (i.e. 9,999,999) allowed in a data entry form which is separate from the base form:
    ORA-06502: PL/SQL: numeric or value error: number precision too large ORA-06512: at "<Owner>.<Trigger>", line 194 ORA-04088: error during execution of trigger "<Owner>.<Trigger>"
    Where <Owner> is the schema name and <Trigger> is the trigger name. The data block of the entry form is based on a database table, and the sum of all values entered (with a maximum value of 99,999,999) is displayed in the base form. A grand total of this sum, along with other totals on the base form, is also displayed, with a maximum value defined as 999,999,999. I only receive the above error message when the grand total is greater than 99,999,999. If the grand total is less than or equal to 99,999,999, the base form works fine. Why????
    I have verified all the attributes for the database columns and form fields and all seem to be okay, and I am running out of my wits. I am desperately in need of your help in resolving this issue soon, because the pressure is on...
    Thanks in advance for any/all the help.
    Check your associated database column length. Its precision is not able to hold the value you mentioned. Increase the length with:
    ALTER TABLE table_name
    MODIFY (column_name NUMBER(20));
    If the above solution doesn't work, then you probably use a variable in the trigger code which holds the value and whose size is limited; increase it.
    Added:
    Try increasing the length of TAB_S_TOT_COST from 9 to 18, and your problem will be solved.
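    For illustration, the precision failure reduces to a couple of lines (a sketch):
    declare
        n number(7);  -- precision 7 holds at most 9,999,999
    begin
        n := 99999999;  -- raises ORA-06502: number precision too large
    end;
    /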
    Hamid
    Edited by: HamidHelal on Feb 13, 2013 10:28 AM

  • Error occurred during quick migration: ORA-06502: numeric or value error

    I am a complete beginner with SQL Developer. I have a problem during a quick migration from a MySQL db to an Oracle DB:
    step 1: captured model processed successfully
    step 2: converted model processing failed with ORA-06502: numeric or value error
    My work environment:
    SQL Developer (2.1.1.64)
    Java platform: 1.6.0
    Oracle ide:2.1.1
    Ojdbc5.jar
    Oracle server ver. 9i
    mySQL datatypes: datetime (default value: 0000-00-00 00:00:00), float, varchar(100), int(11)
    Please help me!
    Thanks in advance!
    Vien.

    kgronau wrote:
    could you please log into MySQL using the mysql utility, then change to the db (use <your mysql db>) and provide the output of desc bars_eopt
    Hi kgronau,
    Thanks for your response. Below is the table "bars_eopt" description from the MySQL db (the first row is the header; sorry about the display not lining up when typed).
    Field               Type           Collation          Null  Key  Default
    Bar_Index           int(11)                           NO    PRI  0
    Recipe              varchar(100)   latin1_swedish_ci  NO    MUL
    Date_Time           datetime                          NO         0000-00-00 00:00:00
    ThresholdCurrent    float                             NO         0
    SlopeEfficiency     float                             NO         0
    Pmax                float                             NO         0
    Voltage_at_Imax     float                             NO         0
    SeriesResistance    float                             NO         0
    PeakWavelength      float                             NO         0
    FWHM                float                             NO         0
    CentroidWavelength  float                             NO         0
    Efficiency_at_Imax  float                             NO         0
    ForwardVoltage      float                             NO         0
    FW_90_Percent       float                             NO         0
    Emitter_at_Ith      float                             NO         0
    Emitter_at_Imax     float                             NO         0
    Delta_Emitter       float                             NO         0
    LOT_ID              varchar(100)   latin1_swedish_ci  NO    MUL
    Part_Number         varchar(100)   latin1_swedish_ci  NO    MUL
    Pak_Number          int(11)                           NO         0
    Pak_Position        int(11)                           NO         0
    Tracer              int(11)                           NO         0
    OCR                 varchar(100)   latin1_swedish_ci  NO
    Inspection_Result   varchar(100)   latin1_swedish_ci  NO
    Inspection_Defect   varchar(100)   latin1_swedish_ci  NO
    Facette             varchar(1000)  latin1_swedish_ci  NO
    Facette2            varchar(1000)  latin1_swedish_ci  NO
    Upside              varchar(1000)  latin1_swedish_ci  NO
    Downside            varchar(1000)  latin1_swedish_ci  NO
    (Extra and Comment are empty for all columns; every column has the privileges select,insert,update,references.)
    Thanks again and Best Regards,
    Vien.T

  • ORA-06502 error during dbms_mview.refresh

    I created a materialized view MV1 and refreshed it using the dbms_mview.refresh option, and it worked:
    dbms_mview.refresh (list => 'MV1', method => 'F');
    My requirement is to refresh MV1 daily at 10.00 P.M. I created a procedure (code attached below) to execute dbms_mview.refresh, and scheduled the procedure to run at 10.00 P.M.
    I am getting the following error on a regular basis, though not always:
    ORA-06502: PL/SQL: numeric or value error: raw variable length too long
    Kindly do the needful.
    PACKAGE:
    CREATE OR REPLACE PACKAGE BODY PKG_COMMON
    AS
    -- LOG EXCEPTIONS
    PROCEDURE log (
    p_package IN ttfa_log.package_name%TYPE DEFAULT NULL,
    p_procedure IN ttfa_log.procedure_name%TYPE DEFAULT NULL,
    p_log_priority IN ttfa_log.log_priority%TYPE DEFAULT 'INFO',
    p_params IN ttfa_log.parameters%TYPE DEFAULT NULL,
    p_line_number IN ttfa_log.line_number%TYPE DEFAULT NULL,
    p_error_stack IN VARCHAR2 DEFAULT NULL,
    p_error_backtrace IN VARCHAR2 DEFAULT NULL
    )
    IS
    l_step PLS_INTEGER := 0;
    l_log_date DATE;
    PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
    l_step := 20;
    l_log_date := sysdate;
    l_step := 40;
    INSERT INTO ttfa_log
    ( log_id
    ,log_date
    ,log_priority
    ,username
    ,package_name
    ,procedure_name
    ,line_number
    ,error_code
    ,log_message
    ,backtrace_tmp
    ,parameters
    ,partition_key
    )
    VALUES
    ( ttfa_log_id_seq.NEXTVAL
    ,l_log_date
    ,p_log_priority
    ,LOWER(USER)
    ,p_package
    ,p_procedure
    ,p_line_number
    ,SUBSTR( p_error_stack, 1, INSTR(p_error_stack,':') - 1 )
    ,substr(p_error_stack,1,4000)
    ,substr(p_error_backtrace,1,4000)
    ,p_params
    ,EXTRACT(MONTH FROM l_log_date)
    );
    COMMIT;
    EXCEPTION
    WHEN OTHERS
    THEN
    RAISE_APPLICATION_ERROR(-20001, 'Error in pkg_ttfa_log.log ' ||chr(13) || 'Actual Error=>' ||DBMS_UTILITY.FORMAT_ERROR_STACK);
    END log;
    -- REFRESH MATERIALIZED VIEW
    PROCEDURE refresh_mi_mvs
    IS
    BEGIN
    -- Refresh Materialized View
    BEGIN
    dbms_mview.refresh (list => 'MV1', method => 'F');
    EXCEPTION
    WHEN OTHERS THEN
    log (
    p_package => 'PKG_COMMON'
    ,p_procedure => 'refresh_mvs'
    ,p_log_priority => 'ERROR'
    ,p_params => 'MV1'
    ,p_line_number => 40
    ,p_error_stack => DBMS_UTILITY.FORMAT_ERROR_STACK
    ,p_error_backtrace => DBMS_UTILITY.FORMAT_ERROR_BACKTRACE
    );
    ROLLBACK;
    END;
    EXCEPTION
    WHEN OTHERS THEN
    log (
    p_package => 'PKG_COMMON'
    ,p_procedure => 'refresh_mi_mvs'
    ,p_log_priority => 'ERROR'
    ,p_params => 'Package Exception'
    ,p_line_number => 90
    ,p_error_stack => DBMS_UTILITY.FORMAT_ERROR_STACK
    ,p_error_backtrace => DBMS_UTILITY.FORMAT_ERROR_BACKTRACE
    );
    ROLLBACK;
    END;
    END PKG_COMMON;

    It is impossible, from what you have posted, to know whether the issue is in what you are passing to Oracle or in what you have written yourself. Have you run a trace? If not, that is the first step.
    And when you post next ... a version number would be very helpful.
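    A minimal way to capture such a trace around the refresh call (a sketch using DBMS_MONITOR; run tkprof on the resulting file from USER_DUMP_DEST):
    -- Trace the current session while the refresh runs:
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE)
    EXEC PKG_COMMON.refresh_mi_mvs
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE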

  • ORA-02374: conversion error loading table during import using IMPDP

    Hi All,
    We are trying to migrate the data from one database to another.
    The source database has the character set:
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    US7ASCII
    The destination database has the character set:
    SQL> select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
    VALUE
    AL32UTF8
    We took an export of the whole database using expdp, and when we try to import it into the destination database using impdp, we get the following error:
    ORA-02374: conversion error loading table <TABLE_NAME>
    ORA-12899: value too large for column <COLUMN NAME> (actual: 42, maximum: 40)
    ORA-02372: data for row:<COLUMN NAME> : 0X'4944454E5449464943414349E44E204445204C4C414D414441'
    Kindly let me know how to overcome this issue in the destination.
    Thanks & Regards,
    Vikas Krishna

    Hi,
    You can overcome this issue by increasing the column width in the target database to the maximum length required, so that all the data imports successfully into the table.
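    For example (a sketch; the table and column names are placeholders): widening the column, or declaring it with character-length semantics so that 40 characters fit regardless of how many bytes each takes in AL32UTF8:
    -- Run in the target database before the import:
    ALTER TABLE my_table MODIFY (my_column VARCHAR2(40 CHAR));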
    Regards
