Avoiding ORA-01401 exceptions
Hi,
Is there an easy way to avoid ORA-01401 exceptions (inserted value too large for column)?
For instance, is there an OCI connection option to automatically truncate values which are too long for their columns?
Should I use an INSERT statement which looks like:
INSERT INTO table (..., some_long_column) VALUES (..., SUBSTRB(?, 1, 4000))
Thanks,
ER
Hi,
user10936714 wrote:
Hi,
Is there an easy way to avoid ORA-01401 exceptions (inserted value too large for column)?
For instance, is there an OCI connection option to automatically truncate values which are too long for their columns?
A trigger, like Ttt suggested, is the only automatic thing I can think of.
Should I use an INSERT statement which looks like:
INSERT INTO table (..., some_long_column) VALUES (..., SUBSTRB(?, 1, 4000))
That's probably what I would do.
It really depends on what the users want. The advantage of a trigger is that it works automatically and quietly, but that can sometimes be a disadvantage. Will users ever be upset because what went into the table was not what they said it should be? For example, the most important part of a long message could be the bottom line. It may actually be better to raise an error in such cases.
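If the SUBSTRB route is chosen, the same truncation can also be done client-side before binding. A minimal sketch in Python (illustrative only, not an OCI feature, and assuming a UTF-8 client encoding): SUBSTRB counts bytes, so the helper must avoid cutting a multi-byte character in half.

```python
def truncate_to_bytes(s: str, max_bytes: int, encoding: str = "utf-8") -> str:
    """Truncate s so its encoded form fits in max_bytes without
    splitting a multi-byte character -- a client-side analogue of
    SUBSTRB(value, 1, max_bytes)."""
    raw = s.encode(encoding)
    if len(raw) <= max_bytes:
        return s
    # errors="ignore" silently drops a trailing partial character.
    return raw[:max_bytes].decode(encoding, errors="ignore")
```

Truncating before the INSERT keeps the statement simple, but it silently loses data, which is exactly the trade-off discussed above.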
Similar Messages
-
ORA-01401 with export/import
Hi!
I'm trying to migrate a database from version 8.0 on Novell to 9.2 on Linux.
I have pre-created all tablespaces and all tables are successfully created. In the data import stage I get a lot of "ORA-01401 inserted value too large for column" errors.
Any hints would be very welcome.
Thanks
Uros
Thank you for your answer, but I'm aware of what this error means.
Let's go through the procedure one more time:
1. Export from the old database (8.0) is done with exp.
2. Tablespaces are manually created in the new database (9.2).
3. Import of tables, users, data, etc. is done with the dump from step 1. During import I get a lot of ORA-01401 errors, although I didn't change the dump file or the data in it.
I have found somewhere on the net that this could come from character-set conversion, but I'm not sure how to set character sets to avoid these errors.
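The character-set theory is plausible: a value that fits a column's byte limit in the source character set can exceed it after conversion to a multi-byte character set. A quick illustration in Python (assuming a single-byte-to-UTF8 style conversion; the encodings here stand in for the database character sets):

```python
# Accented characters are 1 byte each in ISO-8859-1,
# but 2 bytes each after conversion to UTF-8.
text = "\u00fc" * 100            # 100 copies of 'ü'

latin1 = text.encode("latin-1")  # source-side byte length
utf8 = text.encode("utf-8")      # target-side byte length

print(len(latin1), len(utf8))    # 100 vs 200: the value doubled in size
```

A value that exactly filled a VARCHAR2(100) column before export would need 200 bytes on import, hence ORA-01401. Typical fixes are widening the target columns before import, or using character-length semantics where the database version supports it.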
Regards
Uros -
Ora-01401 error on a complex view
I'm getting an ORA-01401 error on a view with the following structure.
SQL> desc vu_mat_product_msds_ingred;
Name Null? Type
MSDS_COMMENTS VARCHAR2(500)
MAT_PROD_MSDS_SOURCE VARCHAR2(30)
MISSING_INGRED_IND CHAR(1)
FLASH_POINT_COMMENTS VARCHAR2(100)
MSDS_ENTER_BY_ID NUMBER
CURRENT_AS_OF_DT DATE
MFG_REVISION_DT DATE
GROUP_ID NUMBER
CALC_VAPOR_PRESSURE NUMBER
CALC_VAPOR_PRESSURE_UOM VARCHAR2(5)
CALC_VAPOR_TEMPERATURE NUMBER
CALC_VAPOR_TEMPERATURE_UOM CHAR(1)
MAT_PROD_SPECIFIC_GRAVITY VARCHAR2(15)
CALC_SPECIFIC_GRAVITY NUMBER(6,4)
MAT_PROD_PROD_ID NOT NULL NUMBER
MAT_PROD_MFG_ID NOT NULL NUMBER
CLASS_ID NUMBER
CHEM_INV_IND CHAR(1)
SLED_EXEMPT_IND CHAR(1)
ACTIVE_IND CHAR(1)
WAIVER_REQD_IND CHAR(1)
ITEM_NAME VARCHAR2(60)
TRADE_NAME VARCHAR2(60)
HAZARD_CD CHAR(1)
FLASH_PT_IND CHAR(1)
FLASH_PT NUMBER
FLASH_PT_CMP CHAR(1)
FLASH_PT_SCALE_CD CHAR(1)
FLASH_PT_METHOD VARCHAR2(6)
VOC_QTY_GL NUMBER
VOC_QTY_PG NUMBER
VOC_QTY_OZ NUMBER
DISPOSAL_CD CHAR(1)
PRODUCT_STATE_CD CHAR(1)
TEMPERATURE_CD CHAR(1)
CTS_CD CHAR(1)
MAT_PROD_PPE_CD CHAR(1)
EXEMPTION_CD CHAR(1)
PCT_VOLAT_VOL NUMBER
PCT_VOLAT_WGT NUMBER
VAPOR_PRESSURE NUMBER
VAPOR_PRESSURE_UOM VARCHAR2(5)
VAPOR_TEMPERATURE NUMBER
VAPOR_TEMPERATURE_UOM CHAR(1)
VOC_COMMENTS VARCHAR2(250)
VOLATILE_LBS_GAL NUMBER
ARC1 VARCHAR2(2)
ARC2 VARCHAR2(2)
ARC3 VARCHAR2(2)
ARC4 VARCHAR2(2)
PURE_IND CHAR(1)
SITE_USAGE_IND CHAR(1)
PROPRIETARY_IND CHAR(1)
MSDS_PREP_DT DATE
MSDS_ENTER_DT DATE
HMOTW_ID NUMBER
ITEM_PRICE NUMBER(9,2)
RESTR_PRODUCT_IND CHAR(1)
CONTAINER_ID NOT NULL NUMBER
MAT_PROD_CONT_PROD_ID NOT NULL NUMBER
UPC_CD VARCHAR2(15)
SKU_NR VARCHAR2(60)
NSN VARCHAR2(15)
KIT_PART_CD VARCHAR2(2)
MFG_PART_NR VARCHAR2(60)
CONTAINER_SIZE NUMBER(10,4)
CONTAINER_SIZE_UOM VARCHAR2(3)
CNTAIN_KGRAMS_QTY NUMBER
MFGKIT_IND CHAR(1)
SEPARATE_IND CHAR(1)
TYP_CNTAIN_CD VARCHAR2(2)
CNTAIN_PRES_CD CHAR(1)
PROD_ST_CD CHAR(1)
PRODUCT_NR NOT NULL NUMBER(7)
PRODUCT_UI VARCHAR2(2)
MAT_PROD_MFG_MFG_ID NOT NULL NUMBER
CAGE NOT NULL VARCHAR2(5)
MFG_UPC VARCHAR2(7)
MFG_NAME NOT NULL VARCHAR2(50)
MFG_ADDR1 VARCHAR2(100)
MFG_ADDR2 VARCHAR2(100)
MFG_CITY VARCHAR2(100)
MFG_STATE_PROVINCE VARCHAR2(60)
MFG_POSTAL_CD VARCHAR2(30)
MFG_COUNTRY VARCHAR2(40)
MFG_EMRG_PHONE VARCHAR2(40)
MFG_INFO_PHONE VARCHAR2(40)
WEB_SITE_URL VARCHAR2(500)
PROP_SHIP_NM_ID NUMBER
MAT_PROD_MSDS_PPE_CD CHAR(1)
MSDS_ID NOT NULL NUMBER
MAT_PROD_MSDS_PROD_ID NOT NULL NUMBER
MAT_PROD_MSDS_MSDS_SOURCE VARCHAR2(30)
PUBLICATION_CD CHAR(1)
HEALTH_CD CHAR(1)
CONTACT_CD CHAR(1)
FIRE_CD CHAR(1)
REACT_CD CHAR(1)
PROT_EYE CHAR(1)
PROT_SKIN CHAR(1)
PROT_RESP CHAR(1)
FOCAL_PT_CD VARCHAR2(2)
SUPPLY_IM VARCHAR2(3)
MSDS_PREPR_NAME VARCHAR2(50)
PREP_COMPANY VARCHAR2(40)
PREP_ADD1 VARCHAR2(100)
PREP_ADD2 VARCHAR2(100)
PREP_CITY VARCHAR2(100)
PREP_STATE_PROVINCE VARCHAR2(60)
PREP_POSTAL_CD VARCHAR2(30)
MSDS_SHIP_NAME VARCHAR2(600)
MSDS_PKG_GRP VARCHAR2(3)
MSDS_UN_NA VARCHAR2(2)
MSDS_UN_NA_NR VARCHAR2(5)
MSDS_UN_NA_PAGE VARCHAR2(5)
SPEC_NR VARCHAR2(20)
SPEC_TYP_GR_CLS VARCHAR2(20)
HAZ_STOR_COMP_CD VARCHAR2(5)
HAZ_CATEGORY_1 VARCHAR2(10)
HAZ_CATEGORY_2 VARCHAR2(10)
NRC_LIC_NR VARCHAR2(15)
NET_PROP_WGT_AMMO VARCHAR2(7)
APPEAR_ODOR VARCHAR2(80)
BOIL_PT VARCHAR2(11)
MELT_PT VARCHAR2(11)
VPR_PRESSURE VARCHAR2(30)
VPR_DENSITY VARCHAR2(30)
ONETOONE_ID NUMBER(1)
VPR_TEMP VARCHAR2(30)
MAT_PROD_MSDS_SPECIFIC_GRAVITY VARCHAR2(15)
DECOMP_TEMP VARCHAR2(11)
EVAP_RATE VARCHAR2(25)
SOLUB_WATER VARCHAR2(20)
CHEM_PH VARCHAR2(11)
CORROSION_RATE VARCHAR2(8)
FLASH_POINT VARCHAR2(20)
LOW_EXPL_LTD VARCHAR2(12)
UP_EXPL_LTD VARCHAR2(12)
EXTINGUISH_MEDIA VARCHAR2(500)
SP_FIRE_FGT_PROCD VARCHAR2(800)
UN_FIRE_EXPL_HAZ VARCHAR2(500)
STABILITY VARCHAR2(3)
COND_AVOID_STAB VARCHAR2(120)
MAT_AVOID VARCHAR2(500)
HAZ_DECOMP_PROD VARCHAR2(500)
HAZ_POLY_OCCUR VARCHAR2(3)
COND_AVOID_POLY VARCHAR2(120)
LD50_LC50_MIX VARCHAR2(40)
ROUTE_ENTRY_INHALE VARCHAR2(3)
ROUTE_ENTRY_SKIN VARCHAR2(3)
ROUTE_ENTRY_INGEST VARCHAR2(3)
HLTH_HAZ_ACUTE_CRON VARCHAR2(500)
CARCIN_NTP VARCHAR2(10)
CARCIN_IARC VARCHAR2(10)
CARCIN_OSHA VARCHAR2(10)
STORAGE_TYPE VARCHAR2(10)
EXPL_CARCIN VARCHAR2(500)
SIGN_SYMPT_OVREXPOS VARCHAR2(600)
MED_COND_AGGR_EXPOS VARCHAR2(500)
EMRG_1ST_AID_PROCD VARCHAR2(600)
STEP_MAT_REL_SPILL VARCHAR2(500)
NEUTRAL_AGENT VARCHAR2(80)
WAST_DISP_METHOD VARCHAR2(600)
HAND_STOR_PRECAUT VARCHAR2(600)
OTHER_PRECAUT VARCHAR2(500)
RESP_PROT VARCHAR2(350)
VENTILATION VARCHAR2(120)
PROT_GLOVE VARCHAR2(120)
EYE_PROT VARCHAR2(120)
OTHER_PROT_EQUIP VARCHAR2(500)
WORK_HYG_PRACT VARCHAR2(500)
SUPP_SAFE_HLTH_DATA VARCHAR2(500)
SPEC_HAZ_AND_PREC VARCHAR2(650)
CHRONIC_CD CHAR(1)
CARCINOGEN_CD CHAR(1)
ACUTE_CD CHAR(1)
REPRO_TOXIN_IND CHAR(1)
ROUTE_ENTRY_EYES VARCHAR2(3)
INGREDIENTINFORMATION MAT_PRODUCT_INGRED_LIST
SQL> desc mat_product_ingred_list;
mat_product_ingred_list TABLE OF MAT_PRODUCT_INGRED_TYPE
Name Null? Type
INGRED_ID NUMBER
PRODUCT_ID NUMBER
MCM_CHEM_MSTR_ID NUMBER
CHEM_VAPOR_ID NUMBER
INGRED_SEQ_NR VARCHAR2(2)
INGRED_NIOSH VARCHAR2(9)
PERCNT VARCHAR2(7)
CALC_PERCNT NUMBER
OSHA_PEL VARCHAR2(20)
ACGIH_TLV VARCHAR2(22)
REC_LIMIT VARCHAR2(20)
MCM_EXEMPT_IND CHAR(1)
VOC_REACTIVITY_CD VARCHAR2(2)
STATE_POLLUTANT_CD VARCHAR2(10)
PERCENT_LOW NUMBER
PERCENT_HIGH NUMBER
PROPRIETARY_IND CHAR(1)
MPI_CHEM_MSTR_ID NUMBER
CHEM_CAS_NO VARCHAR2(12)
CHEM_TYPE VARCHAR2(1)
CHEM_NAME VARCHAR2(255)
CHEM_FORMULA VARCHAR2(35)
CHEM_RCRA_CD VARCHAR2(4)
MOLECULAR_WGT NUMBER(7,3)
MOLECULAR_WGT_SOURCE VARCHAR2(100)
VAPOR_PRESSURE NUMBER(8,2)
VAPOR_PRESSURE_UOM VARCHAR2(5)
VAPOR_PRESSURE_SOURCE VARCHAR2(100)
VAPOR_TEMP NUMBER
VAPOR_TEMP_UOM CHAR(1)
IRIS_IND CHAR(1)
RPT_QTY NUMBER
TPQ1 NUMBER
TPQ2 NUMBER
IC VARCHAR2(3)
OZONE_IND CHAR(1)
EHS_IND CHAR(1)
EPCRA_IND CHAR(1)
CARC_IND CHAR(1)
MPI_EXEMPT_IND CHAR(1)
CHEM_NIOSH VARCHAR2(9)
STATE_CAP NUMBER
LOCAL_CAP NUMBER
TYPE_CD CHAR(1)
CHEM_ACTIVE_IND CHAR(1)
Any ideas why the ORA-01401 occurs?
Thanks in advance.
Did you by any chance hit the issue discussed in the threads "Function will not run (and shows with red cross in SQL Developer)" and "Calling pipelined table functions"?
-
Linux 10g versus Windows XP 10g Issue - ORA-07445: exception encountered
I did a full database export using exp from a 10g Linux database:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Prod
PL/SQL Release 10.2.0.2.0 - Production
"CORE 10.2.0.2.0 Production"
TNS for Linux: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production
Linux version:
Red Hat Enterprise Linux ES release 4 (Nahant Update 5)
I installed Oracle 10g on Windows XP:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
"CORE 10.2.0.1.0 Production"
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
Windows XP version is:
System:
Microsoft Windows XP
Professional
Version 2002
Service Pack 3
Intel(R) Core(TM)2 CPU
6300 @ 1.86GHz
1.86 GHz, 3.25 GB of RAM
I performed a full database import, after creating the required tablespaces, from the 10g database on Linux to Windows.
On the Windows 10g database, I set Maximum SGA size to 1300 MB, and Total SGA size to 1000 MB. I set Aggregate PGA Target to 500 MB.
This Windows box is going to be used by at most 2 developers for a change to our application that is not backward compatible (with respect to the database) with earlier versions of the application that use the database. I was asked to create this so we could develop this branch without affecting the rest of the development team, who are working on other enhancements on the Linux box.
I then ran some packages that refresh the data in a set of tables. This runs through some code that exercises a lot of the database so I could uncover any issues.
At a specific point in the execution of these packages I get the following error:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [_qkkIsOJKey+299] [PC:0x1E58F73] [ADDR:0x80] [UNABLE_TO_READ] []
I read the trace file and isolated the problem to a single, somewhat complex query. I tore the query apart into smaller queries, trying to isolate what was causing the issue. Whenever the core dump occurred, I was able to recover by simply closing the connection and opening a new one. In my testing I found the specific query that raises the exception, and a slower version of that query that does not. Please note that this exception has never occurred on the Linux box (nor on the beta or production database), and the code has been in place for close to 2 years.
The code that causes the issue:
SELECT UNIQUE
oc.ogc_case_id AS oc_ogc_case_id,
ogc.ogc_number AS ogc_number
FROM (SELECT SUBSTR (REPLACE (afd.field_text, '-', '' ), 1, 6 ) AS ogc_number
FROM ewoc_hw_snap.activity_field_data afd
WHERE afd.afdcd_activity_field_data_key = (SELECT afdc.activity_field_data_key
FROM ewoc_hw_snap.activity_field_data_codes afdc
WHERE afdc.name = 'Case Number'
)
) ogc
LEFT OUTER JOIN lct_snap.ogc_cases oc
ON oc.ogc_number = ogc.ogc_number
The code that returns the same data, only more slowly, and does not raise the exception:
SELECT UNIQUE
(SELECT oc.ogc_case_id FROM lct_snap.ogc_cases oc WHERE oc.ogc_number = ogc.ogc_number ) AS oc_ogc_case_id,
ogc.ogc_number AS ogc_number
FROM (SELECT SUBSTR (REPLACE (afd.field_text, '-', '' ), 1, 6 ) AS ogc_number
FROM ewoc_hw_snap.activity_field_data afd
WHERE afd.afdcd_activity_field_data_key = (SELECT afdc.activity_field_data_key
FROM ewoc_hw_snap.activity_field_data_codes afdc
WHERE afdc.name = 'Case Number'
)
) ogc
I executed both queries in SQL Developer version 3.0.04. SQL Developer gives the error message: No more data to read from socket
Any ideas?
The query shown above is as simple as I could pare it down and still reproduce the issue. The fix to the package is not as simple as the second query shown above, since there is more than one column of data that I need to return from the lct_snap.ogc_cases table. In any case, I shouldn't have to forgo using a LEFT OUTER JOIN. There must be something wrong with the Windows database. I didn't want to patch the Windows database without first checking on this forum to see if anyone had any less intensive potential solutions.
I searched and found so many posts with the ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] error, that I realize it's a somewhat generic error message.
Edited by: Starlight Rider on Jun 2, 2011 10:01 AM
Check this MetaLink note: 452951.1
Regards
Raj -
Hi,
Please let me know which solution option I should pursue:
1. The fix for unpublished Bug 5160122 "TEXT QUERY DEADLOCK WITH GATHER_STATS_JOB, THEN COREDUMP ORA-7445 [DREXUMCX]" is included in
Oracle RDBMS 11.1.0 and newer. In Oracle RDBMS 11.1.0 and newer, this problem should no longer occur.
OR
2. The fix for unpublished Bug 5160122 "TEXT QUERY DEADLOCK WITH GATHER_STATS_JOB, THEN COREDUMP ORA-7445 [DREXUMCX]" is included in
Oracle RDBMS 10.2.0.5.0. With the 10.2.0.5.0 patchset installed, this problem should no longer occur.
OR
3. With the fix for unpublished Bug 5160122 "TEXT QUERY DEADLOCK WITH GATHER_STATS_JOB, THEN COREDUMP ORA-7445 [DREXUMCX]" installed
against 10.2.0.4.0 on Linux x86-64, this problem should no longer occur.
Regards.
Prasad
Problem Description: ORA-07445: exception encountered: core dump [drexumcx(
Stop MULTI-POSTING the same problem. -
ORA-01401 error on char column with oracle oci driver
Hello,
We found a potential bug in the kodo.jdbc.sql.OracleDictionary class
shipped as source with Kodo:
In newer Kodo versions (at least in 3.3.4), the method
public void setString (PreparedStatement stmnt, int idx, String
val, Column col)
has the following code block:
// call setFixedCHAR for fixed width character columns to get padding
// semantics
if (col != null && col.getType () == Types.CHAR
&& val != null && val.length () != col.getSize ())
((OraclePreparedStatement) inner).setFixedCHAR (idx, val);
This block seems to be intended for select statements but is called on
inserts/updates also. The latter causes a known problem with the Oracle
oci driver when settings CHAR columns as FixedCHAR, which reports an
ORA-01401 error (inserted value too large for column) when definitely no
column is too long. This does not happen with the thin driver.
We reproduced this with 8.1.7 and 9.2.0 drivers.
We worked around the problem by subclassing OracleDictionary and removing
the new code block.
Regards,
Rainer Meyer
ELAXY Financial Software & Solutions GmbH & Co. KG
Rainer-
I read at re: 'ORA-01401 inserted value too large for column' - 9i that:
"This is fixed in Oracle9i Release 2"
Can you try that version of the driver? Also, does it fail in the Oracle
10 OCI driver?
Marc Prud'hommeaux
SolarMetric Inc. -
ORA-01401: inserted value too large for column from 9i to 8i
Hi All,
I am trying to pull the data from 9.2.0.6.0 into 8.1.7.0.0.
The character sets in both of them are as follows
9i
NLS_NCHAR_CHARACTERSET : AL16UTF16
NLS_CHARACTERSET : AL32UTF8
8i
NLS_NCHAR_CHARACTERSET : UTF8
NLS_CHARACTERSET : UTF8
And the structure of the table in 9i which I am trying to pull is as follows.
SQL> desc xyz
Name Null? Type
PANEL_SITE_ID NOT NULL NUMBER(15)
PANELIST_ID NUMBER
CHECKSUM VARCHAR2(150)
CONTACT_PHONE VARCHAR2(100)
HH_STATUS NUMBER
HH_STATUS_DT DATE
HH_RECRUITMENT_PHONE VARCHAR2(100)
HH_RECRUITMENT_DT DATE
FIRST_NET_USAGE_DT DATE
INSTALL_DT DATE
FNAME VARCHAR2(4000)
LNAME VARCHAR2(4000)
EMAIL_ADDRESS VARCHAR2(200)
EMAIL_VALID NUMBER
PASSWORD VARCHAR2(4000)
And by connecting to one of the 8i schemas, I am running the following script:
CREATE TABLE GPMI.GPM_HOUSEHOLDBASE_FRMP AS
SELECT PANEL_SITE_ID,
PANELIST_ID,
LTRIM(RTRIM(CHECKSUM)) CHECKSUM,
LTRIM(RTRIM(CONTACT_PHONE)) CONTACT_PHONE,
HH_STATUS, HH_STATUS_DT,
LTRIM(RTRIM(HH_RECRUITMENT_PHONE)) HH_RECRUITMENT_PHONE,
HH_RECRUITMENT_DT,
FIRST_NET_USAGE_DT,
INSTALL_DT, LTRIM(RTRIM(FNAME)) FNAME,
LTRIM(RTRIM(LNAME)) LNAME,
LTRIM(RTRIM(EMAIL_ADDRESS)) EMAIL_ADDRESS,
EMAIL_VALID,
PASSWORD
FROM [email protected];
I am getting the following error.
Can anyone of you help fix this one?
PASSWORD
ERROR at line 14:
ORA-01401: inserted value too large for column
Thanks in Advance
Sudarshan
Additionally I found this matrix, which explains your problem:
            UTF8 (1 to 3 bytes)          AL32UTF8 (1 to 4 bytes)
            size (bytes)  min chars      size (bytes)  min chars
CHAR            2000         666             2000         500
VARCHAR2        4000        1333             4000        1000
For column PASSWORD the maximum length (4000) is used. UTF8 uses at most 3 bytes per character, while AL32UTF8 may use up to 4 bytes. So a column defined in AL32UTF8 may contain characters which do not fit in a corresponding UTF8 column. -
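The byte widths behind that matrix are easy to verify: characters in the basic multilingual plane take up to 3 bytes in UTF-8-style encodings, while supplementary characters need 4 bytes (these are only representable in AL32UTF8, not in Oracle's older UTF8). A quick check in Python:

```python
# 1-byte, 3-byte, and 4-byte UTF-8 characters
ascii_ch = "A"            # U+0041
bmp_ch = "\u20ac"         # U+20AC EURO SIGN, 3 bytes
suppl_ch = "\U0001f600"   # U+1F600, outside the BMP, 4 bytes

print(len(ascii_ch.encode("utf-8")))   # 1
print(len(bmp_ch.encode("utf-8")))     # 3
print(len(suppl_ch.encode("utf-8")))   # 4

# Worst-case character counts that fit in a 4000-byte VARCHAR2:
print(4000 // 3)   # 1333, matching the UTF8 column of the matrix
print(4000 // 4)   # 1000, matching the AL32UTF8 column
```

This is where the 1333 and 1000 figures in the matrix come from.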
ORA-07445: exception encountered
There are several errors occurring at the database level:
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [unable_to_trans_pc] [PC:0x7C34126B] [ADDR:0x0] [UNABLE_TO_WRITE] []
Process m001 died, see its trace file
ksvcreate: Process(m001) creation failed
Process startup failed, error stack:
ORA-27300: OS system dependent operation:CreateThread failed with status: 8
ORA-27301: OS failure message: Not enough storage is available to process this command.
ORA-27302: failure occurred at: ssthrddcr
kkjcre1p: unable to spawn jobq slave process
Once the database and its service are restarted, everything works fine again; after 3 to 4 hours the database refuses to establish new connections.
It's a mission-critical database hosted on Windows Server 2003 Enterprise Edition 32-bit, with 4 GB RAM and 14 GB virtual
memory.
Hi,
The problem had a workaround: flushing the shared pool every 5 hours. The permanent solution was adding 2 GB of RAM to the server and applying the 10.2.0.4 patch;
after the patch was applied, the problem was completely resolved and has not recurred.
Regards
Dhiren -
ORA-07445: exception encountered: core dump
HI
we are on 12.0.1
10R2
Dump file /opt/prod/db/tech_st/10.2.0/admin/PROD_ebs12db/udump/prod_ora_13498.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/prod/db/tech_st/10.2.0
System name: Linux
Node name: EBS12DB.EROSGROUP.AE
Release: 2.6.9-67.ELlargesmp
Version: #1 SMP Wed Nov 7 14:07:22 EST 2007
Machine: x86_64
Instance name: PROD
Redo thread mounted by this instance: 1
Oracle process number: 298
Unix process pid: 13498, image: [email protected]
*** 2010-05-13 12:41:50.080
*** ACTION NAME:(FRM:HASANAND.T:CWH_OUTBOUND) 2010-05-13 12:41:50.022
*** MODULE NAME:(INVRSVF1) 2010-05-13 12:41:50.022
*** SERVICE NAME:(PROD) 2010-05-13 12:41:50.022
*** SESSION ID:(719.34855) 2010-05-13 12:41:50.022
KGX cleanup...
KGX Atomic Operation Log 0xf08bf9e0
Mutex 0xa48e2be8(719, 0) idn 0 oper EXAM
Cursor Parent uid 719 efd 5 whr 26 slp 0
oper=DEFAULT pt1=(nil) pt2=(nil) pt3=(nil)
pt4=(nil) u41=0 stt=0
Dump file /opt/prod/db/tech_st/10.2.0/admin/PROD_ebs12db/udump/prod_ora_13498.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/prod/db/tech_st/10.2.0
System name: Linux
Node name: EBS12DB.EROSGROUP.AE
Release: 2.6.9-67.ELlargesmp
Version: #1 SMP Wed Nov 7 14:07:22 EST 2007
Machine: x86_64
Instance name: PROD
Redo thread mounted by this instance: 1
Oracle process number: 261
Unix process pid: 13498, image: [email protected]
*** 2010-10-27 14:29:13.577
*** ACTION NAME:() 2010-10-27 14:29:13.548
*** MODULE NAME:(Disco10, ACBG.SALES:ACBG SALES) 2010-10-27 14:29:13.548
*** SERVICE NAME:(PROD) 2010-10-27 14:29:13.548
*** SESSION ID:(487.5406) 2010-10-27 14:29:13.548
Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0x0, PC: [0x2a955720a8, intelfast_memcpy.A()+10]
*** 2010-10-27 14:29:13.598
ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [_intel_fast_memcpy.A()+10] [SIGSEGV] [Address not mapped to object] [0x000000000] [] []
Current SQL statement for this session:
select optimizer_cost cost from V$SQL sql, V$OPEN_CURSOR csr, V$SESSION ses where sql.hash_value = csr.hash_value and sql.address = csr.address and csr.sid = ses.sid and ses.audsid = userenv('SESSIONID') and sql.sql_text like :thesql order by sql.last_load_time
Thanks
Valla
970106 wrote:
I have the same problem.
Help me, please.
Thank you!
Please see the doc referenced above by Helios, select your database version, enter the "Error Code First Argument", and go through the docs. If this is your production instance, please log an SR.
Thanks,
Hussein -
'ORA-01401 inserted value too large for column' - 9i
I am migrating a Java application from 8i to 9i. The application writes data to an Oracle 8i database without any problem.
When the underlying data source is switched to an Oracle 9i database (same database schema as 8i) by pointing to the 9i instance, the application
encounters the error 'ORA-01401 inserted value too large for column' when trying to insert a particular field. The field is declared as VARCHAR2(400) in both databases.
The debug output from the application also shows that the column data being inserted is only 10 characters. The insert statement works fine from SQL*Plus, so it looks like an issue with the JDBC driver for the 9i server. I am wondering if anyone has observed such strange behavior.
Thanks
Vijay
Vijay,
Are you using the OCI driver? There is a known issue with the OCI driver when using setFixedCHAR before an INSERT statement. This problem only happens with a string of length zero. There is no known issue with the thin driver. This is fixed in Oracle9i Release 2.
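Until the fixed driver can be deployed, one mitigation is to keep zero-length strings away from setFixedCHAR entirely. A hypothetical sketch of such a guard (Python for brevity; the names mirror the Kodo code quoted earlier, and the extra length check is my addition, not Kodo's):

```python
from typing import Optional


def should_use_fixed_char(col_type: str, col_size: int,
                          val: Optional[str]) -> bool:
    """Mirror of the Kodo guard for fixed-width CHAR binding,
    extended so empty strings fall through to a plain setString
    call -- the zero-length case is the one the OCI driver
    reportedly mishandles."""
    return (
        col_type == "CHAR"
        and val is not None
        and len(val) != col_size
        and len(val) > 0      # added: skip the empty-string case
    )
```

Non-empty strings that need padding still get the fixed-CHAR treatment; only the problematic case is routed around.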
regards
Debu Panda
Oracle -
Dear all:
I found many ORA-07445 errors in the alert log:
ORA-07445: exception encountered: core dump [kdkbin()+223] [SIGSEGV] [Address not mapped to object] [0x2A96975000] [] []
and I've followed Note 153788.1 and found ID 1073171.1, which fits my case; I applied the patch but the error still appears.
Can someone help me fix this issue?
My environment is Oracle 9.2.0.8 64-bit on Linux Red Hat 4.0.
I found the trace file and see the error:
*** 2010-07-08 14:05:48.350
*** SESSION ID:(371.375) 2010-07-08 14:05:48.350
Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0x2a96975000, PC: [0x10efcdf, kdkbin()+223]
*** 2010-07-08 14:05:48.354
ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [kdkbin()+223] [SIGSEGV] [Address not mapped to object] [0x2A96975000] [] []
Current SQL statement for this session:
begin WF_EVENT_OJMSTEXT_QH.enqueue(:v1, :v2); end;
----- PL/SQL Call Stack -----
object line object
handle number name
0x2e7f9fc10 204 package body SYS.DBMS_AQ
0x2e93b9210 980 package body APPS.WF_EVENT_OJMSTEXT_QH
0x2e93c20d0 1 anonymous block
0x2e9e58df8 1720 package body APPS.WF_EVENT
0x2e9e58df8 668 package body APPS.WF_EVENT
0x2ec401498 229 package body APPS.WF_RULE
0x2ed89b970 31 package body APPS.FND_BES_PROC
0x2ed8a3198 1 anonymous block
0x2e9e58df8 443 package body APPS.WF_EVENT
0x2e9e58df8 1599 package body APPS.WF_EVENT
0x2e9e58df8 2372 package body APPS.WF_EVENT
0x2e9e58df8 700 package body APPS.WF_EVENT
0x2ed8dc240 4903 package body APPS.FND_FLEX_SERVER
0x2ee799190 3 anonymous block
Regards
Terry
1. Check the OS logs.--I'm not sure whether this is the relevant error in /var/log/messages:
Jul 7 02:36:19 csslxa06 kernel: ide-cd: cmd 0x3 timed out
Jul 7 02:36:19 csslxa06 kernel: hda: irq timeout: status=0xd0 { Busy }
Jul 7 02:36:19 csslxa06 kernel: hda: irq timeout: error=0x00
Jul 7 02:36:19 csslxa06 kernel: hda: DMA disabled
Jul 7 02:36:19 csslxa06 kernel: hda: ATAPI reset complete
Jul 7 08:23:24 csslxa06 kernel: ide-cd: cmd 0x3 timed out
Jul 7 08:23:24 csslxa06 kernel: hda: irq timeout: status=0xd0 { Busy }
Jul 7 08:23:24 csslxa06 kernel: hda: irq timeout: error=0x00
Jul 7 08:23:24 csslxa06 kernel: hda: ATAPI reset complete
2. Check the hardware, specifically the physical memory.--The HP hardware has been checked; no errors show from the diagnostic tool.
3. Make sure swap is correctly configured and increase it if necessary.--Could you please tell me how swap should be configured?
4. Raise an SR with Oracle Support.--Raised; applied the patch, but the error still happened. -
ORA-07445: exception encountered: core dump in Standby database
Hello,
This is Oracle 10.1.0 in a logical standby database under Solaris 9.
The apply stopped with these errors and then it started by itself:
LOGSTDBY status: ORA-16226: DDL skipped due to lack of support
<date>
Errors in <trace file>
ORA-07445: exception encountered: core dump [krvsmso()+812] [SIGSEGV] [Address not mapped to object] [0x000000004] [] []
<date>
Trace dumping is performing id=[cdmp_20100226210050]
<date>
Errors in <trace file>
ORA-12805: parallel query server died unexpectedly
LOGSTDBY Apply process P006 pid=67 OS id=10471 stopped
The trace file is this one:
*** SERVICE NAME:(SYS$BACKGROUND)
*** SESSION ID:
Exception signal: 11 (SIGSEGV), code: 1 (Address not mapped to object), addr: 0x4, PC: [0x101fb226c, krvsmso()+812]
ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [krvsmso()+812] [SIGSEGV] [Address not mapped to object] [0x000000004] [] []
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedmp()+1008 CALL ksedst() 000000042 ? 104C02118 ?
104C02128 ? 000000000 ?
1052DCE18 ? 000000008 ?
ssexhd()+992 CALL ksedmp() 000000002 ? 000105000 ?
10512D000 ? 00010512D ?
000105000 ? 000000001 ?
Has anyone faced the same problem, or does anyone know why it happens and how to solve it?
Thanks!
Edited by: lulon on 04-mar-2010 8:20
Bug 4369756 Log apply may dump [krvsmso]
-
Ora 24960 exception with VC9 and Oracle 11g - 11.1.0.6.0 client
Hi,
We have developed an OCCI client application using 10g Express Edition and Visual Studio 2003. It was working fine. Now we have migrated to the 11g (11.1.0.6.0) client with Visual Studio 2008, VC version 9. We have migrated all .dll and .lib files based on Oracle's instructions at the following link.
http://www.oracle.com/technology/tech/oci/occi/occidownloads.html
Now, while trying to make a connection, it throws an ORA-24960 exception ("the attribute OCI_ATTR_USERNAME is greater than the maximum allowable length of 255"),
but the attribute length is actually far below that.
connection = env->createConnection (oraUser,OraPass,conne);
Can anybody help, please?
Hi,
Are you compiling in debug mode and using the Instant Client SDK by chance? If so, I suggest taking a look at My Oracle Support Note:747933.1 (ORA-24960 Error When Trying to Connect in Debug Mode With OCCI Instant Client). Essentially the issue is that the OCCI files included with Instant Client SDK do not include the oraocci11d.lib library.
Regards,
Mark -
ORA-01401 in SELECT statement !
Hi all,
We are facing a strange problem.
Oracle Database 10.2.0.1 on Fedora Core6
AL32UTF8 Charset, NLS_LENGTH_SEMANTICS = CHAR
In a particular SELECT statement, the database raises an ORA-01401 error:
ORA-01401: inserted value too large for column
Here is the select statement:
SELECT a.loc_nbr AS loc_nbr, a.loc_type_code AS loc_type_code,
TO_CHAR (a.loc_eff_dt, 'yyyy-mm-dd') AS loc_eff_dt,
TO_CHAR (a.loc_end_dt, 'yyyy-mm-dd') AS loc_end_dt,
a.loc_nm AS loc_nm, a.loc_city AS loc_city,
a.loc_st_prov AS loc_st_prov, b.acct_loc_nbr AS acct_loc_nbr,
c.noofres AS routes
FROM loc a,
(SELECT c.acct_loc_nbr AS acct_loc_nbr,
a.loc_sys_nbr AS loc_sys_nbr_loc
FROM loc a, loc_attr c
WHERE a.loc_ctry_code = '040'
AND a.loc_co_code = '001'
AND a.loc_type_code LIKE '03%'
AND a.loc_eff_dt <= TO_DATE ('2008-02-22', 'yyyy-mm-dd')
AND a.loc_end_dt >= TO_DATE ('2008-02-22', 'yyyy-mm-dd')
AND a.loc_nm > '%'
AND LPAD (a.loc_nbr, 38, '0') LIKE '%'
AND a.loc_short_nm LIKE '%'
AND a.loc_city LIKE '%'
AND NVL (a.loc_st_prov, ' ') LIKE '%'
AND c.loc_ctry_code = a.loc_ctry_code
AND c.loc_co_code = a.loc_co_code
AND c.loc_nbr = a.loc_nbr
AND c.loc_type_code = a.loc_type_code
AND c.loc_attr_eff_dt <= a.loc_eff_dt
AND c.loc_attr_end_dt >= a.loc_eff_dt
ORDER BY a.loc_nbr) b,
(SELECT COUNT (b.loc_nbr) AS noofres,
a.loc_sys_nbr AS loc_sys_nbr_rt
FROM loc a, rt_loc_assgn b
WHERE a.loc_ctry_code = '040'
AND a.loc_co_code = '001'
AND a.loc_type_code LIKE '03%'
AND a.loc_eff_dt <= TO_DATE ('2008-02-22', 'yyyy-mm-dd')
AND a.loc_end_dt >= TO_DATE ('2008-02-22', 'yyyy-mm-dd')
AND a.loc_nm > '%'
AND LPAD (a.loc_nbr, 38, '0') LIKE '%'
AND a.loc_short_nm LIKE '%'
AND a.loc_city LIKE '%'
AND NVL (a.loc_st_prov, ' ') LIKE '%'
AND b.ctry_code = a.loc_ctry_code
AND b.co_code = a.loc_co_code
AND b.loc_nbr = a.loc_nbr
AND ( a.loc_eff_dt BETWEEN b.rt_loc_eff_dt AND b.rt_loc_end_dt
OR a.loc_end_dt BETWEEN b.rt_loc_eff_dt AND b.rt_loc_end_dt
AND b.loc_svc_type_code = 'S'
GROUP BY a.loc_sys_nbr) c
WHERE a.loc_sys_nbr = b.loc_sys_nbr_loc(+)
AND a.loc_sys_nbr = c.loc_sys_nbr_rt(+)
AND a.loc_ctry_code = '040'
AND a.loc_co_code = '001'
AND a.loc_type_code LIKE '03%'
AND a.loc_eff_dt <= TO_DATE ('2008-02-22', 'yyyy-mm-dd')
AND a.loc_end_dt >= TO_DATE ('2008-02-22', 'yyyy-mm-dd')
AND a.loc_nm > '%'
AND LPAD (a.loc_nbr, 38, '0') LIKE '%'
AND a.loc_short_nm LIKE '%'
AND a.loc_city LIKE '%'
AND NVL (a.loc_st_prov, ' ') LIKE '%'
ORDER BY a.loc_nm
Workarounds:
- If we comment ORDER BY clause, the query works fine
- On line 49, if we surround loc_sys_nbr with TRIM() function, the query works fine
- On line 60, if we use NVL (a.loc_st_prov, '' ) LIKE '%', the query works fine
All these fields are CHAR() datatype.
Any idea what is happening here?
Thanks in advance :)
Thanks Satish,
We were hit by bug 5874989, relating to the impdp tool.
SQL loader error ORA-01401:
I am trying to import a .dat file using SQL*Loader and
I am getting the error ORA-01401: inserted value too large for column. I tried increasing the column lengths as well.
TABLE STRUCTURE
create table NOTE (
NOTE_EFF_DATE NUMBER,
NOTE_NDC_NUMBER NUMBER,
NOTE_PREV_NDC_NUMBER NUMBER,
NOTE_DEA_CLASS_CD VARCHAR2(5),
NOTE_PRODUCT_CTGRY_CD VARCHAR2(5),
NOTE_PRODUCT_NAME VARCHAR2(40),
NOTE_FORM_CODE VARCHAR2(5),
NOTE_PRODUCT_STRENGTH VARCHAR2(35),
NOTE_PRODUCT_MSUR_CD VARCHAR2(5),
NOTE_PRODUCT_PKG_SZ NUMBER,
NOTE_PRODUCT_PKG_QTY_CD VARCHAR2(6),
NOTE_THRPTC_CLS_CD NUMBER,
NOTE_AWP_CRT_PRC_EF_DT NUMBER,
NOTE_AWP_CRT_PRC NUMBER,
NOTE_AWP_UNT_PRC NUMBER,
NOTE_AWP_1ST_PRV_UNT_PRC NUMBER
)
DATA IN .dat FILE:
093083100002035151 401DARVOCET-N 50 TAB 01000EA 0000000001831780008120000008120000007250
093083100002011104 08SODIUM SALICYLATE CAP10 GR 01000EA 2808020070880920005519000005519000005158
.dat FILE format/structure
01 NOTE-REC.
05 NOTE-KEY.
10 NOTE-EFF-DATE.
15 NOTE-EFF-C PIC X(01).
15 NOTE-EFF-YY PIC X(02).
15 NOTE-EFF-MM PIC X(02).
15 NOTE-EFF-DD PIC X(02).
10 NOTE-NDC-NUMBER.
15 NOTE-NDC-NUM-5 PIC X(05).
15 NOTE-NDC-NUM-4 PIC X(04).
15 NOTE-NDC-NUM-2 PIC X(02).
05 NOTE-PREV-NDC-NUMBER.
10 NOTE-NDC-NUM-5 PIC X(05).
10 NOTE-NDC-NUM-4 PIC X(04).
10 NOTE-NDC-NUM-2 PIC X(02).
05 NOTE-DEA-CLASS-CD PIC X(01).
05 NOTE-PRODUCT-CTGRY-CD PIC X(02).
05 NOTE-PRODUCT-NAME PIC X(35).
05 NOTE-FORM-CODE PIC X(03).
05 NOTE-PRODUCT-STRENGTH PIC X(25).
05 NOTE-PRODUCT-MSUR-CD PIC X(02).
05 NOTE-PRODUCT-PKG-SZ PIC 9(05).
05 NOTE-PRODUCT-PKG-QTY-CD PIC X(03).
05 NOTE-THRPTC-CLS-CD PIC 9(10).
05 NOTE-AWP-CRT-PRC-EF-DT PIC 9(05).
05 NOTE-AWP-CRT-PRC PIC 9(05)V99.
05 NOTE-AWP-UNT-PRC PIC 9(04)V9(05).
05 NOTE-AWP-1ST-PRV-UNT-PRC PIC 9(04)V9(05).
Control FILE:
LOAD DATA
INFILE 'NCD.dat'
BADFILE 'NCD.bad'
DISCARDFILE 'NCD.dsc'
INTO TABLE NOTE
REPLACE
FIELDS TERMINATED BY ' '
TRAILING NULLCOLS
(
NOTE_EFF_DATE POSITION(1:7) INTEGER EXTERNAL NULLIF(NOTE_EFF_DATE=BLANKS),
NOTE_NDC_NUMBER POSITION(8:18) DECIMAL EXTERNAL NULLIF(NOTE_NDC_NUMBER=BLANKS),
NOTE_PREV_NDC_NUMBER POSITION(19:27) DECIMAL EXTERNAL NULLIF(NOTE_PREV_NDC_NUMBER=BLANKS),
NOTE_DEA_CLASS_CD POSITION(28) CHAR NULLIF (NOTE_DEA_CLASS_CD=BLANKS),
NOTE_PRODUCT_CTGRY_CD POSITION(29:30) CHAR NULLIF (NOTE_PRODUCT_CTGRY_CD=BLANKS),
NOTE_PRODUCT_NAME POSITION(31:65) CHAR NULLIF (NOTE_PRODUCT_NAME=BLANKS),
NOTE_FORM_CODE POSITION(66:68) CHAR NULLIF (NOTE_FORM_CODE=BLANKS),
NOTE_PRODUCT_STRENGTH POSITION(69:93) CHAR NULLIF (NOTE_PRODUCT_STRENGTH=BLANKS),
NOTE_PRODUCT_MSUR_CD POSITION(94:95) CHAR NULLIF (NOTE_PRODUCT_MSUR_CD=BLANKS),
NOTE_PRODUCT_PKG_SZ POSITION(96:100) INTEGER EXTERNAL NULLIF (NOTE_PRODUCT_PKG_SZ=BLANKS),
NOTE_PRODUCT_PKG_QTY_CD POSITION(101:103) CHAR NULLIF (NOTE_PRODUCT_PKG_QTY_CD=BLANKS),
NOTE_THRPTC_CLS_CD POSITION(104:113) INTEGER EXTERNAL NULLIF (NOTE_THRPTC_CLS_CD=BLANKS),
NOTE_AWP_CRT_PRC_EF_DT POSITION(114:118) INTEGER EXTERNAL NULLIF (NOTE_AWP_CRT_PRC_EF_DT=BLANKS),
NOTE_AWP_CRT_PRC POSITION(119:225) DECIMAL EXTERNAL NULLIF (NOTE_AWP_CRT_PRC=BLANKS),
NOTE_AWP_UNT_PRC POSITION(226:234) DECIMAL EXTERNAL NULLIF (NOTE_AWP_UNT_PRC=BLANKS),
NOTE_AWP_1ST_PRV_UNT_PRC POSITION(235:245) DECIMAL EXTERNAL NULLIF (NOTE_AWP_1ST_PRV_UNT_PRC=BLANKS)
)
load data
APPEND into table TBL1
fields terminated by "," optionally enclosed by '"'
TRAILING NULLCOLS
columnname
)
Hi!
Brother, please give the details of your columns and your exact script. Initially it seems that your code is OK, but that is my guess. Please post the complete script; we cannot say anything until we go through the entire code of your script.
Regards.
Satyaki De.
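Beyond posting the full script, one quick self-check is to derive the POSITION ranges from the COBOL picture widths and compare them with the control file; a range that is wider than its field, or that overlaps a neighbour, is a common cause of ORA-01401 in fixed-format loads. A sketch (the widths are taken from the layout above; the helper itself is illustrative, and ranges derived this way may not line up with the control file — any disagreement is worth investigating):

```python
# field name -> byte width, read off the COBOL picture clauses above
# (V is an implied decimal point and takes no byte)
layout = [
    ("NOTE_EFF_DATE", 7),              # C + YY + MM + DD
    ("NOTE_NDC_NUMBER", 11),           # 5 + 4 + 2
    ("NOTE_PREV_NDC_NUMBER", 11),      # 5 + 4 + 2
    ("NOTE_DEA_CLASS_CD", 1),
    ("NOTE_PRODUCT_CTGRY_CD", 2),
    ("NOTE_PRODUCT_NAME", 35),
    ("NOTE_FORM_CODE", 3),
    ("NOTE_PRODUCT_STRENGTH", 25),
    ("NOTE_PRODUCT_MSUR_CD", 2),
    ("NOTE_PRODUCT_PKG_SZ", 5),
    ("NOTE_PRODUCT_PKG_QTY_CD", 3),
    ("NOTE_THRPTC_CLS_CD", 10),
    ("NOTE_AWP_CRT_PRC_EF_DT", 5),
    ("NOTE_AWP_CRT_PRC", 7),           # 9(05)V99
    ("NOTE_AWP_UNT_PRC", 9),           # 9(04)V9(05)
    ("NOTE_AWP_1ST_PRV_UNT_PRC", 9),   # 9(04)V9(05)
]


def positions(layout):
    """Derive 1-based POSITION(start:end) ranges from field widths."""
    out, start = {}, 1
    for name, width in layout:
        out[name] = (start, start + width - 1)
        start += width
    return out


for name, (start, end) in positions(layout).items():
    print(f"{name} POSITION({start}:{end})")
```

Comparing this output with the control file above makes width mistakes stand out immediately.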