Bash script aid - 'value too great for base' error
The error is:
/home/scripts/others/polysleep: line 15: 60 - ((((((08: value too great for base (error token is "08")
Here's the script (it's an alarm for naps of different lengths):
#!/bin/bash
if [ $UID != "0" ]; then
echo "You must run this script as root." 1>&2
exit
fi
TIME_SET=`date +%M`
DELAY=60
while [ "1" -lt "2" ]
do
DATE=`date +%k:%M:%S`
MINUTE=`date +%M`
# The offending line:
REMAINING=$(($DELAY - (((((($MINUTE + 60)) - $TIME_SET)) % 60))))
# I want it to show how long is left before the time is up (i.e. it subtracts the time
# that has passed from the $DELAY that was set). Since it's in minutes, I need to do the
# arithmetic modulo 60 (the '% 60' bit). I tried this initially without the '+ 60' above,
# but got the same error as I've posted, and thought that first adding 60 to $MINUTE (the
# minute field of 'date' at the current time) would solve it. Apparently that hasn't
# worked though =o(
clear
echo "1. I'm awake"
echo "2. I'm going for a nap"
echo "3. I'm going out"
echo
echo Current Time: $DATE
echo Time Remaining: $REMAINING minutes
read -t 1 -e input
if [ "$REMAINING" = "0" ]
then
play -v .15 /home/.sounds/alarm.mp3&
wait
DELAY=1
TIME_SET=`date +%M`
input=""
fi
if [ "$input" = "1" ]
then
DELAY=60
TIME_SET=`date +%M`
input=""
elif [ "$input" = "2" ]
then
DELAY=35
TIME_SET=`date +%M`
die centericq
input=""
elif [ "$input" = "3" ]
then
DELAY=999999
TIME_SET=`date +%M`
input=""
fi
done
I'd be grateful for advice, since apart from that error popping up after some time has passed, the script runs flawlessly.
Komodo wrote:
This is from http://www.codecoffee.com/tipsforlinux/ … 2/044.html , and explains it better than I could:
"bash allows you to perform arithmetic expressions. As you have already seen, arithmetic is performed using the expr command. However, this, like the true command, is considered to be slow. The reason is that in order to run true and expr, the shell has to start them up. A better way is to use a built in shell feature which is quicker. So an alternative to true, as we have also seen, is the ":" command. An alternative to using expr, is to enclose the arithmetic operation inside $((...)). This is different from $(...)."
Yeah, I knew that already. But what I meant were the extra parentheses you're using. This is the calculation the way you do it:
REMAINING=$(($DELAY - (((((($MINUTE + 60)) - $TIME_SET)) % 60))))
But this is totally adequate:
REMAINING=$(( $DELAY - (( $MINUTE + 60 ) - $TIME_SET ) % 60 ))
Notice the 6 vs. 2 brackets in front of $MINUTE?
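For completeness: the bracket count is only cosmetic; inside $(( )) both forms parse the same. The error itself comes from the leading zero. `date +%M` zero-pads the minute (e.g. "08"), and bash treats a number with a leading zero as octal, in which the digits 8 and 9 are invalid - hence the complaint about token "08". One way around it is to force base ten with the 10# prefix; a minimal sketch, with the variable values as stand-ins for what date would return:

```shell
#!/bin/bash
# Stand-ins for `date +%M` output; %M zero-pads, so 8 past the hour is "08".
MINUTE="08"
TIME_SET="55"
DELAY=60

# 10# forces base-ten interpretation, so "08" is read as eight instead of
# being rejected as an invalid octal literal.
REMAINING=$(( DELAY - ( (10#$MINUTE + 60 - 10#$TIME_SET) % 60 ) ))
echo "$REMAINING"   # 47: thirteen of the sixty minutes have elapsed
```

Alternatively, with GNU date, `date +%-M` suppresses the padding and avoids the leading zero in the first place.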
Similar Messages
-
Inserted value too large for column error while scheduling a job
Hi Everyone,
I am trying to schedule a PL SQL script as a job in my Oracle 10g installed and running on Windows XP.
While trying to submit the job I get the error "Inserted value too large for column:" followed by my entire code. The code is correct - it compiles and runs in Oracle ApEx's SQL Workshop.
The size of my code is 4136 characters, 4348 bytes and 107 lines long. It is code that sends an e-mail and has a +utl_smtp.write_data([Lots of HTML])+
There is no insert statement in the code whatsoever, the code only queries the database for data...
Any idea as to why I might be getting this error??
Thanks in advance
Sid

>
The size of my code is 4136 characters, 4348 bytes and 107 lines long. It is code that sends an e-mail and has a utl_smtp.write_data(Lots of HTML)
>
A SQL variable has a maximum size of 4000.
-
Data Profiling - Value too large for column error
I am running a data profile which completes with errors. The error being reported is ORA-12899: value too large for column (actual: 41, maximum: 40).
I have checked the actual data in the table and the maximum is only 40 characters.
Any ideas on how to solve this? Even though the job completes, no actual profile is done on the data due to the error.
OWB version 11.2.0.1
Log file below.
Job Rows Selected Rows Inserted Rows Updated Rows Deleted Errors Warnings Start Time Elapsed Time
Profile_1306385940099 2011-05-26 14:59:00.0 106
Data profiling operations complete.
Redundant column analysis for objects complete in 0 s.
Redundant column analysis for objects.
Referential analysis for objects complete in 0.405 s.
Referential analysis for objects.
Referential analysis initialization complete in 8.128 s.
Referential analysis initialization.
Data rule analysis for object TABLE_NAME complete in 0 s.
Data rule analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.858 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.202 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.236 s.
Aggregation and Data Type analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.842 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.187 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.501 s.
Aggregation and Data Type analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.717 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.156 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.906 s.
Aggregation and Data Type analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.827 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.187 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.172 s.
Aggregation and Data Type analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.889 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.202 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.313 s.
Aggregation and Data Type analysis for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 9.267 s.
Execute data prepare map for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 10.187 s.
Execute data prepare map for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 8.019 s.
Execute data prepare map for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 5.507 s.
Execute data prepare map for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 10.857 s.
Execute data prepare map for object TABLE_NAME
Parameters
O82647310CF4D425C8AED9AAE_MAP_ProfileLoader 1 2011-05-26 14:59:00.0 11
ORA-12899: value too large for column "SCHEMA"."O90239B0C1105447EB6495C903678"."ITEM_NAME_1" (actual: 41, maximum: 40)
Parameters
O68A16A57F2054A13B8761BDC_MAP_ProfileLoader 1 2011-05-26 14:59:11.0 5
ORA-12899: value too large for column "SCHEMA"."O0D9332A164E649F3B4D05D045521"."ITEM_NAME_1" (actual: 41, maximum: 40)
Parameters
O78AD6B482FC44D8BB7AF8357_MAP_ProfileLoader 1 2011-05-26 14:59:16.0 9
ORA-12899: value too large for column "SCHEMA"."OBF77A8BA8E6847B8AAE4522F98D6"."ITEM_NAME_2" (actual: 41, maximum: 40)
Parameters
OA79DF482D74847CF8EA05807_MAP_ProfileLoader 1 2011-05-26 14:59:25.0 10
ORA-12899: value too large for column "SCHEMA"."OB0052CBCA5784DAD935F9FCF2E28"."ITEM_NAME_1" (actual: 41, maximum: 40)
Parameters
OFFE486BBDB884307B668F670_MAP_ProfileLoader 1 2011-05-26 14:59:35.0 9
ORA-12899: value too large for column "SCHEMA"."O9943284818BB413E867F8DB57A5B"."ITEM_NAME_1" (actual: 42, maximum: 40)
Parameters

Found the answer. It was the database character set (a multi-byte character set).
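That resolution fits the numbers in the log: with byte-length semantics, a single multi-byte character pushes a 40-character value to 41 bytes, matching "actual: 41, maximum: 40". A quick sketch of the effect from a UTF-8 shell (the string is an arbitrary stand-in, not the actual data):

```shell
# 39 ASCII letters plus one accented letter: 40 characters, but the 'é'
# occupies two bytes in UTF-8, so the string is 41 bytes long.
s="é$(printf 'a%.0s' {1..39})"
printf '%s' "$s" | wc -c   # byte count: 41
printf '%s' "$s" | wc -m   # character count (in a UTF-8 locale): 40
```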
-
'Value too large for column' error in msql
In my 9ilite database I have a table with a LONG column
In the documentation it says that a LONG column can hold upto 2 GB data
However when I try an insert more than 4097 characters into the column I get the error message on insert 'Value too large for column'
Is this a bug or is the documentation wrong
or am I doing something wrong ?
Any help would be much appreciated.

You have run into a bug in handling intermediate results in Oracle 9i Lite. You can bypass the bug as follows in Java:
public static oracle.lite.poljdbc.BLOB createBlob(Connection conn, byte[] data)
throws SQLException, IOException
{
    oracle.lite.poljdbc.BLOB blob = new oracle.lite.poljdbc.BLOB(
        (oracle.lite.poljdbc.OracleConnection) conn);
    OutputStream writer = blob.getBinaryOutputStream();
    writer.write(data);
    writer.flush();
    writer.close();
    return blob;
}

public static void insertRow(Connection conn, int num, oracle.lite.poljdbc.BLOB blob)
throws SQLException
{
    PreparedStatement ps = conn.prepareStatement(
        "insert into TEST_BLOB values (?, ?)");
    ps.setInt(1, num);
    ps.setBlob(2, blob);
    ps.execute();
} -
Update trigger fails with value too large for column error on timestamp
Hello there,
I've got a problem with several update triggers. I've several triggers monitoring a set of tables.
Upon each update the updated data is compared with the current values in the table columns.
If different values are detected the update timestamp is set with the current_timestamp. That
way we have a timestamp that reflects real changes in relevant data. I attached an example for
that kind of trigger below. The triggers on each monitored table only differ in the columns that
are compared.
CREATE OR REPLACE TRIGGER T_ava01_obj_cont
BEFORE UPDATE on ava01_obj_cont
FOR EACH ROW
DECLARE
v_changed boolean := false;
BEGIN
IF NOT v_changed THEN
v_changed := (:old.cr_adv_id IS NULL AND :new.cr_adv_id IS NOT NULL) OR
(:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NULL)OR
(:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NOT NULL AND :old.cr_adv_id != :new.cr_adv_id);
END IF;
IF NOT v_changed THEN
v_changed := (:old.is_euzins_relevant IS NULL AND :new.is_euzins_relevant IS NOT NULL) OR
(:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NULL)OR
(:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NOT NULL AND :old.is_euzins_relevant != :new.is_euzins_relevant);
END IF;
[.. more values being compared ..]
IF v_changed THEN
:new.update_ts := current_timestamp;
END IF;
END T_ava01_obj_cont;

Really relevant is the statement
:new.update_ts := current_timestamp;

So far so good. The problem is, it works most of the time. Only sometimes does it fail with the following error:
SQL state [72000]; error code [12899]; ORA-12899: value too large for column "LGT_CLASS_AVALOQ"."AVA01_OBJ_CONT"."UPDATE_TS"
(actual: 28, maximum: 11)
I can't see how the value systimestamp or current_timestamp (I tried both) should be too large for
a column defined as TIMESTAMP(6). We've got tables where more updates occur than elsewhere.
That's where most of the errors pop up. Other tables with fewer updates show errors only
sporadically or never. I can't see any error pattern. It's as if roughly every 10,000th update fails.
I was desperate enough to try some language-dependent transformation like
IF v_changed THEN
l_update_date := systimestamp || '';
select value into l_timestamp_format from nls_database_parameters where parameter = 'NLS_TIMESTAMP_TZ_FORMAT';
:new.update_ts := to_timestamp_tz(l_update_date, l_timestamp_format);
END IF;

to be sure the format is right. It didn't change a thing.
We are using Oracle Version 10.2.0.4.0 Production.
Did anyone encounter that kind of behaviour and solve it? I'm now pretty certain that it has to
be an oracle bug. What is the forum's opinion on that? Would you suggest to file a bug report?
Thanks in advance for your help.
Kind regards
Jan

Could you please edit your post and use formatting and code tags? This is pretty much unreadable, and the forum boogered up some of your code.
Instructions are here: http://forums.oracle.com/forums/help.jspa -
Inserted value too large for column Error
I have this table:
CREATE TABLE SMt_Session (
SessionID int NOT NULL ,
SessionUID char (36) NOT NULL ,
UserID int NOT NULL ,
IPAddress varchar2 (15) NOT NULL ,
Created timestamp NOT NULL ,
Accessed timestamp NOT NULL ,
SessionInfo nclob NULL
)
and this insert from a sp (sp name is SMsp_SessionCreate):
Now := (SYSDATE);
SessionUID := SYS_GUID();
/*create the session in the session table*/
INSERT INTO SMt_Session
( SessionUID ,
UserID ,
IPAddress ,
Created ,
Accessed )
VALUES ( SMsp_SessionCreate.SessionUID ,
SMsp_SessionCreate.UserID ,
SMsp_SessionCreate.IPAddress ,
SMsp_SessionCreate.Now ,
SMsp_SessionCreate.Now );
It looks like the param SessionUID is the one with trouble, but the length of sys_guid() is 32, and my column has 36.
IPAddress is passed to the sp with value '192.168.11.11', so it should fit.
UserID is 1.
I am confused - which is the column with the problem?

CREATE OR REPLACE PROCEDURE SMsp_SessionCreate (
PartitionID IN INT ,
UserID IN INT ,
IPAddress IN VARCHAR2 ,
SessionID IN OUT INT,
SessionUID IN OUT CHAR,
UserName IN OUT VARCHAR2,
UserFirst IN OUT VARCHAR2,
UserLast IN OUT VARCHAR2,
SupplierID IN OUT INT,
PartitionName IN OUT VARCHAR2,
Expiration IN INT ,
RCT1 OUT GLOBALPKG.RCT1
)
AS
Now DATE;
SCOPE_IDENTITY_VARIABLE INT;
BEGIN
Now := SYSDATE;
-- the new Session UID
SessionUID := SYS_GUID();
/*Cleanup any old sessions for this user*/
INSERT INTO SMt_Session_History
( UserID ,
IPAddress ,
Created ,
LastAccessed ,
LoggedOut )
SELECT
UserID,
IPAddress,
Created,
Accessed,
TO_DATE(Accessed + (1/24/60 * SMsp_SessionCreate.Expiration))
FROM SMt_Session
WHERE UserID = SMsp_SessionCreate.UserID;
--delete old
DELETE FROM SMt_Session
WHERE UserID = SMsp_SessionCreate.UserID;
/*create the session in the session table*/
INSERT INTO SMt_Session
( SessionUID ,
UserID ,
IPAddress ,
Created ,
Accessed )
VALUES ( SMsp_SessionCreate.SessionUID ,
SMsp_SessionCreate.UserID ,
SMsp_SessionCreate.IPAddress ,
SMsp_SessionCreate.Now ,
SMsp_SessionCreate.Now );
SELECT SMt_Session_SessionID_SEQ.CURRVAL INTO SMsp_SessionCreate.SessionID FROM dual;
--SELECT SMt_Session_SessionID_SEQ.CURRVAL INTO SCOPE_IDENTITY_VARIABLE FROM DUAL;
--get VALUES to return
SELECT u.AccountName INTO SMsp_SessionCreate.UserName FROM SMt_Users u WHERE u.UserID = SMsp_SessionCreate.UserID;
SELECT u.SupplierID INTO SMsp_SessionCreate.SupplierID FROM SMt_Users u WHERE u.UserID = SMsp_SessionCreate.UserID;
SELECT u.FirstName INTO SMsp_SessionCreate.UserFirst FROM SMt_Users u WHERE u.UserID = SMsp_SessionCreate.UserID;
SELECT u.LastName INTO SMsp_SessionCreate.UserLast FROM SMt_Users u WHERE u.UserID = SMsp_SessionCreate.UserID;
BEGIN
FOR REC IN ( SELECT
u.AccountName,
u.SupplierID,
u.FirstName,
u.LastName FROM SMt_Users u
WHERE UserID = SMsp_SessionCreate.UserID )
LOOP
SMsp_SessionCreate.UserName := REC.AccountName;
SMsp_SessionCreate.SupplierID := REC.SupplierID;
SMsp_SessionCreate.UserFirst := REC.FirstName;
SMsp_SessionCreate.UserLast := REC.LastName;
END LOOP;
END;
BEGIN
FOR REC IN ( SELECT PartitionName FROM SMt_Partitions
WHERE PartitionID = SMsp_SessionCreate.PartitionID )
LOOP
SMsp_SessionCreate.PartitionName := REC.PartitionName;
END LOOP;
END;
/*retrieve all user roles*/
OPEN RCT1 FOR
SELECT RoleID FROM SMt_UserRoles
WHERE UserID = SMsp_SessionCreate.UserID;
END;
this is the exact code of the sp. The table definition is this:
CREATE TABLE SMt_Session (
SessionID int NOT NULL ,
SessionUID char (36) NOT NULL ,
UserID int NOT NULL ,
IPAddress varchar2 (15) NOT NULL ,
Created timestamp NOT NULL ,
Accessed timestamp NOT NULL ,
SessionInfo nclob NULL
)
The sp gets executed with this params:
PARTITIONID := -2;
USERID := 1;
IPADDRESS := '192.168.11.11';
SESSIONID := -1;
SESSIONUID := NULL;
USERNAME := '';
USERFIRST := '';
USERLAST := '';
SUPPLIERID := -1;
PARTITIONNAME := '';
EXPIRATION := 300;
If I run the code inside the procedure in SQL*Plus (not the procedure itself), it works. When I call the sp I get the error
inserted value too large for column
at line 48 -
Adding virtual column: ORA-12899: value too large for column
I'm using Oracle 11g, Win7 OS, SQL Developer
I'm trying to add virtual column to my test table, but getting ORA-12899: value too large for column error. Below are the details.
Can someone help me in this?
CREATE TABLE test_reg_exp
(col1 VARCHAR2(100));
INSERT INTO test_reg_exp (col1) VALUES ('ABCD_EFGH');
INSERT INTO test_reg_exp (col1) VALUES ('ABCDE_ABC');
INSERT INTO test_reg_exp (col1) VALUES ('WXYZ_ABCD');
INSERT INTO test_reg_exp (col1) VALUES ('ABCDE_PQRS');
INSERT INTO test_reg_exp (col1) VALUES ('ABCD_WXYZ');
ALTER TABLE test_reg_exp
ADD (col2 VARCHAR2(100) GENERATED ALWAYS AS (REGEXP_REPLACE (col1, '^ABCD[A-Z]*_')));
SQL Error: ORA-12899: value too large for column "COL2" (actual: 100, maximum: 400)
12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
*Cause: An attempt was made to insert or update a column with a value
which is too wide for the width of the destination column.
The name of the column is given, along with the actual width
of the value, and the maximum allowed width of the column.
Note that widths are reported in characters if character length
semantics are in effect for the column, otherwise widths are
reported in bytes.
*Action: Examine the SQL statement for correctness. Check source
and destination column data types.
Either make the destination column wider, or use a subset
of the source column (i.e. use substring).

When I try to select, I'm getting correct results:
SELECT col1, (REGEXP_REPLACE (col1, '^ABCD[A-Z]*_'))
FROM test_reg_exp;

Thanks.

Yes RP, it's working if you give col2 a size >= 400.
@Northwest - Could you please test the same without the regex clause in col2?
I suspect the usage of a REGEX in this virtual-column case.
Refer this (might help) -- http://www.oracle-base.com/articles/11g/virtual-columns-11gr1.php
Below snippet from above link.... see if this helps...
>
Notes and restrictions on virtual columns include:
Indexes defined against virtual columns are equivalent to function-based indexes.
Virtual columns can be referenced in the WHERE clause of updates and deletes, but they cannot be manipulated by DML.
Tables containing virtual columns can still be eligible for result caching.
Functions in expressions must be deterministic at the time of table creation, but can subsequently be recompiled and made non-deterministic without invalidating the virtual column. In such cases the following steps must be taken after the function is recompiled:
Constraint on the virtual column must be disabled and re-enabled.
Indexes on the virtual column must be rebuilt.
Materialized views that access the virtual column must be fully refreshed.
The result cache must be flushed if cached queries have accessed the virtual column.
Table statistics must be regathered.
Virtual columns are not supported for index-organized, external, object, cluster, or temporary tables.
The expression used in the virtual column definition has the following restrictions:
It cannot refer to another virtual column by name.
It can only refer to columns defined in the same table.
If it refers to a deterministic user-defined function, it cannot be used as a partitioning key column.
The output of the expression must be a scalar value. It cannot return an Oracle supplied datatype, a user-defined type, or LOB or LONG RAW.
>
Edited by: ranit B on Oct 16, 2012 11:48 PM
Edited by: ranit B on Oct 16, 2012 11:54 PM -
ORA-12899: value too large for column
Hi Experts,
I am getting data from erp systems in the form of feeds,in particular one column length in feed is 3 only.
In target table also corresponded column also length is varchar2(3)
but when i am trying to load same into db ti showing error like:
ORA-12899: value too large for column
emp_name (actual: 4, maximum: 3)
i am using data base version :
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
but this is resolved by increasing the target column length from varchar2(3) to varchar2(5).. but I checked, and the length of that column in the feed is only 3...
my question is why we need to increase the target column length?
Thanks,
Surya

>
my question is why we need to increase the target column length?
>
That can be caused if the two systems are using different character sets. If one is using a single-byte character set like ASCII and the other uses multi-byte like UTF16.
Three BYTES is three bytes, but three CHAR is three bytes in ASCII and six bytes in UTF16.
Do you know what character sets are being used?
See the Database Concepts doc
http://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm
>
Length Semantics for Character Datatypes
Globalization support allows the use of various character sets for the character datatypes. Globalization support lets you process single-byte and multibyte character data and convert between character sets. Client sessions can use client character sets that are different from the database character set.
Consider the size of characters when you specify the column length for character datatypes. You must consider this issue when estimating space for tables with columns that contain character data.
The length semantics of character datatypes can be measured in bytes or characters.
•Byte semantics treat strings as a sequence of bytes. This is the default for character datatypes.
•Character semantics treat strings as a sequence of characters. A character is technically a codepoint of the database character set.
For single byte character sets, columns defined in character semantics are basically the same as those defined in byte semantics. Character semantics are useful for defining varying-width multibyte strings; it reduces the complexity when defining the actual length requirements for data storage. For example, in a Unicode database (UTF8), you must define a VARCHAR2 column that can store up to five Chinese characters together with five English characters. In byte semantics, this would require (5*3 bytes) + (1*5 bytes) = 20 bytes; in character semantics, the column would require 10 characters.
VARCHAR2(20 BYTE) and SUBSTRB(<string>, 1, 20) use byte semantics. VARCHAR2(10 CHAR) and SUBSTR(<string>, 1, 10) use character semantics.
The parameter NLS_LENGTH_SEMANTICS decides whether a new column of character datatype uses byte or character semantics. The default length semantic is byte. If all character datatype columns in a database use byte semantics (or all use character semantics) then users do not have to worry about which columns use which semantics. The BYTE and CHAR qualifiers shown earlier should be avoided when possible, because they lead to mixed-semantics databases. Instead, the NLS_LENGTH_SEMANTICS initialization parameter should be set appropriately in the server parameter file (SPFILE) or initialization parameter file, and columns should use the default semantics. -
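The Unicode arithmetic in the quote above (five Chinese characters plus five English characters: 20 bytes, but 10 characters) can be checked from a UTF-8 shell; the particular characters below are stand-ins:

```shell
# Five CJK characters at 3 bytes each plus five ASCII letters at 1 byte each.
s="五五五五五hello"
printf '%s' "$s" | wc -c   # byte count: 20, i.e. (5*3) + (5*1)
printf '%s' "$s" | wc -m   # character count (in a UTF-8 locale): 10
```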
ORA-01401: inserted value too large for column from 9i to 8i
Hi All,
Am trying to get the data from 9.2.0.6.0 to 8.1.7.0.0.
The character sets in both of them are as follows
9i
NLS_NCHAR_CHARACTERSET : AL16UTF16
NLS_CHARACTERSET : AL32UTF8
8i
NLS_NCHAR_CHARACTERSET : UTF8
NLS_CHARACTERSET : UTF8
And the structure of the Table in 9i which am trying to pull is as follows.
SQL> desc xyz
Name Null? Type
PANEL_SITE_ID NOT NULL NUMBER(15)
PANELIST_ID NUMBER
CHECKSUM VARCHAR2(150)
CONTACT_PHONE VARCHAR2(100)
HH_STATUS NUMBER
HH_STATUS_DT DATE
HH_RECRUITMENT_PHONE VARCHAR2(100)
HH_RECRUITMENT_DT DATE
FIRST_NET_USAGE_DT DATE
INSTALL_DT DATE
FNAME VARCHAR2(4000)
LNAME VARCHAR2(4000)
EMAIL_ADDRESS VARCHAR2(200)
EMAIL_VALID NUMBER
PASSWORD VARCHAR2(4000)
And by connecting to one of the 8i schema am running the following script
CREATE TABLE GPMI.GPM_HOUSEHOLDBASE_FRMP AS
SELECT PANEL_SITE_ID,
PANELIST_ID,
LTRIM(RTRIM(CHECKSUM)) CHECKSUM,
LTRIM(RTRIM(CONTACT_PHONE)) CONTACT_PHONE,
HH_STATUS, HH_STATUS_DT,
LTRIM(RTRIM(HH_RECRUITMENT_PHONE)) HH_RECRUITMENT_PHONE,
HH_RECRUITMENT_DT,
FIRST_NET_USAGE_DT,
INSTALL_DT, LTRIM(RTRIM(FNAME)) FNAME,
LTRIM(RTRIM(LNAME)) LNAME,
LTRIM(RTRIM(EMAIL_ADDRESS)) EMAIL_ADDRESS,
EMAIL_VALID,
PASSWORD
FROM [email protected];
Am gettinh the following error.
Can anyone of you fix this one.
PASSWORD
ERROR at line 14:
ORA-01401: inserted value too large for column
Thanks in Advance
Sudarshan

Additionally I found this matrix, which explains your problem:
            UTF8 (1 to 3 bytes)    AL32UTF8 (1 to 4 bytes)
            MIN     MAX            MIN     MAX
CHAR        2000    666            2000    500
VARCHAR2    4000    1333           4000    1000
For column PASSWORD the maximum length is used (4000). UTF8 uses at most 3 bytes per character, while AL32UTF8 may use up to 4 bytes. So a column defined in AL32UTF8 may contain characters which do not fit in a corresponding UTF8 column.
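The 3-byte vs 4-byte difference shows up for characters outside the Basic Multilingual Plane; a small shell check (assuming the script and terminal are UTF-8 encoded):

```shell
# U+1F600 lies outside the BMP: standard UTF-8 (Oracle's AL32UTF8) encodes it
# in 4 bytes, while Oracle's legacy UTF8 charset (actually CESU-8) stores it
# as two 3-byte surrogates, i.e. 6 bytes.
printf '😀' | wc -c   # 4
```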
Error :Value Too large for DEF_VALUE of SNP_REV_COL
I am getting the following error while I am importing my work repository .
Error : Value Too large for DEF_VALUE of SNP_REV_COL
Can any one pls let me know the root cause of the issue.
I found the following work around to resolve the issue.
I am changing the 'DEF_VALUE' column length in both 'SNP_REV_COL' and 'SNP_COL' tables.
I am using the ODI 10.1.3.5 version.
alter table SNP_REV_COL modify DEF_VALUE VARCHAR2 (400);
alter table SNP_COL modify DEF_VALUE VARCHAR2(400);
I am able to import the work_rep with out any issues after changing the above columns.
I am looking for the reason why this issues is occurring.
Thanks,
Yellanki
Edited by: Yellanki on Feb 7, 2011 3:18 AM

Ankit,
I am trying to move my Dev WR to Test. And my Source Technologies are SQL Server and Oracle.
Target is Oracle. I got the work around for this issue And I am looking for the root cause of the issue.
Any help is greatly appreciated.
Thanks,
Yellanki -
Snapshot refresh error: ora-01401 inserted value too large for column
I have an error ora-01401 "Inserted value too large for column" when I try to do a refresh on a group at the materialized view site.
My model is 1 master replicating to a readonly materialized view site. I have 2 refresh groups for separate sets of tables. 1 refresh group work fine...the other I got the above error.
I have doubled the rbs and system tablespace without any help thinking that I must be running out of default rollback segment space.
Anyone had this before?

The error is related to a field, not to any tablespace. This normally happens to me when I change the length or precision of a field in the base tables. The structure changes don't "flow" to the materialized view! I must regenerate them - normally dropping and recreating the view so it picks up the new length of that field.
Sometimes, when the changed field is not part of any primary key, I have changed the field directly in the materialized view as if it were a normal table.
Hope this helps
Luis -
Hi ,
I have defined a field called "field1" in tableA of varchar2(6)
but i got the following error : ORA-12899: value too large for column "PP"."TABLEA"."FIELD1" (actual: 8, maximum: 6) while trying to insert from a view
i have tried to run the view looking for length of field1 that is greater than 6 but it returns no record
What might be wrong here? I am using Oracle 10g.
pls advise
tks & rgds

ORA-12899: value too large for column "PP"."TABLEA"."FIELD1" (actual: 8, maximum: 6)

If your column has a length of 6 chars, you cannot put more than 6 chars in it. Isn't that clear enough?
Since you tried to insert a value of 8 characters, you raised an error.
Since your column is 6 chars long, Oracle cannot return you any row where this column has a length greater than 6.
Nicolas. -
Unable to evaluate workflow rule - Value too long for field
Need help with a workflow error for a record update before the record is saved. There are 3 calculations that would be done in a particular order - all on number fields. Each time, I am overwriting existing values. The individual numbers could have up to 6 decimal spaces. When I try to update one or more fields that contain the calculation, I get an error message saying that the system is unable to evaluate the workflow rule - value too long for field (zNum6).
This same calculation is fine when a new record is entered and the calculation is done as a default field value.
Any ideas?

I actually had to use a ToChar function at the beginning of the calculation and #### to indicate the number of digits to make this work. Oracle Help Desk provided the answer - quickly.
-
SQL Error: ORA-12899: value too large for column
Hi,
I'm trying to understand the above error. It occurs when we are migrating data from one oracle database to another:
Error report:
SQL Error: ORA-12899: value too large for column "USER_XYZ"."TAB_XYZ"."COL_XYZ" (actual: 10, maximum: 8)
12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
*Cause: An attempt was made to insert or update a column with a value
which is too wide for the width of the destination column.
The name of the column is given, along with the actual width
of the value, and the maximum allowed width of the column.
Note that widths are reported in characters if character length
semantics are in effect for the column, otherwise widths are
reported in bytes.
*Action: Examine the SQL statement for correctness. Check source
and destination column data types.
Either make the destination column wider, or use a subset
of the source column (i.e. use substring).
The source database runs - Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
The target database runs - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
The source and target table are identical and the column definitions are exactly the same. The column we get the error on is of CHAR(8). To migrate the data we use either a dblink or oracle datapump, both result in the same error. The data in the column is a fixed length string of 8 characters.
To resolve the error the column "COL_XYZ" gets widened by:
alter table TAB_XYZ modify (COL_XYZ varchar2(10));
-alter table TAB_XYZ succeeded.
We now move the data from the source into the target table without problem and then run:
select max(length(COL_XYZ)) from TAB_XYZ;
-8
So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
alter table TAB_XYZ modify (COL_XYZ varchar2(8));
-Error report:
SQL Error: ORA-01441: cannot decrease column length because some value is too big
01441. 00000 - "cannot decrease column length because some value is too big"
*Cause:
*Action:
So we leave the column width at 10, but the curious thing is - once we have the data in the target table, we can then truncate the same table at source (ie. get rid of all the data) and move the data back in the original table (with COL_XYZ set at CHAR(8)) - without any issue.
My guess the error has something to do with the storage on the target database, but I would like to understand why. If anybody has an idea or suggestion what to look for - much appreciated.
Cheers.

843217 wrote:
Note that widths are reported in characters if character length
semantics are in effect for the column, otherwise widths are
reported in bytes.

You are looking at character lengths vs byte lengths.
The data in the column is a fixed length string of 8 characters.
select max(length(COL_XYZ)) from TAB_XYZ;
-8
So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
alter table TAB_XYZ modify (COL_XYZ varchar2(8));

varchar2(8 byte) or varchar2(8 char)?
Use SQL Reference for datatype specification, length function, etc.
For more info, reference {forum:id=50} forum on the topic. And of course, the Globalization support guide. -
Fdpstp failed due to ora-12899 value too large for column
Hi All,
A user is facing this problem while running a concurrent program.
The program completes, but with this error:
fdpstp failed due to ora-12899 value too large for column
Can anyone tell me the exact solution for this?
RDBMS : 10.2.0.3.0
Oracle Applications : 11.5.10.2

>
A user is facing this problem while running a concurrent program. The program completes, but with this error.
>
Is this a seeded or custom concurrent program?

>
fdpstp failed due to ora-12899 value too large for column
Can anyone tell me the exact solution for this?
>
Was this working before? If yes, have any changes been made recently?
Can other users run the same concurrent program with no issues?
Please post the contents of the concurrent request log file here.
Please ask your developer to open the file using Reports Builder and compile the report and run it (if possible) with the same parameters.
OERR: ORA-12899 value too large for column %s (actual: %s, maximum: %s) [ID 287754.1]
Thanks,
Hussein