ORDER BY on large VARCHAR column

The database driver I am using does not allow me to set an index on a large VARCHAR field. Does anyone have any tips for speeding up an ORDER BY on this column for a very large table? There must be some standard tricks out there, but I'm having trouble finding them. Currently, for something like 300,000 records, a simple query without an ORDER BY takes a few milliseconds. The same query ORDERed by this VARCHAR column takes a few minutes.
I was thinking of adding a new column, a LONG called something like NAME_ORDER. Each time I insert a new record, I would search for the record that comes before the new one (decided using a COMPARE-like function) and then either make its NAME_ORDER a value between the previous record and the next record, or, if there is no room left, make it the previous record's value plus one and increment all the following records.
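A rough sketch of what I mean (table and column names are just placeholders, and BIGINT stands in for whatever big-integer type the database offers):
-- one-time setup: add the numeric sort-key column
ALTER TABLE mytable ADD name_order BIGINT;
-- on each insert, find the sort keys of the new name's neighbours
SELECT MAX(name_order) FROM mytable WHERE name < :new_name;  -- previous record
SELECT MIN(name_order) FROM mytable WHERE name > :new_name;  -- next record
-- if next - prev > 1, insert the new row with name_order = (prev + next) / 2;
-- otherwise shift all the following records up by one first:
UPDATE mytable SET name_order = name_order + 1 WHERE name_order > :prev_key;
Then the slow ORDER BY on the VARCHAR becomes an ORDER BY name_order, which can be indexed.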
Wow, this sounds like a drag. Anyone have any better ideas? And no, I can't shorten or truncate the VARCHAR columns.
TIA!

How much data are you selecting from this table at one time? (Not all of it, I hope...)
There are two situations I can think of that could be causing your problem...
1) You are selecting all 300,000 rows and doing an ORDER BY on that... even with an index, that might well take a long time...
2) Generally the number of rows in the table won't matter; it's how many you return. But the other things that make a slow query are an ORDER BY in conjunction with aggregate functions (such as SUM, COUNT, AVG) or any GROUP BY clause.
For example, this...
SELECT x, COUNT(*) FROM table GROUP BY x
might well perform dramatically better than
SELECT x, COUNT(*) AS thecount FROM table GROUP BY x ORDER BY thecount;
IMO, generally speaking, an ORDER BY is one of the worst performance things you can do with a database if the field you are sorting on isn't indexed.
So here are my tips...
1) Take a look at your query... do you filter out some rows, or are you selecting all 300,000? Does your query have aggregate functions or GROUP BY clauses that are killing your speed?
2) If you are using aggregate functions or GROUP BY (and you have to use them), try using a temporary table and sorting afterwards (see the sketch after these tips)... this may actually be faster.
3) Try to at least build a partial index on the field... it may well be good enough. Most databases will let you do this on VARCHAR fields... the idea is that the index covers just the first 50 characters or whatnot.
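A rough sketch of tips 2 and 3 (assuming MySQL syntax; the table and column names are placeholders):
-- tip 2: aggregate into a temporary table first, then sort the much smaller result
CREATE TEMPORARY TABLE tmp_counts AS
SELECT x, COUNT(*) AS thecount FROM mytable GROUP BY x;
SELECT * FROM tmp_counts ORDER BY thecount;
-- tip 3: a partial (prefix) index on just the first 50 characters of the VARCHAR
CREATE INDEX idx_name_prefix ON mytable (name(50));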

Similar Messages

  • SQL Query issue with large varchar column

I have a SQL Query (PL/SQL function body returning SQL query) which contains a large varchar column. Even if I substring the column to 30 chars, it wraps when it displays on the page. I have tried putting nowrap="wrap" for the HTML table cell attributes and it still wraps. I have tried setting the width attributes on the column, even though it's not an updateable column. Does anyone have any ideas on how to prevent this from wrapping? In some cases one line takes up three because of this wrapping issue, and it's not nice to look at. It seems that the column is somewhere set to a fixed width, which is less than 30 characters, and anything beyond this fixed width wraps.

    Hi Netha,
Can you please provide the DDL of the three tables you are using?
Also post how many rows you get as output for this query:
select * from dim.store st where
st.store_code = 'MAUR'
Also try running an UPDATE statement on this table as below, then execute your query:
update dim.store
set store_code = ltrim(rtrim(store_code))
where store_code = 'MAUR'
Once you run this update, then run your query. Let us know the result.

  • What is the best index type for non-unique VARCHAR columns in SQL 2008 R2

    Hello All Greetings,
    Please help me here with my doubt,
my table has about a million rows and about 20 columns; two of those columns, Period and Gender, are used in the WHERE clause most of the time.
Gender contains either M or F, and Period contains YYYY-Month values (2013-December, 2013-August, etc.), so I would like to add an index to these two columns to increase performance. Please let me know what type of indexes I need to add to
these columns in the table.
Please note that we load data into the table only once, which takes only 2 minutes, but we query the table every day.
So my question is: what is the best index type to create on columns with non-unique values?
    Thank you In Advance,
    Milan

    There is nothing whatever wrong with creating an index on a VARCHAR column, or set of columns.
Regarding the performance of VARCHAR/INT, as with everything in an RDBMS, it depends on what you are doing. What you may be thinking of is the fact that clustering a table on a VARCHAR key is (in SQL Server) marginally less efficient than clustering on a monotonically
increasing numerical key, and can introduce fragmentation.
Or you may be thinking of what you have heard about writing JOINs on VARCHAR columns - it is true, it is a little less efficient than a JOIN on a numeric type, but it is only a little less efficient, nothing that would lead you to never join on varchar columns.
None of this means that you should not create indexes on VARCHAR columns. A needed index on a VARCHAR column will boost query performance, often by orders of magnitude. If you need an index on a VARCHAR, create it. It makes no sense to try to find an
integer column to create the index on instead - the engine will never use it.
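For example, a minimal sketch (the table name and the idea of an INCLUDE list are assumptions based on your description):
CREATE NONCLUSTERED INDEX IX_MyTable_Period_Gender
    ON dbo.MyTable (Period, Gender);
-- optionally add INCLUDE (...) with the columns your queries return,
-- so the index covers them and avoids key lookups.
Since Gender has only two distinct values, leading with Period also lets Period-only queries use the same index.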
    Check this reference: http://stackoverflow.com/questions/14041481/is-it-good-to-create-a-nonclustered-index-on-a-column-of-type-varchar
    INSQLSERVER.COM
    Mohammad Nizamuddin

  • Imp-00020 long column too large for column buffer size (22)

    Hi friends,
I have exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform).
Then I transferred the export file from the Unix server to a laptop (Windows platform),
and tried to import this file into Oracle 10.2 on Windows XP.
(Database configuration of Oracle 10g:
User tablespace 2 GB
Temp tablespace 30 MB
Rollback segments of 15 MB each
Undo tablespace of 200 MB
SGA 160 MB
PGA 16 MB)
All the tables imported successfully except 3 tables which have around 1 million rows each.
The error messages that come up during import for these 3 tables are:
imp-00020 long column too large for column buffer size (22)
imp-00020 long column too large for column buffer size (7)
The main point here is that in all 3 tables there is no LONG/timestamp column (only varchar/number columns).
To solve the problem I tried the following options:
1. Increased the buffer size up to 20480000/30720000.
2. Commit=Y Indexes=N (in this case it does not import the complete tables).
3. First exported the table structures only, and then the data.
4. Created the tables manually and tried to import the data.
But all efforts failed; I am still getting the same errors.
Can someone help me with this issue?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

Thanks, but this note is for older releases, 7.3 to 8.0...
In my case both the export and the import were made on an 11.2 database.
I didn't use Data Pump because we use the same processes for different releases of Oracle, some of which do not contemplate Data Pump. By the way, shouldn't EXP/IMP work anyway?

  • 2014 In-Memory Table Bucket Size for varchar column

I'm trying to create an In-Memory table with a hash index on a varchar column. For a numeric field, the BUCKET_COUNT is supposed to be twice the number of unique values, but how do you calculate the BUCKET_COUNT for a varchar(20) column? I have an Email column and I want to
create an index on it, but I don't know what the BUCKET_COUNT should be... I cannot find any help about it; every tutorial or help page explains hash indexes with numeric values only.
    thanks!

I do have a question, though. What happens if there is a hash collision? Yes, things slow down a little, but I wonder how much. A hash table twice the size (row count) of the data can be pretty large, especially if the data is not unique and is going to produce collisions anyway! I assume it will still work even if the hash table bucket count is half the number of data rows. Remember, Hekaton is 100x faster, so what if a few collisions slow it down to 97x?
Well, it is not 100x; I think they have achieved something like 30x in their demos.
I am not sure that you can have non-unique hash indexes, but in any case it only makes sense if duplicates are occasional.
An occasional hash collision is not going to cost you a lot, but if you started with a bucket count of one million, and now have five million rows, you are certainly losing performance.
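For reference, a rough sketch of the DDL (SQL Server 2014 syntax; the table and names are hypothetical). BUCKET_COUNT is declared the same way whatever the key's data type; size it to roughly 1-2x the expected number of unique Email values. Note that in 2014 a character index key on a memory-optimized table must use a *_BIN2 collation:
CREATE TABLE dbo.Subscribers (
    SubscriberId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 2000000),
    Email VARCHAR(20) COLLATE Latin1_General_100_BIN2 NOT NULL
        INDEX IX_Email NONCLUSTERED HASH WITH (BUCKET_COUNT = 2000000)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);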
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Problem with JDBC and VARCHAR-Columns

    Hi,
I have a problem handling VARCHAR columns via JDBC.
I.e. when using the JDBC components of <SunONE Community Edition Update 1>, any attempt to store anything other than numeric values (like "0", "01", "999" etc.) in a column of type VARCHAR results in an error.
After entering the string "test" in a JTextArea or JTextField (which is linked by its document to the VARCHAR column) I receive:
"java.lang.NumberFormatException: For input string: "st"/For input string: "st"". I also tried the javax.sql rowSet methods like updateString(), with the same result.
Are there any known issues corresponding to this behaviour?
    <b>Configuration details:</b>
    <i>JDBC-Driver:</i>
    package com.sap.dbtech.jdbc, MaxDB JDBC Driver, MySQL MaxDB, 7.5.1 Build 000-000-002-750 on Java 1.4.2
    <i>Database-version:</i>
    7.5.00.16 on WindowsXP Pro SP1
    <i>JRE:</i>
    1.4.1_02
    The <i>trace-file</i> only show this statements:
    [email protected] (UPDATE NOTARZTEINSATZPROTOKOLL_1 SET NEUROLOGISCHER_ERSTBEFUND = ? WHERE NAP_ID = ? AND EP_ID = ? AND BEMERKUNG IS NULL AND SOZ_ID IS NULL AND PSYZ_ID IS NULL AND ERSTBEFUND_ZEITPUNKT IS NULL AND GCSAO_ID_EB IS NULL AND GCSBVR_ID_EB IS NULL AND GCSBMR_ID_ARMLINKS_EB IS NULL AND GCSBMR_ID_ARMRECHTS_EB IS NULL AND GCSBMR_ID_BEINLINKS_EB IS NULL AND GCSBMR_ID_BEINRECHTS_EB IS NULL AND BWSL_ID_EB IS NULL AND EXTB_ID_ARMLINKS_EB IS NULL AND EXTB_ID_ARMRECHTS_EB IS NULL AND EXTB_ID_BEINLINKS_EB IS NULL AND EXTB_ID_BEINRECHTS_EB IS NULL AND PUPW_ID_LINKS_EB IS NULL AND PUPW_ID_RECHTS_EB IS NULL AND EJN_ID_FEHLTLICHTR_LI_EB IS NULL AND EJN_ID_FEHLTLICHTR_RE_EB IS NULL AND EJN_ID_MENINGISMUS_EB IS NULL AND NEUROLOGISCHER_ERSTBEFUND IS NULL AND TEMPERATUR_EB IS NULL AND RR_SYSTOLISCH_EB IS NULL AND RR_DIASTOLISCH_EB IS NULL AND HERZFREQUENZ_EB IS NULL AND EJN_ID_HF_REGELM_EB IS NULL AND BLUTZUCKER_EB IS NULL AND ATEMFREQUENZ_EB IS NULL AND SPO2_EB IS NULL AND CO2_EB IS NULL AND SCHM_ID_EB IS NULL AND ERH_ID_COR_EB IS NULL AND EELS_ID_COR_EB IS NULL AND EJN_ID_EKG_EMENTKOPPEL_EB IS NULL AND EERBST_ID_COR_EB IS NULL AND EHA_ID_COR_EB IS NULL AND ESVES_ID_COR_EB IS NULL AND EVES_ID_COR_EB IS NULL AND EKG_BEMERKUNG_EB IS NULL AND ATRH_ID_EB IS NULL AND EJN_ID_ZYANOSE_EB IS NULL AND EJN_ID_SPASTIK_EB IS NULL AND EJN_ID_RASSELGER_EB IS NULL AND EJN_ID_STRIDOR_EB IS NULL AND EJN_ID_VERLEGATEMW_EB IS NULL AND BEAM_ID_UEBERNAHME IS NULL AND ATMUNG_FREITEXT_EB IS NULL AND VERLM_ID IS NULL AND SCHWV_ID_SCHAEDELHIRN IS NULL AND SCHWV_ID_GESICHT IS NULL AND SCHWV_ID_HWS IS NULL AND SCHWV_ID_THORAX IS NULL AND SCHWV_ID_ABDOMEN IS NULL AND SCHWV_ID_BWSLWS IS NULL AND SCHWV_ID_BECKEN IS NULL AND SCHWV_ID_OEXTREMITAET IS NULL AND SCHWV_ID_UEXTREMITAET IS NULL AND SCHWV_ID_WEICHTEILE IS NULL AND VBRT_ID IS NULL AND TRT_ID IS NULL AND UHG_ID IS NULL AND SICHTK_ID IS NULL AND UNFALLZEITPUNKT IS NULL AND SAPS_2 IS NULL AND TISS_28 IS NULL AND NACA_ID IS NULL AND ZBV IS NULL )
    => com.sap.dbtech.jdbc.CallableStatementSapDB@11daf60
<at this position the trace file ends?!; "NEUROLOGISCHER_ERSTBEFUND" is defined as
"NEUROLOGISCHER_ERSTBEFUND" Varchar (1000) ASCII; I also encountered this problem while handling shorter VARCHAR columns with JComboBox components...>
Any information would be very helpful!!!
    Greetings,
    Arnd
    Message was edited by: Arnd Benninghoff
    Message was edited by: Arnd Benninghoff

    Hi Arnd,
if I understand right, you are trying to insert/update a value into a Varchar(1000) column, and if you set a non-numeric value you get a "java.lang.NumberFormatException", right?
Of course this should work with MaxDB. The exception you get doesn't come from MaxDB's JDBC driver. The driver only throws exceptions that are derived from java.sql.SQLException.
So, I guess the error comes from a layer above the JDBC layer, possibly from the JDBC components of <SunONE Community Edition Update 1>. This would also explain why you don't see any exception in the JDBC trace.
Have you defined any constraints for the input field (JTextArea or JTextField)?
    Hope that helps.
    regards,
    Marco

  • SQL Error: ORA-12899: value too large for column

    Hi,
I'm trying to understand the above error. It occurs when we are migrating data from one Oracle database to another:
    Error report:
    SQL Error: ORA-12899: value too large for column "USER_XYZ"."TAB_XYZ"."COL_XYZ" (actual: 10, maximum: 8)
    12899. 00000 - "value too large for column %s (actual: %s, maximum: %s)"
    *Cause:    An attempt was made to insert or update a column with a value
    which is too wide for the width of the destination column.
    The name of the column is given, along with the actual width
    of the value, and the maximum allowed width of the column.
    Note that widths are reported in characters if character length
    semantics are in effect for the column, otherwise widths are
    reported in bytes.
    *Action:   Examine the SQL statement for correctness.  Check source
    and destination column data types.
    Either make the destination column wider, or use a subset
    of the source column (i.e. use substring).
    The source database runs - Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    The target database runs - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
The source and target tables are identical and the column definitions are exactly the same. The column we get the error on is CHAR(8). To migrate the data we use either a dblink or Oracle Data Pump; both result in the same error. The data in the column is a fixed-length string of 8 characters.
    To resolve the error the column "COL_XYZ" gets widened by:
    alter table TAB_XYZ modify (COL_XYZ varchar2(10));
    -alter table TAB_XYZ succeeded.
    We now move the data from the source into the target table without problem and then run:
    select max(length(COL_XYZ)) from TAB_XYZ;
    -8
    So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
    alter table TAB_XYZ modify (COL_XYZ varchar2(8));
    -Error report:
    SQL Error: ORA-01441: cannot decrease column length because some value is too big
    01441. 00000 - "cannot decrease column length because some value is too big"
    *Cause:   
    *Action:
So we leave the column width at 10, but the curious thing is: once we have the data in the target table, we can truncate the same table at source (i.e. get rid of all the data) and move the data back into the original table (with COL_XYZ set at CHAR(8)) without any issue.
My guess is the error has something to do with the storage on the target database, but I would like to understand why. If anybody has an idea or suggestion what to look for - much appreciated.
    Cheers.

843217 wrote:
Note that widths are reported in characters if character length semantics are in effect for the column, otherwise widths are reported in bytes.
You are looking at character lengths vs byte lengths.
The data in the column is a fixed length string of 8 characters.
select max(length(COL_XYZ)) from TAB_XYZ;
-8
So the maximal string length for this column is 8 characters. To reduce the column width back to its original 8, we then run:
alter table TAB_XYZ modify (COL_XYZ varchar2(8));
varchar2(8 byte) or varchar2(8 char)?
Use the SQL Reference for datatype specification, the length function, etc.
For more info, reference the {forum:id=50} forum on the topic. And of course, the Globalization Support Guide.
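A quick illustration of the diagnosis (assuming a multi-byte database character set such as AL32UTF8): LENGTH counts characters while LENGTHB counts bytes, so an 8-character value can still need 10 bytes and overflow a CHAR(8)/VARCHAR2(8 BYTE) column.
select max(length(COL_XYZ))  as max_chars,
       max(lengthb(COL_XYZ)) as max_bytes
from TAB_XYZ;
-- character-length semantics sidestep the byte expansion:
alter table TAB_XYZ modify (COL_XYZ varchar2(8 char));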

  • Inserted value too large for column error while scheduling a job

    Hi Everyone,
    I am trying to schedule a PL SQL script as a job in my Oracle 10g installed and running on Windows XP.
While trying to submit the job I get the error "Inserted value too large for column:" followed by my entire code. The code is correct - it compiles and runs in Oracle APEX's SQL Workshop.
The size of my code is 4136 characters, 4348 bytes and 107 lines. It is code that sends an e-mail and has a utl_smtp.write_data([Lots of HTML]) call.
There is no INSERT statement in the code whatsoever; the code only queries the database for data...
    Any idea as to why I might be getting this error??
    Thanks in advance
    Sid

The size of my code is 4136 characters, 4348 bytes and 107 lines. It is code that sends an e-mail and has a utl_smtp.write_data(Lots of HTML)
A SQL variable has a maximum size of 4000.
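One possible workaround, sketched here with a hypothetical procedure name: move the long block into a stored procedure (send_mail_proc) and submit only the short call, keeping the job's WHAT string well under the 4000-byte limit.
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT(job       => l_job,
                  what      => 'send_mail_proc;',
                  next_date => SYSDATE,
                  interval  => 'SYSDATE + 1');
  COMMIT;
END;
/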

  • Fdpstp failed due to ora-12899 value too large for column

    Hi All,
A user is facing this problem while running a concurrent program.
The program completes, but with this error:
fdpstp failed due to ora-12899 value too large for column
Can anyone tell me the exact solution for this?
    RDBMS : 10.2.0.3.0
    Oracle Applications : 11.5.10.2

User facing this problem while running the concurrent program. The program completes but with this error.
Is this a seeded or custom concurrent program?
fdpstp failed due to ora-12899 value too large for column. Can anyone tell me the exact solution for this?
Was this working before? If yes, have any changes been made recently?
    Can other users run the same concurrent program with no issues?
    Please post the contents of the concurrent request log file here.
    Please ask your developer to open the file using Reports Builder and compile the report and run it (if possible) with the same parameters.
    OERR: ORA-12899 value too large for column %s (actual: %s, maximum: %s) [ID 287754.1]
    Thanks,
    Hussein

  • Value too large for column "OIMDB"."UPA_FIELDS"."FIELD_NEW_VALUE"

    I am running OIM 9.1.0.1849.0 build 1849.0 on Windows Server 2003
    I see the following stack trace repeatedly in c:\jboss-4.0.3SP1\server\default\log\server.log
    I am hoping someone might be able help me resolve this issue.
    Thanks in advance
    ...Lyall
    java.sql.SQLException: ORA-12899: value too large for column "OIMDB"."UPA_FIELDS"."FIELD_NEW_VALUE" (actual: 2461, maximum: 2000)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:745)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:966)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1170)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3339)
         at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3423)
         at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeUpdate(WrappedPreparedStatement.java:227)
         at com.thortech.xl.dataaccess.tcDataBase.writePreparedStatement(Unknown Source)
         at com.thortech.xl.dataobj.PreparedStatementUtil.executeUpdate(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.insertUserProfileChangedAttributes(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.processUserProfileChanges(Unknown Source)
         at com.thortech.xl.audit.auditdataprocessors.UserProfileRDGenerator.processAuditData(Unknown Source)
         at com.thortech.xl.audit.genericauditor.GenericAuditor.processAuditMessage(Unknown Source)
         at com.thortech.xl.audit.engine.AuditEngine.processSingleAudJmsEntry(Unknown Source)
         at com.thortech.xl.audit.engine.AuditEngine.processOfflineNew(Unknown Source)
         at com.thortech.xl.audit.engine.jms.XLAuditMessageHandler.execute(Unknown Source)
         at com.thortech.xl.schedule.jms.messagehandler.MessageProcessUtil.processMessage(Unknown Source)
         at com.thortech.xl.schedule.jms.messagehandler.AuditMessageHandlerMDB.onMessage(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor127.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at org.jboss.invocation.Invocation.performCall(Invocation.java:345)
         at org.jboss.ejb.MessageDrivenContainer$ContainerInterceptor.invoke(MessageDrivenContainer.java:475)
         at org.jboss.resource.connectionmanager.CachedConnectionInterceptor.invoke(CachedConnectionInterceptor.java:149)
         at org.jboss.ejb.plugins.MessageDrivenInstanceInterceptor.invoke(MessageDrivenInstanceInterceptor.java:101)
         at org.jboss.ejb.plugins.CallValidationInterceptor.invoke(CallValidationInterceptor.java:48)
         at org.jboss.ejb.plugins.AbstractTxInterceptor.invokeNext(AbstractTxInterceptor.java:106)
         at org.jboss.ejb.plugins.TxInterceptorCMT.runWithTransactions(TxInterceptorCMT.java:335)
         at org.jboss.ejb.plugins.TxInterceptorCMT.invoke(TxInterceptorCMT.java:166)
         at org.jboss.ejb.plugins.RunAsSecurityInterceptor.invoke(RunAsSecurityInterceptor.java:94)
         at org.jboss.ejb.plugins.LogInterceptor.invoke(LogInterceptor.java:192)
         at org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor.invoke(ProxyFactoryFinderInterceptor.java:122)
         at org.jboss.ejb.MessageDrivenContainer.internalInvoke(MessageDrivenContainer.java:389)
         at org.jboss.ejb.Container.invoke(Container.java:873)
         at org.jboss.ejb.plugins.jms.JMSContainerInvoker.invoke(JMSContainerInvoker.java:1077)
         at org.jboss.ejb.plugins.jms.JMSContainerInvoker$MessageListenerImpl.onMessage(JMSContainerInvoker.java:1379)
         at org.jboss.jms.asf.StdServerSession.onMessage(StdServerSession.java:256)
         at org.jboss.mq.SpyMessageConsumer.sessionConsumerProcessMessage(SpyMessageConsumer.java:904)
         at org.jboss.mq.SpyMessageConsumer.addMessage(SpyMessageConsumer.java:160)
         at org.jboss.mq.SpySession.run(SpySession.java:333)
         at org.jboss.jms.asf.StdServerSession.run(StdServerSession.java:180)
         at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(PooledExecutor.java:748)
         at java.lang.Thread.run(Thread.java:534)
    2008-09-03 14:32:43,281 ERROR [XELLERATE.AUDITOR] Class/Method: UserProfileRDGenerator/insertUserProfileChangedAttributes encounter some problems: Failed to insert change record in table UPA_FIELDS

Thank you.
Being the OIM noob that I am, I had no idea where to look.
We do indeed have some user-defined fields of 4000 characters.
I am now wondering if I can disable auditing, or maybe increase the size of the auditing database column? (A sketch of the latter is below.)
Also, I guess I should raise a defect in OIM, as the user interface should not allow the creation of a user field with which auditing is unable to cope.
I also wonder if the audit failures (other than causing lots of stack traces) cause any transaction failures due to transaction rollbacks?
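A sketch of the column-widening idea, using the names from the error message (widening OIM's own audit table may well be unsupported - check with Oracle support before touching the OIM schema):
ALTER TABLE OIMDB.UPA_FIELDS MODIFY (FIELD_NEW_VALUE VARCHAR2(4000));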
    Edited by: lyallp on Sep 3, 2008 4:01 PM

  • I am getting error "ORA-12899: value too large for column".

I am getting the error "ORA-12899: value too large for column" after upgrading to 10.2.0.4.0.
The field is updated only through a trigger, with a hard-coded value.
This happens randomly, not every time.
    select * from v$version
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Table Structure
    desc customer
    Name Null? Type
    CTRY_CODE NOT NULL CHAR(3 Byte)
    CO_CODE NOT NULL CHAR(3 Byte)
    CUST_NBR NOT NULL NUMBER(10)
    CUST_NAME CHAR(40 Byte)
    RECORD_STATUS CHAR(1 Byte)
    Trigger on the table
    CREATE OR REPLACE TRIGGER CUST_INSUPD
    BEFORE INSERT OR UPDATE
    ON CUSTOMER FOR EACH ROW
    BEGIN
    IF INSERTING THEN
    :NEW.RECORD_STATUS := 'I';
    ELSIF UPDATING THEN
    :NEW.RECORD_STATUS := 'U';
    END IF;
    END;
    ERROR at line 1:
    ORA-01001: invalid cursor
    ORA-06512: at "UPDATE_CUSTOMER", line 1320
    ORA-12899: value too large for column "CUSTOMER"."RECORD_STATUS" (actual: 3,
    maximum: 1)
    ORA-06512: at line 1
    Edited by: user4211491 on Nov 25, 2009 9:30 PM
    Edited by: user4211491 on Nov 25, 2009 9:32 PM

    SQL> create table customer(
      2  CTRY_CODE  CHAR(3 Byte) not null,
      3  CO_CODE  CHAR(3 Byte) not null,
      4  CUST_NBR NUMBER(10) not null,
      5  CUST_NAME CHAR(40 Byte) ,
      6  RECORD_STATUS CHAR(1 Byte)
      7  );
    Table created.
    SQL> CREATE OR REPLACE TRIGGER CUST_INSUPD
      2  BEFORE INSERT OR UPDATE
      3  ON CUSTOMER FOR EACH ROW
      4  BEGIN
      5  IF INSERTING THEN
      6  :NEW.RECORD_STATUS := 'I';
      7  ELSIF UPDATING THEN
      8  :NEW.RECORD_STATUS := 'U';
      9  END IF;
    10  END;
    11  /
    Trigger created.
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME,RECORD_STATUS)
      2                values('12','13','1','Mahesh Kaila','UPD');
                  values('12','13','1','Mahesh Kaila','UPD')
    ERROR at line 2:
    ORA-12899: value too large for column "HPVPPM"."CUSTOMER"."RECORD_STATUS"
    (actual: 3, maximum: 1)
    SQL> insert into customer(CTRY_CODE,CO_CODE,CUST_NBR,CUST_NAME)
      2                values('12','13','1','Mahesh Kaila');
    1 row created.
    SQL> set linesize 200
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
    12  13           1 Mahesh Kaila                             I
    SQL> update customer set cust_name='tst';
    1 row updated.
    SQL> select * from customer;
    CTR CO_   CUST_NBR CUST_NAME                                R
12  13           1 tst                                      U
Recheck your code once again... somewhere you are using the record_status column for an insert or update.
    Ravi Kumar

  • Install fails due to ORA-12899: value too large for column

    Hi,
Our WCS 11g installation on Tomcat 7 fails, giving an "ORA-12899: value too large for column" error.
As per the solution ticket https://support.oracle.com/epmos/faces/DocumentDisplay?id=1539055.1 we have to set "-Dfile.encoding=UTF-8" in Tomcat.
We had already done this beforehand by setting the variable in catalina.bat in the Tomcat 7 bin directory,
but we still get the same error during installation.
If anybody has faced this, let us know how you resolved it.

We were unable to install WCS on Tomcat 7, but on Tomcat 6, by specifying "-Dfile.encoding=UTF-8" in the Java options using "Tomcat Configure", it was successful.
An alternative we found was to increase the size of the column itself, using:
ALTER TABLE csuser.systemlocalestring
MODIFY value varchar2 (4000)

  • Error on reverse on XML: value too large for column

    Hi All,
    I am trying to reverse engineer while creating the data model on XML technology.
    My JDBC URL on data server reads this:
    jdbc:snps:xml?d=../demo/abc/CustomerPartyEBO.xsd&s=MYEBO
    I get an error while doing the reverse.
    java.sql.SQLException: ORA-12899: value too large for column "PINW"."SNP_REV_KEY_COL"."KEY_NAME" (actual: 102, maximum: 100)
After doing some checks through selective reverse, I found that this happens only for a few tables whose names are quite long.
I tried setting the "maximum column name length" and "maximum table name length" to 120 and even higher values on the XML technology in Topology Manager. No luck there.
    Thanks in advance for any help here.

That is not the place to change it.
The error states that SNP_REV_KEY_COL.KEY_NAME in the Work Repository schema PINW has a maximum length of 100.
I do not know if Oracle will support this change, but as a workaround you will have to alter the Work Repository table SNP_REV_KEY_COL and change the column length; see the sketch below.
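A sketch of that workaround (unsupported territory - back up the Work Repository first; the new length of 128 is an arbitrary choice):
ALTER TABLE PINW.SNP_REV_KEY_COL MODIFY (KEY_NAME VARCHAR2(128));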

  • Data Profiling - Value too large for column error

I am running a data profile which completes with errors. The error being reported is ORA-12899: value too large for column (actual: 41, maximum: 40).
I have checked the actual data in the table and the maximum length is only 40 characters.
Any ideas on how to solve this? Even though the job completes, no actual profile is done on the data due to the error.
    OWB version 11.2.0.1
    Log file below.
    Job     Rows Selected     Rows Inserted     Rows Updated     Rows Deleted     Errors     Warnings     Start Time     Elapsed Time     
    Profile_1306385940099                                   2011-05-26 14:59:00.0     106     
    Data profiling operations complete.                                             
    Redundant column analysis for objects complete in 0 s.                                              
    Redundant column analysis for objects.                                              
    Referential analysis for objects complete in 0.405 s.                                              
    Referential analysis for objects.                                              
    Referential analysis initialization complete in 8.128 s.                                             
    Referential analysis initialization.                                             
    Data rule analysis for object TABLE_NAME complete in 0 s.                                             
    Data rule analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.858 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.202 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.236 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.842 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.187 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.501 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.717 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.156 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.906 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.827 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.187 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.172 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.                                             
    Functional dependency and unique key discovery for object TABLE_NAME                                             
    Domain analysis for object TABLE_NAME complete in 0.889 s.                                             
    Domain analysis for object TABLE_NAME                                             
    Pattern analysis for object TABLE_NAME complete in 0.202 s.                                             
    Pattern analysis for object TABLE_NAME                                             
    Aggregation and Data Type analysis for object TABLE_NAME complete in 9.313 s.                                             
    Aggregation and Data Type analysis for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 9.267 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 10.187 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 8.019 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 5.507 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Execute data prepare map for object TABLE_NAME complete in 10.857 s.                                             
    Execute data prepare map for object TABLE_NAME                                             
    Parameters                                             
    O82647310CF4D425C8AED9AAE_MAP_ProfileLoader                              1     2011-05-26 14:59:00.0     11     
    ORA-12899: value too large for column "SCHEMA"."O90239B0C1105447EB6495C903678"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    O68A16A57F2054A13B8761BDC_MAP_ProfileLoader                              1     2011-05-26 14:59:11.0     5     
    ORA-12899: value too large for column "SCHEMA"."O0D9332A164E649F3B4D05D045521"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    O78AD6B482FC44D8BB7AF8357_MAP_ProfileLoader                              1     2011-05-26 14:59:16.0     9     
    ORA-12899: value too large for column "SCHEMA"."OBF77A8BA8E6847B8AAE4522F98D6"."ITEM_NAME_2" (actual: 41, maximum: 40)                                             
    Parameters                                             
    OA79DF482D74847CF8EA05807_MAP_ProfileLoader                              1     2011-05-26 14:59:25.0     10     
    ORA-12899: value too large for column "SCHEMA"."OB0052CBCA5784DAD935F9FCF2E28"."ITEM_NAME_1" (actual: 41, maximum: 40)                                             
    Parameters                                             
    OFFE486BBDB884307B668F670_MAP_ProfileLoader                              1     2011-05-26 14:59:35.0     9     
    ORA-12899: value too large for column "SCHEMA"."O9943284818BB413E867F8DB57A5B"."ITEM_NAME_1" (actual: 42, maximum: 40)                                             
    Parameters

Found the answer. It was the database character set: a multi-byte character set was in use.
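For anyone hitting the same thing, a quick way to check which character set a database uses:
select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';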

  • Inserted value too large for column

    Hi,
I have a table (desc below) with only one trigger, which fills the OPERATCREAT, OPERATMODIF, DATECREAT and DATEMODIF columns on insert and update for each row. When I try to insert, I get the following messages:
    INSERT INTO tarifclient_element(tarifclient_code,article_code,prix) VALUES('12','087108',3.94);
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01401: inserted value too large for column
    SQL> desc tarifclient_element
    Name Null? Type
    TARIFCLIENT_CODE NOT NULL CHAR(10)
    ARTICLE_CODE NOT NULL CHAR(20)
    PRIX NUMBER
    OPERATCREAT NOT NULL VARCHAR2(30)
    OPERATMODIF VARCHAR2(30)
    DATECREAT NOT NULL DATE
    DATEMODIF DATE
NB: tarifclient_code is an ENABLED FK, article_code is a DISABLED FK. All values exist in both referenced tables.
Any idea?

My trigger is not the problem; I tried deleting it and filling the columns manually, and got the same error.
Originally posted by allanplumb:
The SQL you have shown us looks OK (to me, at least). However, perhaps the error is in the trigger which fills in the two operator fields. It would execute at the same time as your insert, near enough.
-- Allan Plumb
