Alter varchar2 column to clob

Hi all,
I have a table like:
create table test(col1 varchar2(20), col2 number); -- with data
Now I have to change col1 to CLOB, but
alter table test modify col1 clob;
is not working. Can I change the data type to CLOB without dropping the table? It contains a huge amount of data, and repopulating it would take a lot of time.

Hi,
You can't modify a column from VARCHAR2 to CLOB directly, but you can achieve the same result like this:
1. Add a new column of type CLOB
2. Copy the VARCHAR2 data into the CLOB column
3. Drop the VARCHAR2 column
4. Rename the CLOB column to the original column name
SQL>CREATE TABLE t ( name VARCHAR2(20), age number(3));
Table created.
SQL>INSERT INTO t VALUES('aaa',20);
1 row created.
SQL>INSERT INTO t VALUES('bbb',30);
1 row created.
SQL>COMMIT;
Commit complete.
SQL>ALTER TABLE t MODIFY name CLOB;
ALTER TABLE t MODIFY name CLOB
ERROR at line 1:
ORA-22858: invalid alteration of datatype
SQL>ALTER TABLE t ADD tmp_name CLOB;
Table altered.
SQL>UPDATE t SET tmp_name=name;
2 rows updated.
SQL>
SQL>ALTER TABLE t DROP COLUMN name;
Table altered.
SQL>
SQL>ALTER TABLE t RENAME COLUMN tmp_name to name;
Table altered.
SQL>
SQL>desc T;
Name                                                  Null?    Type
AGE                                                            NUMBER(3)
NAME                                                           CLOB
SQL>SELECT * FROM  v$version;
BANNER
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
PL/SQL Release 9.2.0.6.0 - Production
CORE    9.2.0.6.0       Production
TNS for Solaris: Version 9.2.0.6.0 - Production
NLSRTL Version 9.2.0.6.0 - Production
5 rows selected.
SQL>
Regards
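If the table is very large and needs to stay available while the data is copied, online redefinition is another route worth evaluating (a hedged sketch, not part of the reply above; it assumes a schema called SCOTT and that TEST has a primary key):

-- interim table with the target datatype (hypothetical name)
CREATE TABLE test_interim (col1 CLOB, col2 NUMBER);

BEGIN
  -- check the table can be redefined using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TEST');
  -- start copying, converting COL1 on the fly
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TEST', 'TEST_INTERIM',
                                      'TO_CLOB(col1) col1, col2 col2');
  -- recreate indexes, constraints and grants on TEST_INTERIM here, then swap
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TEST', 'TEST_INTERIM');
END;
/

The drop/rename approach above is simpler; online redefinition mainly helps when the table cannot be taken out of service during the copy.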

Similar Messages

  • Storing more than 2000 characters in a varchar2 column in Oracle 11g?

    We have a table in Oracle 11g with a VARCHAR2 column. We use a proprietary programming language where this column is defined as a string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column needs to store more than 2000 characters. The DBAs don't like the BLOB, CLOB or LONG datatypes for maintenance reasons.
    There are 2 solutions I can think of:
    1. Remove this column from the original table and have a separate table for it, storing the text across multiple rows so that more than 2000 characters can be held. This table will be joined with the original table for queries.
    2. If the maximum I need is 8000 characters, I could just add 3 more columns so that I have 4 columns of 2000 characters each, giving 8000 characters. When the first column is full, values would spill over to the next column and so on.
    Which one is the better and easier approach? Please suggest.

    Visu - Some people also do not like to use LOBs because of difficulty in reclaiming space and ever increasing LOB segments. Some of these problems were caused by Oracle bugs (eg, Bug 2944866 Free space in LOB table / tablespace not reused with ASSM, Bug 3019979 Space may not be reused efficiently in a LOB segment) - albeit in 9.2. I've seen a few bug reports for similar things in 10.2 (don't have the references). Still, if there is a workaround/patch is this reason enough to steer the application development into a new direction?
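    If a LOB column really is ruled out, a rough sketch of option 1 (all names here are made up) is a child table keyed by the parent id plus a chunk sequence, reassembled in the application or in PL/SQL:

    CREATE TABLE long_text_chunks (
      parent_id  NUMBER       NOT NULL,
      chunk_no   NUMBER       NOT NULL,
      chunk      VARCHAR2(2000 CHAR),
      CONSTRAINT long_text_chunks_pk PRIMARY KEY (parent_id, chunk_no)
    );

    -- read the pieces back in order and concatenate them on the client or PL/SQL side
    SELECT chunk
    FROM   long_text_chunks
    WHERE  parent_id = :id
    ORDER  BY chunk_no;

    Option 2 (spilling values into extra columns) tends to be harder to query and to keep consistent, which is why a chunk table is usually preferred when LOBs are off the table.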

  • Sort Order for a VARCHAR2 Column

    Hello Everyone,
    This is probably quite simple, however I cannot see it. I am trying to sort on a varchar2 column that has numbers, text, dashes, tildes, carets and underscores which are mixed to compose a wire number. I have come up with an order by that changes the sort order from alpha to numeric satisfying one portion of the requirement.
    select wirenbr,fromitem "from",frompin,diagnbr,sht,toitem,topin,effect
    from acft_wires
    order by decode(to_char(nvl(length(translate(wirenbr,'A1234567890','A')),0)),'0',lpad(wirenbr,18),wirenbr);
    The select returns:
    1
    2
    3
    20
    21
    200
    201
    1000
    A
    ADT1
    AEN1
    AE1
    AE9
    AFA
    B
    1A
    2U7230A-ZZ
    2000A
    213-22
    22X100A20
    220-11
    The customer preferred sort looks like:
    1
    2
    3
    10
    10A
    11
    200
    201
    201-01
    202
    1000
    1000A
    1001
    I will build a shrine in my hall of honor to this forum if someone can point me in the right direction on how to solve this problem.
    Thanks everyone for your help.

    ok, start with this
    order by
    translate(substr(wirenbr,1,1),'1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ-','0000000000ZZZZZZZZZZZZZZZZZZZZZZZZZZZ')
    , nvl(length(substr(wirenbr,1,instr(translate(wirenbr,'1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ-','1234567890XXXXXXXXXXXXXXXXXXXXXXXXXXX'),'X')-1)),length(wirenbr))
    , to_number(translate(wirenbr,'1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ-','123456789000000000000000000000000000'))
    First, all that start with number go to the beginning and all that start with a letter go to the end.
    Next, sort by the length of the first numerical part, as per Al's first solution.
    Next, sort by the numerical part.
    I started getting what I think is the right sort by that point. However, I think that you'll also have to add
    stuff to sort by the alpha part, mine was
    , NVL(RTRIM(LTRIM(wirenbr,'0123456789'),'0123456789'),0)
    The biggest problem I see is all of the different number-letter-number and number-letter-number-letter formats. How many levels of recursion will you have to go through to get it perfectly sorted? Could you have ZZ999ZZ99Z9Z9Z9Z99?
    Any ideas for breaking this value up?
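    If the database is 10g or later, one hedged alternative (an assumption on my part, not what the reply above uses) is to normalise every digit run to a fixed width and sort the normalised string, which avoids stacking more and more sort keys. It assumes no digit run is longer than 10 digits:

    select wirenbr, fromitem "from", frompin, diagnbr, sht, toitem, topin, effect
    from acft_wires
    order by regexp_replace(
               regexp_replace(wirenbr, '([0-9]+)', '0000000000\1'),  -- prefix every digit run with 10 zeros
               '0*([0-9]{10})', '\1');                               -- then trim each run back to exactly 10 digits

    With the sample data this yields the customer's preferred order (1, 2, 3, 10, 10A, 11, 200, 201, 201-01, 202, 1000, 1000A, 1001), because '10A' is compared as '0000000010A', '11' as '0000000011', and so on.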

  • TO_NUMBER function on varchar2 column with numbers and strings

    I need to create a column in a view that converts a VARCHAR2 column's data to a number. The problem is that some of the data are strings and some are numbers. I get an "ORA-01722: invalid number" error when Oracle tries to convert
    strings (e.g. 'ABCD') to numbers. What I need is to get a NULL value if the data is not a valid number.
    How can I rewrite the following to get a NULL value when the string is not a valid number?
    select to_number('abcd') numberColumn
    from dual
    thanx
    Alfred

    SQL> select * from test_char_num;
    ALPHA_NUM
    ABC
    DEF
    123
    234
    A12
    SQL> select decode(NVL(length(trim(translate(alpha_num, '1234567890', ' '))), 0),
      2                0, alpha_num, 'NULL') new_alpha_num
      3  from test_char_num;
    NEW_ALPHA_
    NULL
    NULL
    123
    234
    NULL
    P.S. I am printing the string 'NULL' just to show the result; please replace it with a real NULL value in the SQL.
    Thx,
    Sri
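    For readers on 12.2 or later (a hedged aside; the decode/translate approach above is the way to go on older releases), the conversion can be made null-safe directly:

    select to_number(alpha_num default null on conversion error) as new_alpha_num
    from   test_char_num;

    Rows can also be tested first with validate_conversion(alpha_num as number), which returns 1 for convertible values and 0 otherwise.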

  • Range Partitioning on Varchar2 column???

    We have a table with a date column whose type is VARCHAR2.
    This column's format is '16021013' ('ddmmyyyy').
    We want to range partition on this column. What would be the best way?
    Do you think virtual column partitioning would be efficient?

    >
    I'm not a new DBA:-) You can be sure. I asked only about the range partitioning trick with a varchar2 column because we have already examined this table. We will need to archive this table quarter by quarter to another archive database. And, of course, nearly all queries filter first on this date column and can have different additional filters inside the "where" conditions. This table already has an index on this column, but with the huge volume of data, index performance is not enough for us. This is a 7x24 banking system and we joined the project late. Because of this, we can't change everything, such as the data type of that column, at this point.
    >
    You are taking things way too personally. No one said anything about whether you were, or were not, a DBA or made any other comments about your skill or abilities.
    What we ask was for you to tell us WHAT the problem was that you were trying to solve and WHY you thought partitioning would solve it.
    And now, after several generic posts, you have finally provided that information. We can only comment based on the information that you post. We can't guess as to what types of queries you use or what kinds of predicates you use in those queries. But we need to know that information in order to provide the best advice.
    Next time you post put the important information in your original question:
    1. A table column is VARCHAR2 but contains a date value. We are unable to change the datatype of this column.
    2. We need to archive data quarterly based on this date value
    3. Nearly all queries use this date value and then also may have additional filter conditions
    4. This date column is indexed but we would like to improve the performance beyond what this index can give us.
    The above is a summary that includes all important information that is needed to know how best to help you.
    And I made a pretty good guess since two replies ago I provided you with example code that shows just how to partition the table.
    >
    Now, our only aim is how to make range partitioning this varchar2 date column.
    >
    As I showed you in my example code earlier you can add a virtual column to the table and partition on it. My example code creates a monthly partitioned table that allows you to archive by month or archive every three months to archive by quarter.
    You can modify that example to use quarterly partitions if you want but I would recommend that you use standard monthly partitions since they will satisfy the widest range of predicates.
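    The example code referred to in the reply is not reproduced in this thread, so here is a minimal sketch of the virtual-column idea (table and column names are assumptions, and the strings are taken to be valid 'ddmmyyyy' dates):

    CREATE TABLE txn_history (
      txn_id    NUMBER,
      txn_date  VARCHAR2(8) NOT NULL,  -- 'ddmmyyyy' strings, as in the question
      txn_dt    DATE GENERATED ALWAYS AS (TO_DATE(txn_date, 'ddmmyyyy')) VIRTUAL
    )
    PARTITION BY RANGE (txn_dt)
    ( PARTITION p2013_01 VALUES LESS THAN (DATE '2013-02-01'),
      PARTITION p2013_02 VALUES LESS THAN (DATE '2013-03-01'),
      PARTITION p_max    VALUES LESS THAN (MAXVALUE)
    );

    -- predicates on the virtual column can then prune partitions
    SELECT *
    FROM   txn_history
    WHERE  txn_dt >= DATE '2013-01-01'
    AND    txn_dt <  DATE '2013-02-01';

    Quarterly archiving then becomes dropping or exchanging three monthly partitions at a time.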

  • Format varchar2 column which stores a number and display it in 10,000 forma

    Problem Description
    I have a standard VO which has a VARCHAR2 datatype attribute.
    In my view page I display this VO in an OATable.
    I want to format one of the VARCHAR2 columns to display it in a required format.
    E.g. if the value in the column is 9999, I want to display it as 9,999.
    I have tried a few approaches but they did not work.
    I tried all the sources mentioned below:
    OAMessageStyledTextBean arcmrecovalue1 = (OAMessageStyledTextBean)oawebbean.findChildRecursive("Arcmrecovalue2");
    arcmrecovalue1.setDataType("NUMBER");
    arcmrecovalue1.setAttributeValue(TABULAR_FUNCTION_VALUE_ATTR, formatter);
    String currency = "USD";
    arcmrecovalue1.setAttributeValue(CURRENCY_CODE, currency);
    arcmrecovalue1.setAttributeValue(OAWebBeanConstants.CURRENCY_CODE, "USD");
    arcmrecovalue1.setAttributeValue(ON_SUBMIT_VALIDATER_ATTR, formatter);
    OAMessageStyledTextBean arcmrecovalue2 = (OAMessageStyledTextBean)oawebbean.findChildRecursive("Arcmrecovalue2");
    if (arcmrecovalue2 != null) {
        String arcmrecovalue = (String)((OAMessageStyledTextBean)oawebbean.findChildRecursive("Arcmrecovalue2")).getValue(oapagecontext);
        String retValue = null;
        try {
            OADBTransaction oadbtransaction = l_rootAM.getOADBTransaction();
            String s = "{? = call convert_string(?)}";
            OracleCallableStatement oraclecallablestatement = (OracleCallableStatement)oadbtransaction.createCallableStatement(s, 0);
            oraclecallablestatement.registerOutParameter(1, Types.VARCHAR);
            oraclecallablestatement.setString(2, arcmrecovalue);
            oraclecallablestatement.execute();
            retValue = oraclecallablestatement.getString(1);
        } catch (SQLException sqlexception) {
            throw OAException.wrapperException(sqlexception);
        } catch (Exception exception) {
            throw OAException.wrapperException(exception);
        }
        arcmrecovalue2.setAttributeValue("Arcmrecovalue2", retValue);
    }
    The SQL function returns the formatted string, but only the first value in the table gets formatted; the second row is not formatted.
    Can anyone help me?

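    One hedged alternative, not taken from this thread: if the VARCHAR2 attribute always holds a plain number, the grouping separator can be added in the VO's SQL query itself instead of in the controller (the column name below is hypothetical; the G separator follows the session's NLS settings):

    SELECT TO_CHAR(TO_NUMBER(arcm_reco_value), 'FM999G999G999') AS arcm_reco_value_disp
    FROM   your_table;

    The formatted attribute can then be mapped to the table column directly, which also formats every row rather than only the first one.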

  • SSDT tries to alter timestamp column in TFS build

    We're trying to perform an upgrade test against a copy (backup/restore) of our customer database as the target. There are some tables with a timestamp column in the database. The way we do this is by having a database project with a publish profile targeting that copy of the customer database; the TFS build server is then used to build the project, but only to generate a publish script (/p:UpdateDatabase=False is set in the build definition's msbuild arguments).
    Example of table definition:
    CREATE TABLE dbo.CodeTable1
    (ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY
    ,Code CHAR(6)
    ,[Timestamp] TIMESTAMP NULL);
    We would like to have the "Code" column to have CHAR(7), so in the project we modify the table definition:
    CREATE TABLE dbo.CodeTable1
    (ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY
    ,Code CHAR(7)
    ,[Timestamp] TIMESTAMP NULL);
    We expected the SSDT build to generate this alter script:
    ALTER TABLE dbo.CodeTable1 ALTER COLUMN Code CHAR(7);
    To our surprise the generated script was:
    ALTER TABLE dbo.CodeTable1 ALTER COLUMN Code CHAR(7);
    ALTER TABLE dbo.CodeTable1 ALTER COLUMN [Timestamp] TIMESTAMP NULL;
    This causes an error when the script is executed: "Cannot alter column 'TIMESTAMP' to be data type timestamp."
    Why is SSDT generating a change script for that timestamp column?
    We then tried a local build in VS and the issue did not happen; SSDT correctly generated an alter script only for the "Code" column to CHAR(7).
    Both the local machine and the TFS build server have VS 2013 Update 4 with SSDT 12.0.50318.0 installed.
    As we troubleshot further, we found that it only seems to happen with a database restored from a backup copy of our customer database. It doesn't happen for databases created from scratch by the SSDT build or created manually. We've tried to make sure all database properties are the same as those of the database that built correctly, but still, if the target database is the one restored from the customer's copy, SSDT always tries to alter the timestamp column (on the server build).
    Does anyone have the same experience?
    I have posted a bug in ms connect: https://connect.microsoft.com/SQLServer/feedback/details/1266051
    Thanks!

    Thanks Paul!
    However, it doesn't happen when I build the database project locally or if the target database was created by SSDT (or manually, for that matter). The issue happens when I change the target database to the one we restored from a backup copy of our customer's database and run the build through our TFS build server.
    So I thought there must be something different about the restored database (which causes SSDT to alter the timestamp column) as opposed to the one SSDT/manually created (which doesn't alter the timestamp column). Maybe there is a difference in a database property or setting? Whatever it is, I just couldn't find it.
    The only thing we will do now as a workaround is to get the schema creation script from the customer's database and run that script to re-create the database from scratch, then use that as the target database instead; as luck would have it, the issue is then gone.
    Still, why the heck does SSDT try to alter the timestamp column in that specific case and not in the other cases described above?
    Elvin

  • Oracle Best practices for changing  Byte to Char on Varchar2 columns

    Dear Team,
    The application team wants to change BYTE to CHAR on VARCHAR2 columns to accommodate multi-byte characters in a couple of production tables.
    I wanted to know whether it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats and rebuild indexes on the table after these column changes?
    Thanks in advance!
    SK

    The application team wants to change BYTE to CHAR on VARCHAR2 columns to accommodate multi-byte characters in a couple of production tables.
    I wanted to know whether it is safe to have a mixture of BYTE and CHAR semantics in the same table; I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if  that NAME column was defined using BYTE how would you know what length to use for the column? Fifty BYTES will seldom be long enough and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats and rebuild indexes on the table after these column changes?
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke, why try to fix it?
    So back to your question of 'best practices'
    Best practices is to set the length semantics at the database level when the database is first created and to then use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practices is also to not fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683
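    For reference, a minimal sketch of the change itself and a query to confirm the semantics afterwards (table and column names are hypothetical):

    ALTER TABLE customers MODIFY (cust_name VARCHAR2(50 CHAR));

    SELECT column_name, char_used, char_length, data_length
    FROM   user_tab_columns
    WHERE  table_name = 'CUSTOMERS';
    -- CHAR_USED is 'C' for CHAR semantics and 'B' for BYTE semantics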

  • Appending a varchar2 to a clob

    I'm trying to append a varchar2 to a clob, using a package variable called glb_clob_var.
    if(xmlFlag = -1) then -- first time
    BEGIN
    glb_clob_var := inp_xmlMsg;
    END;
    end if;
    if(xmlFlag = 1) then -- neither first nor last time
    BEGIN
    dbms_lob.append(glb_clob_var, inp_xmlMsg);
    END;
    end if;
    However, I find that this append command just keeps the current value of inp_xmlMsg in glb_clob_var after each call to this procedure. That is, glb_clob_var is being overwritten instead of getting appended.
    Am I using the wrong call?

    >
    However, I find that this append command just keeps the current value of inp_xmlMsg in glb_clob_var after each call to this procedure. That is, glb_clob_var is being overwritten instead of getting appended.
    >
    But why do you think so?
    SQL> DECLARE
       glb_clob_var   CLOB             := LPAD ('x', 32767) || LPAD ('y', 32767);
       inp_xmlmsg     VARCHAR2 (32767) := LPAD ('z', 32767);
    BEGIN
       DBMS_LOB.append (glb_clob_var, inp_xmlmsg);
       DBMS_OUTPUT.put_line (DBMS_LOB.getlength (glb_clob_var));
       DBMS_OUTPUT.put_line (3 * 32767);
    END;
    98301
    98301

  • Storing a Non Printable -from keyboard- Character in Varchar2 Column

    Hi,
    I want to store a non-printable character in a VARCHAR2 column. This character should be non-printable from the keyboard (or at least very difficult to type) for all character sets. It doesn't matter whether it can be displayed on screen or not; what does matter is that the INSTR function returns the character position of that character. It should also be possible to export and import the data without any problem in any NLS-specific operating environment. For example, I plan to use CHR(1); do you think that's appropriate? I appreciate your help.
    Best Regards,
    Salim

    What is the business requirement you're trying to satisfy?
    If your database character set is based on ASCII, CHR(1) (the Start of Header) character is likely to be transferred between systems without character set conversion. Non-ASCII character sets (i.e. Big5 for Chinese data) don't necessarily share the same control characters, though, and generally won't have the same binary representation of an ASCII control character (so CHR(1) in a non-ASCII based database wouldn't necessarily return the same character that CHR(1) would in an ASCII based database).
    Justin
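    For what it's worth, a quick sketch of the CHR(1) idea in an ASCII-based character set (hypothetical table and column):

    INSERT INTO t (col1) VALUES ('abc' || CHR(1) || 'def');

    SELECT INSTR(col1, CHR(1)) AS delim_pos   -- returns 4 for the row above
    FROM   t;

    Given the character-set caveat above, it is worth testing the export/import round trip in the actual NLS environments before relying on it.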

  • HsqlDB throws exception-- trying to alter the columns from unique- primary

    Hi,
    I am trying to migrate my old DB Schema to a new DB schema and the underlying database is HsqlDB. One of the tables has two of its columns described as UNIQUE in the older schema, whereas they were made PRIMARY in the new schema. I went through the documentation provided by HSQL (http://www.hsqldb.org/doc/guide/ch09.html) and tried to execute the following query to alter the columns:
    stmt.execute("Alter Table user_type_table add primary key (user_type_name,attribute_name)");
    Somehow I get the following exception when I tried to do this:
    java.sql.SQLException: Wrong data type: KEY in statement [Alter Table user_type_table add primary key]
    at org.hsqldb.jdbc.jdbcUtil.sqlException(Unknown Source)
    at org.hsqldb.jdbc.jdbcStatement.fetchResult(Unknown Source)
    at org.hsqldb.jdbc.jdbcStatement.execute(Unknown Source)
    Am I doing something wrong?

    Rich,
    Whichever column(s) you changed to generate the conflicts will be present in the logs anyway. Since, your prebuilt conflict handler is not set up for those columns, you end up with apply errors. You can remove those columns from being part of conflict detection by calling the DBMS_APPLY_ADM.COMPARE_OLD_VALUES procedure. Hope this helps.

  • Alter numeric column to allow NULL

    hi all
    While I am trying to alter a numeric column to allow NULL, I lose the digits after the decimal point. How can I alter the numeric column without losing the data after the decimal?
    Thanks.
    Thanks - SelvaKumarSubramaniam. Please MARK AS ANSWER if my answer is useful to you.

    While altering the numeric column, did you change the DECIMAL precision or scale too?
    I think you might be altering the scale. Run the example below in SSMS:
    CREATE TABLE #TABLE
    (
        Sal NUMERIC(10,5) NOT NULL
    );
    INSERT INTO #TABLE VALUES(5.123), (423.13223), (444.44);
    SELECT * FROM #TABLE;
    ALTER TABLE #TABLE
    ALTER COLUMN Sal NUMERIC(10,5) NULL;
    SELECT * FROM #TABLE;
    DROP TABLE #TABLE;
    There is no change in the data.
    Please mark as answered if this has helped you solve the issue.
    Good Luck :) .. visit www.sqlsaga.com for more t-sql code snippets and BI related how to articles.

  • I HAVE ONE VARCHAR2 COLUMN.

    Hi,
    I have one VARCHAR2 column in which I store character data, for example file names such as TEST.DOC, TEST2.DOC, TEST3.DOC and TEXT_MAIN.DOC. Now I want to extract the string up to the '.' character. Which function should I use to get the value up to the '.'? I don't want the extension, only the file name.
    Thanks

    You did not mention the version of Oracle you are using.
    SQL> with d as
         ( select 'test.doc' name from dual union all
           select 'test2.doc' from dual )
         select name, regexp_substr(name,'[^.]+') str
         , substr(name,1,instr(name,'.')-1) str9iVersion
         from d;
    NAME      STR       STR9IVERS
    test.doc  test      test
    test2.doc test2     test2

  • Data of column datatype CLOB is moved to other columns of the same table

    Hi all,
    I have an issue with tables having a CLOB datatype field.
    When executing a simple query on a table with a column of type CLOB, it returns the error [POL-2403] value too large for column.
    SQL> desc od_stock_nbcst_notes;
    Name Null? Type
    OD_STOCKID N NUMBER
    NBC_SERVICETYPE N VARCHAR(40)
    LANGUAGECODE N VARCHAR(8)
    AU_USERIDINS Y NUMBER
    INSERTDATE Y DATE
    AU_USERIDUPD Y NUMBER
    MODIFYDATE Y DATE
    VERSION Y SMALLINT(4)
    DBUSERINS Y VARCHAR(120)
    DBUSERUPD Y VARCHAR(120)
    TEXT Y CLOB(2000000000)
    NBC_PROVIDERCODE N VARCHAR(40)
    SQL> select * from od_stock_nbcst_notes;
    [POL-2403] value too large for column
    Checking more deeply, some of the rows have the data of the CLOB column moved into another column of the table.
    When doing select length(nbc_providercode), the length is bigger than the declared size of the column (varchar(40)).
    When doing substr(nbc_providercode,1,40) to see the content of the field, a portion of the CLOB data is retrieved.
    SQL> select max(length(nbc_providercode)) from od_stock_nbcst_notes;
    MAX(LENGTH(NBC_PROVIDERCODE))
    162
    Choosing one random record, this is the stored information.
    SQL> select length(nbc_providerCode), text from od_stock_nbcst_notes where length(nbc_providerCode)=52;
    LENGTH(NBC_PROVIDERCODE) | TEXT
    -------------------------+-----------
    52 | poucos me
    SQL> select nbc_providerCode from od_stock_nbcst_notes where length(nbc_providerCode)=52;
    [POL-2403] value too large for column
    SQL> select substr(nbc_providercode,1,40) from od_stock_nbcst_notes where length(nbc_providercode)=52 ;
    SUBSTR(NBC_PROVIDERCODE
    Aproveite e deixe o seu carro no parque
    The content of the field is part of the content of the TEXT field (datatype CLOB, which contains an XML document)!!!
    The right content of the field should be 'MTS' (retrieved from the Central DB).
    The CLOB is being inserted into the Central DB, not into the client ODB. Data is synchronized from the CDB to the ODB, and the data is reaching the client in a corrupted form.
    The issue can be recreated all the time in the same DB, but between different users the "corrupted" records are different.
    Any idea?

    939569 wrote:
    Hello,
    I am using Oracle 11.2, I would like to use SQL to update one column based on values of other rows at the same table. Here are the details:
    create table TB_test (myId number(4), crtTs date, updTs date);
    insert into tb_test values (1, to_date('20110101', 'yyyymmdd'), null);
    insert into tb_test values (1, to_date('20110201', 'yyyymmdd'), null);
    insert into tb_test values (1, to_date('20110301', 'yyyymmdd'), null);
    insert into tb_test values (2, to_date('20110901', 'yyyymmdd'), null);
    insert into tb_test values (2, to_date('20110902', 'yyyymmdd'), null);
    After running the SQL, I would like have the following result:
    1, 20110101, 20110201
    1, 20110201, 20110301
    1, 20110301, null
    2, 20110901, 20110902
    2, 20110902, null
    Thanks for your suggestion.
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ

  • How to find apostrophes in a VARCHAR2 column

    I have a value "GOT UP LATE - DIDN'T HAVE TIME"
    stored in a VARCHAR2 field of my table through Oracle Forms. I want to check whether the values stored in this column contain apostrophes or not (e.g. as in DIDN'T). Could anybody please let me know how to do this?
    Rajeev
    [email protected]

    You can use the INSTR function:
    INSTR(string, substring [, position [, occurrence]])
    INSTR('abcdecg', 'c') would return 3 since the 1st occurrence of 'c' is at position 3 in the string.
    INSTR('abcdecg', 'c', 1, 2) would return 6 since the 2nd occurrence of 'c' is at position 6 in the string.
    INSTR('abcdecg', 'c', 4) would return 6 since the 1st occurrence of 'c', starting at position 4, is at position 6 in the string.
    Good luck.
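    Applied to the original question, a quick sketch (table and column names are hypothetical; note the doubled single quote inside the literal):

    SELECT *
    FROM   remarks
    WHERE  INSTR(remark_text, '''') > 0;

    -- or, equivalently, with LIKE:
    SELECT *
    FROM   remarks
    WHERE  remark_text LIKE '%''%';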
