Can we check a CLOB datatype for NULL?

Hi,
I want to check a CLOB column for NULL, so I used decode(col1, null, n1, n2). The query executes, but it doesn't return the data; it only shows 0.
Can you please tell me how to do this?

Take a look at http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_lob.htm#ARPLS66622
CONVERTTOBLOB Procedure
Regards
Etbin
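
For the NULL test itself, DECODE compares values and generally cannot operate on a CLOB column (it tends to raise ORA-00932), so a CASE expression with IS NULL avoids the comparison entirely. A minimal sketch, assuming a table t with the CLOB column col1 and the fallback columns n1/n2 from the question:
select case
         when col1 is null then n1
         when dbms_lob.getlength(col1) = 0 then n1   -- EMPTY_CLOB() is not NULL; treat as "no data" if desired
         else n2
       end as result
from t;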

Similar Messages

  • What control can I use for the CLOB datatype in Forms?

    Hi,
    Backend side:
    I created a table that has a column named sql_text, which holds a query string and is defined as the CLOB data type.
    On Forms 6i:
    I just created a text item (LONG) for the CLOB column, but it only holds a very limited number of characters and I need more. Is there any other control I can use?
    Please guide me on what I should do.
    kanish

    There is limited support for CLOBs in Forms. I guess you could read it in piecemeal with a procedure using the DBMS_LOB package
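
    A minimal sketch of such a piecemeal read with DBMS_LOB.READ (the table name my_table and the 4000-byte chunk size are assumptions for illustration):
    declare
      l_clob   clob;
      l_chunk  varchar2(4000);
      l_amount pls_integer;
      l_offset pls_integer := 1;
    begin
      select sql_text into l_clob from my_table where rownum = 1;
      loop
        l_amount := 4000;
        begin
          dbms_lob.read(l_clob, l_amount, l_offset, l_chunk);
        exception
          when no_data_found then exit;  -- read past the end of the LOB
        end;
        -- process l_chunk here, e.g. append it to a multi-line item in the form
        l_offset := l_offset + l_amount;
      end loop;
    end;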

  • Clob DataType, NULL and ADO

    Hello,
    First, I'm French, so my English isn't very good.
    I have a problem with the Oracle CLOB datatype. When I try to set a CLOB column to NULL with ADO, the change is not made.
    rs.open "SELECT ....", adocn, adOpenKeyset, adLockOptimistic
    rs.Fields("ClobField") = Null ' In this case, the Update doesn't work
    rs.update
    rs.Close
    This code works if I write a value other than Null, but not when the value is Null. Instead of Null I still get the old value (from before the change); the update of the field doesn't work.

    I experience the same, did you find a solution to your problem?
    Kind regards,
    Roel de Bruyn
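
    One possible workaround (not confirmed in this thread, just a hedged suggestion): bypass the recordset field assignment and issue an explicit UPDATE, which sets the LOB to NULL on the server side. The table and key names below are placeholders:
    update my_table
       set clobfield = null        -- or empty_clob() if an initialized zero-length LOB is acceptable
     where id = :some_id;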

  • Can we store a .CSV file in a CLOB datatype?

    Hi,
    Can we store a .CSV file in a CLOB datatype? It is giving me a hex conversion error.
    Can anyone provide sample code?
    When I send mail from the Oracle database, sending the attachment as a BLOB works, but when I send an attachment stored in a table with a CLOB column I get a hex-to-raw conversion error.
    Following is my code:
    CREATE OR REPLACE PROCEDURE com_maildata_prc1(
        pi_sender     IN com_batch_contact_dtl.strreceivername%TYPE,
        pi_recipients IN com_batch_contact_dtl.stremailaddr%TYPE,
        pi_subject    IN VARCHAR2,
        pi_text       IN VARCHAR2,
        pi_filename   IN VARCHAR2,
        pi_blob       IN CLOB)
    IS
        conn utl_smtp.connection;
        i    NUMBER;
        len  NUMBER;
    BEGIN
        conn := demo_mail.begin_mail(
            sender     => pi_sender,
            recipients => pi_recipients,
            subject    => pi_subject,
            mime_type  => demo_mail.MULTIPART_MIME_TYPE);
        demo_mail.begin_attachment(
            conn         => conn,
            mime_type    => 'application/csv',
            inline       => TRUE,
            filename     => pi_filename,
            transfer_enc => 'base64');
        -- split the Base64 encoded attachment into multiple lines
        i   := 1;
        len := DBMS_LOB.getlength(pi_blob);
        WHILE (i < len) LOOP
            -- NOTE: DBMS_LOB.substr on a CLOB returns VARCHAR2, not RAW;
            -- this is the likely source of the hex-to-raw error (see the sketch after the reply below).
            IF (i + demo_mail.MAX_BASE64_LINE_WIDTH < len) THEN
                UTL_SMTP.write_raw_data(conn,
                    UTL_ENCODE.base64_encode(
                        DBMS_LOB.substr(pi_blob, demo_mail.MAX_BASE64_LINE_WIDTH, i)));
            ELSE
                UTL_SMTP.write_raw_data(conn,
                    UTL_ENCODE.base64_encode(
                        DBMS_LOB.substr(pi_blob, (len - i) + 1, i)));
            END IF;
            UTL_SMTP.write_data(conn, UTL_TCP.CRLF);
            i := i + demo_mail.MAX_BASE64_LINE_WIDTH;
        END LOOP;
        demo_mail.end_attachment(conn => conn);
        demo_mail.attach_text(
            conn      => conn,
            data      => pi_text,
            mime_type => 'text/csv');
        demo_mail.end_mail(conn => conn);
    END;
    Thanx in advance...
    Message was edited by: user549162

    Hi
    Ignore the "SQL*Loader-292: ROWS parameter ignored when an XML, LOB or VARRAY column is loaded" error.
    After importing your csv file, just change the length to CHAR(100000).
    ex: change your column col1 from CHAR(1000) to CHAR(100000),
    then deploy your mapping and execute it.
    Regards,
    Venkat
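
    For the hex-to-raw error itself: DBMS_LOB.SUBSTR on a CLOB returns VARCHAR2, while UTL_ENCODE.BASE64_ENCODE expects RAW, so the implicit conversion fails. A minimal sketch of the encoding loop with an explicit cast, reusing the variables declared in the procedure above:
    WHILE (i < len) LOOP
        UTL_SMTP.write_raw_data(conn,
            UTL_ENCODE.base64_encode(
                UTL_RAW.cast_to_raw(
                    DBMS_LOB.substr(pi_blob,
                                    LEAST(demo_mail.MAX_BASE64_LINE_WIDTH, len - i + 1),
                                    i))));
        UTL_SMTP.write_data(conn, UTL_TCP.CRLF);
        i := i + demo_mail.MAX_BASE64_LINE_WIDTH;
    END LOOP;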

  • LogMiner puzzle - CLOB datatype

    Hello, everybody!
    Sorry for the cross-post here and in the "Database\SQL and PL/SQL" forum, but the problem I am trying to dig into sits somewhere between those two areas.
    I need a bit of advice on whether the following behavior is wrong and requires an SR to be initiated, or whether I am just missing something.
    Setting:
    - Oracle 11.2.0.3 Enterprise Edition 64-bit on Windows 2008.
    - Database is running in ARCHIVELOG mode with supplemental logging enabled.
    - DB_SECUREFILE=PERMITTED (so, by default, LOBs are created as BasicFiles - but I didn't notice any behavior difference compared to the SecureFile implementation).
    Test #1. Initial discovery of the problem
    1. Setup:
    - I created a table MISHA_TEST that contains a CLOB column:
    create table misha_test (a number primary key, b_cl CLOB);
    - I ran an anonymous block that inserts into this table WITHOUT referencing the CLOB column:
    begin
         insert into misha_test (a) values (1);
         commit;
    end;
    2. I looked at the generated logs via LogMiner and found the following entries in V$LOGMNR_CONTENTS:
    SQL_REDO
    set transaction read write;
    insert into "MISHA_TEST"("A","B_CL") values ('1',EMPTY_CLOB());
    set transaction read write;
    commit;
    update "MISHA_TEST" set "B_CL" = NULL where "A" = '1' and ROWID = 'AAAj90AAKAACfqnAAA';
    commit;
    And here I am puzzled: why do we have two operations for a single insert - first write EMPTY_CLOB() into B_CL and then update it to NULL? I didn't even touch the column B_CL! It seems very strange - why can't we write NULL to B_CL from the very beginning instead of first creating a pointer and then destroying it?
    Key question:
    - Why should a NULL value in a CLOB column be handled differently from a NULL value in a VARCHAR2 column?
    Test #2. Quantification
    Question:
    - Having a LOB column in a table seems to cause the overhead of generating more redo. But can it be quantified?
    Assumption:
    - My understanding is that CLOBs defined with "storage in row enabled = true" (the default) behave like VARCHAR2(4000) up to about 4 KB, and only when the size goes above 4 KB do the real LOB mechanisms kick in.
    Basic test:
    1. Two tables:
    - With CLOB:
    create table misha_test_clob2 (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl CLOB);
    - With VARCHAR2:
    create table misha_test_clob (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl VARCHAR2(4000));
    2. Switch logfile / insert 1000 rows populating only A_NR / switch logfile:
    insert into misha_test_clob (a_nr)
    select level
    from dual
    connect by level < 1001;
    3. Check the sizes of the generated logs:
    - With CLOB: 689,664 bytes
    - With VARCHAR2: 509,440 bytes (about a 26% reduction)
    Summary:
    - The overhead is real. A table with a VARCHAR2 column is cheaper to maintain, even if you are not using that column, so adding LOB columns to a table "just in case" is a really bad idea.
    - Having LOB columns in a table that takes tons of INSERT operations is expensive.
    Just to clarify the real business case: I have a table with a number of attributes, one of which has the CLOB datatype. The frequency of inserts into this table is pretty high, while the frequency of actually using the CLOB column is pretty low (NOT NULL ~0.1%). But because of that CLOB column I generate a lot more log data than I need (about 30% extra). Seems like a real waste! For now I have asked the development team to split the table into two, but that's still a bandage.
    So, does anybody care? Comments/suggestions are very welcome!
    Thanks a lot!
    Michael Rosenblum
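
    One way to quantify the redo of each run more directly (a sketch added for illustration, not from the original post) is to read the session's 'redo size' statistic before and after the insert, instead of comparing log file sizes:
    select n.name, s.value
      from v$mystat s
      join v$statname n on n.statistic# = s.statistic#
     where n.name = 'redo size';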


  • A clob datatype and LogMiner question?

    Hi,
    I am using LogMiner to capture all DML against rows with a CLOB datatype, and I have found a problem.
    --log in as scott/tiger
    conn scott/tiger
    SQL> desc clobtest
    Name Null? Type
    SNO NUMBER
    CLOBTYPE CLOB
    --make a update
    update clobtest set CLOBTYPE = 'Hello New York' where sno = 11;
    commit;
    After using LogMiner to analyze the redo log files, I run this query:
    select sql_redo from v$logmnr_contents where username = 'SCOTT';
    update "SCOTT"."CLOBTEST" set "CLOBTYPE" = 'Hello New York' where and ROWID = 'AAD0ZqAAEAAAAhsAAC';
    My question:
    As to the captured DML
    update "SCOTT"."CLOBTEST" set "CLOBTYPE" = 'Hello New York' where and ROWID = 'AAD0ZqAAEAAAAhsAAC';
    it shows "where and" - why is the CLOB predicate missing after the where clause? (Anyway, I can work around this with REGEXP_REPLACE(sql_redo, 'where and', 'where ').)
    Thanks
    Roy
    Edited by: ROY123 on Mar 16, 2010 10:25 AM

    I checked the LogMiner documentation:
    http://74.125.93.132/search?q=cache:19bBhYX3Xs4J:download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm+NOTE:LogMiner+does+not+support+these+datatypes+and+table+storage+attributes:&cd=1&hl=en&ct=clnk&gl=us
    It says 10gR2 supports the LOB datatype,
    but why does the where clause omit the CLOB column (so it becomes "where and rowid")?
    Edited by: ROY123 on Mar 16, 2010 2:12 PM
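
    A small sketch of the workaround mentioned above, applied while reading sql_redo (same view and username as in the post):
    select regexp_replace(sql_redo, 'where and', 'where') as sql_redo_fixed
      from v$logmnr_contents
     where username = 'SCOTT';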

  • Problem on CLOB datatype after import

    I had a problem and called Oracle Support; they used DUL to extract data from the datafile into a dump file. I imported it and everything completed with no errors, but when I check the CLOB column there is a space (blank character) separating each character, see below:
    Original
    Oracle
    After Import
    O R A C L E
    So the application cannot use those data.
    Does anyone have a solution for this problem?
    Thanks,
    Taohiko

    If you use a direct SQL insert you are restricted to 4000 characters.
    You can put your value in a VARCHAR2 variable, which allows you to insert up to 32767 characters:
    declare
      my_clob_variable varchar2(32767) := rpad('X', 25000, 'X');
    begin
      insert into my_table (my_clob_column)
      values (my_clob_variable);
    end;

  • QUERYING CLOB DATATYPE

    Hello Everyone,
    Before I get to my question, let me give you the context. I wanted to upload the descriptions of a set of products, with their IDs, into my database. Hence I created a table 'demo' with two columns of INT and CLOB datatypes using the following script: create table demo ( id int primary key, theclob clob );
    Then I created a directory using the following script: Create Or Replace Directory MY_FILES as 'C:\path of the folder.......\';
    In that directory I created one .txt file per product, containing the product's description. Using the script below I created a procedure to load the contents of the .txt files into my 'demo' table.
    CREATE OR REPLACE
    PROCEDURE LOAD_A_FILE( P_ID IN NUMBER, P_FILENAME IN VARCHAR2 ) AS
      L_CLOB  CLOB;
      L_BFILE BFILE;
    BEGIN
      INSERT INTO DEMO VALUES ( P_ID, EMPTY_CLOB() )
      RETURNING THECLOB INTO L_CLOB;
      L_BFILE := BFILENAME( 'MY_FILES', P_FILENAME );
      DBMS_LOB.FILEOPEN( L_BFILE );
      DBMS_LOB.LOADFROMFILE( L_CLOB, L_BFILE, DBMS_LOB.GETLENGTH( L_BFILE ) );
      DBMS_LOB.FILECLOSE( L_BFILE );
    END;
    After which I called the procedure with: exec load_a_file(1, 'filename.txt');
    When I queried the table with select * from demo; I got the following output, which is all fine:
    ID THECLOB
    1 "product x is an excellent way to improve your production process and enhance your turnaround time....."
    QUESTION
    When I did the exact same thing on my friend's machine and queried the demo table, I got garbage in the 'theclob' column (as shown below). The only difference is that mine is an Enterprise Edition of Oracle 11.2.0.1 and my friend's is an Express Edition of Oracle 11.2.0.2. Does this have anything to do with the problem?
    1-4     (each row shows long runs of garbled multi-byte characters instead of the original English text; output truncated here)
    To make sure that the .txt files are accessible in the directory I executed the following script, Host Echo Hello World > C:\...path...\1.Txt
    After which I found the contents of the file changed to "Hello World". Later I loaded the .txt file with "Hello World" and queried the table. Still I am getting some garbage value. However since the string "Hello World" is much smaller than the previous contents, the garbage size is also smaller for ID 1. I don't get any errors, but you can see the output as follows.
    1      䠀攀氀氀漀 圀漀爀氀搀  (the ASCII bytes of "Hello World" misread as multi-byte characters)
    2-4    (same garbled multi-byte text as before; truncated here)
    Edited by: Arunkumar Gunasekaran on Jan 3, 2013 11:38 AM

    >
    To make sure that the .txt files are accessible in the directory I executed the following script, Host Echo Hello World > C:\...path...\1.Txt
    After which I found the contents of the file changed to "Hello World". Later I loaded the .txt file with "Hello World" and queried the table. Still I am getting some garbage value. However since the string "Hello World" is much smaller than the previous contents, the garbage size is also smaller for ID 1. I don't get any errors, but you can see the output as follows.
    >
    The most common problem I have seen using BFILEs is the character set; BFILEs do NOT handle character set conversion.
    That is the main reason I don't recommend using BFILEs for loading data like this. Either SQL*Loader or external tables can do the job and they both handle character set conversions properly.
    See the LOADFROMFILE Procedure of DBMS_LOB package in the PL/SQL Language doc
    http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_lob.htm#i998778
    >
    Note:
    The input BFILE must have been opened prior to using this procedure. No character set conversions are performed implicitly when binary BFILE data is loaded into a CLOB. The BFILE data must already be in the same character set as the CLOB in the database. No error checking is performed to verify this.
    Note:
    If the character set is varying width, UTF-8 for example, the LOB value is stored in the fixed-width UCS2 format. Therefore, if you are using DBMS_LOB.LOADFROMFILE, the data in the BFILE should be in the UCS2 character set instead of the UTF-8 character set. However, you should use sql*loader instead of LOADFROMFILE to load data into a CLOB or NCLOB because sql*loader will provide the necessary character set conversions.
    >
    I suggest you use an external table definition to do this load. You can do an ALTER to change the file name for each load.
    See External Tables Concepts in the Utilities doc for the basics
    http://docs.oracle.com/cd/B28359_01/server.111/b28319/et_concepts.htm
    See Altering External Tables in the DBA doc for detailed information
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables013.htm
    >
    DEFAULT DIRECTORY
    Changes the default directory specification
    ALTER TABLE admin_ext_employees
    DEFAULT DIRECTORY admin_dat2_dir;
    LOCATION
    Allows data sources to be changed without dropping and re-creating the external table metadata
    ALTER TABLE admin_ext_employees
    LOCATION ('empxt3.txt',
    'empxt4.txt');
    >
    You can also load in parallel if you have licensed that option.
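
    If you prefer to stay with DBMS_LOB instead of an external table, LOADCLOBFROMFILE (unlike LOADFROMFILE) does perform character-set conversion. A minimal sketch reusing the MY_FILES directory from the question; the WE8MSWIN1252 source character set is an assumption about how the .txt files were written:
    DECLARE
      l_clob    CLOB;
      l_bfile   BFILE   := BFILENAME('MY_FILES', 'filename.txt');
      l_dest    INTEGER := 1;
      l_src     INTEGER := 1;
      l_lang    INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
      l_warning INTEGER;
    BEGIN
      INSERT INTO demo VALUES (2, EMPTY_CLOB()) RETURNING theclob INTO l_clob;
      DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
      DBMS_LOB.LOADCLOBFROMFILE(
        dest_lob     => l_clob,
        src_bfile    => l_bfile,
        amount       => DBMS_LOB.LOBMAXSIZE,
        dest_offset  => l_dest,
        src_offset   => l_src,
        bfile_csid   => NLS_CHARSET_ID('WE8MSWIN1252'),  -- assumed source file character set
        lang_context => l_lang,
        warning      => l_warning);
      DBMS_LOB.FILECLOSE(l_bfile);
    END;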

  • Importing CLOB datatype

    Dear All,
    I'm facing a problem while importing a table (specifically a column with the CLOB datatype) into an existing tablespace, as explained below. Kindly let me know the solution; you can mail me at [email protected].
    I need to import a CLOB column from one tablespace into a different tablespace, without creating the source tablespace at the destination.
    That is, I want to import the table and its data without creating the tablespace referenced in the dump, i.e. XYZ_DATA, as shown below:
    TABLESPACE "XYZ_DATA" CLOB ("CLOB_SYNTAX") STORE AS (TA"
    "BLESPACE "XYZ_DATA" ....
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE TABLE "R_DWSYN" ("R_IDSCR" NUMBER(9, 0) NOT NULL ENABLE, "N_DW" NUMB"
    "ER(1, 0) NOT NULL ENABLE, "D_UPDATE" DATE, "N_X" NUMBER(4, 0), "N_Y" NUMBER"
    "(4, 0), "N_WIDTH" NUMBER(4, 0), "CLOB_SYNTAX" CLOB) PCTFREE 10 PCTUSED 40 "
    "INITRANS 1 MAXTRANS 255 LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXT"
    "ENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 20 FREELISTS 1 FREELIST GROUPS 1 B"
    "UFFER_POOL DEFAULT) TABLESPACE "XYZ_DATA" LOB ("CLOB_SYNTAX") STORE AS (TA"
    "BLESPACE "XYZ_DATA" ENABLE STORAGE IN ROW CHUNK 2048 PCTVERSION 10 NOCACHE "
    " STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PC"
    "TINCREASE 20 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))"
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'XYZ_DATA' does not exist
    rgds
    prashanth

    I have not used the DESTROY option myself, but from what I can see in imp help=y, I assume this option goes with the TRANSPORT_TABLESPACE option, where you export tablespaces (with their datafiles) and then import them into another instance. The option might allow you to overwrite an existing datafile of the same name.
    DESTROY overwrite tablespace data file (N)
    The link below gives more information:
    http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76955/ch02.htm#17077
    Rgds,
    Sunil
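
    A common way around ORA-00959 in this situation (a hedged suggestion, not from this thread) is to pre-create the table with the LOB segment pointed at a tablespace that does exist at the destination, then run imp with IGNORE=Y so it only loads the rows. The USERS tablespace below is a placeholder; the column list is taken from the failing CREATE TABLE statement:
    CREATE TABLE r_dwsyn (
      r_idscr     NUMBER(9,0) NOT NULL,
      n_dw        NUMBER(1,0) NOT NULL,
      d_update    DATE,
      n_x         NUMBER(4,0),
      n_y         NUMBER(4,0),
      n_width     NUMBER(4,0),
      clob_syntax CLOB
    )
    TABLESPACE users
    LOB (clob_syntax) STORE AS (TABLESPACE users);
    -- then: imp user/password file=export.dmp tables=R_DWSYN ignore=y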

  • CLOB Datatype (assigning more than 32k fails)

    Dear All
    Can anyone tell me why I get this error when I assign more than 32k characters to a CLOB variable in PL/SQL,
    while I can assign more than 32k from a table into a CLOB variable without a problem?
    PL/SQL block 1 (ORA-06502: PL/SQL: numeric or value error: character string buffer too small)
    declare
      c clob;
      v varchar(32767);
    begin
      for i in 1..90000 loop  -- just assuming roughly 90k iterations
        v := v || 'x';
        if length(v) > 31000 then
          c := c || v;  -- I get the error here when appending to the clob once the total exceeds 32k
          v := null;
        end if;
      end loop;
      c := v;
    end;
    PL/SQL block 2 (works fine when assigning from a table)
    declare
    c clob;
    begin
    select clob_data into c from x; -- clob data is more than 32k;
    end;
    This works fine on database 9i Release 2, but on 11g I am facing this problem.
    Regards

    Hey, you are getting the error because of v varchar(32767); the CLOB datatype itself can hold a much larger amount of data - at least 1,000,000 characters can be stored in it. Try it as below.
    Hi, see the example below:
    create table temp(col clob);
    declare
      v_string clob;
    begin
      for i in 1..1000000 loop
        v_string := v_string || 'A';
      end loop;
      insert into temp values(v_string);
    end;
    Now run
    select length(col) from temp;
    and you will get the output
    length(col)
    1000000
    This shows that the CLOB datatype can store at least 1,000,000 characters.
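
    A common pattern for building a CLOB larger than 32k in PL/SQL is to create a temporary LOB and append VARCHAR2 chunks with DBMS_LOB.WRITEAPPEND rather than relying on || concatenation. A minimal sketch mirroring the original block (the chunk threshold is arbitrary):
    declare
      c clob;
      v varchar2(32767);
    begin
      dbms_lob.createtemporary(c, true);
      for i in 1..90000 loop
        v := v || 'x';
        if length(v) > 31000 then
          dbms_lob.writeappend(c, length(v), v);
          v := null;
        end if;
      end loop;
      if v is not null then
        dbms_lob.writeappend(c, length(v), v);
      end if;
      dbms_output.put_line('final length: ' || dbms_lob.getlength(c));
      dbms_lob.freetemporary(c);
    end;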

  • CLOB Datatype with JDBC Adapter

    Hi,
    we are trying to write a CLOB value to a database via the JDBC adapter.
    We tried two ways with the JDBC adapter:
    action="SQL_DML" with an SQL statement and $placeholders$ - but how can I tell the key element that it is a CLOB type? It is treated as a VARCHAR, and no more than 4k characters are allowed there.
    The second way is action="EXECUTE" to call a stored procedure, but there we get the error that the CLOB type is an unsupported feature.
    Any ideas?
    Regards,
    Robin
    Message was edited by: Robin Schroeder

    Ok, I will check this...
    But am I right that the only way to fill a CLOB type is to use a stored procedure? Or is there any possibility to do this with action="SQL_DML"?
    Regards,
    Robin

  • Using CLOB datatypes in WHERE clause

    Hi All,
    I have a table with two columns of the CLOB datatype. I'm using Oracle 8i Enterprise Edition. I can query, insert, update and even delete the data in the CLOB fields using Oracle's SQL*Plus.
    What I want is to search on those fields, that is, to include such a field in a WHERE clause. I'm using this datatype to store a large amount of data.
    I'd like to see this query working:
    SELECT * FROM MyTable WHERE CLOBFLD LIKE 'Something...';
    This query doesn't work; it returns 'Inconsistent datatypes' near the word CLOBFLD.
    Please Help me out.
    Regards,
    Gopi

    I presume you want to query based on the contents of the CLOB, right ? If that is true, then you have to create a text index, using Oracle Context and then use "Contains" in the where clause to query. Hope this helps.
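
    A minimal sketch of that approach, assuming Oracle Text (ConText) is installed and using the table and column names from the question:
    CREATE INDEX mytable_clobfld_idx ON MyTable (CLOBFLD)
      INDEXTYPE IS CTXSYS.CONTEXT;
    SELECT * FROM MyTable WHERE CONTAINS(CLOBFLD, 'Something') > 0;
    -- For an occasional, unindexed search, DBMS_LOB.INSTR also works:
    -- SELECT * FROM MyTable WHERE DBMS_LOB.INSTR(CLOBFLD, 'Something') > 0;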

  • Using dblinks to access CLOB datatype

    Hi,
    I am using a dblink to access data from a remote database. I need to access data from a table on the remote database which has fields of type VARCHAR2 and CLOB. Since I can't access CLOB data directly through dblinks, I created a view for the CLOB column. Now when I try to retrieve the data from remote tab1 by joining that view and tab1, I get the error
    ORA-22992: cannot use LOB locators selected from remote tables.
    But if I retrieve the data from the view alone, I can retrieve the information. And if I access the VARCHAR2 field alone over the dblink, that also works. It is only when I combine the view and the same table used in that view that it fails.
    I think this could be solved by putting all the required fields (CLOB and non-CLOB) into the view, but in the real situation the number of non-CLOB fields in tab1 is high.
    Can someone suggest how to resolve this?
    Thanks in advance,
    Dalia

    I assume that you likely do not have Metalink (Oracle Support) access. Here is the Metalink Note on this topic.
    Subject:  ORA-22992 When Trying To Select Lob Columns Over A Database Link
    Doc ID:   Note:119897.1      
    Type:      BULLETIN
    Last Revision Date:      24-JUL-2002      
    Status:  PUBLISHED
    PURPOSE
    This document discusses whether or not a lob column can be selected over a
    remote database link.
    SCOPE & APPLICATION
    This document is written for all audiences.
    This information is based on the tests made in Oracle 8.1.5 and Oracle 8.1.6
    and the available Oracle documentation on LOB (large Object) data and issues
    logged with Development on the select of a LOB column over a remote database
    link.
    How to select a LOB column over a database link?
    (A) You cannot actually select a lob column (i.e. CLOB column) from a table
         using remote database link.  This is not a supported feature.
    For example,
    You run the following command from the local instance to a remote database
    instance called R816 over a dblink R816 created in the local instance for user
    SCOTT. You have logged in as user Scott and the remote table CLOBT has 2
    columns C1 and C2 as below:
    $ sqlplus scott/tiger@r816utf8
    SQL> create database link r816
          connect to scott identified by tiger
          using 'rtcsol1_r816.us.oracle.com';
    SQL> desc clobt@r816
          Name                          NUll?    Type
          C1                            NOT NULL NUMBER
          C2                                     CLOB
      SQL> select c1, c2
           from clobt@R816;
           ORA-22992: cannot use LOB locators selected from remote tables
      SQL> select c1, dbms_lob.getlength(c2)
           from clobt@R816;
           ORA-22992: cannot use LOB locators selected from remote tables
      Error:  ORA-22992
      Text:   cannot use LOB locators selected from remote tables
      Cause:  A remote LOB column cannot be referenced.
      Action: Remove references to LOBs in remote tables.
    (B) Also, these are the INVALID operations on a LOB column:
         1. SELECT lobcol from table1@remote_site;
         2. INSERT INTO lobtable select type1.lobattr from table1@remote_site;
         3. SELECT dbms_lob.getlength(lobcol) from table1@remote_site;
    In short, you cannot select a lob column using remote dblink. You have to use
    DBMS_LOB package in a programming language like PL/SQL, JAVA to manipulate LOB
    data.
    RELATED DOCUMENTS
    Oracle8i Application Developer's Guide - Large Objects (LOBs) Release 2 (8.1.6)
    Chapter Managing LOBs: LOB Restrictions
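
    Depending on the Oracle version, a commonly used workaround (not part of the note above, so treat it as a hedged suggestion) is to materialize the remote LOB into a local table with INSERT ... SELECT over the dblink, since ORA-22992 is about returning LOB locators to the client rather than about copying the data:
    -- local staging table (a global temporary table also works)
    CREATE TABLE local_clobt (c1 NUMBER, c2 CLOB);
    INSERT INTO local_clobt (c1, c2)
    SELECT c1, c2 FROM clobt@r816;
    -- the LOB can now be read locally
    SELECT c1, dbms_lob.getlength(c2) FROM local_clobt;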

  • Can't fetch clob and long in one select/query

    I created a nightmare table containing numerous binary data types to test an application I was working on, and believe I have found an undocumented bug in Oracle's JDBC drivers that is preventing me from loading a CLOB and a LONG in a single SQL select statement. I can load the CLOB successfully, but attempting to call ResultSet.get...() for the LONG column always results in
    java.sql.SQLException: Stream has already been closed
    even when processing the columns in the order of the SELECT statement.
    I have demonstrated this behaviour with version 9.2.0.3 of Oracle's JDBC drivers, running against Oracle 9.2.0.2.0.
    The following Java example contains SQL code to create and populate a table containing a collection of nasty binary columns, and then Java code that demonstrates the problem.
    I would really appreciate any workarounds that allow me to pull this data out of a single query.
    import java.sql.*;

    /*
     * This class was developed to verify that you can't have a CLOB and a LONG column in the
     * same SQL select statement and extract both values. Calling get...() for the LONG column
     * always causes 'java.sql.SQLException: Stream has already been closed'.
     *
     * CREATE TABLE BINARY_COLS_TEST (
     *     PK INTEGER PRIMARY KEY NOT NULL,
     *     CLOB_COL CLOB,
     *     BLOB_COL BLOB,
     *     RAW_COL RAW(100),
     *     LONG_COL LONG
     * );
     *
     * INSERT INTO BINARY_COLS_TEST (PK, CLOB_COL, BLOB_COL, RAW_COL, LONG_COL)
     * VALUES (1, '-- clob value --', HEXTORAW('01020304050607'),
     *         HEXTORAW('01020304050607'), '-- long value --');
     */
    public class JdbcLongTest
    {
        public static void main(String argv[]) throws Exception
        {
            Driver driver = (Driver) Class.forName("oracle.jdbc.driver.OracleDriver").newInstance();
            DriverManager.registerDriver(driver);
            Connection connection = DriverManager.getConnection(argv[0], argv[1], argv[2]);
            Statement stmt = connection.createStatement();
            ResultSet results = null;
            try
            {
                String query = "SELECT pk, clob_col, blob_col, raw_col, long_col FROM binary_cols_test";
                results = stmt.executeQuery(query);
                while (results.next())
                {
                    int pk = results.getInt(1);
                    System.out.println("Loaded int");
                    Clob clob = results.getClob(2);
                    // It doesn't work if you just close the ascii stream.
                    // clob.getAsciiStream().close();
                    String clobString = clob.getSubString(1, (int) clob.length());
                    System.out.println("Loaded CLOB");
                    // Streaming not strictly necessary for short values.
                    // Blob blob = results.getBlob(3);
                    byte blobData[] = results.getBytes(3);
                    System.out.println("Loaded BLOB");
                    byte rawData[] = results.getBytes(4);
                    System.out.println("Loaded RAW");
                    byte longData[] = results.getBytes(5);
                    System.out.println("Loaded LONG");
                }
            }
            catch (SQLException e)
            {
                e.printStackTrace();
            }
            results.close();
            stmt.close();
            connection.close();
        }
    } // public class JdbcLongTest

    The problem is that LONGs are not buffered but are read from the wire in the order defined. The problem is the same as
    rs = stmt.executeQuery("select myLong, myNumber from tab");
    while (rs.next()) {
        int n = rs.getInt(2);
        String s = rs.getString(1);
    }
    The above will fail for the same reason. When the statement is executed the LONG is not read immediately. It is buffered in the server waiting to be read. When getInt is called the driver reads the bytes of the LONG and throws them away so that it can get to the NUMBER and read it. Then when getString is called the LONG value is gone so you get an exception.
    Similar problem here. When the query is executed the CLOB and BLOB locators are read from the wire, but the LONG is buffered in the server waiting to be read. When Clob.getString is called, it has to talk to the server to get the value of the CLOB, so it reads the LONG bytes from the wire and throws them away. That clears the connection so that it can ask the server for the CLOB bytes. When the code reads the LONG value, those bytes are gone so you get an exception.
    This is a long standing restriction on using LONG and LONG RAW values and is a result of the network protocol. It is one of the reasons that Oracle deprecates LONGs and recommends using BLOBs and CLOBs instead.
    Douglas

  • Cannot load CLOB data > 32k into target table

    SQL> DESC testmon1 ;
    Name Null? Type
    FILENAME VARCHAR2(200)
    SCANSTARTTIME VARCHAR2(50)
    SCANENDTIME VARCHAR2(50)
    JOBID VARCHAR2(50)
    SCANNAME VARCHAR2(200)
    SCANTYPE VARCHAR2(200)
    FAULTLINEID VARCHAR2(50)
    RISK VARCHAR2(5)
    VULNNAME VARCHAR2(2000)
    CVE VARCHAR2(200)
    DESCRIPTION CLOB
    OBSERVATION CLOB
    RECOMMENDATION CLOB
    SQL> DESC test_target;
    Name Null? Type
    LOCALID NOT NULL NUMBER
    DESCRIPTION NOT NULL CLOB
    SCANTYPE NOT NULL VARCHAR2(12)
    RISK NOT NULL VARCHAR2(6)
    TIMESTAMP NOT NULL DATE
    VULNERABILITY_NAME NOT NULL VARCHAR2(2000)
    CVE_ID VARCHAR2(200)
    BUGTRAQ_ID VARCHAR2(200)
    ORIGINAL VARCHAR2(50)
    RECOMMEND CLOB
    VERSION VARCHAR2(15)
    FAMILY VARCHAR2(15)
    XREF VARCHAR2(15)
    create or replace PROCEDURE proc1 AS
    CURSOR C1 IS
    SELECT FAULTLINEID,VULNNAME,scanstarttime, risk,
    dbms_lob.substr(DESCRIPTION,dbms_lob.getlength(DESCRIPTION),1) "DESCR",dbms_lob.substr(OBSERVATION,dbms_lob.getlength(OBSERVATION),1) "OBS",
    dbms_lob.substr(RECOMMENDATION) "REC",CVE
    FROM testmon1;
    c_rec C1%ROWTYPE;
    descobs clob;
    FSCAN_VULN_TRANS_REC VULN_TRANSFORM_STG%ROWTYPE;
    TIMESTAMP varchar2(50);
    riskval varchar2(10);
    pCTX PLOG.LOG_CTX := PLOG.init (pSECTION => 'foundscanVuln Procedure',
    pLEVEL => PLOG.LDEBUG,
    pLOG4J => TRUE,
    pLOGTABLE => TRUE,
    pOUT_TRANS => TRUE,
    pALERT => TRUE,
    pTRACE => TRUE,
    pDBMS_OUTPUT => TRUE);
    amount number;
    buffer varchar2(32000);
    BEGIN
    ---INITIALIZE THE LOCATOR FOR CLOB DATA TYPE
    select observation into descobs from testmon1 where rownum=1;
    OPEN C1;
    loop
    fetch C1 INTO c_rec;
    exit when C1%NOTFOUND;
    --LOAD THE DESCRIPTION FIELD FROM CURSOR AND WRITE IT TO THE CLOB LOCATOR descobs.
    dbms_lob.Write(descobs,dbms_lob(c_rec.DESCR),1,c_rec.DESCR);
    ------APPEND THE OBSERVATION FIELD FROM CURSOR TO THE CLOB LOCATOR descobs.
    dbms_lob.Writeappend(descobs,dbms_lob(c_rec.DESCR),c_rec.OBS);
    -- dbms_output.put_line ('the timestamp is :'||c_rec.scanstarttime);
    --dbms_lob.write(descobs,amount,1,buffer);
    descobs:=c_rec.OBS;
    --dbms_lob.read(descobs,amount,1,buffer);
    --dbms_lob.append(c_rec.DESCR,c_rec.OBS);
    --descobs:=c_rec.OBS;
    --dbms_output.put_line ('the ADDED DESCROBS is :'||dbms_lob.substr(c_rec.DESCR,dbms_lob.getlength(c_rec.DESCR),1));
    dbms_output.put_line ('the ADDED DESCRIPTION AND OBSERVATION is :'||descobs);
    --dbms_output.put_line ('the DESCROBS buffer  is :'||buffer);
    SELECT DESCRIPTION INTO FSCAN_VULN_TRANS_REC.DESCRIPTION
    FROM TESTMON1 WHERE ROWNUM=1;
    ---------LOAD THE DESCRIPTION+ observation value into the target table description
    DBMS_LOB.WRITE(FSCAN_VULN_TRANS_REC.DESCRIPTION, dbms_lob.getlength(descobs),1,descobs);
    TIMESTAMP:=substr(c_rec.scanstarttime,1,10)||' '|| substr(c_rec.scanstarttime,12,8);
    IF c_rec.risk <3
    THEN riskval:='Low';
    ELSIF c_rec.risk <6
    THEN riskval:='Medium';
    ELSIF c_rec.risk <10
    THEN riskval:='High';
    END IF;
    FSCAN_VULN_TRANS_REC.TIMESTAMP:=TO_DATE(TIMESTAMP, 'YYYY/MM/DD HH24:MI:SS');
    FSCAN_VULN_TRANS_REC.risk:= riskval;
    --dbms_lob.append(c_rec.DESCR,c_rec.OBS);
    FSCAN_VULN_TRANS_REC.DESCRIPTION:=c_rec.DESCR;
    FSCAN_VULN_TRANS_REC.RECOMMEND:=c_rec.REC;
    FSCAN_VULN_TRANS_REC.LocalID:=to_number(c_rec.FAULTLINEID);
    FSCAN_VULN_TRANS_REC.SCANTYPE:='FOUNDSCAN';
    FSCAN_VULN_TRANS_REC.CVE_ID:=c_rec.CVE;
    FSCAN_VULN_TRANS_REC.VULNERABILITY_NAME:=c_rec.VULNNAME;
    -- dbms_output.put_line ('the plog timestamp is :'||timestamp);
    -- dbms_output.put_line ('the timestamp is :'||riskval);
    --dbms_output.put_line ('the recommend is :'||FSCAN_VULN_TRANS_REC.RECOMMEND);
    --dbms_output.put_line ('the app desc is :'||FSCAN_VULN_TRANS_REC.DESCRIPTION);
    insert into test_target values FSCAN_VULN_TRANS_REC;
    End loop;
    close C1;
    commit;
    EXCEPTION
    WHEN OTHERS THEN
    -- dbms_output.put_line ('Data not found');
    -----------dbms_output.put_line (sqlcode|| ':'||sqlerrm);
    end proc1;
    Using the dbms_lob package is not helping: either the DB stops responding, or the OBSERVATION field (which has a maximum length > 300,000) cannot be loaded into a CLOB variable.
    Please help, or give me sample code that works.

    select     
         BANKING_INSTITUTION.BANK_REF_CODE     C1_BANK_ID,
         BANKING_INSTITUTION.NAME_BANK     C2_BANK_NAME,
         BANKING_INSTITUTION.BANK_NUMBER     C3_BANK_NUMBER,
         BANKING_INSTITUTION.ISO_CODE     C4_GBA_CODE,
         BANKING_INSTITUTION.STATUS     C5_STATUS,
         BANKING_INSTITUTION.SOURCE     C6_SOURCE,
         BANKING_INSTITUTION.START_DATE_BANK     C7_START_DATE,
         BANKING_INSTITUTION.ADDRESS_BANK     C8_BANK_ADDRESS1
    from     REF_DATA_DB.BANKING_INSTITUTION BANKING_INSTITUTION
    where     (1=1)
    insert /*+ append */ into XXSVB.C$_0XXSVB_BANKS_STAGING (
         C1_BANK_ID,
         C2_BANK_NAME,
         C3_BANK_NUMBER,
         C4_GBA_CODE,
         C5_STATUS,
         C6_SOURCE,
         C7_START_DATE,
         C8_BANK_ADDRESS1
    ) values (
         :C1_BANK_ID,
         :C2_BANK_NAME,
         :C3_BANK_NUMBER,
         :C4_GBA_CODE,
         :C5_STATUS,
         :C6_SOURCE,
         :C7_START_DATE,
         :C8_BANK_ADDRESS1
    )
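
    For the original question, one way to build DESCRIPTION + OBSERVATION without hitting the 32k VARCHAR2 limit is to keep everything as LOB locators and use DBMS_LOB.APPEND into a temporary CLOB. A minimal sketch (only the mandatory target columns are shown; the SCANTYPE, RISK and TIMESTAMP values are placeholders):
    declare
      l_tmp clob;
    begin
      dbms_lob.createtemporary(l_tmp, true);
      for r in (select faultlineid, vulnname, description, observation from testmon1) loop
        dbms_lob.trim(l_tmp, 0);
        if r.description is not null then
          dbms_lob.append(l_tmp, r.description);
        end if;
        if r.observation is not null then
          dbms_lob.append(l_tmp, r.observation);
        end if;
        insert into test_target (localid, description, scantype, risk, timestamp, vulnerability_name)
        values (to_number(r.faultlineid), l_tmp, 'FOUNDSCAN', 'Low', sysdate, r.vulnname);
      end loop;
      dbms_lob.freetemporary(l_tmp);
      commit;
    end;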
