CLOB datatype question
Hi,
I have a question regarding CLOB datatypes.
The scenario is as follows:
In our existing Oracle 8i database, we have a table, say "inquiry", and another table "inq_details". The details table has a LONG field where we create a new record every time details for a specific inquiry are logged. The length of the details can be anywhere from a few characters to the full field length.
We are planning to move the DB to Oracle 9i, and I heard it is better to move from LONG to the CLOB datatype.
The question is: if we log a new record (as we do today) for every detail record, should we be concerned with the data field size and retrieval performance?
Am I better off appending the details to the same column instead of creating a new record? (The only difference between two records is the details section, nothing more.) When we display the record, we always fetch all the details in the order in which they were logged.
thank you
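Since the thread is about moving from 8i to 9i, it may help to note that Oracle 9i can convert a LONG column to CLOB in place. A hedged sketch (the column name "details" is an assumption; the post only says "a LONG field"):

```sql
-- Convert the LONG column to CLOB in place (Oracle 9i and later).
-- Column name "details" is assumed for illustration.
ALTER TABLE inq_details MODIFY (details CLOB);

-- The table may be worth reorganizing afterwards to reclaim space:
ALTER TABLE inq_details MOVE;
```

Existing per-record inserts keep working unchanged; whether to append everything into one CLOB instead is a separate design question.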
A quick look in the manual would have shown you that LPAD returns the same datatype as its first argument: LPAD
The string returned is of VARCHAR2 data type if expr1 is a character data type, NVARCHAR2 if expr1 is a national character data type, and a LOB if expr1 is a LOB data type.
So, this works :
SQL> declare
2 r clob;
3 begin
4 r := lpad(to_clob('0'), 32768, '0');
5 end;
6 /
PL/SQL procedure successfully completed
Similar Messages
-
A clob datatype and LogMiner question?
HI,
I am using LogMiner to capture all DMLs against rows with a CLOB datatype, and found a problem.
--log in as scott/tiger
conn scott/tiger
SQL> desc clobtest
Name Null? Type
SNO NUMBER
CLOBTYPE CLOB
-- make an update
update clobtest set CLOBTYPE = 'Hello New York' where sno = 11;
commit;
After using LogMiner to analyze the redo log files, I query:
select sql_redo from v$logmnr_contents where username = 'SCOTT';
update "SCOTT"."CLOBTEST" set "CLOBTYPE" = 'Hello New York' where and ROWID = 'AAD0ZqAAEAAAAhsAAC';
My question:
As to the captured DML
update "SCOTT"."CLOBTEST" set "CLOBTYPE" = 'Hello New York' where and ROWID = 'AAD0ZqAAEAAAAhsAAC';
it shows "where and". Why is something missing right after the WHERE keyword? (Anyway, I can work around this with REGEXP_REPLACE(sql_redo, 'where and', 'where ').)
Thanks
Roy
Edited by: ROY123 on Mar 16, 2010 10:25 AM
I checked the LogMiner documentation:
http://74.125.93.132/search?q=cache:19bBhYX3Xs4J:download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm+NOTE:LogMiner+does+not+support+these+datatypes+and+table+storage+attributes:&cd=1&hl=en&ct=clnk&gl=us
It says 10gR2 supports the LOB datatype,
but why does the WHERE clause omit the CLOB datatype column (becoming "where and rowid")?
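For completeness, the workaround the poster mentions can be sketched as a query that patches the malformed "where and" before the redo SQL is reused:

```sql
-- Sketch: repair the broken WHERE clause emitted by LogMiner.
SELECT REGEXP_REPLACE(sql_redo, 'where and', 'where ') AS fixed_sql
FROM   v$logmnr_contents
WHERE  username = 'SCOTT';
```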
Edited by: ROY123 on Mar 16, 2010 2:12 PM -
LogMiner puzzle - CLOB datatype
Hello, everybody!
Sorry for the cross-post here and in the "Database\SQL and PL/SQL" forum, but the problem I am trying to dig into is somewhere between those two areas.
I need a bit of advice on whether the following behavior is wrong and requires an SR to be initiated, or whether I am just missing something.
Setting:
- Oracle 11.2.0.3 Enterprise Edition 64-bit on Win 2008.
- Database is running in ARCHIVELOG mode with supplemental logging enabled
- DB_SECUREFILE=PERMITTED (so by default LOBs will be created as BasicFiles - but I didn't notice any behavior difference compared to the SecureFile implementation)
Test #1. Initial discovery of a problem
1. Setup:
<li> I created a table MISHA_TEST that contains a CLOB column:
create table misha_test (a number primary key, b_cl CLOB)
<li> I ran an anonymous block that inserts into this table WITHOUT referencing the CLOB column:
begin
insert into misha_test (a) values (1);
commit;
end;
2. I looked at the generated logs via LogMiner and found the following entries in V$LOGMNR_CONTENTS:
SQL_REDO
set transaction read write;
insert into "MISHA_TEST"("A","B_CL") values ('1',EMPTY_CLOB());
set transaction read write;
commit;
update "MISHA_TEST" set "B_CL" = NULL where "A" = '1' and ROWID = 'AAAj90AAKAACfqnAAA';
commit;
And here I am puzzled: why do we have two operations for a single insert? First write EMPTY_CLOB into B_CL, then update it to NULL? I didn't even touch the column B_CL! It seems very strange: why can't we write NULL to B_CL from the very beginning, instead of first creating a pointer and then destroying it?
Key question:
- why should a NULL value in a CLOB column be handled differently than a NULL value in a VARCHAR2 column?
Test #2. Quantification
Question:
- having a LOB column in the table seems to cause the overhead of generating more logs. But can it be quantified?
Assumption:
- My understanding is that a CLOB defined with "storage in row enabled" (the default) behaves like a VARCHAR2(4000) up to ~4k of size, and only when the size goes above 4k do we start using real LOB mechanisms.
Basic test:
1. Two tables:
<li> With CLOB:
create table misha_test_clob2 (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl CLOB)
<li> With VARCHAR2:
create table misha_test_clob (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl VARCHAR2(4000))
2. Switch logfile / insert 1000 rows populating only A_NR / switch logfile:
insert into misha_test_clob (a_nr)
select level
from dual
connect by level < 1001
3. Check the sizes of the generated logs:
<li>With CLOB – 689,664 bytes
<li>With VARCHAR2 – 509,440 bytes (<b>about a 26% reduction</b>)
Summary:
<li>The overhead is real. A table with a VARCHAR2 column is cheaper to maintain, even if you are not using that column, so adding LOB columns to a table "just in case" is a really bad idea.
<li>Having LOB columns in a table that takes tons of INSERT operations is expensive.
Just to clarify the real business case: I have a table with a number of attributes, one of which has the CLOB datatype. The frequency of inserts into this table is pretty high, while the frequency of using the CLOB column is pretty low (NOT NULL ~0.1%). But because of that CLOB column I generate a lot more log data than I need (about 30% extra). It seems like a real waste! For now I have asked the development team to split the table into two, but that's still a band-aid.
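The table split requested above can be sketched as a vertical partition; table and column names here are illustrative, not from the real schema:

```sql
-- Hot, insert-heavy attributes stay in the main table (no LOB redo).
CREATE TABLE orders_main (
  order_id NUMBER PRIMARY KEY,
  created  DATE,
  status   VARCHAR2(30)
);

-- The rarely populated CLOB moves to a child table; only the ~0.1%
-- of rows that actually carry a value are ever inserted here.
CREATE TABLE orders_note (
  order_id NUMBER PRIMARY KEY REFERENCES orders_main (order_id),
  note_cl  CLOB
);
```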
So, does anybody care? Comments/suggestions are very welcome!
Thanks a lot!
Michael Rosenblum -
Hello Everyone,
Before I go to my question, let me give you the context. I wanted to upload the descriptions of a set of products with their IDs into my database. Hence I created a table 'demo' with two columns of INT and CLOB datatypes using the following script:
create table demo ( id int primary key, theclob clob );
Then I created a directory using the following script:
Create Or Replace Directory MY_FILES as 'C:\path of the folder.......\';
In the above-mentioned directory I created one .txt file for each product, containing the description of that product. Using the script below I created a procedure to load the contents of the .txt files into my 'demo' table.
*CREATE OR REPLACE*
*PROCEDURE LOAD_A_FILE( P_ID IN NUMBER, P_FILENAME IN VARCHAR2 ) AS*
*L_CLOB CLOB;*
*L_BFILE BFILE;*
*BEGIN*
*INSERT INTO DEMO VALUES ( P_ID, EMPTY_CLOB() )*
*RETURNING THECLOB INTO L_CLOB;*
*L_BFILE := BFILENAME( 'MY_FILES', P_FILENAME );*
*DBMS_LOB.FILEOPEN( L_BFILE );*
*DBMS_LOB.LOADFROMFILE( L_CLOB, L_BFILE,*
*DBMS_LOB.GETLENGTH( L_BFILE ) );*
*DBMS_LOB.FILECLOSE( L_BFILE );*
*END;*
After that I called the procedure using: exec load_a_file(1, 'filename.txt');
When I queried the table with select * from demo; I got the following output, which is all fine.
ID THECLOB
1 "product x is an excellent way to improve your production process and enhance your turnaround time....."
_*QUESTION*_
When I did the exact same thing on my friend's machine and queried the demo table, I got garbage values in the 'theclob' column (as shown below). The only difference is that mine is an Enterprise Edition of Oracle 11.2.0.1 and my friend's is an Express Edition of Oracle 11.2.0.2. Does this have anything to do with the problem?
1 猺⁁摶慮捥搠摡瑡潬汥捴楯渠捡灡扩汩瑩敳㨠扡牣潤攠獣慮湩湧Ⱐ灡湩挠慬敲琬⁷潲欠潲摥爠浡湡来浥湴Ⱐ睩牥汥獳潲浳湤異敲癩獯爠瑩浥湴特⸊潭整⁍潢楬攠坯牫敲㨠周攠浯獴潢畳琠灡捫慧攮⁐牯癩摥猠扵獩湥獳敳⁷楴栠愠捯浰汥瑥汹⁷楲敬敳猠潰敲慴楯湡氠浡湡来浥湴祳瑥洮⁉湣汵摥猠慬氠潦⁃潭整⁔牡捫敲❳敡瑵牥猠灬畳㨠䍡汥湤慲猬畴潭慴敤畳瑯浥爠捯浭畮楣慴楯湳Ⱐ睯牫牤敲⽩湶潩捥⁵灤慴楮朠晲潭⁴桥楥汤Ⱐ睯牫牤敲敱略湣楮本硣敳獩癥瑯瀠瑩浥汥牴猬⁷楲敬敳猠景牭猬⁴畲渭批畲渠癯楣攠湡癩条瑩潮Ⱐ慮搠浯牥⸊ੁ摶慮捥搠坩
2 ≁否吠潦晥牳摶慮捥搠睩牥汥獳潲浳慰慢楬楴礠睩瑨⁃潭整⁅娠䍯浥琬⁔牡捫敲湤⁃潭整⁍潢楬攠坯牫敲ਊ䍯浥琠䕚㨠周攠浯獴潢畳琬潳琠敦晥捴楶攠睥戠扡獥搠䵒䴠慰灬楣慴楯渠楮⁴桥湤畳瑲礮⁃慰慢楬楴楥猠楮捬畤攠䝐匠汯捡瑩潮⁴牡捫楮本⁷楲敬敳猠瑩浥汯捫Ⱐ来漭晥湣楮朠睩瑨汥牴猬灥敤湤瑯瀠瑩浥汥牴猬湤渭摥浡湤爠獣桥摵汥搠牥灯牴楮朮ਊ䍯浥琠呲慣步爺⁁⁰潷敲晵氠捬楥湴ⵢ慳敤⁰污瑦潲洠瑨慴晦敲猠慬氠瑨攠晥慴畲敳映䍯浥琠䕚⁰汵猺⁁摶慮捥搠摡瑡潬汥捴楯渠捡灡扩汩瑩敳㨠扡牣潤攠獣慮湩湧Ⱐ灡湩挠慬敲琬⁷潲欠潲摥爠浡湡来浥湴Ⱐ睩牥汥獳潲浳湤異敲癩獯爠瑩浥湴特⸊潭整⁍潢楬攠坯牫敲㨠周攠浯獴潢畳琠灡捫慧攮⁐牯癩摥猠扵獩湥獳敳⁷楴栠愠捯浰汥瑥汹⁷楲敬敳猠潰敲慴楯湡氠浡湡来浥湴祳瑥洮⁉湣汵摥猠慬氠潦⁃潭整⁔牡捫敲❳敡瑵牥猠灬畳㨠䍡汥湤慲猬畴潭慴敤畳瑯浥爠捯浭畮楣慴楯湳Ⱐ睯牫牤敲⽩湶潩捥⁵灤慴楮朠晲潭⁴桥楥汤Ⱐ睯牫牤敲敱略湣楮本硣敳獩癥瑯瀠瑩浥汥牴猬⁷楲敬敳猠景牭猬⁴畲渭批畲渠癯楣攠湡癩条瑩潮Ⱐ慮搠浯牥⸊ੁ摶慮捥搠坩牥汥獳⁆潲浳㨠呵牮湹⁰慰敲潲洠楮瑯⁷楲敬敳猠捬潮攠潦⁴桥慭攠楮景牭慴楯渠ⴠ湯慴瑥爠桯眠捯浰汩捡瑥搮⁓慶攠瑩浥礠瑲慮獦敲物湧湦潲浡瑩潮慣欠瑯⁴桥晦楣攠睩瑨⁷楲敬敳猠獰敥搮⁓慶攠灡灥爠慮搠敬業楮慴攠摵慬•ഊ
3 ≁䥒呉䵅⁍慮慧敲牯洠䅔♔⁰牯癩摥猠愠浯扩汥灰汩捡瑩潮猠摥獩杮敤⁴漠瑲慣欠扩汬慢汥潵牳⸠⁔桥⁁㑐潬畴楯湳畴潭慴楣慬汹潧⁷楲敬敳猠敭慩氬慬汳Ⱐ慮搠扩汬慢汥癥湴猬獳潣楡瑥猠瑨敭⁷楴栠捬楥湴爠灲潪散琠捯摥猠慮搠摩牥捴猠扩汬慢汥散潲摳⁴漠扩汬楮朠獹獴敭献†周攠呩浥乯瑥潬畴楯湳⁰牯癩摥汩浭敤潷渠數灥物敮捥Ⱐ慬汯睩湧潲牥慴楯渠潦慮畡氠扩汬慢汥癥湴献†周敲攠慲攠瑷漠癥牳楯渠潦⁁㑐湤⁔業敎潴攮ਊ䭥礠䙥慴畲敳㨊⨠䥮捬畤攠捡灴畲攠慤潣楬污扬攠敶敮瑳ਪ⁃慰瑵牥潢楬攠灨潮攠捡汬湤浡楬†慳楬污扬攠敶敮瑳Ⱐਪ⁁扩汩瑹⁴漠慳獩杮楬污扬攠敶敮琠瑯汩敮琠慮搠灲潪散琊⨠䅢楬楴礠瑯敡牣栠慮搠獣牯汬⁴桲潵杨楬污扬攠敶敮瑳Ⱐ潰瑩潮⁴漠楮瑥杲慴攠睩瑨楬汩湧祳瑥浳 ⨠偯瑥湴楡氠扥湥晩瑳湣汵摥湣牥慳敤⁰牯摵捴楶楴礠慮搠牥摵捥搠慤浩湩獴牡瑩癥癥牨敡搠湤湣牥慳敤敶敮略略⁴漠浯牥捣畲慴攠捡灴畲楮朠潦楬污扬攠敶敮瑳•ഊ
4 ≁灲楶慐慹⁁乄⁁灲楶慐慹⁐牯晥獳楯湡氠晲潭⁁否吠瑵牮⁹潵爠浯扩汥敶楣攠楮瑯⁰潲瑡扬攠捲敤楴慲搠瑥牭楮慬⸠坩瑨潭灡瑩扬攠䅔♔浡牴灨潮攬⁁灲楶慐慹爠䅰物癡偡礠偲潦敳獩潮慬潦瑷慲攬湤敲捨慮琠慣捯畮琬⁹潵爠浯扩汥⁷潲武潲捥慮⁰牯捥獳牥摩琠潲敢楴慲搠灡祭敮瑳牯洠瑨攠晩敬搮ਊ䭥礠䙥慴畲敳㨠 ⨠卭慲瑰桯湥ⵢ慳敤潬畴楯渠⁴漠灲潣敳猠捲敤楴慲搠灡祭敮瑳 ⨠䙵汬ⵦ敡瑵牥搠灯楮琭潦慬攠獯汵瑩潮異灯牴楮朠慬氠浡橯爠瑲慮獡捴楯渠瑹灥ਠ⨠卵灰潲瑳牥摩琠慮搠摥扩琠瑲慮獡捴楯湳 ਊ∍
To make sure the .txt files are accessible in the directory, I executed the following script: Host Echo Hello World > C:\...path...\1.Txt
After that I found the contents of the file changed to "Hello World". Later I loaded the .txt file with "Hello World" and queried the table. I still get a garbage value. However, since the string "Hello World" is much shorter than the previous contents, the garbage is also smaller for ID 1. I don't get any errors, but you can see the output as follows.
1 䠀攀氀氀漀 圀漀爀氀搀 ഀ
2 ≁否吠潦晥牳摶慮捥搠睩牥汥獳潲浳慰慢楬楴礠睩瑨⁃潭整⁅娠䍯浥琬⁔牡捫敲湤⁃潭整⁍潢楬攠坯牫敲ਊ䍯浥琠䕚㨠周攠浯獴潢畳琬潳琠敦晥捴楶攠睥戠扡獥搠䵒䴠慰灬楣慴楯渠楮⁴桥湤畳瑲礮⁃慰慢楬楴楥猠楮捬畤攠䝐匠汯捡瑩潮⁴牡捫楮本⁷楲敬敳猠瑩浥汯捫Ⱐ来漭晥湣楮朠睩瑨汥牴猬灥敤湤瑯瀠瑩浥汥牴猬湤渭摥浡湤爠獣桥摵汥搠牥灯牴楮朮ਊ䍯浥琠呲慣步爺⁁⁰潷敲晵氠捬楥湴ⵢ慳敤⁰污瑦潲洠瑨慴晦敲猠慬氠瑨攠晥慴畲敳映䍯浥琠䕚⁰汵猺⁁摶慮捥搠摡瑡潬汥捴楯渠捡灡扩汩瑩敳㨠扡牣潤攠獣慮湩湧Ⱐ灡湩挠慬敲琬⁷潲欠潲摥爠浡湡来浥湴Ⱐ睩牥汥獳潲浳湤異敲癩獯爠瑩浥湴特⸊潭整⁍潢楬攠坯牫敲㨠周攠浯獴潢畳琠灡捫慧攮⁐牯癩摥猠扵獩湥獳敳⁷楴栠愠捯浰汥瑥汹⁷楲敬敳猠潰敲慴楯湡氠浡湡来浥湴祳瑥洮⁉湣汵摥猠慬氠潦⁃潭整⁔牡捫敲❳敡瑵牥猠灬畳㨠䍡汥湤慲猬畴潭慴敤畳瑯浥爠捯浭畮楣慴楯湳Ⱐ睯牫牤敲⽩湶潩捥⁵灤慴楮朠晲潭⁴桥楥汤Ⱐ睯牫牤敲敱略湣楮本硣敳獩癥瑯瀠瑩浥汥牴猬⁷楲敬敳猠景牭猬⁴畲渭批畲渠癯楣攠湡癩条瑩潮Ⱐ慮搠浯牥⸊ੁ摶慮捥搠坩牥汥獳⁆潲浳㨠呵牮湹⁰慰敲潲洠楮瑯⁷楲敬敳猠捬潮攠潦⁴桥慭攠楮景牭慴楯渠ⴠ湯慴瑥爠桯眠捯浰汩捡瑥搮⁓慶攠瑩浥礠瑲慮獦敲物湧湦潲浡瑩潮慣欠瑯⁴桥晦楣攠睩瑨⁷楲敬敳猠獰敥搮⁓慶攠灡灥爠慮搠敬業楮慴攠摵慬•ഊ
3 ≁䥒呉䵅⁍慮慧敲牯洠䅔♔⁰牯癩摥猠愠浯扩汥灰汩捡瑩潮猠摥獩杮敤⁴漠瑲慣欠扩汬慢汥潵牳⸠⁔桥⁁㑐潬畴楯湳畴潭慴楣慬汹潧⁷楲敬敳猠敭慩氬慬汳Ⱐ慮搠扩汬慢汥癥湴猬獳潣楡瑥猠瑨敭⁷楴栠捬楥湴爠灲潪散琠捯摥猠慮搠摩牥捴猠扩汬慢汥散潲摳⁴漠扩汬楮朠獹獴敭献†周攠呩浥乯瑥潬畴楯湳⁰牯癩摥汩浭敤潷渠數灥物敮捥Ⱐ慬汯睩湧潲牥慴楯渠潦慮畡氠扩汬慢汥癥湴献†周敲攠慲攠瑷漠癥牳楯渠潦⁁㑐湤⁔業敎潴攮ਊ䭥礠䙥慴畲敳㨊⨠䥮捬畤攠捡灴畲攠慤潣楬污扬攠敶敮瑳ਪ⁃慰瑵牥潢楬攠灨潮攠捡汬湤浡楬†慳楬污扬攠敶敮瑳Ⱐਪ⁁扩汩瑹⁴漠慳獩杮楬污扬攠敶敮琠瑯汩敮琠慮搠灲潪散琊⨠䅢楬楴礠瑯敡牣栠慮搠獣牯汬⁴桲潵杨楬污扬攠敶敮瑳Ⱐ潰瑩潮⁴漠楮瑥杲慴攠睩瑨楬汩湧祳瑥浳 ⨠偯瑥湴楡氠扥湥晩瑳湣汵摥湣牥慳敤⁰牯摵捴楶楴礠慮搠牥摵捥搠慤浩湩獴牡瑩癥癥牨敡搠湤湣牥慳敤敶敮略略⁴漠浯牥捣畲慴攠捡灴畲楮朠潦楬污扬攠敶敮瑳•ഊ
4 ≁灲楶慐慹⁁乄⁁灲楶慐慹⁐牯晥獳楯湡氠晲潭⁁否吠瑵牮⁹潵爠浯扩汥敶楣攠楮瑯⁰潲瑡扬攠捲敤楴慲搠瑥牭楮慬⸠坩瑨潭灡瑩扬攠䅔♔浡牴灨潮攬⁁灲楶慐慹爠䅰物癡偡礠偲潦敳獩潮慬潦瑷慲攬湤敲捨慮琠慣捯畮琬⁹潵爠浯扩汥⁷潲武潲捥慮⁰牯捥獳牥摩琠潲敢楴慲搠灡祭敮瑳牯洠瑨攠晩敬搮ਊ䭥礠䙥慴畲敳㨠 ⨠卭慲瑰桯湥ⵢ慳敤潬畴楯渠⁴漠灲潣敳猠捲敤楴慲搠灡祭敮瑳 ⨠䙵汬ⵦ敡瑵牥搠灯楮琭潦慬攠獯汵瑩潮異灯牴楮朠慬氠浡橯爠瑲慮獡捴楯渠瑹灥ਠ⨠卵灰潲瑳牥摩琠慮搠摥扩琠瑲慮獡捴楯湳 ਊ∍
Edited by: Arunkumar Gunasekaran on Jan 3, 2013 11:38 AM
>
To make sure that the .txt files are accessible in the directory I executed the following script, Host Echo Hello World > C:\...path...\1.Txt
After which I found the contents of the file changed to "Hello World". Later I loaded the .txt file with "Hello World" and queried the table. Still I am getting some garbage value. However since the string "Hello World" is much smaller than the previous contents, the garbage size is also smaller for ID 1. I don't get any errors, but you can see the output as follows.
>
The most common problem I have seen using BFILEs is the character set; BFILEs do NOT handle character set conversion.
That is the main reason I don't recommend using BFILEs for loading data like this. Either SQL*Loader or external tables can do the job and they both handle character set conversions properly.
See the LOADFROMFILE Procedure of DBMS_LOB package in the PL/SQL Language doc
http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_lob.htm#i998778
>
Note:
The input BFILE must have been opened prior to using this procedure. No character set conversions are performed implicitly when binary BFILE data is loaded into a CLOB. The BFILE data must already be in the same character set as the CLOB in the database. No error checking is performed to verify this.
Note:
If the character set is varying width, UTF-8 for example, the LOB value is stored in the fixed-width UCS2 format. Therefore, if you are using DBMS_LOB.LOADFROMFILE, the data in the BFILE should be in the UCS2 character set instead of the UTF-8 character set. However, you should use sql*loader instead of LOADFROMFILE to load data into a CLOB or NCLOB because sql*loader will provide the necessary character set conversions.
>
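If BFILE loading is still preferred, DBMS_LOB.LOADCLOBFROMFILE (unlike LOADFROMFILE) does perform character-set conversion. A hedged sketch reusing the poster's directory and table; the source file's character set is an assumption:

```sql
DECLARE
  l_clob  CLOB;
  l_bfile BFILE   := BFILENAME('MY_FILES', '1.txt');
  l_dest  INTEGER := 1;
  l_src   INTEGER := 1;
  l_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn  INTEGER;
BEGIN
  INSERT INTO demo VALUES (1, EMPTY_CLOB())
  RETURNING theclob INTO l_clob;

  DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(
    dest_lob     => l_clob,
    src_bfile    => l_bfile,
    amount       => DBMS_LOB.LOBMAXSIZE,
    dest_offset  => l_dest,
    src_offset   => l_src,
    bfile_csid   => DBMS_LOB.DEFAULT_CSID,  -- assumes the file is in the DB
                                            -- charset; otherwise pass e.g.
                                            -- NLS_CHARSET_ID('WE8MSWIN1252')
    lang_context => l_lang,
    warning      => l_warn);
  DBMS_LOB.FILECLOSE(l_bfile);
  COMMIT;
END;
/
```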
I suggest you use an external table definition to do this load. You can do an ALTER to change the file name for each load.
See External Tables Concepts in the Utilities doc for the basics
http://docs.oracle.com/cd/B28359_01/server.111/b28319/et_concepts.htm
See Altering External Tables in the DBA doc for detailed information
http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables013.htm
>
DEFAULT DIRECTORY
Changes the default directory specification
ALTER TABLE admin_ext_employees
DEFAULT DIRECTORY admin_dat2_dir;
LOCATION
Allows data sources to be changed without dropping and re-creating the external table metadata
ALTER TABLE admin_ext_employees
LOCATION ('empxt3.txt',
'empxt4.txt');
>
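A hedged sketch of the external-table approach for this case, reusing the poster's MY_FILES directory; the field delimiter and character set are assumptions:

```sql
CREATE TABLE demo_ext (
  id      NUMBER,
  theclob CLOB
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY my_files
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET WE8MSWIN1252  -- declare the file's charset; conversion is handled
    FIELDS TERMINATED BY '|'
    (id CHAR(10), theclob CHAR(1000000))
  )
  LOCATION ('1.txt')
);

-- Then load with character-set conversion done for you:
-- INSERT INTO demo SELECT id, theclob FROM demo_ext;
```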
You can also load in parallel if you have licensed that option. -
Grouping clause failing for CLOB Datatype
Hi,
I am facing an error while selecting with the following expression:
ORA-00932: inconsistent datatypes: expected - got CLOB
OBJECT_CONTENT is a CLOB Datatype
SELECT
MAX(A.VERSION_ID),
OBJECT_CONTENT
FROM
VERSION_HISTORY A,
OBJECT_MASTER B
WHERE
A.BUOBJECT_ID = B.BUOBJECT_ID
GROUP BY OBJECT_CONTENT
Any help would be appreciated.
So, what's your question? You can't use CLOB in the GROUP BY ....
quote from doc:
Restrictions on the GROUP BY Clause:
This clause is subject to the following restrictions:
You cannot specify LOB columns, nested tables, or varrays as part of expr.
The expressions can be of any form except scalar subquery expressions.
If the group_by_clause references any object type columns, then the query will not be parallelized.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_10002.htm#SQLRF01702
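A common workaround is to aggregate without the LOB in an inline view and join back to fetch the CLOB for the winning row. A sketch against the poster's tables (assuming OBJECT_CONTENT lives in VERSION_HISTORY):

```sql
SELECT a.version_id, a.object_content
FROM   version_history a
JOIN  (SELECT buobject_id, MAX(version_id) AS max_ver
       FROM   version_history
       GROUP  BY buobject_id) m
  ON   a.buobject_id = m.buobject_id
 AND   a.version_id  = m.max_ver;
```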
Edited by: Alex Nuijten on Aug 11, 2009 9:02 AM -
Clob DataType, NULL and ADO
Hello,
First, I'm French so my English isn't very good.
I have a problem with the Oracle CLOB datatype. When I try to put a NULL value into a
CLOB datatype with ADO, the change isn't made.
rs.open "SELECT ....", adocn, adOpenKeyset, adLockOptimistic
rs.Fields("ClobField") = Null ' In this case, the Update doesn't work
rs.update
rs.Close
This code works if I write a value different from Null, but not when the value is Null. Instead of Null I get the old value (from before the change); the update of the field doesn't work.
I experience the same, did you find a solution to your problem?
Kind regards,
Roel de Bruyn -
Error in CLOB datatype for transferring data from Excel to Oracle database
I am using an Excel sheet as source and Oracle as target database.
For all tables the operations work smoothly, except for tables containing the CLOB datatype.
Initially I used SQL Developer to transfer some data into Excel sheets; now I want to transfer those files' data into another Oracle database.
What other options do I have?
Is Excel not the right tool to transfer CLOB datatypes?
Well,
I wouldn't suggest Excel for it...
You can go from Oracle to Oracle with the PL/SQL IKM. It works fine.
Does it help you? -
Setting of CLOB Datatype storage space
Hello All!
I'm unable to insert more than 4000 characters into a CLOB datatype field.
How do I increase the storage size of the CLOB field?
I'm working in VB 6.0 and Oracle 9i.
Oracle will allocate CLOB segments using default storage options linked to the column, table and tablespace.
Example with Oracle 11.2 XE:
SQL> select * from v$version;
BANNER
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Beta
PL/SQL Release 11.2.0.2.0 - Beta
CORE 11.2.0.2.0 Production
TNS for 32-bit Windows: Version 11.2.0.2.0 - Beta
NLSRTL Version 11.2.0.2.0 - Production
SQL> create user test identified by test;
User created.
SQL> grant create session, create table to test;
Grant succeeded.
SQL> alter user test quota unlimited on users;
User altered.
SQL> alter user test default tablespace users;
User altered.
SQL> connect test/test;
Connected.
SQL> create table tl(x clob);
Table created.
SQL> column segment_name format a30
SQL> select segment_name, bytes/(1024*1024) as mb
2 from user_segments;
SEGMENT_NAME MB
TL ,0625
SYS_IL0000020403C00001$$ ,0625
SYS_LOB0000020403C00001$$ ,0625
SQL> insert into tl values('01234456789');
1 row created.
SQL> commit;
Commit complete.
SQL> select segment_name, bytes/(1024*1024) as mb
2 from user_segments;
SEGMENT_NAME MB
TL ,0625
SYS_IL0000020403C00001$$ ,0625
SYS_LOB0000020403C00001$$ ,0625
SQL>
Same example run with Oracle XE 10.2: Re: CLOB Datatype [About Space allocation]
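As for the original 4000-character problem: the limit applies to SQL string literals, not to the CLOB column itself. Binding through a PL/SQL variable (or an ADO parameter) avoids it. A sketch against the TL table from the example above:

```sql
DECLARE
  v_txt VARCHAR2(32767) := RPAD('x', 30000, 'x');
BEGIN
  -- implicit VARCHAR2 -> CLOB conversion on insert
  INSERT INTO tl (x) VALUES (v_txt);
  COMMIT;
END;
/
```

For values beyond 32767 characters, DBMS_LOB.WRITEAPPEND against the selected LOB locator is the usual route.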
Edited by: P. Forstmann on 24 juin 2011 09:24 -
Problem on CLOB datatype after import
I got a problem and called Oracle Support; they used DUL to extract the data from the datafile to a dump file. I imported it with no errors, but when I check the CLOB datatype columns, there is a space (blank character) separating each character, see below:
Original
Oracle
After Import
O R A C L E
So the application cannot execute those data.
Anyone have solution how to fix this problem?
Thanks,
Taohiko
If you use a direct insert you are restricted to 4000 characters.
You can put your value in a varchar2 variable and that allows you to insert up to 32767 characters.
declare
my_clob_variable varchar2(32767) := rpad('x', 25000, 'X');
begin
insert into my_table (my_clob_column)
values (my_clob_variable);
end;
/
Importing and Exporting Data with a Clob datatype with HTML DB
I would like to know what to do and what to be aware of when importing and exporting data with a CLOB datatype in HTML DB.
Colin - what kind of import/export operation would that be, which pages are you referring to?
Scott -
Load Clob datatype into xml db
Hi All,
Please can I know how to load the CLOB datatype into the XML database.
In my Oracle Data Integrator mapping, my source is a CLOB column and the target is XML DB.
I get the error "incompatible data type in conversion".
Please can I get some help in resolving the issue.
Thanks.
Also tried
http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/t_xml.htm
(the getStringVal function),
but it cannot handle more than 4000 characters.
Please can I know how to map clob source to xml db in ODI -
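One hedged alternative to getStringVal: getClobVal returns the XMLType content as a CLOB and is not capped at 4000 characters (table and column names here are illustrative):

```sql
SELECT x.xml_col.getClobVal() AS xml_text
FROM   my_xml_table x;
```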
Hello,
Does anyone have any examples of how a CLOB datatype can be returned from a stored procedure and used in java?
Currently we are building up strings and returning a VARCHAR2 datatype, but are running into the 32k size restriction as some of the strings are too long.
It seems that converting the strings to CLOBs and then returning is the best solution. Has anyone had a similar problem and solved it?
Regards,
Eoin
Create a stored procedure like this:
create or replace procedure getclob(var out clob) as
str varchar2(20);
templob CLOB;
begin
str := 'mystring';
DBMS_LOB.CREATETEMPORARY(templob, TRUE, dbms_lob.session);
dbms_lob.write(templob, length(str), 1, str);
var := templob;
-- note: do not call DBMS_LOB.FREETEMPORARY here; freeing the temporary
-- LOB would invalidate the OUT locator. The caller frees it when done.
end;
Java program to call the above stored procedure:
import java.sql.*;
import oracle.jdbc.driver.*;

class SPLobInJava {
    public static void main(String[] args) throws Exception {
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@insn104:1521:ora9idb", "scott", "tiger");
        CallableStatement cs = conn.prepareCall("{call getclob(?)}");
        cs.registerOutParameter(1, java.sql.Types.CLOB);
        cs.execute();
        Clob clob = cs.getClob(1);
        // do whatever you want with the clob
        System.out.println(clob.getSubString(1, 5));
        cs.close();
        conn.close();
    }
} -
Hi,
I searched and found how to insert into a CLOB datatype:
SQL> create table employee(ename char(11),id number,info clob);
Table created.
SQL> insert into employee values('anil',100,empty_clob());
1 row created.
SQL> update employee set info='jkkkkkkkkkksdflfweuikddddddddddddddddddddddddddddaaaaaaaaaaaaaaaaaaaa';
1 row updated.
SQL> select * from employee;
ENAME ID
INFO
anil 100
jkkkkkkkkkksdflfweuikdddddddddddddaaaaaaaaaaaaaaaaaaaaaaaaaaa
Now if I want to insert another row into this table having the CLOB datatype, how do I insert into it?
What's the problem exactly?
SQL> create table employee (ename char(11), id number, info clob);
Table created.
SQL> insert into employee values ('anil',100,'here is some info');
1 row created.
SQL> insert into employee values ('fred',200,'here is freds info');
1 row created.
SQL> select * from employee;
ENAME ID INFO
anil 100 here is some info
fred 200 here is freds info
SQL> -
Create clob datatype in a column
Hi,
I am working in Oracle 9i and Solaris 5.8.
I want to create a table with a column containing the CLOB datatype.
Please explain how to create a CLOB datatype column, how to assign it, and how to insert data into it.
Regards.
Hey,
Read the link below. It will be useful for inserting into a CLOB datatype column:
Re: CLOB
Regards,
Moorthy.GS -
Passing CLOB datatype to a stored procedure
Hi,
How do I pass a CLOB value to a stored procedure?
I am creating a stored procedure which appends a value to a CLOB datatype. The procedure has 2 IN parameters (one CLOB and one CLOB). The procedure compiles, but I'm having a problem executing it. Below is a simplified version of the procedure and the error given when it is executed.
SQL> CREATE OR REPLACE PROCEDURE prUpdateContent (
2 p_contentId IN NUMBER,
3 p_body IN CLOB)
4 IS
5 v_id NUMBER;
6 v_orig CLOB;
7 v_add CLOB;
8
9 BEGIN
10 v_id := p_contentId;
11 v_add := p_body;
12
13 SELECT body INTO v_orig FROM test WHERE id=v_id FOR UPDATE;
14
15 DBMS_LOB.APPEND(v_orig, v_add);
16 commit;
17 END;
18 /
Procedure created.
SQL> exec prUpdateContent (1, 'testing');
BEGIN prUpdateContent (1, 'testing'); END;
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'PRUPDATECONTENT'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
Any help or hints please.
Sorry, I made a mistake with the IN parameter types - it's one NUMBER and one CLOB.
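With the corrected signature (one NUMBER IN, one CLOB IN), a plain VARCHAR2 literal should bind to the CLOB parameter through implicit conversion, so the original call works as written:

```sql
BEGIN
  prUpdateContent(1, 'testing');
END;
/
```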