Select with CLOB datatype
Hi all,
I need to create a view that retrieves data from a table named "texto_email". That table has a column named texto of datatype CLOB, which holds HTML markup and formatting tags.
When I create the view, PL/SQL reports the error "view created with compilation errors".
When I comment out ("--") the line that references the CLOB column, the view is created successfully.
What am I doing wrong? Is there something different I need to do in this case?
Here is my view; any help is appreciated:
CREATE OR REPLACE FORCE VIEW vw_texto_email
AS
SELECT DISTINCT
te.id_texto_email
,te.id_texto_email_tipo as id_tet
,te.data_cadastro
,te.data_inicio
,te.descricao
,te.assunto
,te.texto AS texto -- this is the CLOB column
,tet.descricao as tipo_email
FROM
texto_email te
,texto_email_tipo tet
WHERE
te.id_texto_email_tipo = tet.id_texto_email_tipo
Thanks!
To work with LOBs you need to understand the concept of a LOB locator (a short tour through the documentation is worthwhile).
A LOB locator is a pointer to large-object data in a database.
LOB columns store LOB locators, and those locators point to the actual data held in a LOB segment.
To work with LOB data, you have to:
1) Retrieve the LOB locator
2) Use the built-in DBMS_LOB package to manipulate the LOB:
2.1 open the LOB
2.2 ...
2.3 read/write the LOB
2.4 close the LOB
Why do you want to retrieve a LOB locator in a view?
Note that the locator you retrieve can be in one of three states:
1 null
2 empty
3 populated with a valid pointer to data
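As an aside, the compilation error here is most likely caused by DISTINCT, which Oracle does not allow on CLOB columns (ORA-00932: inconsistent datatypes). A hedged sketch of two possible rewrites, assuming either that the join already returns unique rows or that comparing only the first 4000 characters is acceptable:

```sql
-- Option 1: drop DISTINCT entirely (the CLOB column itself prevents it)
CREATE OR REPLACE FORCE VIEW vw_texto_email AS
SELECT te.id_texto_email,
       te.id_texto_email_tipo AS id_tet,
       te.data_cadastro,
       te.data_inicio,
       te.descricao,
       te.assunto,
       te.texto,                 -- CLOB column, selected as-is
       tet.descricao AS tipo_email
FROM   texto_email te,
       texto_email_tipo tet
WHERE  te.id_texto_email_tipo = tet.id_texto_email_tipo;

-- Option 2: if de-duplication is really needed, compare a VARCHAR2
-- prefix of the CLOB (lossy: only the first 4000 characters count)
SELECT DISTINCT DBMS_LOB.SUBSTR(te.texto, 4000, 1) AS texto_prefix
FROM   texto_email te;
```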
Similar Messages
-
A clob datatype and LogMiner question?
Hi,
I am using LogMiner to capture all DMLs against rows with a CLOB datatype, and I have found a problem.
--log in as scott/tiger
conn scott/tiger
SQL> desc clobtest
Name Null? Type
SNO NUMBER
CLOBTYPE CLOB
--make an update
update clobtest set CLOBTYPE = 'Hello New York' where sno = 11;
commit;
After using LogMiner to analyze the redo log files, I query:
select sql_redo from v$logmnr_contents where username = 'SCOTT';
update "SCOTT"."CLOBTEST" set "CLOBTYPE" = 'Hello New York' where and ROWID = 'AAD0ZqAAEAAAAhsAAC';
My question:
As to the captured DML
update "SCOTT"."CLOBTEST" set "CLOBTYPE" = 'Hello New York' where and ROWID = 'AAD0ZqAAEAAAAhsAAC';
it shows "where and" - why is part of the WHERE clause missing? (Anyway, I can work around this with REGEXP_REPLACE(sql_redo, 'where and', 'where ').)
Thanks
Roy
Edited by: ROY123 on Mar 16, 2010 10:25 AM
I checked the LogMiner documentation:
http://74.125.93.132/search?q=cache:19bBhYX3Xs4J:download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm+NOTE:LogMiner+does+not+support+these+datatypes+and+table+storage+attributes:&cd=1&hl=en&ct=clnk&gl=us
It says 10gR2 supports the LOB datatype,
but why does the WHERE clause omit the CLOB column (becoming "where and rowid")?
Edited by: ROY123 on Mar 16, 2010 2:12 PM -
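The REGEXP_REPLACE workaround mentioned in the question can be applied directly when querying V$LOGMNR_CONTENTS; a minimal sketch (a plain REPLACE would also do, since the pattern is a fixed string):

```sql
SELECT REGEXP_REPLACE(sql_redo, 'where and', 'where ') AS sql_redo_fixed
FROM   v$logmnr_contents
WHERE  username = 'SCOTT';
```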
Dear All,
I'm facing a problem while importing a table (specifically a column with a CLOB datatype) into an existing tablespace, as explained below. Kindly let me know the solution; you can mail me at [email protected]
I am importing a CLOB datatype from one tablespace into a different tablespace, without creating the source tablespace at the destination.
I now need to import the table and its data without creating the tablespace XYZ_DATA mentioned below.
TABLESPACE "XYZ_DATA" CLOB ("CLOB_SYNTAX") STORE AS (TA"
"BLESPACE "XYZ_DATA" ....
IMP-00017: following statement failed with ORACLE error 959:
"CREATE TABLE "R_DWSYN" ("R_IDSCR" NUMBER(9, 0) NOT NULL ENABLE, "N_DW" NUMB"
"ER(1, 0) NOT NULL ENABLE, "D_UPDATE" DATE, "N_X" NUMBER(4, 0), "N_Y" NUMBER"
"(4, 0), "N_WIDTH" NUMBER(4, 0), "CLOB_SYNTAX" CLOB) PCTFREE 10 PCTUSED 40 "
"INITRANS 1 MAXTRANS 255 LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXT"
"ENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 20 FREELISTS 1 FREELIST GROUPS 1 B"
"UFFER_POOL DEFAULT) TABLESPACE "XYZ_DATA" LOB ("CLOB_SYNTAX") STORE AS (TA"
"BLESPACE "XYZ_DATA" ENABLE STORAGE IN ROW CHUNK 2048 PCTVERSION 10 NOCACHE "
" STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PC"
"TINCREASE 20 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))"
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'XYZ_DATA' does not exist
rgds
prashanth

I have not used the DESTROY option myself, but from what I can see in imp help=y, I assume this option goes with the TRANSPORT_TABLESPACE option, where you export tablespaces (with their datafiles) and then import them into another instance. This option might allow you to overwrite an existing datafile with the same name.
DESTROY overwrite tablespace data file (N)
The below link gives more information:
http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76955/ch02.htm#17077
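As a side note on the actual ORA-00959: a common workaround (an assumption on my part, not stated in this thread) is to pre-create the table in the destination database so that both the table and its LOB segment land in an existing tablespace, then run imp with IGNORE=Y so it skips its own failing CREATE TABLE and just loads the rows. A sketch, where existing_ts stands in for a tablespace that actually exists at the destination:

```sql
-- Pre-create the table, redirecting both table and LOB storage
CREATE TABLE r_dwsyn (
  r_idscr     NUMBER(9)  NOT NULL,
  n_dw        NUMBER(1)  NOT NULL,
  d_update    DATE,
  n_x         NUMBER(4),
  n_y         NUMBER(4),
  n_width     NUMBER(4),
  clob_syntax CLOB
)
TABLESPACE existing_ts
LOB (clob_syntax) STORE AS (TABLESPACE existing_ts);
```

Then import with something like `imp user/pass file=dump.dmp tables=R_DWSYN ignore=y` so the rows are inserted into the pre-created table.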
Rgds,
Sunil -
How to copy a table with LONG and CLOB datatype over a dblink?
Hi All,
I need to copy a table from an external database into a local one. Note that this table has both LONG and CLOB datatypes included.
I have taken 2 approaches to do this:
1. Use the CREATE TABLE AS....
SQL> create table XXXX_TEST as select * from XXXX_INDV_DOCS@ext_db;
create table XXXX_TEST as select * from XXXX_INDV_DOCS@ext_db
ERROR at line 1:
ORA-00997: illegal use of LONG datatype
2. After reading some threads I tried to use the COPY command:
SQL> COPY FROM xxxx/pass@ext_db TO xxxx/pass@target_db REPLACE XXXX_INDV_DOCS USING SELECT * FROM XXXX_INDV_DOCS;
Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
CPY-0012: Datatype cannot be copied
If my understanding is correct the 1st statement fails because there is a LONG datatype in XXXX_INDV_DOCS table and 2nd one fails because there is a CLOB datatype.
Is there a way to copy the entire table (all columns including both LONG and CLOB) over a dblink?
Would greatly appreciate any workaround or ideas!
Regards,
Pawel.

Hi Nicolas,
There is a reason I am not using export/import:
- I would like to have a one-script solution for this problem (meaning execute one script on one machine)
- I am not able to make an SSH connection from the target DB to the local one (although the otherway it works fine) which means I cannot copy the dump file from target server to local one.
- with export/import I need to have an SSH connection on the target DB in order to issue the exp command...
Therefore, I am looking for a solution (or a workaround) which will work over a DBLINK.
Regards,
Pawel. -
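For completeness, one approach sometimes cited for moving such a table over a database link (a sketch under assumptions: Oracle 10g or later, illustrative column names id/long_col/clob_col, and version-dependent behaviour of COPY and remote LOBs, so test before relying on it) is to split the copy in two steps, since COPY handles LONG but not CLOB, while INSERT ... SELECT can move a CLOB across a link even though a plain SELECT of a remote LOB raises ORA-22992:

```sql
-- Step 1 (SQL*Plus): copy all columns except the CLOB;
-- COPY supports LONG once the LONG buffer is large enough.
SET LONG 2000000000
COPY FROM xxxx/pass@ext_db -
  CREATE XXXX_TEST USING -
  SELECT id, long_col FROM XXXX_INDV_DOCS;

-- Step 2: bring the CLOB column over the dblink with a DML statement;
-- unlike a client-side SELECT, INSERT/UPDATE into a local table may
-- transport the remote CLOB values.
ALTER TABLE XXXX_TEST ADD (clob_col CLOB);
UPDATE XXXX_TEST t
SET    t.clob_col = (SELECT s.clob_col
                     FROM   XXXX_INDV_DOCS@ext_db s
                     WHERE  s.id = t.id);
COMMIT;
```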
Creation of view with clob column in select and group by clause.
Hi,
We are trying to migrate a view from SQL Server 2005 to Oracle 10g. It has a CLOB column which is used in the GROUP BY clause. How can the same be achieved in Oracle 10g?
Below is the SQL statement used in creating the view, along with its datatypes.
CREATE OR REPLACE FORCE VIEW "TEST" ("CONTENT_ID", "TITLE", "KEYWORDS", "CONTENT", "ISPOPUP", "CREATED", "SEARCHSTARTDATE", "SEARCHENDDATE", "HITS", "TYPE", "CREATEDBY", "UPDATED", "ISDISPLAYED", "UPDATEDBY", "AVERAGERATING", "VOTES") AS
SELECT content_ec.content_id,
content_ec.title,
content_ec.keywords,
content_ec.content content ,
content_ec.ispopup,
content_ec.created,
content_ec.searchstartdate,
content_ec.searchenddate,
COUNT(contenttracker_ec.contenttracker_id) hits,
contenttypes_ec.type,
users_ec_1.username createdby,
Backup_Latest.created updated,
Backup_Latest.isdisplayed,
users_ec_1.username updatedby,
guideratings.averagerating,
guideratings.votes
FROM users_ec users_ec_1
JOIN Backup_Latest
ON users_ec_1.USER_ID = Backup_Latest.USER_ID
RIGHT JOIN content_ec
JOIN contenttypes_ec
ON content_ec.contenttype_id = contenttypes_ec.contenttype_id
ON Backup_Latest.content_id = content_ec.content_id
LEFT JOIN guideratings
ON content_ec.content_id = guideratings.content_id
LEFT JOIN contenttracker_ec
ON content_ec.content_id = contenttracker_ec.content_id
LEFT JOIN users_ec users_ec_2
ON content_ec.user_id = users_ec_2.USER_ID
GROUP BY content_ec.content_id,
content_ec.title,
content_ec.keywords,
to_char(content_ec.content) ,
content_ec.ispopup,
content_ec.created,
content_ec.searchstartdate,
content_ec.searchenddate,
contenttypes_ec.type,
users_ec_1.username,
Backup_Latest.created,
Backup_Latest.isdisplayed,
users_ec_1.username,
guideratings.averagerating,
guideratings.votes;
Column Name Data Type
CONTENT_ID NUMBER(10,0)
TITLE VARCHAR2(50)
KEYWORDS VARCHAR2(100)
CONTENT CLOB
ISPOPUP NUMBER(1,0)
CREATED TIMESTAMP(6)
SEARCHSTARTDATE TIMESTAMP(6)
SEARCHENDDATE TIMESTAMP(6)
HITS NUMBER
TYPE VARCHAR2(50)
CREATEDBY VARCHAR2(20)
UPDATED TIMESTAMP(6)
ISDISPLAYED NUMBER(1,0)
UPDATEDBY VARCHAR2(20)
AVERAGERATING NUMBER
VOTES NUMBER

Any help really appreciated.
Thanks in advance
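Since Oracle raises an error when a raw CLOB appears in GROUP BY, the usual rewrites (a sketch I am adding, not from this thread; table aliases shortened) are either to group on a scalar projection of the CLOB, or to avoid GROUP BY entirely and compute the count in a subquery so the full CLOB survives:

```sql
-- Pattern 1: group on a VARCHAR2 projection of the CLOB and select
-- that same projection (lossy beyond 4000 characters)
SELECT DBMS_LOB.SUBSTR(c.content, 4000, 1) AS content,
       COUNT(t.contenttracker_id)          AS hits
FROM   content_ec c
LEFT JOIN contenttracker_ec t
ON     c.content_id = t.content_id
GROUP BY c.content_id, DBMS_LOB.SUBSTR(c.content, 4000, 1);

-- Pattern 2: keep the full CLOB by aggregating in a scalar subquery,
-- so the CLOB never has to appear in a GROUP BY at all
SELECT c.content_id,
       c.content,
       (SELECT COUNT(*)
        FROM   contenttracker_ec t
        WHERE  t.content_id = c.content_id) AS hits
FROM   content_ec c;
```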
Edited by: user512743 on Dec 10, 2008 10:46 PM

Hello,
Specifically, this should be asked in the
ASP.Net MVC forum on forums.asp.net.
Karl
Importing and Exporting Data with a Clob datatype with HTML DB
I would like to know what to do and what to be aware of when importing and exporting data with a CLOB datatype in HTML DB.
Colin - what kind of import/export operation would that be? Which pages are you referring to?
Scott -
Clob datatype with pipelined table function.
Hi,
I made two functions, one of which uses the VARCHAR2 datatype with PIPELINED and
another that uses the CLOB datatype with PIPELINED.
I pass parameters to both of them; the first (VARCHAR2, pipelined) works fine,
but the other does not.
I created a different object type for each of them:
a CLOB object type for the second,
and a VARCHAR2 type for the first.
My first function is:
TYPE "CSVOBJECTFORMAT" AS OBJECT ( "S" VARCHAR2(500));
TYPE "CSVTABLETYPE" AS TABLE OF CSVOBJECTFORMAT;
CREATE OR REPLACE FUNCTION "FN_PARSECSVSTRING" (p_list
VARCHAR2, p_delim VARCHAR2:=' ') RETURN CsvTableType PIPELINED
IS
l_idx PLS_INTEGER;
l_list VARCHAR2(32767) := p_list;
l_value VARCHAR2(32767);
BEGIN
LOOP
l_idx := INSTR(l_list, p_delim);
IF l_idx > 0 THEN
PIPE ROW(CsvObjectFormat(SUBSTR(l_list, 1, l_idx-1)));
l_list := SUBSTR(l_list, l_idx+LENGTH(p_delim));
ELSE
PIPE ROW(CsvObjectFormat(l_list));
EXIT;
END IF;
END LOOP;
RETURN;
END fn_ParseCSVString;
The output of this function is as follows,
which is correct:
SQL> SELECT s FROM TABLE( CAST( fn_ParseCSVString('+588675,1~#588675^1^99^~2~16~115~99~SP5601~~~~~0~~', '~') as CsvTableType)) ;
S
+588675,1
#588675^1^99^
2
16
115
99
SP5601
S
0
14 rows selected.
SQL>
My second function is:
TYPE "CSVOBJECTFORMAT1" AS OBJECT ( "S" clob);
TYPE "CSVTABLETYPE1" AS TABLE OF CSVOBJECTFORMAT1;
CREATE OR REPLACE FUNCTION "FN_PARSECSVSTRING1" (p_list
clob, p_delim VARCHAR2:=' ') RETURN CsvTableType1 PIPELINED
IS
l_idx PLS_INTEGER;
l_list clob := p_list;
l_value VARCHAR2(32767);
BEGIN
dbms_output.put_line('hello');
LOOP
l_idx := INSTR(l_list, p_delim);
IF l_idx > 0 THEN
PIPE ROW(CsvObjectFormat1(substr(l_list, 1, l_idx-1)));
l_list := dbms_lob.substr(l_list, l_idx+LENGTH(p_delim));
ELSE
PIPE ROW(CsvObjectFormat1(l_list));
exit;
END IF;
END LOOP;
RETURN;
END fn_ParseCSVString1;
SQL> SELECT s FROM TABLE( CAST( fn_ParseCSVString1('+588675,1~#588675^1^99^~2~16~115~99~SP5601~~~~~0~~', '~') as CsvTableType1)) ;
S
+588675,1
+588675,1
+588675,1
+588675,1
+588675,1
+588675,1
+588675,1
+588675,1
+588675,1
+588675,1
+588675,1
and it goes on until I use Ctrl+C to break it.
Actually I want to make a function which can accept large values, so I am trying to change the first function. Thanks.

RTFM DBMS_LOB.SUBSTR. Unlike the built-in function SUBSTR, the second parameter of DBMS_LOB.SUBSTR is the length, not the position. Also, PL/SQL fully supports CLOBs, so there is no need to use DBMS_LOB:
SQL> CREATE OR REPLACE
2 FUNCTION FN_PARSECSVSTRING1(p_list clob,
3 p_delim VARCHAR2:=' '
4 )
5 RETURN CsvTableType1
6 PIPELINED
7 IS
8 l_pos PLS_INTEGER := 1;
9 l_idx PLS_INTEGER;
10 l_value clob;
11 BEGIN
12 LOOP
13 l_idx := INSTR(p_list, p_delim,l_pos);
14 IF l_idx > 0
15 THEN
16 PIPE ROW(CsvObjectFormat1(substr(p_list,l_pos,l_idx-l_pos)));
17 l_pos := l_idx+LENGTH(p_delim);
18 ELSE
19 PIPE ROW(CsvObjectFormat1(substr(p_list,l_pos)));
20 RETURN;
21 END IF;
22 END LOOP;
23 RETURN;
24 END fn_ParseCSVString1;
25 /
Function created.
SQL> SELECT rownum,s FROM TABLE( CAST( fn_ParseCSVString1('+588675,1~#588675^1^99^~2~16~115~99~SP5
601~~~~~0~~', '~') as CsvTableType1)) ;
ROWNUM S
1 +588675,1
2 #588675^1^99^
3 2
4 16
5 115
6 99
7 SP5601
8
9
10
11
ROWNUM S
12 0
13
14
14 rows selected.
SQL>
SY.
CLOB Datatype with JDBC Adapter
Hi,
we are trying to write a CLOB datatype to a JDBC database.
We tried two ways with the JDBC adapter:
action="SQL_DML" with an SQL statement and $placeholder$ variables.
But how can I tell the key element that it is a CLOB type?
It is treated as VARCHAR, where no more than 4k characters are allowed.
The second way is action="EXECUTE" to call a stored procedure, but there we get the error that the CLOB type is an unsupported feature.
Any idea?
Regards,
Robin
Message was edited by: Robin Schroeder

OK, I will check this...
But am I right in saying that the only way to fill a CLOB type is to use a stored procedure?
Or is there any possibility to do this with action="SQL_DML"?
Regards,
Robin -
How to select all the colomns_names from a table, with their datatypes ..
Hi :)
I would like to know how to select, in SQL, all the column names from a table together with their datatypes, so that I get something like this:
Table 1: table_name
the column ID has the datatype NUMBER
the column NAME has the datatype VARCHAR2
Table 2: table_name
the column CHECK has the datatype NUMBER
the column AIR has the datatype VARCHAR2
and that has to be for all the tables that I own.
P.S.: I am trying to do this with Java, so it would be enough if you just tell me how to select all the table names with all their column names and all their datatypes.
Thank you :)
I've heard it can be done with USER_TABLES, but I have no idea how.
Edited by: user8865125 on 17.05.2011 12:22

Hi,
The data dictionary view USER_TAB_COLUMNS has one row for every column in every table in your schema. The columns TABLE_NAME, COLUMN_NAME and DATA_TYPE have all the information you need.
Another data dictionary view, USER_TABLES, may be useful too. It has one row per table.
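A minimal query along the lines the reply describes, using only the three columns it names:

```sql
SELECT table_name,
       column_name,
       data_type
FROM   user_tab_columns
ORDER  BY table_name, column_id;
```

From Java this can simply be run through a Statement/ResultSet, or you can use the standard JDBC DatabaseMetaData.getColumns() call instead of querying the dictionary directly.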
Clob DataType, NULL and ADO
Hello,
First, I'm French, so my English isn't very good.
I have a problem with the Oracle CLOB datatype. When I try to write a NULL value to a
CLOB column with ADO, the change is not made.
rs.open "SELECT ....", adocn, adOpenKeyset, adLockOptimistic
rs.Fields("ClobField") = Null ' In this case, the Update doesn't work
rs.update
rs.Close
This code works if I write a value other than Null, but not when the value is Null. Instead of Null, I get the old value (from before the change); the update of the field doesn't work.

I experience the same; did you find a solution to your problem?
Kind regards,
Roel de Bruyn -
Setting of CLOB Datatype storage space
Hello All!
I am unable to insert more than 4000 characters into a CLOB datatype field.
How do I increase the storage size of the CLOB field?
I'm working in VB 6.0 and Oracle 9i.

Oracle will allocate CLOB segments using default storage options linked to the column, table, and tablespace.
Example with Oracle 11.2 XE:
SQL> select * from v$version;
BANNER
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Beta
PL/SQL Release 11.2.0.2.0 - Beta
CORE 11.2.0.2.0 Production
TNS for 32-bit Windows: Version 11.2.0.2.0 - Beta
NLSRTL Version 11.2.0.2.0 - Production
SQL> create user test identified by test;
User created.
SQL> grant create session, create table to test;
Grant succeeded.
SQL> alter user test quota unlimited on users;
User altered.
SQL> alter user test default tablespace users;
User altered.
SQL> connect test/test;
Connected.
SQL> create table tl(x clob);
Table created.
SQL> column segment_name format a30
SQL> select segment_name, bytes/(1024*1024) as mb
2 from user_segments;
SEGMENT_NAME MB
TL ,0625
SYS_IL0000020403C00001$$ ,0625
SYS_LOB0000020403C00001$$ ,0625
SQL> insert into tl values('01234456789');
1 row created.
SQL> commit;
Commit complete.
SQL> select segment_name, bytes/(1024*1024) as mb
2 from user_segments;
SEGMENT_NAME MB
TL ,0625
SYS_IL0000020403C00001$$ ,0625
SYS_LOB0000020403C00001$$ ,0625
SQL>
The same example run with Oracle XE 10.2: Re: CLOB Datatype [About Space allocation]
Edited by: P. Forstmann on 24 juin 2011 09:24 -
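Note that the 4000-character limit the poster hits is not a storage setting at all: it is the maximum size of a SQL character literal or VARCHAR2 bind in that context, so changing CLOB storage will not help. The usual fix (a sketch I am adding, assuming the client can submit a PL/SQL block; it reuses table tl(x CLOB) from the transcript above) is to build the value in a PL/SQL CLOB variable, where no 4000-character limit applies:

```sql
-- A PL/SQL CLOB variable can exceed 4000 characters freely;
-- appending in chunks sidesteps the SQL literal limit.
DECLARE
  v_doc CLOB;
BEGIN
  DBMS_LOB.CREATETEMPORARY(v_doc, TRUE);
  FOR i IN 1 .. 200 LOOP                      -- builds ~6000 characters
    DBMS_LOB.WRITEAPPEND(v_doc, 30, RPAD('x', 30, 'x'));
  END LOOP;
  INSERT INTO tl (x) VALUES (v_doc);          -- tl(x CLOB) as in the thread
  COMMIT;
  DBMS_LOB.FREETEMPORARY(v_doc);
END;
/
```

From VB 6.0 the equivalent is to bind the value as a CLOB parameter (or call a stored procedure with a CLOB argument) rather than concatenating it into the SQL text.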
LogMiner puzzle - CLOB datatype
Hello, everybody!
Sorry for the cross-post here and in the "Database\SQL and PL/SQL" forum, but the problem I am trying to dig into is somewhere between those two areas.
I need a bit of advice on whether the following behavior is wrong and requires an SR to be initiated, or whether I am just missing something.
Setting:
- Oracle 11.2.0.3 Enterprise Edition 64-bit on Win 2008.
- Database is running in ARCHIVELOG mode with supplemental logging enabled
- DB_SECUREFILE=PERMITTED (so, by default LOBs will be created as BasicFiles - but I didn't notice any behavior difference comparing to SecureFile implementation)
Test #1. Initial discovery of a problem
1. Setup:
- I created a table MISHA_TEST that contains a CLOB column:
create table misha_test (a number primary key, b_cl CLOB)
- I ran an anonymous block that inserts into this table WITHOUT referencing the CLOB column:
begin
insert into misha_test (a) values (1);
commit;
end;
2. I looked at the generated logs via LogMiner and found the following entries in V$LOGMNR_CONTENTS:
SQL_REDO
set transaction read write;
insert into "MISHA_TEST"("A","B_CL") values ('1',EMPTY_CLOB());
set transaction read write;
commit;
update "MISHA_TEST" set "B_CL" = NULL where "A" = '1' and ROWID = 'AAAj90AAKAACfqnAAA';
commit;

And here I am puzzled: why are there two operations for a single insert, first writing EMPTY_CLOB() into B_CL and then updating it to NULL? I didn't even touch column B_CL! It seems very strange: why can't we write NULL to B_CL from the very beginning, instead of first creating a pointer and then destroying it?
Key question:
- why should a NULL value in a CLOB column be handled differently from a NULL value in a VARCHAR2 column?
Test #2. Quantification
Question:
- having a LOB column in the table seems to cause the overhead of generating more redo. But can it be quantified?
Assumption:
- My understanding is that CLOBs defined with "storage in row enabled = true" (the default) behave like VARCHAR2(4000) up to a size of about 4k, and only when the size goes above 4k do the real LOB mechanisms kick in.
Basic test:
1. Two tables:
- With CLOB:
create table misha_test_clob2 (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl CLOB)
- With VARCHAR2:
create table misha_test_clob (a_nr number primary key, b_tx varchar2(4000), c_dt date, d_cl VARCHAR2(4000))
2. Switch logfile / insert 1000 rows populating only A_NR / switch logfile
insert into misha_test_clob (a_nr)
select level
from dual
connect by level < 1001
3. Check the sizes of the generated logs:
- With CLOB: 689,664 bytes
- With VARCHAR2: 509,440 bytes (or about a 26% reduction)
Summary:
- The overhead is real. A table with a VARCHAR2 column is cheaper to maintain, even if you are not using that column, so adding LOB columns to a table "just in case" is a really bad idea.
- Having LOB columns in a table that takes tons of INSERT operations is expensive.
Just to clarify the real business case: I have a table with a number of attributes, one of which has the CLOB datatype. The frequency of inserts into this table is pretty high; the frequency of use of the CLOB column is pretty low (NOT NULL ~0.1%). But because of that CLOB column I generate about 30% more log data than I need. Seems like a real waste! For now I have asked the development team to split the table into two, but that's still a band-aid.
So, does anybody care? Comments/suggestions are very welcome!
Thanks a lot!
Michael Rosenblum
Hello Everyone,
Before I get to my question, let me give you the context. I wanted to upload the descriptions of a set of products, with their IDs, into my database. Hence I created a table demo with two columns of INT and CLOB datatypes, using the following script:
create table demo ( id int primary key, theclob clob );
Then I created a directory using the following script:
create or replace directory MY_FILES as 'C:\path of the folder.......\';
In the directory mentioned above I created one .txt file per product, containing the product's description. Using the script below I created a procedure to load the contents of the .txt files into my demo table.
CREATE OR REPLACE
PROCEDURE load_a_file( p_id IN NUMBER, p_filename IN VARCHAR2 ) AS
  l_clob  CLOB;
  l_bfile BFILE;
BEGIN
  INSERT INTO demo VALUES ( p_id, EMPTY_CLOB() )
  RETURNING theclob INTO l_clob;
  l_bfile := BFILENAME( 'MY_FILES', p_filename );
  DBMS_LOB.FILEOPEN( l_bfile );
  DBMS_LOB.LOADFROMFILE( l_clob, l_bfile,
                         DBMS_LOB.GETLENGTH( l_bfile ) );
  DBMS_LOB.FILECLOSE( l_bfile );
END;
After which I called the procedure using: exec load_a_file(1, 'filename.txt');
When I queried the table with select * from demo; I got the following output, which is all fine:
ID THECLOB
1 "product x is an excellent way to improve your production process and enhance your turnaround time....."
QUESTION
When I did the exact same thing on my friend's machine and queried the demo table, I got garbage values in the theclob column (as shown below). The only difference is that mine is an Enterprise Edition of Oracle 11.2.0.1 and my friend's is an Express Edition of Oracle 11.2.0.2. Does this have anything to do with the problem?
1 猺⁁摶慮捥搠摡瑡潬汥捴楯渠捡灡扩汩瑩敳㨠扡牣潤攠獣慮湩湧Ⱐ灡湩挠慬敲琬⁷潲欠潲摥爠浡湡来浥湴Ⱐ睩牥汥獳潲浳湤異敲癩獯爠瑩浥湴特⸊潭整⁍潢楬攠坯牫敲㨠周攠浯獴潢畳琠灡捫慧攮⁐牯癩摥猠扵獩湥獳敳⁷楴栠愠捯浰汥瑥汹⁷楲敬敳猠潰敲慴楯湡氠浡湡来浥湴祳瑥洮⁉湣汵摥猠慬氠潦⁃潭整⁔牡捫敲❳敡瑵牥猠灬畳㨠䍡汥湤慲猬畴潭慴敤畳瑯浥爠捯浭畮楣慴楯湳Ⱐ睯牫牤敲⽩湶潩捥⁵灤慴楮朠晲潭⁴桥楥汤Ⱐ睯牫牤敲敱略湣楮本硣敳獩癥瑯瀠瑩浥汥牴猬⁷楲敬敳猠景牭猬⁴畲渭批畲渠癯楣攠湡癩条瑩潮Ⱐ慮搠浯牥⸊ੁ摶慮捥搠坩
2 ≁否吠潦晥牳摶慮捥搠睩牥汥獳潲浳慰慢楬楴礠睩瑨⁃潭整⁅娠䍯浥琬⁔牡捫敲湤⁃潭整⁍潢楬攠坯牫敲ਊ䍯浥琠䕚㨠周攠浯獴潢畳琬潳琠敦晥捴楶攠睥戠扡獥搠䵒䴠慰灬楣慴楯渠楮⁴桥湤畳瑲礮⁃慰慢楬楴楥猠楮捬畤攠䝐匠汯捡瑩潮⁴牡捫楮本⁷楲敬敳猠瑩浥汯捫Ⱐ来漭晥湣楮朠睩瑨汥牴猬灥敤湤瑯瀠瑩浥汥牴猬湤渭摥浡湤爠獣桥摵汥搠牥灯牴楮朮ਊ䍯浥琠呲慣步爺⁁⁰潷敲晵氠捬楥湴ⵢ慳敤⁰污瑦潲洠瑨慴晦敲猠慬氠瑨攠晥慴畲敳映䍯浥琠䕚⁰汵猺⁁摶慮捥搠摡瑡潬汥捴楯渠捡灡扩汩瑩敳㨠扡牣潤攠獣慮湩湧Ⱐ灡湩挠慬敲琬⁷潲欠潲摥爠浡湡来浥湴Ⱐ睩牥汥獳潲浳湤異敲癩獯爠瑩浥湴特⸊潭整⁍潢楬攠坯牫敲㨠周攠浯獴潢畳琠灡捫慧攮⁐牯癩摥猠扵獩湥獳敳⁷楴栠愠捯浰汥瑥汹⁷楲敬敳猠潰敲慴楯湡氠浡湡来浥湴祳瑥洮⁉湣汵摥猠慬氠潦⁃潭整⁔牡捫敲❳敡瑵牥猠灬畳㨠䍡汥湤慲猬畴潭慴敤畳瑯浥爠捯浭畮楣慴楯湳Ⱐ睯牫牤敲⽩湶潩捥⁵灤慴楮朠晲潭⁴桥楥汤Ⱐ睯牫牤敲敱略湣楮本硣敳獩癥瑯瀠瑩浥汥牴猬⁷楲敬敳猠景牭猬⁴畲渭批畲渠癯楣攠湡癩条瑩潮Ⱐ慮搠浯牥⸊ੁ摶慮捥搠坩牥汥獳⁆潲浳㨠呵牮湹⁰慰敲潲洠楮瑯⁷楲敬敳猠捬潮攠潦⁴桥慭攠楮景牭慴楯渠ⴠ湯慴瑥爠桯眠捯浰汩捡瑥搮⁓慶攠瑩浥礠瑲慮獦敲物湧湦潲浡瑩潮慣欠瑯⁴桥晦楣攠睩瑨⁷楲敬敳猠獰敥搮⁓慶攠灡灥爠慮搠敬業楮慴攠摵慬•ഊ
3 ≁䥒呉䵅⁍慮慧敲牯洠䅔♔⁰牯癩摥猠愠浯扩汥灰汩捡瑩潮猠摥獩杮敤⁴漠瑲慣欠扩汬慢汥潵牳⸠⁔桥⁁㑐潬畴楯湳畴潭慴楣慬汹潧⁷楲敬敳猠敭慩氬慬汳Ⱐ慮搠扩汬慢汥癥湴猬獳潣楡瑥猠瑨敭⁷楴栠捬楥湴爠灲潪散琠捯摥猠慮搠摩牥捴猠扩汬慢汥散潲摳⁴漠扩汬楮朠獹獴敭献†周攠呩浥乯瑥潬畴楯湳⁰牯癩摥汩浭敤潷渠數灥物敮捥Ⱐ慬汯睩湧潲牥慴楯渠潦慮畡氠扩汬慢汥癥湴献†周敲攠慲攠瑷漠癥牳楯渠潦⁁㑐湤⁔業敎潴攮ਊ䭥礠䙥慴畲敳㨊⨠䥮捬畤攠捡灴畲攠慤潣楬污扬攠敶敮瑳ਪ⁃慰瑵牥潢楬攠灨潮攠捡汬湤浡楬†慳楬污扬攠敶敮瑳Ⱐਪ⁁扩汩瑹⁴漠慳獩杮楬污扬攠敶敮琠瑯汩敮琠慮搠灲潪散琊⨠䅢楬楴礠瑯敡牣栠慮搠獣牯汬⁴桲潵杨楬污扬攠敶敮瑳Ⱐ潰瑩潮⁴漠楮瑥杲慴攠睩瑨楬汩湧祳瑥浳 ⨠偯瑥湴楡氠扥湥晩瑳湣汵摥湣牥慳敤⁰牯摵捴楶楴礠慮搠牥摵捥搠慤浩湩獴牡瑩癥癥牨敡搠湤湣牥慳敤敶敮略略⁴漠浯牥捣畲慴攠捡灴畲楮朠潦楬污扬攠敶敮瑳•ഊ
4 ≁灲楶慐慹⁁乄⁁灲楶慐慹⁐牯晥獳楯湡氠晲潭⁁否吠瑵牮⁹潵爠浯扩汥敶楣攠楮瑯⁰潲瑡扬攠捲敤楴慲搠瑥牭楮慬⸠坩瑨潭灡瑩扬攠䅔♔浡牴灨潮攬⁁灲楶慐慹爠䅰物癡偡礠偲潦敳獩潮慬潦瑷慲攬湤敲捨慮琠慣捯畮琬⁹潵爠浯扩汥⁷潲武潲捥慮⁰牯捥獳牥摩琠潲敢楴慲搠灡祭敮瑳牯洠瑨攠晩敬搮ਊ䭥礠䙥慴畲敳㨠 ⨠卭慲瑰桯湥ⵢ慳敤潬畴楯渠⁴漠灲潣敳猠捲敤楴慲搠灡祭敮瑳 ⨠䙵汬ⵦ敡瑵牥搠灯楮琭潦慬攠獯汵瑩潮異灯牴楮朠慬氠浡橯爠瑲慮獡捴楯渠瑹灥ਠ⨠卵灰潲瑳牥摩琠慮搠摥扩琠瑲慮獡捴楯湳 ਊ∍
To make sure that the .txt files are accessible in the directory I executed the following script, Host Echo Hello World > C:\...path...\1.Txt
After which I found the contents of the file changed to "Hello World". Later I loaded the .txt file with "Hello World" and queried the table. Still I am getting some garbage value. However since the string "Hello World" is much smaller than the previous contents, the garbage size is also smaller for ID 1. I don't get any errors, but you can see the output as follows.
1 䠀攀氀氀漀 圀漀爀氀搀 ഀ
2 ≁否吠潦晥牳摶慮捥搠睩牥汥獳潲浳慰慢楬楴礠睩瑨⁃潭整⁅娠䍯浥琬⁔牡捫敲湤⁃潭整⁍潢楬攠坯牫敲ਊ䍯浥琠䕚㨠周攠浯獴潢畳琬潳琠敦晥捴楶攠睥戠扡獥搠䵒䴠慰灬楣慴楯渠楮⁴桥湤畳瑲礮⁃慰慢楬楴楥猠楮捬畤攠䝐匠汯捡瑩潮⁴牡捫楮本⁷楲敬敳猠瑩浥汯捫Ⱐ来漭晥湣楮朠睩瑨汥牴猬灥敤湤瑯瀠瑩浥汥牴猬湤渭摥浡湤爠獣桥摵汥搠牥灯牴楮朮ਊ䍯浥琠呲慣步爺⁁⁰潷敲晵氠捬楥湴ⵢ慳敤⁰污瑦潲洠瑨慴晦敲猠慬氠瑨攠晥慴畲敳映䍯浥琠䕚⁰汵猺⁁摶慮捥搠摡瑡潬汥捴楯渠捡灡扩汩瑩敳㨠扡牣潤攠獣慮湩湧Ⱐ灡湩挠慬敲琬⁷潲欠潲摥爠浡湡来浥湴Ⱐ睩牥汥獳潲浳湤異敲癩獯爠瑩浥湴特⸊潭整⁍潢楬攠坯牫敲㨠周攠浯獴潢畳琠灡捫慧攮⁐牯癩摥猠扵獩湥獳敳⁷楴栠愠捯浰汥瑥汹⁷楲敬敳猠潰敲慴楯湡氠浡湡来浥湴祳瑥洮⁉湣汵摥猠慬氠潦⁃潭整⁔牡捫敲❳敡瑵牥猠灬畳㨠䍡汥湤慲猬畴潭慴敤畳瑯浥爠捯浭畮楣慴楯湳Ⱐ睯牫牤敲⽩湶潩捥⁵灤慴楮朠晲潭⁴桥楥汤Ⱐ睯牫牤敲敱略湣楮本硣敳獩癥瑯瀠瑩浥汥牴猬⁷楲敬敳猠景牭猬⁴畲渭批畲渠癯楣攠湡癩条瑩潮Ⱐ慮搠浯牥⸊ੁ摶慮捥搠坩牥汥獳⁆潲浳㨠呵牮湹⁰慰敲潲洠楮瑯⁷楲敬敳猠捬潮攠潦⁴桥慭攠楮景牭慴楯渠ⴠ湯慴瑥爠桯眠捯浰汩捡瑥搮⁓慶攠瑩浥礠瑲慮獦敲物湧湦潲浡瑩潮慣欠瑯⁴桥晦楣攠睩瑨⁷楲敬敳猠獰敥搮⁓慶攠灡灥爠慮搠敬業楮慴攠摵慬•ഊ
3 ≁䥒呉䵅⁍慮慧敲牯洠䅔♔⁰牯癩摥猠愠浯扩汥灰汩捡瑩潮猠摥獩杮敤⁴漠瑲慣欠扩汬慢汥潵牳⸠⁔桥⁁㑐潬畴楯湳畴潭慴楣慬汹潧⁷楲敬敳猠敭慩氬慬汳Ⱐ慮搠扩汬慢汥癥湴猬獳潣楡瑥猠瑨敭⁷楴栠捬楥湴爠灲潪散琠捯摥猠慮搠摩牥捴猠扩汬慢汥散潲摳⁴漠扩汬楮朠獹獴敭献†周攠呩浥乯瑥潬畴楯湳⁰牯癩摥汩浭敤潷渠數灥物敮捥Ⱐ慬汯睩湧潲牥慴楯渠潦慮畡氠扩汬慢汥癥湴献†周敲攠慲攠瑷漠癥牳楯渠潦⁁㑐湤⁔業敎潴攮ਊ䭥礠䙥慴畲敳㨊⨠䥮捬畤攠捡灴畲攠慤潣楬污扬攠敶敮瑳ਪ⁃慰瑵牥潢楬攠灨潮攠捡汬湤浡楬†慳楬污扬攠敶敮瑳Ⱐਪ⁁扩汩瑹⁴漠慳獩杮楬污扬攠敶敮琠瑯汩敮琠慮搠灲潪散琊⨠䅢楬楴礠瑯敡牣栠慮搠獣牯汬⁴桲潵杨楬污扬攠敶敮瑳Ⱐ潰瑩潮⁴漠楮瑥杲慴攠睩瑨楬汩湧祳瑥浳 ⨠偯瑥湴楡氠扥湥晩瑳湣汵摥湣牥慳敤⁰牯摵捴楶楴礠慮搠牥摵捥搠慤浩湩獴牡瑩癥癥牨敡搠湤湣牥慳敤敶敮略略⁴漠浯牥捣畲慴攠捡灴畲楮朠潦楬污扬攠敶敮瑳•ഊ
4 ≁灲楶慐慹⁁乄⁁灲楶慐慹⁐牯晥獳楯湡氠晲潭⁁否吠瑵牮⁹潵爠浯扩汥敶楣攠楮瑯⁰潲瑡扬攠捲敤楴慲搠瑥牭楮慬⸠坩瑨潭灡瑩扬攠䅔♔浡牴灨潮攬⁁灲楶慐慹爠䅰物癡偡礠偲潦敳獩潮慬潦瑷慲攬湤敲捨慮琠慣捯畮琬⁹潵爠浯扩汥⁷潲武潲捥慮⁰牯捥獳牥摩琠潲敢楴慲搠灡祭敮瑳牯洠瑨攠晩敬搮ਊ䭥礠䙥慴畲敳㨠 ⨠卭慲瑰桯湥ⵢ慳敤潬畴楯渠⁴漠灲潣敳猠捲敤楴慲搠灡祭敮瑳 ⨠䙵汬ⵦ敡瑵牥搠灯楮琭潦慬攠獯汵瑩潮異灯牴楮朠慬氠浡橯爠瑲慮獡捴楯渠瑹灥ਠ⨠卵灰潲瑳牥摩琠慮搠摥扩琠瑲慮獡捴楯湳 ਊ∍
Edited by: Arunkumar Gunasekaran on Jan 3, 2013 11:38 AM
>
To make sure that the .txt files are accessible in the directory I executed the following script, Host Echo Hello World > C:\...path...\1.Txt
After which I found the contents of the file changed to "Hello World". Later I loaded the .txt file with "Hello World" and queried the table. Still I am getting some garbage value. However since the string "Hello World" is much smaller than the previous contents, the garbage size is also smaller for ID 1. I don't get any errors, but you can see the output as follows.
>
The most common problem I have seen using BFILEs is the character set; BFILEs do NOT handle character set conversion.
That is the main reason I don't recommend using BFILEs for loading data like this. Either SQL*Loader or external tables can do the job and they both handle character set conversions properly.
See the LOADFROMFILE Procedure of DBMS_LOB package in the PL/SQL Language doc
http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_lob.htm#i998778
>
Note:
The input BFILE must have been opened prior to using this procedure. No character set conversions are performed implicitly when binary BFILE data is loaded into a CLOB. The BFILE data must already be in the same character set as the CLOB in the database. No error checking is performed to verify this.
Note:
If the character set is varying width, UTF-8 for example, the LOB value is stored in the fixed-width UCS2 format. Therefore, if you are using DBMS_LOB.LOADFROMFILE, the data in the BFILE should be in the UCS2 character set instead of the UTF-8 character set. However, you should use sql*loader instead of LOADFROMFILE to load data into a CLOB or NCLOB because sql*loader will provide the necessary character set conversions.
>
I suggest you use an external table definition to do this load. You can do an ALTER to change the file name for each load.
See External Tables Concepts in the Utilities doc for the basics
http://docs.oracle.com/cd/B28359_01/server.111/b28319/et_concepts.htm
See Altering External Tables in the DBA doc for detailed information
http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables013.htm
>
DEFAULT DIRECTORY
Changes the default directory specification
ALTER TABLE admin_ext_employees
DEFAULT DIRECTORY admin_dat2_dir;
LOCATION
Allows data sources to be changed without dropping and re-creating the external table metadata
ALTER TABLE admin_ext_employees
LOCATION ('empxt3.txt',
'empxt4.txt');
>
You can also load in parallel if you have licensed that option. -
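A minimal external-table sketch of the approach the reply recommends (the table and column names mirror the thread, but the access parameters are illustrative: this assumes each description fits on a single line, and multi-line files would need different RECORDS settings):

```sql
-- External table over one description file; character set conversion
-- is handled by the ORACLE_LOADER access driver, unlike
-- DBMS_LOB.LOADFROMFILE which copies raw bytes.
CREATE TABLE demo_ext (
  theclob CLOB
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY my_files
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS (theclob CHAR(32767))
  )
  LOCATION ('1.txt')
)
REJECT LIMIT UNLIMITED;

-- Load one product, then ALTER ... LOCATION to point at the next file
INSERT INTO demo (id, theclob)
SELECT 1, theclob FROM demo_ext;
```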
Passing CLOB datatype to a stored procedure
Hi,
How do I pass a CLOB value to a stored procedure?
I am creating a stored procedure which appends a value to a CLOB column. The procedure has 2 IN parameters (one CLOB and one CLOB). The procedure compiles, but I'm having a problem executing it. Below is a simplified version of the procedure and the error produced when it is executed.
SQL> CREATE OR REPLACE PROCEDURE prUpdateContent (
2 p_contentId IN NUMBER,
3 p_body IN CLOB)
4 IS
5 v_id NUMBER;
6 v_orig CLOB;
7 v_add CLOB;
8
9 BEGIN
10 v_id := p_contentId;
11 v_add := p_body;
12
13 SELECT body INTO v_orig FROM test WHERE id=v_id FOR UPDATE;
14
15 DBMS_LOB.APPEND(v_orig, v_add);
16 commit;
17 END;
18 /
Procedure created.
SQL> exec prUpdateContent (1, 'testing');
BEGIN prUpdateContent (1, 'testing'); END;
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'PRUPDATECONTENT'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
Any help or hints please.

Sorry, I made a mistake describing the IN parameter types above: they are one NUMBER and one CLOB.
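For what it's worth, once the signature really is (NUMBER, CLOB), a VARCHAR2 literal is implicitly converted to CLOB, so the original call should go through; for values beyond 4000 characters, build the CLOB in a PL/SQL variable first (a sketch using the procedure name from the thread):

```sql
DECLARE
  v_body CLOB := 'testing';   -- implicit VARCHAR2 -> CLOB conversion
BEGIN
  prUpdateContent(1, v_body);
END;
/
```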
-
I need to have several columns in a table, say 14, each of which would hold around 8k of data.
Now I want a solution for creating the table and retrieving data from that particular table.
I want this solution to be as simple as possible: no triggers and no procedures.
Someone please help urgently.

Shobana,
Try using something like this:
select * from clob_test
where to_char(clob_col) like 'key%'
It depends on the datatype you are inserting
into that column.
I actually don't know much about the CLOB datatype,
but since you are comparing with a character datatype,
this should work.
AO
Quote - originally posted by Shobana ([email protected]):
Hi,
I have a CLOB datatype column in a table and I want to select few rows from that table by comparing that column with some static value as mentioned below. How can I do that?
create table clob_test( clob_col clob );
insert into clob_test values('keys');
insert into clob_test values('key board');
insert into clob_test values('monitor');
commit;
I want something like :-
select * from clob_test where clob_col like 'key%' to get 2 rows. This doesn't work; it gives an 'inconsistent datatypes' error.
Casting also doesn't work.
This doesn't work. Is there any other way for this problem (other than DBMS_LOB package)?
Would be great if any of you can help me out.
Thanks in advance
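For the prefix search above, an alternative that avoids both TO_CHAR and the inconsistent-datatypes error is DBMS_LOB.INSTR, which searches inside the CLOB itself; a sketch (requiring the match at position 1 mimics LIKE 'key%'):

```sql
-- Rows whose CLOB starts with 'key' (matches 'keys' and 'key board')
SELECT *
FROM   clob_test
WHERE  DBMS_LOB.INSTR(clob_col, 'key', 1, 1) = 1;
```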