DBMS_LOB.WRITEAPPEND is slow
I have a table with 100 columns of NUMBER, BINARY_FLOAT, and BINARY_DOUBLE types.
I need to read the table values and return them as a BLOB,
so I have used the UTL_RAW.CAST_FROM_* functions,
something like this:
I have used a loop to read each column's data type, and used bind variables to insert.
TYPE varColtypelist is varray(100) of NUMBER(20);
collist varColtypelist;
TYPE varColLenlist is varray(100) of number(20);
byteLenList varColLenlist;
Select CASE WHEN data_type ='BINARY_FLOAT' THEN 1
WHEN data_type ='BINARY_DOUBLE' THEN 2
WHEN data_type ='NUMBER' THEN 3 END bulk collect into collist
from all_tab_columns where table_name=UPPER(Table_Name)
ORDER BY COLUMN_ID ;
statment := 'SELECT <all 100 columns> FROM mytable WHERE ROWNUM BETWEEN 5000 AND 10000 ORDER BY c1 DESC';
EXECUTE IMMEDIATE 'SELECT byte_info FROM my_byte_info WHERE id = 1'
  BULK COLLECT INTO byteLenList;
v_cursor := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(v_cursor, statment, DBMS_SQL.NATIVE);
FOR col_ind IN 1 .. 100 -- (the table has 100 columns)
LOOP
data_type := collist(col_ind);
if data_type =1 THEN
DBMS_SQL.define_column (v_cursor, col_ind, flid);
ELSIF data_type =2 THEN
DBMS_SQL.define_column (v_cursor, col_ind, dblid);
ELSIF data_type =3 THEN
DBMS_SQL.define_column (v_cursor, col_ind, nid);
END IF;
END LOOP;
dumy := DBMS_SQL.Execute (v_cursor);
LOOP
EXIT WHEN DBMS_SQL.FETCH_ROWS (v_cursor) = 0;
FOR i IN 1..l_max LOOP
data_type := collist(i);
ncollength := byteLenList(i);
if data_type =1 THEN
BEGIN
DBMS_SQL.column_value (v_cursor, i, flid);
value := utl_raw.cast_from_BINARY_FLOAT( flid);
END;
end if;
if data_type =2 THEN
BEGIN
DBMS_SQL.column_value (v_cursor, i, dblid);
value := utl_raw.cast_from_BINARY_DOUBLE( dblid);
END;
end if;
if data_type =3 THEN
BEGIN
DBMS_SQL.column_value (v_cursor, i, nid);
value := utl_raw.CAST_FROM_BINARY_INTEGER( nid);
END;
end if;
IF nNewRecord = 0 then
buffer1 := utl_raw.cast_to_varchar2(dbms_lob.substr(value));
dbms_lob.writeappend( l_out,ncollength,buffer1 );
End if;
IF nNewRecord = 1 then
nNewRecord := 0;
Select (utl_raw.cast_to_varchar2(dbms_lob.substr(var))) into l_out from dual;
end if;
end loop;
END LOOP;
DBMS_SQL.CLOSE_CURSOR (v_cursor);
In effect, for each column it executes something like:
SELECT utl_raw.cast_to_varchar2(dbms_lob.substr(utl_raw.cast_from_binary_integer(int_column))) FROM mytable;
SELECT utl_raw.cast_to_varchar2(dbms_lob.substr(utl_raw.cast_from_binary_double(double_column))) FROM mytable;
SELECT utl_raw.cast_to_varchar2(dbms_lob.substr(utl_raw.cast_from_binary_float(float_column))) FROM mytable;
But it is time-consuming; most of the time goes into the write/append calls.
Can we do it an alternate way?
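For what it's worth, a sketch of one common way to speed up this pattern (variable names here are illustrative, not from the original code): rather than issuing one DBMS_LOB.WRITEAPPEND per column, which means 100 LOB calls per row, concatenate each row's converted values into a local RAW buffer with UTL_RAW.CONCAT and append once per row. Every DBMS_LOB call has a fixed overhead, so this alone usually helps a lot; creating the temporary LOB with cache => TRUE also keeps appends in the buffer cache instead of doing direct-path I/O.

```sql
DECLARE
  l_out BLOB;
  l_buf RAW(32767);  -- staging buffer: one WRITEAPPEND per row, not per column
  l_val RAW(22);     -- one converted column value
BEGIN
  -- cache => TRUE avoids a direct-path read/write per append
  DBMS_LOB.CREATETEMPORARY(l_out, cache => TRUE);

  -- per fetched row (inside the DBMS_SQL.FETCH_ROWS loop):
  l_buf := NULL;
  FOR i IN 1 .. 100 LOOP
    -- fetch the column with DBMS_SQL.COLUMN_VALUE as before, then e.g.:
    l_val := UTL_RAW.CAST_FROM_BINARY_DOUBLE(0);  -- placeholder value
    l_buf := UTL_RAW.CONCAT(l_buf, l_val);        -- in-memory, cheap
  END LOOP;
  DBMS_LOB.WRITEAPPEND(l_out, UTL_RAW.LENGTH(l_buf), l_buf);  -- one LOB call per row
END;
/
```

With 100 fixed-width values per row the buffer stays well under the 32,767-byte RAW limit; if the widths vary, flush whenever the buffer gets close to that cap.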
If you select a single concatenated result from the table, then how will you differentiate between the datatypes of each column, and how do you convert them to a BLOB?
Well, that is the SAME question I asked you a month ago:
Also - if all you do is concatenate multiple values together how do you expect the result to be useful? No one will be able to tell where one value begins and the next ends.
I also asked you to do this:
Start over and tell us, in English, what BUSINESS PROBLEM you are trying to solve.
Provide a small amount of sample data (2 rows of 3 columns each, in the form of INSERT statements), the DDL for a sample table (again, 3 columns), and show us what the result should be: 2 rows each having 1 CLOB, 1 row having 1 CLOB, etc.
What you say you are doing doesn't make much sense. That is why you were asked to explain it more fully. We can't help you if we don't know what you are really trying to do.
If you just string multiple VARCHAR2 values of different lengths together without using some kind of delimiter, the result won't be meaningful.
And that seems to be just the problem you are asking about in your other thread:
https://community.oracle.com/thread/2614252?start=15&tstart=0
You caused the problem; we are asking you why you are doing that.
Similar Messages
-
DBMS_LOB.WRITEAPPEND Max buffer size exceeded
Hello,
I'm following this guide to create an index using Oracle Text:
http://download.oracle.com/docs/cd/B19306_01/text.102/b14218/cdatadic.htm#i1006810
So I wrote something like this:
CREATE OR REPLACE PROCEDURE CREATE_INDEX(rid IN ROWID, tlob IN OUT NOCOPY CLOB)
IS
BEGIN
DBMS_LOB.CREATETEMPORARY(tlob, TRUE);
FOR c1 IN (SELECT ID_DOCUMENT FROM DOCUMENT WHERE rowid = rid)
LOOP
DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<DOCUMENT>'), '<DOCUMENT>');
DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<DOCUMENT_TITLE>'), '<DOCUMENT_TITLE>');
DBMS_LOB.WRITEAPPEND(tlob, LENGTH(NVL(c1.TITLE, ' ')), NVL(c1.TITLE, ' '));
DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</DOCUMENT_TITLE>'), '</DOCUMENT_TITLE>');
DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</DOCUMENT>'), '</DOCUMENT>');
FOR c2 IN (SELECT TITRE,TEXTE FROM PAGE WHERE ID_DOCUMENT = c1.ID_DOCUMENT)
LOOP
DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<PAGE>'), '<PAGE>');
DBMS_LOB.WRITEAPPEND(tlob, LENGTH('<PAGE_TEXT>'), '<PAGE_TEXT>');
DBMS_LOB.WRITEAPPEND(tlob, LENGTH(NVL(c2.TEXTE, ' ')), NVL(c2.TEXTE, ' '));
DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</PAGE_TEXT>'), '</PAGE_TEXT>');
DBMS_LOB.WRITEAPPEND(tlob, LENGTH('</PAGE>'), '</PAGE>');
END LOOP;
END LOOP;
END;
The issue is that some page texts are bigger than 32,767 bytes, so I get an INVALID_ARGVAL error.
I can't figure out how I can increase this buffer size, or how else to manage this issue.
Can you please help me :)
Thank you,
Ben
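A hedged sketch of the usual workarounds (assuming TEXTE is a CLOB column, which it must be if values exceed 32,767 bytes): WRITEAPPEND takes a VARCHAR2 buffer, so a single call is capped at 32,767 bytes, but DBMS_LOB.APPEND copies LOB to LOB with no such cap; alternatively, copy in chunks:

```sql
-- Option 1: LOB-to-LOB append, no 32k cap (inside the c2 loop above)
DBMS_LOB.APPEND(tlob, c2.texte);

-- Option 2: chunked copy through a VARCHAR2 buffer
DECLARE
  l_len PLS_INTEGER := DBMS_LOB.GETLENGTH(c2.texte);
  l_pos PLS_INTEGER := 1;
  l_amt PLS_INTEGER;
  l_buf VARCHAR2(32767);
BEGIN
  WHILE l_pos <= l_len LOOP
    l_amt := LEAST(32767, l_len - l_pos + 1);     -- at most one buffer-full
    l_buf := DBMS_LOB.SUBSTR(c2.texte, l_amt, l_pos);
    DBMS_LOB.WRITEAPPEND(tlob, l_amt, l_buf);
    l_pos := l_pos + l_amt;
  END LOOP;
END;
```

Option 1 is simpler and avoids the intermediate buffer entirely; option 2 is only worth it if the source is not already a LOB.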
Edited by: user10900283 on 9 Feb 2009 00:05

Hi Ben,
I'm afraid that doesn't help much, since you have obviously rewritten your procedure based on the advice given here.
Could you please post your new procedure, as formatted SQL*Plus output, embedded in {noformat}{noformat} tags, like this:

SQL> CREATE OR REPLACE PROCEDURE create_index(rid IN ROWID, tlob IN OUT NOCOPY CLOB)
2 IS
3 BEGIN
4 dbms_lob.createtemporary(tlob, TRUE);
5
6 FOR c1 IN (SELECT id_document
7 FROM document
8 WHERE ROWID = rid)
9 LOOP
10 dbms_lob.writeappend(tlob, LENGTH('<DOCUMENT>'), '<DOCUMENT>');
11 dbms_lob.writeappend(tlob, LENGTH('<DOCUMENT_TITLE>')
12 ,'<DOCUMENT_TITLE>');
13 dbms_lob.writeappend(tlob, LENGTH(nvl(c1.title, ' '))
14 ,nvl(c1.title, ' '));
15 dbms_lob.writeappend(tlob
16 ,LENGTH('</DOCUMENT_TITLE>')
17 ,'</DOCUMENT_TITLE>');
18 dbms_lob.writeappend(tlob, LENGTH('</DOCUMENT>'), '</DOCUMENT>');
19
20 FOR c2 IN (SELECT titre, texte
21 FROM page
22 WHERE id_document = c1.id_document)
23 LOOP
24 dbms_lob.writeappend(tlob, LENGTH('<PAGE>'), '<PAGE>');
25 dbms_lob.writeappend(tlob, LENGTH('<PAGE_TEXT>'), '<PAGE_TEXT>');
26 dbms_lob.writeappend(tlob
27 ,LENGTH(nvl(c2.texte, ' '))
28 ,nvl(c2.texte, ' '));
29 dbms_lob.writeappend(tlob, LENGTH('</PAGE_TEXT>'), '</PAGE_TEXT>')
30 dbms_lob.writeappend(tlob, LENGTH('</PAGE>'), '</PAGE>');
31 END LOOP;
32 END LOOP;
33 END;
34 /
Warning: Procedure created with compilation errors.
SQL>
SQL> DECLARE
2 rid ROWID;
3 tlob CLOB;
4 BEGIN
5 rid := 'AAAy1wAAbAAANwsABZ';
6 tlob := NULL;
7 create_index(rid => rid, tlob => tlob);
8 dbms_output.put_line('TLOB = ' || tlob); -- Not sure, you can do this?
9 END;
10 /
create_index(rid => rid, tlob => tlob);
ERROR at line 7:
ORA-06550: line 7, column 4:
PLS-00905: object BRUGER.CREATE_INDEX is invalid
ORA-06550: line 7, column 4:
PL/SQL: Statement ignored
SQL> -
Dbms_lob.writeappend in Oracle 11g
Hi,
we have this line of code
DBMS_LOB.writeappend(v_clob,doc_rec.seg_length,doc_rec.value) in our procedure.
When we run this code in Oracle 10g, it does not take much time; but when we run it in Oracle 11g, it takes much longer.
Is there any incompatibility with DBMS_LOB.WRITEAPPEND in Oracle 11g?
Thanks in advance.

Thanks, abufazal...
Actually, after execution of the job below, it is not gathering the stats.
I have checked using: select LAST_ANALYZED, TABLE_NAME from dba_tables where OWNER='XXXXX';
Can anyone please suggest how to create the job so that the stats gather succeeds?
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'example_job1',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN
DBMS_STATS.gather_schema_stats("XXXXX", estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, cascade => TRUE);
END;',
start_date => TO_DATE('08-06-2013 18:00','DD-MM-YYYY HH24:MI'),
repeat_interval => 'FREQ=DAILY;BYDAY=TUE;BYHOUR=18;BYMINUTE=0;BYSECOND=0',
enabled => TRUE,
comments => 'Gather table statistics');
END;
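One guess worth checking in the block above: inside the job_action string, "XXXXX" is written with double quotes, which PL/SQL parses as a quoted identifier rather than a string literal, so the anonymous block would fail on every run with a PLS-00201-style error. The schema name needs doubled single quotes inside the string, roughly:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'example_job1',
    job_type   => 'PLSQL_BLOCK',
    -- doubled single quotes so the inner block receives the string 'XXXXX'
    job_action => 'BEGIN
                     DBMS_STATS.gather_schema_stats(''XXXXX'',
                       estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                       cascade          => TRUE);
                   END;',
    start_date      => TO_DATE('08-06-2013 18:00', 'DD-MM-YYYY HH24:MI'),
    repeat_interval => 'FREQ=DAILY;BYDAY=TUE;BYHOUR=18;BYMINUTE=0;BYSECOND=0',
    enabled         => TRUE,
    comments        => 'Gather table statistics');
END;
/
```

This is a sketch, not a confirmed diagnosis; the scheduler views queried in the reply below would confirm whether the job is actually failing.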
Can you provide the output of the following query?
SQL> select job_name,job_action,start_date,last_start_date,last_run_duration,next_run_date,failure_count from dba_scheduler_jobs where job_name='EXAMPLE_JOB1';
If the failure_count is > 0, please do share the errors from the alert log file. -
DBMS_LOB.WRITEAPPEND - HELP!
Why is my code failing? All I am doing is reading text from an external file (UTL_FILE package) and appending it onto an NCLOB variable, but the WRITEAPPEND to the NCLOB variable is failing. I know the WRITEAPPEND is failing because I instrumented the code with DBMS_OUTPUT statements. I get the following error message with the following code:
DECLARE
ERROR at line 1:
ORA-22275: invalid LOB locator specified
ORA-06512: at "SYS.DBMS_LOB", line 328
ORA-06512: at line 39
ORA-22275: invalid LOB locator specified
CODE
====
DECLARE
the_val NCLOB;
filehdl UTL_FILE.FILE_TYPE;
buf VARCHAR2(32765);
still_reading BOOLEAN := TRUE;
BEGIN
filehdl := UTL_FILE.FOPEN('/tmp','reqmnt.txt','r',32765);
DBMS_LOB.CREATETEMPORARY(the_val, TRUE);
DBMS_LOB.OPEN(the_val, DBMS_LOB.LOB_READWRITE);
SELECT empty_clob() INTO the_val
FROM DUAL;
WHILE ( still_reading ) LOOP
BEGIN
UTL_FILE.GET_LINE(filehdl, buf);
-- Append the line to the variable.
-- THE FOLLOWING WRITEAPPEND STATEMENT IS FAILING AT RUNTIME
DBMS_LOB.WRITEAPPEND(the_val, 32765, TRANSLATE( buf USING NCHAR_CS ));
EXCEPTION
WHEN NO_DATA_FOUND THEN
still_reading := FALSE;
UTL_FILE.FCLOSE(filehdl);
WHEN OTHERS THEN
UTL_FILE.FCLOSE(filehdl);
RAISE;
END;
END LOOP;
DBMS_LOB.CLOSE(the_val);
DBMS_LOB.FREETEMPORARY(the_val);
-- <NOW DO SOME SORT OF INSERT STATEMENT WITH the_val>
EXCEPTION
WHEN OTHERS THEN
DBMS_LOB.CLOSE(the_val);
DBMS_LOB.FREETEMPORARY(the_val);
RAISE;
END;
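Two likely culprits in the block above, for what it's worth: first, SELECT empty_clob() INTO the_val replaces the temporary-LOB locator obtained from CREATETEMPORARY with a persistent empty-LOB locator that is not attached to any row, which is exactly the kind of thing that raises ORA-22275; second, WRITEAPPEND is told to write a fixed 32,765 characters regardless of how much GET_LINE actually read. A sketch of the corrected loop:

```sql
DECLARE
  the_val NCLOB;
  filehdl UTL_FILE.FILE_TYPE;
  buf     VARCHAR2(32765);
BEGIN
  filehdl := UTL_FILE.FOPEN('/tmp', 'reqmnt.txt', 'r', 32765);
  -- keep the locator returned by CREATETEMPORARY; do NOT overwrite it
  -- with SELECT empty_clob() INTO ...
  DBMS_LOB.CREATETEMPORARY(the_val, TRUE);
  BEGIN
    LOOP
      UTL_FILE.GET_LINE(filehdl, buf);
      IF buf IS NOT NULL THEN
        -- append only what was actually read, not a fixed 32765
        DBMS_LOB.WRITEAPPEND(the_val, LENGTH(buf),
                             TRANSLATE(buf USING NCHAR_CS));
      END IF;
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN NULL;  -- end of file reached
  END;
  UTL_FILE.FCLOSE(filehdl);
  -- ... use the_val in an INSERT here, before freeing it ...
  DBMS_LOB.FREETEMPORARY(the_val);
END;
/
```

Note the OPEN/CLOSE calls are optional for temporary LOBs; the essential change is not clobbering the locator.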
You can use the returning clause to store the generated sequence value that was inserted into id into a variable, then reference that variable in the rest of your code, as demonstrated below.
SCOTT@10gXE> CREATE TABLE soundtable
2 (id NUMBER,
3 sound BLOB DEFAULT EMPTY_BLOB ())
4 /
Table created.
SCOTT@10gXE> CREATE SEQUENCE your_sequence
2 /
Sequence created.
SCOTT@10gXE> VARIABLE g_id_seq NUMBER
SCOTT@10gXE> INSERT INTO soundtable (id)
2 VALUES (your_sequence.NEXTVAL)
3 RETURNING id INTO :g_id_seq
4 /
1 row created.
SCOTT@10gXE> COMMIT
2 /
Commit complete.
SCOTT@10gXE> CREATE OR REPLACE DIRECTORY auddir AS 'C:\WINDOWS\Media'
2 /
Directory created.
SCOTT@10gXE> SET SERVEROUTPUT ON
SCOTT@10gXE> DECLARE
2 f_lob BFILE := BFILENAME ('AUDDIR', 'chimes.wav');
3 b_lob BLOB;
4 Lob BLOB;
5 Length INTEGER;
6 BEGIN
7
8 SELECT sound
9 INTO b_lob
10 FROM soundtable
11 WHERE id = :g_id_seq
12 FOR UPDATE;
13
14 dbms_lob.open (f_lob, dbms_lob.file_readonly);
15 dbms_lob.open (b_lob, dbms_lob.lob_readwrite);
16 dbms_lob.loadfromfile
17 (b_lob, f_lob, dbms_lob.getlength (f_lob));
18 dbms_lob.close(b_lob);
19 dbms_lob.close(f_lob);
20 COMMIT;
21
22 SELECT sound
23 INTO Lob
24 FROM soundtable
25 WHERE ID = :g_id_seq;
26 length := DBMS_LOB.GETLENGTH (Lob);
27 IF length IS NULL THEN
28 DBMS_OUTPUT.PUT_LINE ('LOB is null.');
29 ELSE
30 DBMS_OUTPUT.PUT_LINE ('The length is '|| length);
31 END IF;
32 END;
33 /
The length is 55776
PL/SQL procedure successfully completed.
SCOTT@10gXE> -
Clob datatype- very slow while appending
Hi,
Please help with the PL/SQL block below.
I have written the code below, but the response time is very slow.
The table aux_comm_3_to_1 has more than 100,000 (1 lakh) records,
and I have to append all the records and send them as a CLOB OUT parameter to a GUI.
Please advise.
declare
TEMP_XML clob;
XML_OW clob;
begin
FOR rec IN (SELECT
FRAMEREFERENCEDATE,
EU_LEU_ID_OWNER,
EU_LEU_ID_SUBSIDIARY,
DIRECT_PERCENT,
QUALITY_IND_DIRECT_PERCENT,
SOURCE_DIRECT_PERCENT,
REF_DATE_DIRECT_PERCENT,
KIND_OF_CONTROL,
SOURCE_KIND_OF_CONTROL,
REF_DATE_KIND_OF_CONTROL,
DATE_OF_COMMENCEMENT,
SOURCE_DATE_OF_COMMENCEMENT,
REF_DATE_OF_COMMENCEMENT,
DATE_OF_CESSATION,
SOURCE_DATE_OF_CESSATION,
REF_DATE_OF_CESSATION,
SOURCE_TA_OWNERSHIP,
REF_DATE_TA_OWNERSHIP
FROM aux_comm_3_to_1 )
LOOP
TEMP_XML :=
'<TARGET_OWNERSHIP="'
|| idcounter
||'"FRAMEREFERENCEDATE="'||rec.FRAMEREFERENCEDATE
||'"EU_LEU_ID_OWNER="'||rec.EU_LEU_ID_OWNER
||'"EU_LEU_ID_SUBSIDIARY="'||rec.EU_LEU_ID_SUBSIDIARY
||'"DIRECT_PERCENT="'||rec.DIRECT_PERCENT
||'"QUALITY_IND_DIRECT_PERCENT="'||rec.QUALITY_IND_DIRECT_PERCENT
||'"SOURCE_DIRECT_PERCENT="'||rec.SOURCE_DIRECT_PERCENT
||'"REF_DATE_DIRECT_PERCENT="'||rec.REF_DATE_DIRECT_PERCENT
||'"KIND_OF_CONTROL="'||rec.KIND_OF_CONTROL
||'"SOURCE_KIND_OF_CONTROL="'||rec.SOURCE_KIND_OF_CONTROL
||'"REF_DATE_KIND_OF_CONTROL="'||rec.REF_DATE_KIND_OF_CONTROL
||'"DATE_OF_COMMENCEMENT="'||rec.DATE_OF_COMMENCEMENT
||'"SOURCE_DATE_OF_COMMENCEMENT="'||rec.SOURCE_DATE_OF_COMMENCEMENT
||'"REF_DATE_OF_COMMENCEMENT="'||rec.REF_DATE_OF_COMMENCEMENT
||'"DATE_OF_CESSATION="'||rec.DATE_OF_CESSATION
||'"SOURCE_DATE_OF_CESSATION="'||rec.SOURCE_DATE_OF_CESSATION
||'"REF_DATE_OF_CESSATION="'||rec.REF_DATE_OF_CESSATION
||'"SOURCE_TA_OWNERSHIP="'||rec.SOURCE_TA_OWNERSHIP
||'"REF_DATE_TA_OWNERSHIP="'||rec.REF_DATE_TA_OWNERSHIP
|| '"/>'
|| CHR (10);
XML_OW:=XML_OW||CHR(10)||TEMP_XML;
idcounter := idcounter + 1;
END LOOP;
end;
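A sketch of the usual fix for this pattern (untested against this schema, and the attribute-building is abbreviated here): keep a VARCHAR2 staging buffer, append each row's fragment to it in memory, and flush to the CLOB only when the buffer approaches 32k. This turns ~100,000 small CLOB appends into a few dozen LOB calls:

```sql
declare
  xml_ow   clob;
  buf      varchar2(32767);
  fragment varchar2(4000);
begin
  dbms_lob.createtemporary(xml_ow, true);  -- TRUE: cached temporary LOB
  for rec in (select * from aux_comm_3_to_1) loop
    fragment := '<TARGET_OWNERSHIP .../>' || chr(10);  -- built as in the loop above
    if length(buf) + length(fragment) > 32000 then
      dbms_lob.writeappend(xml_ow, length(buf), buf);  -- flush ~32k at once
      buf := null;
    end if;
    buf := buf || fragment;  -- cheap in-memory append
  end loop;
  if buf is not null then
    dbms_lob.writeappend(xml_ow, length(buf), buf);    -- final partial flush
  end if;
end;
```

The in-memory VARCHAR2 concatenation is far cheaper than growing a CLOB one row at a time, which is what XML_OW := XML_OW || ... does.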
Can you extend the test also with dbms_lob.writeappend? I am not on the same machine, so I am repeating all tests again (slightly modified):
SQL> set timing on
SQL> declare
s clob;
begin
s := dbms_random.string ('X', 10000);
for j in 1 .. 100 * 100 * 10
loop
s := s || 'x';
end loop;
dbms_output.put_line ('Length: ' || length (s));
end;
Length: 104000
PL/SQL procedure successfully completed.
Elapsed: 00:00:20.49
SQL> declare
s clob;
begin
s := dbms_random.string ('X', 10000);
for j in 1 .. 100 * 100 * 10
loop
dbms_lob.append (s, 'x');
end loop;
dbms_output.put_line ('Length: ' || length (s));
end;
Length: 104000
PL/SQL procedure successfully completed.
Elapsed: 00:00:25.49
SQL> declare
s clob;
begin
dbms_lob.createtemporary (s, true);
s := dbms_random.string ('X', 10000);
for j in 1 .. 100 * 100 * 10
loop
dbms_lob.writeappend (s, 1, 'x');
end loop;
dbms_output.put_line ('Length: ' || length (s));
dbms_lob.freetemporary (s);
end;
Length: 104000
PL/SQL procedure successfully completed.
Elapsed: 00:00:20.32

So not much of a difference between the first and third run ... -
Slow extraction in big XML-Files with PL/SQL
Hello,
I have a performance problem with the extraction of attributes from big XML files. I tested with a size of ~30 MB.
The XML file is the response of a web service. This response includes some metadata of a document and the document itself. The document is embedded inline with a Base64 conversion. Here is an example of an XML file I want to analyse:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<ns2:GetDocumentByIDResponse xmlns:ns2="***">
<ArchivedDocument>
<ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
<Metadata archiveDate="2013-08-01+02:00" documentID="123">
<Descriptor type="Integer" name="fachlicheId">
<Value>123</Value>
</Descriptor>
<Descriptor type="String" name="user">
<Value>***</Value>
</Descriptor>
<InternalDescriptor type="Date" ID="DocumentDate">
<Value>2013-08-01+02:00</Value>
</InternalDescriptor>
<!-- Here some more InternalDescriptor Nodes -->
</Metadata>
<RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
<DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
</RepresentationDescription>
</ArchivedDocumentDescription>
<DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
<Data fileName="20mb.test">
<BinaryData>
<!-- Here is the BASE64 converted document -->
</BinaryData>
</Data>
</DocumentPart>
</ArchivedDocument>
</ns2:GetDocumentByIDResponse>
</soap:Body>
</soap:Envelope>
Now I want to extract the filename and the Base64-converted document from this XML response.
For the extraction of the filename I use the following command:
v_filename := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
For the extraction of the binary data I use the following command:
v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
My problem is the performance of this extraction. Here I created a summary of the start and end times for the commands:
Start Time: 10.09.13 - 15:46:11,402668000 | End Time: 10.09.13 - 15:47:21,407895000 | Difference: 00:01:10,005227
  Command: v_filename_bcm := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
Start Time: 10.09.13 - 15:47:21,407895000 | End Time: 10.09.13 - 15:47:22,336786000 | Difference: 00:00:00,928891
  Command: v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
As you can see, the extraction of the filename is slower than the document extraction; for the filename alone I need ~01:10 min.
I wondered about this and started some tests.
I tried to use an exact, non-dynamic filename, so I have these commands:
v_filename := '20mb_1.test';
v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
Under these conditions, the time for the document extraction soars. You can see this in the following table:
Start Time: 10.09.13 - 16:02:33,212035000 | End Time: 10.09.13 - 16:02:33,212542000 | Difference: 00:00:00,000507
  Command: v_filename_bcm := '20mb_1.test';
Start Time: 10.09.13 - 16:02:33,212542000 | End Time: 10.09.13 - 16:03:40,342396000 | Difference: 00:01:07,129854
  Command: v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
So I'm looking for a faster extraction from the XML file. Do you have any ideas? If you need more information, please ask me.
Thank you,
Matthias
PS: I use Oracle 11.2.0.2.0

Although using an XML schema is good advice for an XML-centric application, I think it's a little overkill in this situation.
Here are two approaches you can test:
Using the DOM interface over your XMLType variable, for example :
DECLARE
v_xml xmltype := xmltype('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<ns2:GetDocumentByIDResponse xmlns:ns2="***">
<ArchivedDocument>
<ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
<Metadata archiveDate="2013-08-01+02:00" documentID="123">
<Descriptor type="Integer" name="fachlicheId">
<Value>123</Value>
</Descriptor>
<Descriptor type="String" name="user">
<Value>***</Value>
</Descriptor>
<InternalDescriptor type="Date" ID="DocumentDate">
<Value>2013-08-01+02:00</Value>
</InternalDescriptor>
<!-- Here some more InternalDescriptor Nodes -->
</Metadata>
<RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
<DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
</RepresentationDescription>
</ArchivedDocumentDescription>
<DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
<Data fileName="20mb.test">
<BinaryData>
ABC123
</BinaryData>
</Data>
</DocumentPart>
</ArchivedDocument>
</ns2:GetDocumentByIDResponse>
</soap:Body>
</soap:Envelope>');
domDoc dbms_xmldom.DOMDocument;
docNode dbms_xmldom.DOMNode;
node dbms_xmldom.DOMNode;
nsmap varchar2(2000) := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns2="***"';
xpath_pfx varchar2(2000) := '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/';
istream sys.utl_characterinputstream;
buf varchar2(32767);
numRead pls_integer := 1;
filename varchar2(30);
base64clob clob;
BEGIN
domDoc := dbms_xmldom.newDOMDocument(v_xml);
docNode := dbms_xmldom.makeNode(domdoc);
filename := dbms_xslprocessor.valueOf(
              docNode
            , xpath_pfx || 'ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName'
            , nsmap
            );
node := dbms_xslprocessor.selectSingleNode(
          docNode
        , xpath_pfx || 'ArchivedDocument/DocumentPart/Data/BinaryData/text()'
        , nsmap
        );
--create an input stream to read the node content :
istream := dbms_xmldom.getNodeValueAsCharacterStream(node);
dbms_lob.createtemporary(base64clob, false);
-- read the content in 32k chunk and append data to the CLOB :
loop
istream.read(buf, numRead);
exit when numRead = 0;
dbms_lob.writeappend(base64clob, numRead, buf);
end loop;
-- free resources :
istream.close();
dbms_xmldom.freeDocument(domDoc);
END;
Using a temporary XMLType storage (binary XML) :
create table tmp_xml of xmltype
xmltype store as securefile binary xml;
insert into tmp_xml values( v_xml );
select x.*
from tmp_xml t
, xmltable(
    xmlnamespaces(
      'http://schemas.xmlsoap.org/soap/envelope/' as "soap"
    , '***' as "ns2"
    )
  , '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/ArchivedDocument/DocumentPart/Data'
    passing t.object_value
    columns filename   varchar2(30) path '@fileName'
          , base64clob clob         path 'BinaryData'
  ) x; -
Dbms_lob , where did my time go ?
Hi all
After using a 10046 trace to identify the SQL causing the slowness in a program ("fewer commits cause my program to go slower"), I realised that I was missing something.
There was a lot of time missing in the tkprof file, and no SQL or wait event accounted for the missing time, so I put the following test case together in an attempt to understand where the time is going.
Version of test database : 11.1.0.6.0
Name of test database: stdby ( :-) used my standby database)
Database non-default values
# Parameter Value1
1: audit_file_dest /u01/app/oracle/admin/stdby/adump
2: audit_trail DB
3: compatible 11.1.0.0.0
4: control_files /u01/app/oracle/oradata/stdby/control01.ctl
5: control_files /u01/app/oracle/oradata/stdby/control02.ctl
6: control_files /u01/app/oracle/oradata/stdby/control03.ctl
7: db_block_size 8192
8: db_domain
9: db_name stdby
10: db_recovery_file_dest /u01/app/oracle/flash_recovery_area
11: db_recovery_file_dest_size 2147483648
12: diagnostic_dest /u01/app/oracle
13: dispatchers (PROTOCOL=TCP) (SERVICE=stdbyXDB)
14: memory_target 314572800
15: open_cursors 300
16: processes 150
17: remote_login_passwordfile EXCLUSIVE
18: undo_tablespace UNDOTBS1

More accurately, I used an existing example from http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4084920819312
I hope Tom does not mind.
create table t ( x clob );
create or replace procedure p( p_open_close in boolean default false,
p_iters in number default 100 )
as
l_clob clob;
begin
insert into t (x) values ( empty_clob() )
returning x into l_clob;
if ( p_open_close )
then
dbms_lob.open( l_clob, dbms_lob.lob_readwrite );
end if;
for i in 1 .. p_iters
loop
dbms_lob.WriteAppend( l_clob, 5, 'abcde' );
end loop;
if ( p_open_close )
then
dbms_lob.close( l_clob );
end if;
commit;
end;

I did the tracing and the run of the package with this:
alter session set timed_statistics = true;
alter session set max_dump_file_size = unlimited;
alter session set tracefile_identifier = 'test_clob_commit';
alter session set events '10046 trace name context forever, level 12';
exec p(TRUE,20000);
exit

I did the tkprof of the 10046 trace file with:
tkprof stdby_ora_3656_test_clob_commit.trc stdby_ora_3656_test_clob_commit.trc.tkp sort=(prsela,exeela,fchela) aggregate=yes waits=yes sys=yes

With output of:
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.02 0.02 0 0 0 0
Execute 1 46.89 147.81 38915 235267 492471 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 46.92 147.83 38915 235267 492471 1
Misses in library cache during parse: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
latch: shared pool 24 0.05 0.07
latch: row cache objects 2 0.00 0.00
log file sync 1 0.01 0.01
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 117 0.11 0.10 0 0 2 0
Execute 426 0.37 0.40 6 4 9 2
Fetch 645 0.17 0.51 63 1507 0 1952
total 1188 0.65 1.03 69 1511 11 1954
Misses in library cache during parse: 22
Misses in library cache during execute: 22
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 19778 1.12 30.31
direct path write 19209 0.00 0.44
direct path read 19206 0.00 0.37
log file switch completion 8 0.20 0.70
latch: cache buffers lru chain 5 0.01 0.02
3 user SQL statements in session.
424 internal SQL statements in session.
427 SQL statements in session.

And it's here where the time is being lost. The run of the main package, p(TRUE,20000), takes 147.83 sec, which is correct, but what is making up this time?
From sorted trace file
SQL ID : catnjk0zv6jz1
BEGIN p(TRUE,20000); END;
call count cpu elapsed disk query current rows
Parse 1 0.02 0.02 0 0 0 0
Execute 1 46.89 147.81 38915 235267 492471 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 46.92 147.83 38915 235267 492471 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 81
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch: shared pool 24 0.05 0.07
latch: row cache objects 2 0.00 0.00
log file sync 1 0.01 0.01
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SQL ID : db78fxqxwxt7r
select /*+ rule */ bucket, endpoint, col#, epvalue
from
histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by bucket
Interesting: Oracle is still using the RULE hint in 11g?
call count cpu elapsed disk query current rows
Parse 3 0.00 0.00 0 0 0 0
Execute 98 0.05 0.05 0 0 0 0
Fetch 98 0.04 0.17 28 294 0 1538
total 199 0.10 0.22 28 294 0 1538
Misses in library cache during parse: 0
Optimizer mode: RULE
Parsing user id: SYS (recursive depth: 3)
Rows Row Source Operation
20 SORT ORDER BY (cr=3 pr=1 pw=1 time=8 us cost=0 size=0 card=0)
20 TABLE ACCESS CLUSTER HISTGRM$ (cr=3 pr=1 pw=1 time=11 us)
1 INDEX UNIQUE SCAN I_OBJ#_INTCOL# (cr=2 pr=0 pw=0 time=0 us)(object id 408)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 28 0.02 0.12
SQL ID : 5n1fs4m2n2y0r
select pos#,intcol#,col#,spare1,bo#,spare2,spare3
from
icol$ where obj#=:1
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 19 0.03 0.03 0 0 0 0
Fetch 60 0.00 0.04 1 120 0 41
total 81 0.04 0.08 1 120 0 41
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 2)
Rows Row Source Operation
1 TABLE ACCESS BY INDEX ROWID ICOL$ (cr=4 pr=0 pw=0 time=0 us cost=2 size=54 card=2)
1 INDEX RANGE SCAN I_ICOL1 (cr=3 pr=0 pw=0 time=0 us cost=1 size=0 card=2)(object id 42)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1 0.04 0.04

None of the parse, execute, fetch, and wait times makes up the 147.83 seconds.
So i turned to oracles trcanlzr.sql that Carlos Sierra wrote and parsed the same trace file to find the offending sql .
And it starts getting intrestting
Trace Analyzer 11.2.6.2 Report: trcanlzr_75835.html
stdby_ora_3656_test_clob_commit.trc (6970486 bytes)
Total Trace Response Time: 148.901 secs.
2009-MAY-03 20:03:51.771 (start of first db call in trace).
2009-MAY-03 20:06:20.672 (end of last db call in trace).
RESPONSE TIME SUMMARY
~~~~~~~~~~~~~~~~~~~~~
pct of pct of pct of
Time total Time total Time total
Response Time Component (in secs) resp time (in secs) resp time (in secs) resp time
CPU: 47.579 32.0%
Non-idle Wait: 0.467 0.3%
ET Unaccounted-for: 100.825 67.7%
Total Elapsed(1): 148.871 100.0%
Idle Wait: 0.001 0.0%
RT Unaccounted-for: 0.029 0.0%
Total Response(2): 148.901 100.0%
(1) Total Elapsed = "CPU" + "Non-Idle Wait" + "ET Unaccounted-for".
(2) Total Response = "Total Elapsed Time" + "Idle Wait" + "RT Unaccounted-for".
Total Accounted-for = "CPU" + "Non-Idle Wait" + "Idle Wait" = 148.872 secs.
Total Unaccounted-for = "ET Unaccounted-for" + "RT Unaccounted-for" = 100.854 secs.

100.825 seconds! Wow, that is a lot: 67.7% of the time is not accounted for.
I even used TVD$XTAT (the Trivadis eXtended Tracefile Analysis Tool), with the same conclusion.
Looking at the raw trace file, I see a lot of lines like this:
WAIT #7: nam='direct path read' ela= 11 file number=4 first dba=355935 block cnt=1 obj#=71067 tim=1241337833498756
WAIT #7: nam='direct path write' ela= 12 file number=4 first dba=355936 block cnt=1 obj#=71067 tim=1241337833499153
WAIT #7: nam='db file sequential read' ela= 1095 file#=4 block#=399 blocks=1 obj#=71067 tim=1241337833501366

What is even more interesting is that the SQL for "PARSING IN CURSOR #7" is not in the trace file!
The question is: where is the time going, or is the parser of the 10046 trace file just not putting the detail in? How do I fix this, without speculating, if I do not know where the problem is?
I thought of doing a strace on the process. Where else can I look for my 100 seconds?
Please point me in a direction where I can look for my 100.825 seconds, as this is a test case for a production system that is losing the same amount of time, but with a lot more SQL around its DBMS_LOB.WRITEAPPEND.

Edited by: user5174849 on 2009/05/16 11:17 PM

user5174849 wrote:
After using 10046 to identify the sql that is causing the slowness in a program “ less commits cause my program to go slower” i realised that i am missing something ,
There was a lot of time missing in the tkprof file , and no sql or wait event allocate the missing time , so i put the following test case together in an attempt to understand where the time is going .
Version of test database : 11.1.0.6.0
What is even more interesting is that the SQL for "PARSING IN CURSOR #7" is not in the trace file!
The question is: where is the time going, or is the parser of the 10046 trace file just not putting the detail in? How do I fix this, without speculating, if I do not know where the problem is?
I thought of doing a strace on the process. Where else can I look for my 100 sec?
Please point me in a direction where I can look for my 100,825 seconds, as this is a test case for a production system that is losing the same amount of time, but with a lot more SQL around its dbms_lob.writeappend.
I guess that the separate cursor that is opened for the LOB operation is where the time is spent, and unfortunately this part is not very well exposed via the usual interfaces (V$SQL, 10046 trace file etc.).
You might want to read this post where Kerry identifies the offending SQL via V$OPEN_CURSOR: http://kerryosborne.oracle-guy.com/2009/04/hidden-sql-why-cant-i-find-my-sql-text/
The waits of this cursor #7 are quite likely relevant, since they probably show you what the LOB operation is waiting for.
The LOB is created with the default NOCACHE attribute therefore it's read and written using direct path operations.
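If that direct path I/O turns out to be the bottleneck, one thing worth trying is switching the LOB segment to CACHE so dbms_lob.writeappend goes through the buffer cache. A sketch (mytable/blob_col are illustrative names, not from the test case):

```sql
-- Check the current caching attribute of the LOB segment
SELECT table_name, column_name, cache
FROM   user_lobs
WHERE  table_name = 'MYTABLE';

-- Switch the LOB to CACHE so reads and writes are buffered
-- instead of using direct path I/O
ALTER TABLE mytable MODIFY LOB (blob_col) (CACHE);
```

For a temporary LOB, passing TRUE as the second argument of dbms_lob.createtemporary requests the cached variant.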
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Error while creating AW using DBMS_LOB with XML..
Hi All,
I am trying to create an AW using the DBMS_LOB package with XML.
While creating the AW, I am facing the following error; the code is also below:
declare
xml_awcreate_clob clob;
xml_awcreate_st varchar2(4000);
begin
DBMS_LOB.CREATETEMPORARY(xml_awcreate_clob,TRUE);
dbms_lob.open(xml_awcreate_clob, DBMS_LOB.LOB_READWRITE);
dbms_lob.writeappend(xml_awcreate_clob, 48, '<?xml version = ''1.0'' encoding = ''UTF-8'' ?>');
dbms_lob.writeappend(xml_awcreate_clob, 43, '');
dbms_lob.writeappend(xml_awcreate_clob, 63, '<AWXML version = ''1.0'' timestamp = ''Mon Feb 11 13:29:11 2002'' >');
dbms_lob.writeappend(xml_awcreate_clob, 15, '<AWXML.content>');
dbms_lob.writeappend(xml_awcreate_clob, 25, ' <Create Id="Action41">');
dbms_lob.writeappend(xml_awcreate_clob, 19, ' <ActiveObject >');
dbms_lob.writeappend(xml_awcreate_clob, 163, ' <AW Name="NEW_XML_AW" LongName="NEW_XML_AW" ShortName="NEW_XML_AW" PluralName="NEW_XML_AW" Id="NEW_XML.AW"/>');
dbms_lob.writeappend(xml_awcreate_clob, 19, ' </ActiveObject>');
dbms_lob.writeappend(xml_awcreate_clob, 11, ' </Create>');
dbms_lob.writeappend(xml_awcreate_clob, 16, '</AWXML.content>');
dbms_lob.writeappend(xml_awcreate_clob, 8, '</AWXML>');
dbms_lob.close(xml_awcreate_clob);
xml_awcreate_st := sys.interactionExecute(xml_awcreate_clob);
end;
ORA-21560: argument 2 is null, invalid, or out of range
ORA-06512: at "SYS.DBMS_LOB", line 833
ORA-06512: at line 12
Any idea or thought on this would be appreciable.
Thanks in advance.
Anwar
Did you change any of the text in the LOB write statements?
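If the text was edited, the hard-coded amounts no longer match the buffers. One way to avoid hand-counting entirely is to derive the amount with LENGTH(); a minimal sketch (the helper wa is my own, not the poster's code):

```sql
declare
  c clob;
  procedure wa(p_text varchar2) is
  begin
    -- the amount comes from the buffer itself, so the two can never disagree;
    -- empty strings are skipped because a NULL/zero amount raises ORA-21560
    if p_text is not null then
      dbms_lob.writeappend(c, length(p_text), p_text);
    end if;
  end;
begin
  dbms_lob.createtemporary(c, true);
  wa('<?xml version = ''1.0'' encoding = ''UTF-8'' ?>');
  wa('<AWXML.content>');
  dbms_output.put_line('Length: ' || dbms_lob.getlength(c));
end;
/
```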
I believe you get this error if you increase the number of characters without increasing the first argument, which represents the number of characters to write. -
Using dbms_lob append to insert text, how do you insert a new line in between?
DBMS_LOB.APPEND (P_TEXT,'* Operator Phone,');
---- In between, I need to insert a new line. I am using DBMS_LOB.APPEND (P_TEXT, CHR(10)); is there any better method?
DBMS_LOB.APPEND (P_TEXT,'* Operator Email Address,');
Sorry if the question was not clear ---
Let's say in the following example every writeappend needs to start on a new line followed by text. How do we do that?
Do we add another writeappend(cvar, 22, chr(10)); in between?
dbms_lob.writeappend(cvar, 19, '<root><book><title>');
dbms_lob.writeappend(cvar, length(r.title), r.title);
dbms_lob.writeappend(cvar, 14, '</title><desc>');
dbms_lob.writeappend(cvar, length(r.description), r.description);
dbms_lob.writeappend(cvar, 27, '</desc></book><author_name>');
dbms_lob.writeappend(cvar, length(r.author_name), r.author_name);
dbms_lob.writeappend(cvar, 21, '</author_name></root>');
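Appending CHR(10) between the pieces does work, but the amount for a lone CHR(10) would be 1, not 22, since the amount must not exceed the buffer length. A sketch of folding the newline into each call instead (the helper put_line is illustrative):

```sql
declare
  cvar clob;
  procedure put_line(p_text varchar2) is
  begin
    -- append the text plus a trailing newline in one call;
    -- the amount accounts for the extra CHR(10)
    dbms_lob.writeappend(cvar, length(p_text) + 1, p_text || chr(10));
  end;
begin
  dbms_lob.createtemporary(cvar, true);
  put_line('<root><book><title>');
  put_line('My Title</title>');
end;
/
```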
Edited by: user521218 on May 7, 2009 12:34 PM -
Hi all,
I am using dbms_lob.erase to delete part of the data in my CLOB.
My question is: does dbms_lob.erase also free up the space allocated by the previous content of my CLOB?
If not, is there some way to do it? After .erase I want to populate the CLOB with a different set of data, and the amount of data is bigger than the existing size of the CLOB.
Thanks for all your help
Galia,
I hope this example illustrates your question completely:
SQL> declare
2 clb clob;
3 orig varchar2(10) := '1234567890';
4 adds varchar2(3) := '456';
5 amnt number := 3;
6 begin
7
8 dbms_lob.createtemporary(clb,true);
9 dbms_lob.write(clb,10,1,orig);
10 dbms_output.put_line('Original content: ' || clb);
11 dbms_lob.erase(clb,amnt,4);
12 dbms_output.put_line('After-erase content: ' || clb);
13 dbms_lob.writeappend(clb,3,adds);
14 dbms_output.put_line('After-writeappend content: ' || clb);
15 dbms_lob.write(clb,3,4,adds);
16 dbms_output.put_line('After-write content: ' || clb);
17 amnt := dbms_lob.getlength(clb);
18 dbms_lob.erase(clb,amnt);
19 dbms_output.put_line('After-complete erase content: ' || clb);
20 dbms_lob.write(clb,20,1,orig || orig);
21 dbms_output.put_line('New content: ' || clb);
22 end;
23 /
Original content: 1234567890
After-erase content: 123   7890
After-writeappend content: 123   7890456
After-write content: 1234567890456
After-complete erase content:
New content: 12345678901234567890
 
PL/SQL procedure successfully completed.
Rgds. -
Dbms_lob and line size problem
Hi,
below is a simple function which returns a CLOB.
When I call this function from sqlplus and spool the result in a
file via the SPOOL command I get a line break after 80 characters.
When I replace in the function the CLOB type with a VARCHAR2 type
the whole TESTSTRING is printed on ONE line.
I do specify 'SET LINESIZE 200' in both cases and the length of
TESTSTRING is smaller than 200 characters.
How can I increase the line size for the CLOB case ?
CREATE OR REPLACE FUNCTION GetSimple
RETURN CLOB
IS
RESULT CLOB;
TESTSTRING CONSTANT VARCHAR2(4000) := '123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_AFTER_LINEBREAK' ;
BEGIN
-- create CLOB for RETURN
dbms_lob.createtemporary(RESULT, TRUE);
dbms_lob.writeappend(RESULT, length(TESTSTRING), TESTSTRING);
RETURN RESULT;
END GetSimple;
/
problem solved!
I had to add:
SET LONGCHUNKSIZE 4096
The default is 80. -
Dbms_lob.copy vs. variable assignment
Cleanup of lob in java
Scenario:
A Java mid-tier application creates CLOB variables to pass to a database PL/SQL stored procedure: one CLOB input argument, one output argument.
The input CLOB variable is set up with dbms_lob.createtemporary(a, true, dbms_lob.call);
CLOB data taken from the front end is written in with dbms_lob.writeappend.
The mid-tier app does NOT do a dbms_lob.createtemporary for the second, output variable.
Database stored procedure called to process data.
java call sp1(a in clob, b in out clob);
plsql sp on database...
procedure sp1(a in clob,b out clob)
is
lcl_clob clob;
begin
dbms_lob.createtemporary(lcl_clob,true,dbms_lob.call);
-- the procedure processes the input CLOB data and, when complete, fills lcl_clob with dbms_lob.copy
-- when sp1 completes, it does an assignment to return the data to Java
b := lcl_clob; --this does deep copy
dbms_lob.freetemporary(lcl_clob);--free local temp clob
end;
The data gets back to the Java mid-tier app OK; the Java app does Read and Close on the CLOB.
The concern is about Java's b CLOB and memory: how is the deep copy behind Java's b CLOB cleaned up?
Any memory/garbage issue on the Java side with this? The mid-tier server is WebLogic on Win 2K.
Any ideas or issues? DB environment: Oracle 8.1.7.3 on HP-UX 11
The documentation says:
There is also an interface to let you group temporary LOBs together into a logical bucket. The duration represents this logical store for temporary LOBs. Each temporary LOB can have separate storage characteristics, such as CACHE/ NOCACHE. There is a default store for every session into which temporary LOBs are placed if you don't specify a specific duration.
important part is below
Additionally, you are able to perform a free operation on durations, which causes all contents in a duration to be freed.
also
There is a default store for every session
I suppose it's probably related to the duration of the LOB you have used. -
I have a CLOB database field and I want to replace every occurrence of a character with another one (for example, all 'a's to 'b's). There is no replace function in the dbms_lob package, and I want to know if there is a practical way of doing this.
Thanks a lot.
It's not ideal, but as you're looking for chunks of text this is probably the only way:
(1) Create a temporary LOB.
(2) Use DBMS_LOB.INSTR() to find "dangerous HTML", however you define that.
(3) Use DBMS_LOB.SUBSTR() and DBMS_LOB.WRITEAPPEND() to copy the safe HTML to the temporary lob.
(4) Overwrite the CLOB with the temporary LOB.
There may be smarter ways of doing this in 10g with regular expressions, but without knowing more about your scenario I wouldn't like to comment further.
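For a straight character swap like 'a' to 'b', I believe plain SQL REPLACE accepts CLOB arguments from 10g onwards, which would avoid the temporary-LOB loop entirely. A sketch (t and c are illustrative names):

```sql
-- Replace every 'a' with 'b' directly in the CLOB column;
-- the WHERE clause skips rows with nothing to change
UPDATE t
SET    c = REPLACE(c, 'a', 'b')
WHERE  dbms_lob.instr(c, 'a') > 0;
```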
Cheers, APC -
USER_DATASTORE: Could DBMS_LOB be leaking memory?
We've got a database which is indexed with USER_DATASTORE and the following procedure. We're getting an occasional situation where this message appears in the alert log:
ORA-04030: out of process memory when trying to allocate 8716 bytes (pga heap,Get krha asynch mem)
CKPT: terminating instance due to error 4030
Doing block recovery for file 2 block 81536
ORA-04030: out of process memory when trying to allocate bytes (,) (several times)
Instance terminated by CKPT, pid = 4900
Starting up ORACLE RDBMS Version: 10.1.0.2.0.
Could it be something in the procedure that is leaking or holding onto CLOBs longer than it should?
Thanks!
CREATE OR REPLACE PROCEDURE KM_TESTING_SMALLER_INDEX_PROC
(RID IN ROWID, TLOB IN OUT CLOB) IS
BEGIN
DBMS_LOB.OPEN(TLOB,DBMS_LOB.LOB_READWRITE);
DBMS_LOB.TRIM(TLOB,0);
FOR C1 IN (SELECT REPLACE(BEGINBATES,'<',' ') BEGINBATES,
REPLACE(ENDBATES,'<',' ') ENDBATES,
REPLACE(DOCTITLE,'<',' ') DOCTITLE,
REPLACE(OCR2,'<',' ') OCR2 FROM KM_TESTING_SMALLER WHERE ROWID=RID) LOOP
DBMS_LOB.WRITEAPPEND(TLOB,20,'<BEGINBATESnxtfield>');
IF (C1.BEGINBATES IS NOT NULL) THEN
IF NOT(LENGTH(C1.BEGINBATES)=0) THEN
DBMS_LOB.WRITEAPPEND(TLOB,LENGTH(C1.BEGINBATES),C1.BEGINBATES);
END IF;
END IF;
DBMS_LOB.WRITEAPPEND(TLOB,39,'</BEGINBATESnxtfield><ENDBATESnxtfield>');
DBMS_LOB.WRITEAPPEND(TLOB,36,'</DOCTYPEnxtfield><DOCTITLEnxtfield>');
IF (C1.DOCTITLE IS NOT NULL) THEN
IF NOT(LENGTH(C1.DOCTITLE)=0) THEN
DBMS_LOB.APPEND(TLOB,C1.DOCTITLE);
END IF;
END IF;
DBMS_LOB.WRITEAPPEND(TLOB,35,'</DOCTITLEnxtfield><AUTHORnxtfield>');
DBMS_LOB.WRITEAPPEND(TLOB,27,'</CCnxtfield><OCR2nxtfield>');
IF (C1.OCR2 IS NOT NULL) THEN
IF NOT(LENGTH(C1.OCR2)=0) THEN
DBMS_LOB.APPEND(TLOB,C1.OCR2);
END IF;
END IF;
DBMS_LOB.WRITEAPPEND(TLOB,15,'</OCR2nxtfield>');
END LOOP;
DBMS_LOB.CLOSE(TLOB);
END;
Here is the documentation for ORA-04030:
ORA-04030 out of process memory when trying to allocate string bytes (string,string)
Cause: Operating system process private memory has been exhausted.
Action: See the database administrator or operating system administrator to increase process memory quota. There may be a bug in the application that causes excessive allocations of process memory space.
Generally speaking it means that the box hosting the database instance is running out of memory. You have some options:
- decrease memory usage on the server (reduce PGA ? reduce SGA ? check any non-Oracle process memory usage)
- add more RAM to your server.
You check global statistics about PGA memory with:
select * from v$pgastat;
You can check PGA statistics for each server process with:
select * from v$process;
Note that the memory here is the PGA memory (not the SGA). PGA memory is private memory allocated by each Oracle server process (background process or dedicated server process). -
DBMS_lob Parameters/Arguments
Hello all!
Can anyone point me to the correct documentation that contains a list of parameters/arguments for DBMS_lob calls? I have the developers guide that deals with LOBs, but haven't found a list of arguments yet. In particular, I am looking for the following:
DBMS_lob.WRITE
DBMS_lob.WRITEAPPEND
DBMS_lob.APPEND
DBMS_lob.FILEOPEN
DBMS_lob.FILEGETNAME
DBMS_lob.SUBSTR
Any help would be greatly appreciated. Thank you!
Steve
http://download-east.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_lob2.htm#998404