Large XML and DBMS_XMLSTORE

Oracle DB = 10.2.0.3
I have XML messages (size range 300KB - 15MB) that are inserted into a table with an XMLType column throughout the day.
The volume will vary...say 10 transactions per minute. A single transaction can contain any number of entities/objects.
I have no need to preserve the XML - aside from assigning it a sequence number, date etc. and storing it for debugging (if needed).
Currently, I have a trigger defined on the table that calls various database packages
that shred the data via XMLTable (i.e. insert into tab1...select...xmltable..).
The problem is that I potentially need to insert into 50+ tables out of 200+ tables per transaction. So, I'm passing through the single instance document 200+ times.
The number of tables that are needed to represent the XML will continue to grow.
A 300KB XML message takes approx 10 seconds to shred. Obviously, I'm concerned about the larger messages.
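For context, each of those per-table shred steps in the trigger currently looks roughly like the sketch below (table, element and column names are only illustrative, not the real ones):
-- tab1, Entity1, Col1/Col2 are illustrative names only
INSERT INTO tab1 (col1, col2)
SELECT x.col1, x.col2
  FROM XMLTABLE ('/Transaction/Entity1'
                 PASSING :NEW.xml_document
                 COLUMNS
                    col1 VARCHAR2(30) PATH 'Col1'
                   ,col2 NUMBER       PATH 'Col2'
                ) x;
Repeating that pattern for 200+ tables is what forces the instance document to be traversed once per statement.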
     So, I was wondering if doing the following would be any better:
1. Have a XSLT transform the original message into "pseudo" Oracle canonical format.
At this point, all of the data in the message that needs to be shredded would be included, and all unwanted data from the original message would be filtered out, i.e. lowering the message size a little.
The message to be inserted would then look something like this:
<Transaction>
  <Table>
    <Name>TAB1</Name>
    <ROWSET>
      <ROW>
        <COL1>...</COL1>
        <COLn>...</COLn>
      </ROW>
    </ROWSET>
  </Table>
  <Table>
    <Name>TAB2</Name>
    <ROWSET>
      <ROW>
        <COL1>...</COL1>
        <COLn>...</COLn>
      </ROW>
    </ROWSET>
  </Table>
</Transaction>
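For illustration only, here is a minimal sketch of what step 1 could look like if the transform ran inside the database rather than on the specialized hardware mentioned further down. The load_canonical procedure, the xsl_store table and the my_seq sequence are assumptions for this sketch, not part of the current design; my_table is the XMLType staging table that the trigger below fires on.
CREATE OR REPLACE PROCEDURE load_canonical (p_source IN XMLTYPE)
IS
   v_xsl       XMLTYPE;
   v_canonical XMLTYPE;
BEGIN
   -- xsl_store is a hypothetical table holding the stylesheet that produces
   -- the <Transaction>/<Table>/<ROWSET> shape shown above
   SELECT stylesheet
     INTO v_xsl
     FROM xsl_store
    WHERE name = 'TO_CANONICAL';

   v_canonical := p_source.transform(v_xsl);

   -- my_seq is a hypothetical sequence for the audit columns mentioned above
   INSERT INTO my_table (seq_no, load_date, xml_document)
   VALUES (my_seq.NEXTVAL, SYSDATE, v_canonical);
END load_canonical;
/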
2. Next, define a trigger on the table like this:
CREATE OR REPLACE TRIGGER tr_ai_my_table
AFTER INSERT ON my_table
FOR EACH ROW
DECLARE
   -- One row per <Table> element: its target table name and its <ROWSET> fragment
   CURSOR table_cur
   IS
      SELECT a.table_nm
            ,a.xml_fragment
        FROM XMLTABLE ('/Transaction/Table[Name]'
                       PASSING :NEW.xml_document
                       COLUMNS
                          table_nm     VARCHAR2(30) PATH 'Name'
                         ,xml_fragment XMLTYPE      PATH 'ROWSET'
                      ) a;

   v_context DBMS_XMLSTORE.ctxtype;
   v_rows    NUMBER;

   -- XML Date format:     2007-08-02
   -- XML Datetime format: 2007-08-02T11:58:28.229-04:00
   -- ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD';
   -- ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD-HH24.MI.SS.FF';
   -- ALTER SESSION SET NLS_TIMESTAMP_TZ_FORMAT = 'YYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM';
BEGIN
   FOR table_rec IN table_cur
   LOOP
      -- One DBMS_XMLSTORE context per target table; the ROWSET fragment is already in canonical form
      v_context := DBMS_XMLSTORE.newcontext (table_rec.table_nm);
      v_rows    := DBMS_XMLSTORE.insertxml (v_context, table_rec.xml_fragment);
      DBMS_XMLSTORE.closecontext (v_context);
   END LOOP;
END;
/
     I know this will get me out of maintaining 200+ insert statements/XPaths. Also, I wouldn't be writing the XSLT myself, and the XSLT may end up running on some specialized hardware for optimized performance.
But do I gain a significant performance boost by using the Oracle canonical format
and DBMS_XMLSTORE, even though I'm still building the DOM tree to drive the cursor?

I saw a similar problem with the XMLBeans implementation in WLI 8.1.
We had memory issues with large XML messages, as they seemed to stay in memory even though the JPD had finished.
To solve this, we inserted code at the end of the process to assign the top-level XMLBeans document class to a tiny version of the same message type. This is valid when the document object is a top-level variable of the JPD.
This was done to stop XMLBeans caching the large message behind the scenes.

Similar Messages

  • Large XML and Query performance

    This problem came to me from a Developer and she claims XML query on XMLType field is very slow when using large XML and is there any alternates. Details are below:
    =============
    Query:
    select attributepool_id, attributepool_name, vintage , p.attributepool.extract('//attributepool/segmentationsystem/id/text()').getStringVal() ,
    p.attributepool.extract('//attributepool/datasource/id/text()').getStringVal()
    from saved_attributepools p
    where user_id = 'CLPROFILE2' and vintage = 'SPRING_2003' order by attributepool_name
    Table name:                saved_attributepools
    Space:                    ecommerce
    A Column Name:                attributepool
    attributepool Column Type:      XmlType
    One of the XML documents is about 4 MB: CORE LIFESTLY
    When we try to get the data for this row, the query takes much longer.
    conn ecommerce@ecom3 --> 82 seconds (table has 65 rows)
    conn ecommerce@oradev --> 34 seconds (table has only 4 rows)
    We think that:
    Oracle parses the entire XML document and loads it into an 'in-memory' DOM structure before evaluating the specified XPaths.
    Adding an index on the XMLType won't help, as we don't use a WHERE clause against the XMLType column in this case.
    We don't know whether 10g has a solution for this or not.
    Any suggestion will be greatly appreciated.
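    If the database is 10g or later, one hedged rework is to pull both values in a single pass with XMLTABLE instead of two separate extract() calls, so each document is only traversed once per row (column aliases and sizes below are guesses):
    -- segmentationsystem_id / datasource_id are illustrative column aliases
    SELECT p.attributepool_id, p.attributepool_name, p.vintage,
           x.segmentationsystem_id, x.datasource_id
      FROM saved_attributepools p,
           XMLTABLE ('/attributepool'
                     PASSING p.attributepool
                     COLUMNS
                        segmentationsystem_id VARCHAR2(100) PATH 'segmentationsystem/id'
                       ,datasource_id         VARCHAR2(100) PATH 'datasource/id'
                    ) x
     WHERE p.user_id = 'CLPROFILE2'
       AND p.vintage = 'SPRING_2003'
     ORDER BY p.attributepool_name;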


  • How to make use of XMLDB to process large XML and create spatial objects

    For someone new to XMLDB I find it hard to make sense of the enormous amount of information (and easy to get lost in it). So I come here to ask for directions.
    I have to build a procedure that fills a table of spatial objects based on XML input. Basically, the XML contains a large number of elements that describe the geometry type and contain the geometry's coordinates. The XML can get quite large (200-300 MB).
    Somehow I have to process each element and create an sdo_geometry object.
    Now let me ask a very broad question: What would be a good way to handle this?

    I have the same question. Any news on this?
    Wijnand

  • I want to load a large raw XML file in Firefox and parse it by DOM, but for large XML files Firefox is very slow and sometimes crashes. Is there any option to increase DOM handling memory in Firefox?

    Actually, I am using an offline form to load a very large XML file and using Firefox to load that form. But it is taking a long time to load, and sometimes the browser crashes while DOM-parsing this XML file into my form. Is there any option to increase the DOM handler size in Firefox?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for each and every click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.

  • What are the best tools for opening very large XML files and examining the tree and confirming they are valid?

    I am generating some very large XML files (600,000+ lines, 50MB+ characters). I finally have them all being valid XML and valid UTF-8.
    But the files are so large Safari and Chrome will often not open them. FireFox will though.
    Instead of these browsers, I was wondering if there are any other recommended apps for the Mac for opening and viewing the XML, getting an error message if it is not valid for some reason, and examining the XML tree?
    I opened the file in the default app for XML which is Xcode, but that is just like opening it in a plain text editor. You can't expand/collapse the XML tree like you can with a browser, and it doesn't report errors.
    Thanks,
    Doug

    Hi Tom,
    I had not seen that list. I'll look it over.
    I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
    For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
    By the by, some of those editors are quite pricey!
    doug

  • Reading large XML file using a file event generator and a JPD process

    I am using a FileEventGenerator and a JPD subscription process to read a large XML file. The large XML file basically contains repeated XML elements. My understanding is that the file subscription method reads the whole file into memory, which causes lots of problems for large file sizes like 1 MB. Is there a way to read the file in pieces, or to read chunks of data from a large file, or any other alternative? I would like to process the file in a loop, iteration by iteration.

    Hitejain,
    Here are a couple of pointers you could try. One is that the file event generator has a pass by reference (filename) functionality which you could use so that you could do the following inside of your process.
    1) Read file name from the reference
    2) Move the file to a processed directory so it doesn't get picked up again (note: I don't know how the embedded archive methods of the file event generator play with pass by reference).
    3) Open a stream to the file.
    4) Use a SAX or SAX - DOM combined approach to parse your XML while managing the memory usage inside of your process
    There is another possibility which might fit your needs, related to the RawData object that BEA provides. If I understand it correctly, it provides wrapping functionality around a stream object, but depending on your parsing methods it might just postpone the problem.
    Hope this helps
    Chris Falling
    Stormforge Software

  • Loading, processing and transforming Large XML Files

    Hi all,
    I realize this may have been asked before, but searching the history of the forum isn't easy, considering it's not always a safe bet which words to use on the search.
    Here's the situation. We're trying to load and manipulate large XML files of up to 100MB in size.
    The difference between our case and other related issues posted here is that the XML isn't big because it has a largely branched tree of data, but rather because it includes large base64-encoded files in the XML itself. The size of the 'clean' XML is relatively small (a few hundred bytes to some kilobytes).
    We had to deal with transferring the xml to our application using a webservice, loading the xml to memory in order to read values from it, and now we also need to transform the xml to a different format.
    We solved the webservice issue using XFire.
    We solved the loading of the xml using JAXB. Nevertheless, we use string manipulations to 'cut' the xml before we load it to memory - otherwise we get OutOfMemory errors. We don't need to load the whole XML to memory, but I really hate this solution because of the 'unorthodox' manipulation of the xml (i.e. the cutting of it).
    Now we need to deal with the transformation of those XMLs, but obviously we can't cut them down this time. We have little experience writing XSL, and no experience using Java to apply the XSL files. We're looking for suggestions on how to do this most efficiently.
    The biggest problem we encounter is the OutOfMemory errors.
    So I ask several questions in one post:
    1. Is there a better way to transfer the large files using a webservice?
    2. Is there a better way to load and manipulate the large XML files?
    3. What's the best way for us to transform those large XMLs?
    4. Are we missing something in terms of memory management? Is there a better way to control it? We really are struggling there.
    I assume this is an important piece of information: We currently use JDK 1.4.2, and cannot upgrade to 1.5.
    Thanks for the help.

    I think there may be a way to do it.
    First, for low RAM needs, nothing beats SAX as the first processor of the data. With SAX, you control the memory use, since SAX only processes one "chunk" of the file at a time. You supply a class with methods named startElement, endElement, and characters. It calls the startElement method when it finds a new element. It calls the characters method when it wants to pass you some or all of the text between the start and end tags. It calls endElement to signal that passing characters is over, and to let you get ready for the next element. So, if your characters method did nothing with the base-64 data, you could see the XML go by with low memory needs.
    Since we know in your case that the characters method will process large chunks of data, you can expect many calls as SAX calls your code. The only workable solution is to use a StringBuffer to accumulate the data. When endElement is called, you can decode the base-64 data and keep it somewhere. The most efficient way to do this is to have one StringBuffer for the class handling the SAX calls. Instantiate it with a big enough size to hold the largest of your binary data streams. In startElement, you can set the length of the StringBuffer to zero and reuse it over and over.
    You did not say what you wanted to do with the XML data once you have processed it. SAX is nice from a memory perspective, but it makes you do all the work of storing the data. Unless you build a structured set of classes "on the fly", nothing is kept. There is a way to pass the output of one SAX pass into a DOM processor (without the binary data, in this case), and then you would wind up with a nice tree object with the rest of your data and a group of binary data objects. I've never done the SAX/DOM combo, but it is called a SAXFilter, and you should be able to google an example.
    So, the bottom line is that it is very possible to do what you want, but it will take some careful design on your part.
    Dave Patterson

  • StAX and large XMLs

    Hi everyone,
    I want to use StAX for parsing an XML file that is about 20 MB large. Within this XML, a binary file is embedded which is located under a specific XML element.
    I thought that it would be a good idea to use StAX in that situation, because I want to make use of streaming technology. One goal was to avoid that I have the entire content of the embedded file in memory, but only single characters, which I can put into another stream.
    I am now facing the problem, that the StAX implementations (I tested the BEA implementation and the reference implementation) do not support the "getTextCharacters" method, which would be the most essential method for retrieving characters from a stream during a "CHARACTER" event.
    Currently, the only possibility to read out characters is calling the "getText" method, but in that case, I have the entire file in memory (really bad!).
    Does anyone have experience using StAX, especially with streaming large XML elements to a file?
    I see no benefit of a StAX implementation which does not allow calling this method:
    getTextCharacters(int sourceStart,char[] target,int targetStart, int length)
    In my opinion, that would be the actual heart of a real streaming API.
    Does anyone know a good StAX implementation that supports that operation?
    Best regards,
    Martin

    In general, controlling this isn't easy; you may try using writeEmptyElement(), but I don't believe it makes any guarantees about how closing tags are serialized.
    If large data sets are a big problem in your application, and readability isn't a primary concern, you may try using Fast Infoset,
    https://fi.dev.java.net
    which is now part of the platform in JDK 6. Using this format you can improve your application's performance and reduce bandwidth requirements by half, without having to rewrite your application. For example JAX-WS, the WS API, already supports this format. Moreover, it includes an HTTP-based content negotiation algorithm to make the use of the format completely transparent.

  • Parsing large xml file and display using swing

    Hi all,
    I want to read a large xml file and display graphically in swing as a tree structure.
    I implemented it and it works fine for files of about 5 MB after increasing the JVM heap size (-Xmx). If the file size is larger than 5 MB, it throws an out-of-memory error. I'm creating a custom data structure from the XML and I'm using SAX parsing.
    After displaying the datastructure, the user could do some operation on this, like search etc.
    Can any of you suggest a method to support larger files? What I'm looking for is to create the data structure in the file system rather than in memory.
    Any other tips for memory management would be greatly appreciated
    Thanks in Advance.
    Nisha

    Use a memory-mapped file?
    http://javaalmanac.com/egs/java.nio/CreateMemMap.html

  • Passing large XML (as CLOB) to stored proc - fails with ORA-29532

    I am getting the following error when I pass a large XML document. It works up to a certain size (not sure exactly what the limit is). Is there any size limitation when using XMLSAVE? Please assist.
    Oracle Version : Oracle 9.2.0.1
    Error:
    ORA-29532: Java call terminated by uncaught Java exception:
    oracle.xml.sql.OracleXMLSQLException: '>' Missing from end tag.
    Procedure:
    CREATE OR REPLACE PROCEDURE XML2TABLE
    (
       in_tx_XML   IN CLOB
      ,in_na_Table IN VARCHAR2
    )
    IS
       lv_XML_Context DBMS_XMLSAVE.ctxType;
       lv_Ct_Rows     NUMBER(10);
    BEGIN
       EXECUTE IMMEDIATE 'TRUNCATE TABLE '||in_na_Table;
       lv_XML_Context := DBMS_XMLSAVE.NewContext(in_na_Table);
       lv_Ct_Rows     := DBMS_XMLSAVE.InsertXML(lv_XML_Context, in_tx_XML);
       DBMS_XMLSAVE.CloseContext(lv_XML_Context);
    END XML2TABLE;
    /

    Technically, DBMS_XMLSAVE and its alter ego DBMS_XMLQUERY are not considered part of XML DB; they are legacy Java implementations.
    In 9.2.x these routines were replaced by DBMS_XMLSTORE and DBMS_XMLGEN, which are written in C, should be much faster in most cases, and address a number of limitations inherent in the older implementations. DBMS_XMLSTORE and DBMS_XMLGEN are part of XML DB, and the minimum supported release for XML DB is 9.2.0.3.0.
    DBMS_XMLSAVE issues are addressed in the TECH/XML forum
    http://forums.oracle.com/forums/category.jspa?categoryID=51
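    As a hedged illustration of that advice, the procedure above rewritten against DBMS_XMLSTORE would look roughly like this (same parameter names; insertXML also has a CLOB overload, so the body barely changes):
    CREATE OR REPLACE PROCEDURE XML2TABLE
    (
       in_tx_XML   IN CLOB
      ,in_na_Table IN VARCHAR2
    )
    IS
       lv_XML_Context DBMS_XMLSTORE.ctxType;
       lv_Ct_Rows     NUMBER(10);
    BEGIN
       EXECUTE IMMEDIATE 'TRUNCATE TABLE '||in_na_Table;
       lv_XML_Context := DBMS_XMLSTORE.newContext(in_na_Table);
       lv_Ct_Rows     := DBMS_XMLSTORE.insertXML(lv_XML_Context, in_tx_XML);
       DBMS_XMLSTORE.closeContext(lv_XML_Context);
    END XML2TABLE;
    /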

  • Import Large XML File to Table

    I have a large (819MB) XML file I'm trying to import into a table in the format:
    <ROW_SET>
    <ROW>
    <column_name>value</column_name>
    </ROW>
    <ROW>
    <column_name>value</column_name>
    </ROW>
    </ROW_SET>
    I've tried importing it with xmlsequence(...).extract(...) and ran into the "number of nodes exceeds maximum" error.
    I've tried importing it with XMLTable(... passing XMLTYPE(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8'))) and I gave up after it ran for 15+ hours ( COLLECTION ITERATOR PICKLER FETCH issue ).
    I've tried importing it with:
    insCtx := DBMS_XMLStore.newContext('schemaname.tablename');
    DBMS_XMLStore.clearUpdateColumnList(insCtx);
    DBMS_XMLStore.setUpdateColumn(insCtx,'column1name');
    DBMS_XMLStore.setUpdateColumn(insCtx,'columnNname');
    ROWS := DBMS_XMLStore.insertXML(insCtx, XMLTYPE(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8')));
    and ran into ORA-04030: out of process memory when trying to allocate 1032 bytes (qmxlu subheap,qmemNextBuf:alloc).
    All I need to do is read the XML file and move the data into a matching table in a reasonable time. Once I have the data in the database, I no longer need the XML file.
    What would be the best way to import large XML files?
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    "CORE     11.2.0.1.0     Production"
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    This (rough) approach should work for you.
    CREATE TABLE HOLDS_XML
       (xml_col XMLTYPE)
       XMLTYPE xml_col STORE AS SECUREFILE BINARY XML;

    INSERT INTO HOLDS_XML
    VALUES (xmltype(bfilename('DIR_OBJ','large_819mb_file.xml'), nls_charset_id('UTF8')));
    -- Should be using AL32UTF8 for the DB character set with XML

    SELECT ...
      FROM HOLDS_XML HX,
           XMLTable(...
              PASSING HX.xml_col ...);

    How it differs from your approach:
    By using the HOLDS_XML table with SECUREFILE BINARY XML storage (which became the default in 11.2.0.2), we are providing a place for Oracle to store a parsed version of the XML. This allows the XML to be stored on disk instead of in memory. Oracle can then access the needed pieces of XML from disk by streaming them, instead of holding the whole XML in memory and parsing it repeatedly to find the information needed. That is what COLLECTION ITERATOR PICKLER FETCH means: a lot of memory work. You can search on that term to learn more about it if needed.
    The XMLTable approach then simply reads this XML from disk and should be able to parse the XML with no issue. You have the option of adding indexes to the XML, but since you are simply reading it all one time and tossing it, there is (most likely) no advantage to indexes.
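    A hedged end-to-end sketch of that approach against the ROW_SET/ROW layout quoted in the question; target_tab and its column are placeholders for the real matching table:
    -- target_tab and column_name are placeholders
    INSERT INTO target_tab (column_name)
    SELECT x.column_name
      FROM HOLDS_XML hx,
           XMLTable ('/ROW_SET/ROW'
                     PASSING hx.xml_col
                     COLUMNS column_name VARCHAR2(100) PATH 'column_name') x;
    COMMIT;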

  • How to extract data from xml and insert into Oracle table

    Hi,
    I have a large XML file which will have hundreds of the following Transaction tags, each carrying column names and their values as attributes.
    There is a table in one of the schemas with columns "actualCostRate", "billRate", ... etc.
    I need to extract the values of these columns and insert them into the table.
    <Transaction actualCostRate="0" billRate="0" chargeable="1" clientID="NikuUK" chargeCode="LCOCD1" externalID="L-RESCODE_UK1-PROJ_UK_CNT_GBP-37289-8" importStatus="N" projectID="TESTPROJ" resourceID="admin" transactionDate="2002-02-12" transactionType="L" units="11" taskID="5017601" inputTypeCode="SALES" groupId="123" voucherNumber="ABCVDD" transactionClass="ABCD"/>
    <Transaction actualCostRate="0" billRate="0" chargeable="1" clientID="NikuEU" chargeCode="LCOCD1" externalID="L-RESCODE_US1-PROJ_EU_STD2-37291-4" importStatus="N" projectID="TESTPROJ" resourceID="admin" transactionDate="2002-02-04" transactionType="L" units="4" taskID="5017601" inputTypeCode="SALES" groupId="124" voucherNumber="EEE222" transactionClass="DEFG"/>

    Re: Insert from XML to relational table
    http://www.google.ae/search?hl=ar&q=extract+data+from+xml+and+insert+into+Oracle+table+&btnG=%D8%A8%D8%AD%D8%AB+Google&meta=
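    A hedged sketch for this case, reading the attributes with XMLTable. The transactions target table, the XML_DIR directory object and the <Transactions> root element wrapping the Transaction tags are assumptions for illustration, and only a few of the columns are shown:
    -- transactions, XML_DIR and the <Transactions> root element are assumed names
    INSERT INTO transactions (actual_cost_rate, bill_rate, project_id, transaction_date)
    SELECT x.actualCostRate, x.billRate, x.projectID,
           TO_DATE(x.transactionDate, 'YYYY-MM-DD')
      FROM XMLTable ('/Transactions/Transaction'
                     PASSING xmltype(bfilename('XML_DIR','transactions.xml'),
                                     nls_charset_id('AL32UTF8'))
                     COLUMNS
                        actualCostRate  NUMBER       PATH '@actualCostRate'
                       ,billRate        NUMBER       PATH '@billRate'
                       ,projectID       VARCHAR2(30) PATH '@projectID'
                       ,transactionDate VARCHAR2(10) PATH '@transactionDate'
                    ) x;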

  • Is there a way to import large XML files into HANA efficiently are their any data services provided to do this?

    1. Is there a way to import large XML files into HANA efficiently?
    2. Will it process it node by node or the entire file at a time?
    3. Are there any data services provided to do this?
    This is for a project use case; I also have a requirement to process bulk XML files. Please suggest how to accomplish this task.

    Hi Patrick,
         I am addressing a similar issue: getting data from huge XMLs into HANA.
    Using OData services, can we handle huge data (i.e. create the schema / load into HANA) on the fly?
    In my scenario,
    I get a folder of different complex XML files which are to be loaded into the HANA database.
    Then I have to transform and cleanse the data.
    Can I use OData services to transform and cleanse the data?
    If so, how can I create OData services dynamically?
    Any help is highly appreciated.
    Thank you.
    Regards,
    Alekhya

  • How to set SAXParser at command-line interface to create a large XML file

    Hi,
    I am trying to create a large XML file (more than 50 MB) by selecting from an Oracle database, but it failed because of an "out of memory" error. According to the "Oracle XML Developer's Guide", we should use SAXParser for parsing a large XML file, but there is no example showing how to use SAXParser from the command line.
    Following is what I use to get xml files. It works only when the file is small.
    java OracleXML getXML -DateFormat -withDTD -rowsetTag PO_HDR -conn
    "jdbc:oracle:oci8:@server_name" -user "ID/password" "select * from table_name"
    When I set SAXParser at the way below,
    java oracle.xml.parser.v2.SAXParser OracleXML getXML -DateFormat -withDTD -rowsetTag PO_HDR -conn
    "jdbc:oracle:oci8:@server_name" -user "ID/password" "select * from table_name"
    it failed with the error message: "In class oracle.xml.parser.v2.SAXParser: void main(String argv[]) is not defined"
    Does anyone know how to solve the problem? I'd appreciate your help very much.
    Yi

    Here are my ideas:
    Register the XML schema.
    Using xmldom, generate the desired XML output and return it as an XMLType.
    Then you can use something like this to check:
    declare
       xmldoc xmltype;
    begin
       -- populate xmldoc from your xmldom function
       -- validate against the XML schema
       if xmldoc.isSchemaValid(schema_url, root_element) = 1 then
          null;  -- valid against the schema
       else
          null;  -- invalid
       end if;
    end;
    /
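    On the original question of producing the XML from a query without the Java command-line utility, a hedged server-side alternative is DBMS_XMLGEN; the query, rowset tag and the follow-on file handling below are placeholders:
    DECLARE
       v_ctx    DBMS_XMLGEN.ctxHandle;
       v_result CLOB;
    BEGIN
       -- table_name and PO_HDR are taken from the question; adjust as needed
       v_ctx := DBMS_XMLGEN.newContext('SELECT * FROM table_name');
       DBMS_XMLGEN.setRowsetTag(v_ctx, 'PO_HDR');
       v_result := DBMS_XMLGEN.getXML(v_ctx);   -- result is returned as a CLOB
       DBMS_XMLGEN.closeContext(v_ctx);
       -- v_result could then be written out with UTL_FILE or fetched by the client
    END;
    /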

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file(15 MB) and facing a clear performance problem. A Simple XML Validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
       tempCLOB,
       targetFile,
       DBMS_LOB.getLength(targetFile),
       dest_offset,
       src_offset,
       nls_charset_id(CONSTANT_CHARSET),
       lang_context,
       conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 mins on a HP-UX (4GB ram, 2 CPU) machine (Oracle version : 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the track too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 minutes to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
       tempCLOB,
       targetFile,
       DBMS_LOB.getLength(targetFile),
       dest_offset,
       src_offset,
       nls_charset_id(CONSTANT_CHARSET),
       lang_context,
       conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is:
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse")
    STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and the type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML schema.
    Thanks,
    Vineet
