Reading Large XML Files

Hi All,
I am trying to read an XML file that is around 6.4 GB and contains around 1,000 million records.
I need to traverse all the elements and extract the values that match my criteria.
For example, the XML looks like this:
<ss>
<s>
  <m>00:11:22:33:44:55</m>
  <o>1244548353223</o>
  <f>1244548354223</f>
</s>
<s>
  <m>00:11:22:33:44:56</m>
  <o>1244548353223</o>
  <f>1244548354223</f>
</s>
</ss>
I need to get the <s> nodes whose <m> value is 00:11:22:33:44:55. There may be around 1024 occurrences per file.
I tried the XPath and XQuery APIs for Java, and nothing helped me read such a huge file.
The actual task is to read 2 to 30 huge files concurrently.
Can anyone help me with this?
Thanks in advance.
Kousik Rajendran

I tried XOM. It worked fine for 102.4 million records stored in an XML file of about 5.5 GB.
Check out the [XOM Home Site|http://www.xom.nu/]
Or download the zip file with all necessary libraries and sample files: [XOM Complete Zip|http://www.cafeconleche.org/XOM/xom-1.2.1.zip]
Try out the sample that uses the streaming method.
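
For reference, here is a minimal sketch of the streaming style that XOM supports: subclass NodeFactory so each <s> element is inspected and then discarded instead of being kept in the tree, which keeps memory flat no matter how many records the file holds. The element names follow the example above; the file name, the match handling, and the class name are assumptions of mine, not the code from the XOM sample.

import java.io.File;
import nu.xom.Builder;
import nu.xom.Element;
import nu.xom.NodeFactory;
import nu.xom.Nodes;

// Sketch: stream a huge file with XOM, discarding each <s> element after inspecting it.
public class MacFilter extends NodeFactory {
    private static final String TARGET_MAC = "00:11:22:33:44:55"; // value we are searching for

    @Override
    public Nodes finishMakingElement(Element element) {
        if ("s".equals(element.getLocalName())) {
            Element m = element.getFirstChildElement("m");
            if (m != null && TARGET_MAC.equals(m.getValue())) {
                System.out.println(element.toXML()); // handle the match (here: just print it)
            }
            return new Nodes(); // discard the element so the tree never grows
        }
        return super.finishMakingElement(element);
    }

    public static void main(String[] args) throws Exception {
        new Builder(new MacFilter()).build(new File("huge.xml")); // hypothetical file name
    }
}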

Similar Messages

  • Reading large XML file using a file event generator and a JPD process

    I am using a FileEventGenerator and a JPD subscription process to read a large XML file. The large XML file basically contains repeated XML elements. My understanding is that the file subscription method reads the whole file into memory, which causes a lot of problems for large files, even around 1 MB. Is there a way to read the file in size-limited chunks from a large file, or is there any other alternative? I would like to process the file in a loop, iteration by iteration.

    Hitejain,
    Here are a couple of pointers you could try. One is that the file event generator has a pass-by-reference (filename) option, which you could use to do the following inside your process:
    1) Read file name from the reference
    2) Move the file to a processed directory (so it doesn't get picked up again). Note: I don't know how the embedded archive methods of the file event generator play with pass by reference.
    3) Open a stream to the file.
    4) Use a SAX or combined SAX/DOM approach to parse your XML while managing the memory usage inside your process (a sketch of the SAX approach follows this reply).
    There is another possibility that might fit your needs, related to the RawData object that BEA provides. If I understand it correctly, it provides wrapping functionality around a stream object, but depending on your parsing method it might just postpone the problem.
    Hope this helps
    Chris Falling
    Stormforge Software
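
    As an illustration of the SAX approach mentioned in step 4, here is a minimal sketch of a handler that processes the repeated elements one at a time without keeping them in memory. The element name "record" and the file name are assumptions; substitute the names from the actual file.

    import java.io.File;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Sketch: stream the file with SAX; only the current element is ever held in memory.
    public class RecordCounter extends DefaultHandler {
        private int count;

        @Override
        public void startElement(String uri, String localName, String qName, Attributes attrs) {
            if ("record".equals(qName)) { // hypothetical repeated element name
                count++;                  // process the element here instead of storing it
            }
        }

        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            RecordCounter handler = new RecordCounter();
            parser.parse(new File("large.xml"), handler); // hypothetical file name
            System.out.println("records: " + handler.count);
        }
    }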

  • I want to load a large raw XML file in Firefox and parse it with the DOM. For large XML files Firefox is very slow and sometimes crashes. Is there any option to increase the DOM handling memory in Firefox?

    Actually I am using an offline form to load a very large XML file, and I am using Firefox to load that form. It takes a long time to load, and sometimes the browser crashes while parsing this XML file into my form through the DOM. Is there any option to increase the DOM handler size in Firefox?


  • Query in a large xml file

    Hello,
    I'm trying to work with very large xml files which are created from csv files. These files may be very large - up to 1 GB! Until now I have managed to do several validations on these big xml files, and the only thing that works for me is a SAX parser; DOM is out of the question because it fills up the memory.
    My next task is to do queries on these files, something like:
    select field1,field2 from file.xml
    where field3 = 'A'
    and (fileld4>'B' or field1='C')
    order by field2.
    I searched the net to find out how to run queries on xml files (since I have never done queries on xml before), but I couldn't find which "query language" is best for large files. If I use XPath (XSLT), won't that cause memory problems, since XSLT represents the file as an in-memory object?
    My idea is to parse the file with SAX, check every row to see if it fits the where condition, and then write it immediately to a result xml file. But evaluating the where condition can be very complicated without using some tool. Also, the order by clause is another problematic issue.
    Does anyone have any more intelligent ideas about how I can do this? Please help! :(
    The xml file looks like this:
    <doc>
    <row id ="1">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    <row id ="M">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    </doc>

    Hi all,
    Thank you very much for your replies.
    First, Saxon didn't work because it uses an in-memory parser, and that is what I was trying to avoid.
    A different database is also out of the question, because the customer insists on XML, and also there are some files that can never be converted to a database table, because with some transformations they are eventually changed and are no longer completely like the standard csv format.
    I think that maybe http://exist.sourceforge.net is the right solution for me, but I will probably try it in the next version of my project.
    For now I have managed to build the project with only SAXParser and a lot of back-end programming, and it works ok, although it was very hard to build and will be harder to maintain, so I will have a look at the eXist project.
    Thanks everyone for the help.
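
    For anyone trying the SAX filter-and-write idea described above, here is a minimal sketch against the <row>/<column> structure shown in the question. The where condition (column3 = 'A') and the file names are stand-ins for the real ones, and the order by part would still have to be handled afterwards (for example by sorting the much smaller result file).

    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;
    import java.util.HashMap;
    import java.util.Map;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.SAXException;
    import org.xml.sax.helpers.DefaultHandler;

    // Sketch: stream <row> elements with SAX, keep only the current row in memory,
    // and write rows matching a simple where condition to a result file.
    public class RowFilter extends DefaultHandler {
        private final Writer out;
        private final Map<String, String> row = new HashMap<String, String>();
        private final StringBuilder text = new StringBuilder();
        private String currentColumn;

        public RowFilter(Writer out) { this.out = out; }

        @Override
        public void startElement(String uri, String local, String qName, Attributes attrs) {
            if ("row".equals(qName)) {
                row.clear();
            } else if ("column".equals(qName)) {
                currentColumn = attrs.getValue("name");
                text.setLength(0);
            }
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            text.append(ch, start, length);
        }

        @Override
        public void endElement(String uri, String local, String qName) throws SAXException {
            try {
                if ("column".equals(qName)) {
                    row.put(currentColumn, text.toString());
                } else if ("row".equals(qName) && "A".equals(row.get("column3"))) { // stand-in predicate
                    out.write(row + "\n"); // simplistic output; write real XML and escape as needed
                }
            } catch (IOException e) {
                throw new SAXException(e);
            }
        }

        public static void main(String[] args) throws Exception {
            Writer out = new FileWriter("result.xml"); // hypothetical output file
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new File("file.xml"), new RowFilter(out)); // hypothetical input file
            out.close();
        }
    }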

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to go to nodes at any depth and access the child elements or the text of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and hence consumes a lot of space. Neither does SAX work for me, as every time there is a click on any of the nodes, my SAX parser parses the whole document looking for that node, which is time consuming.
    Could anyone please tell me the best technology and the best parser to use for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.
    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it, then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.

  • Efficient searching in a large XML file for specific elements

    Hi
    How can I search a large XML file for a specific element efficiently (fast and memory-savvy)? I have a large XML file (approximately 32 MB with about 140,000 main elements) and I have to search through it for specific elements. What stable and production-ready open source tools are available for such tasks? I think PDOM is a solution, but I can't find any well-known and stable implementations on the web.
    Thanks in advance,
    Behrang Saeedzadeh.

    The problem with DOM parsers is that the whole document needs to be parsed!
    So with large documents this uses up a lot of memory.
    I suggest you look at something like a pull parser (Piccolo or MPX1), which is a fast parser that is program driven rather than event driven like SAX. This has the advantage of not needing to remember your state between events (a sketch using the standard StAX pull API follows this reply).
    I have used Piccolo to extract events from large XML-based log files.
    Carl.
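
    Piccolo and MPX1 have their own APIs, but the same pull style is available through the standard StAX API (javax.xml.stream, included since Java 6 and available as a library for older JDKs). A minimal sketch, assuming the element being searched for is called "item":

    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    // Sketch: pull events on demand instead of receiving SAX callbacks.
    public class PullSearch {
        public static void main(String[] args) throws Exception {
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream("large.xml")); // hypothetical file
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "item".equals(reader.getLocalName())) {            // hypothetical element
                    System.out.println(reader.getElementText()); // works for text-only elements
                }
            }
            reader.close();
        }
    }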

  • How to parse large xml file

    I need to parse a large xml file which contains the following tags. The size of the file is 10 MB-50 MB or more.
    <departments>
    <department>
    <a_depart id="124">
    <b_depart id="Bss_253">
    <bss_depart id="253">
    <attributes>
    <name_one>abc</name_one>
    </attributes>
    </bss_depart>
    </b_depart>
    </a_depart>
    </department>
    <department>
    <a_depart id="124">
    <b_depart id="Bss_254">
    <mss_depart id="253">
              <attributes>
              <name_one>abc</name_one>
              <name_two>xyz</name_two>
              </attributes>
         </mss_depart>
         </b_depart>
    </a_depart>
    </department>
    <department>
    <a_depart id="124">
    <b_depart id="Bss_254">
    <mss_depart id="255">
              <attributes>
              <name_one>abc</name_one>
              <name_two>xyz</name_two>
              </attributes>
         </mss_depart>
         </b_depart>
    </a_depart>
    </department>
    <department>
    <a_depart id="125">
    <b_depart id="Bss_254">
    <mss_depart id="253">
              <attributes>
              <name_one>abc</name_one>
              <name_two>xyz</name_two>
              </attributes>
         </mss_depart>
         </b_depart>
    </a_depart>
    </department>
    I want to get the information from that xml file, like mss_depart id=233, by building an xpath dynamically for every id and loading it using dom4j, which is very, very slow.
    Is there any other solution that reads the data using only a SAX parser?
    I want to execute the xpath on the data in the following way:
    //a_depart/@id ------> all the ids of the a_depart tags; if it returns 3 values, say 123, 124, 125,
    after that I want to execute
    //a_depart[@id='123']/b_depart/@id, and so on, to retrieve the values at all the levels ...
         I am executing the following xpath for every unique id at all levels:
         List l = doc.selectNodes(xPathForID);
         List l1 = doc.selectNodes(xPathForAttributes+attributes.get(j)+"/text()");
    But it is very slow and takes a lot of time.
    Is there any other way to solve this problem? If there is, please mail me; it is urgent.
    I am using jdk1.4 and jdk1.5.
    Is there any support in jdk1.5 for executing xpath directly with a sax parser, without using dom4j?
    Thanks in advance....

    I doubt you will find a preexisting solution to your problem.
    SAX is usually recommended for processing big files (where "big" is undefined). It works on big files by avoiding the messy problem of storing the data -- that is left as an exercise for you.
    DOM (and its variants) works by building a Document object as the head of a tree of objects for the entire contents. With DOM you can then use XPath, because there is something to search that is already in memory. To use XPath you seem to have two choices: build a DOM-ish tree, or find an XPath processor (I'm not sure one exists) that can process the XML file directly -- but that will be slow, since you are looking for "all" occurrences of an attribute, and that means you have to read the entire file each time.
    It might be worth exploring a hybrid approach -- use SAX to get some information, and build your own objects to store the data, maybe with a HashMap as the main index (a sketch of this follows below). But that will keep you from using XPath, since you do not have the data structures it expects.
    A third alternative would be to look at JAXB. It builds Java code from a Schema of your data, and then when you import the data it creates the necessary objects and fills in the values. But I don't think XPath will work there either.
    Dave Patterson
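
    A minimal sketch of that hybrid approach, using the structure from the question: one SAX pass keeps a stack of the enclosing a_depart ids and fills a HashMap that maps each a_depart id to the b_depart ids underneath it. What gets indexed is an assumption; adapt it to whatever lookups are actually needed, and note that it uses Java 5 generics.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Stack;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Sketch: one SAX pass builds an in-memory index (a_depart id -> b_depart ids)
    // that can then be queried repeatedly without re-reading the file.
    public class DepartIndexer extends DefaultHandler {
        private final Stack<String> openADeparts = new Stack<String>();
        final Map<String, List<String>> index = new HashMap<String, List<String>>();

        @Override
        public void startElement(String uri, String local, String qName, Attributes attrs) {
            String id = attrs.getValue("id");
            if ("a_depart".equals(qName)) {
                openADeparts.push(id);
                if (!index.containsKey(id)) {
                    index.put(id, new ArrayList<String>());
                }
            } else if ("b_depart".equals(qName) && !openADeparts.isEmpty()) {
                index.get(openADeparts.peek()).add(id);
            }
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            if ("a_depart".equals(qName)) {
                openADeparts.pop();
            }
        }

        public static void main(String[] args) throws Exception {
            DepartIndexer indexer = new DepartIndexer();
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new File("departments.xml"), indexer); // hypothetical file name
            System.out.println(indexer.index); // all a_depart ids with their b_depart ids
        }
    }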

  • Loading, processing and transforming Large XML Files

    Hi all,
    I realize this may have been asked before, but searching the history of the forum isn't easy, considering it's not always a safe bet which words to use in the search.
    Here's the situation. We're trying to load and manipulate large XML files of up to 100MB in size.
    The difference between our case and other related issues posted here is that the XML isn't big because it has a largely branched tree of data, but rather because it includes large base64-encoded files in the xml itself. The size of the 'clean' xml is relatively small (a few hundred bytes to some kilobytes).
    We had to deal with transferring the xml to our application using a webservice, loading the xml to memory in order to read values from it, and now we also need to transform the xml to a different format.
    We solved the webservice issue using XFire.
    We solved the loading of the xml using JAXB. Nevertheless, we use string manipulation to 'cut' the xml before we load it into memory - otherwise we get OutOfMemory errors. We don't need to load the whole XML into memory, but I really hate this solution because of the 'unorthodox' manipulation of the xml (i.e. the cutting of it).
    Now we need to deal with the transformation of those XMLs, but obviously we can't cut them down this time. We have little experience writing XSL, and no experience in using Java to apply the XSL files. We're looking for suggestions on how to do it most efficiently.
    The biggest problem we encounter is the OutOfMemory errors.
    So I ask several questions in one post:
    1. Is there a better way to transfer the large files using a webservice?
    2. Is there a better way to load and manipulate the large XML files?
    3. What's the best way for us to transform those large XMLs?
    4. Are we missing something in terms of memory management? Is there a better way to control it? We really are struggling there.
    I assume this is an important piece of information: We currently use JDK 1.4.2, and cannot upgrade to 1.5.
    Thanks for the help.

    I think there may be a way to do it.
    First, for low RAM needs, nothing beats SAX as the first processor of the data. With SAX, you control the memory use, since SAX only processes one "chunk" of the file at a time. You supply a class with methods named startElement, endElement, and characters. It calls the startElement method when it finds a new element. It calls the characters method when it wants to pass you some or all of the text between the start and end tags. It calls endElement to signal that passing characters is over, and to let you get ready for the next element. So, if your characters method did nothing with the base-64 data, you could see the XML go by with low memory needs.
    Since we know that in your case the characters will carry large chunks of data, you can expect many calls as SAX feeds your code. The only workable solution is to use a StringBuffer to accumulate the data. When endElement is called, you can decode the base-64 data and keep it somewhere. The most efficient way to do this is to have one StringBuffer for the class handling the SAX calls. Instantiate it with a size big enough to hold the largest of your binary data streams. In startElement you can set the length of the StringBuffer to zero and reuse it over and over (a sketch of this pattern follows this reply).
    You did not say what you wanted to do with the XML data once you have processed it. SAX is nice from a memory perspective, but it makes you do all the work of storing the data. Unless you build a structured set of classes "on the fly" nothing is kept. There is a way to pass the output of one SAX pass into a DOM processor (without the binary data, in this case) and then you would wind up with a nice tree object with the rest of your data and a group of binary data objects. I've never done the SAX/DOM combo, but it is called a SAXFilter, and you should be able to google an example.
    So, the bottom line is that it is very possible to do what you want, but it will take some careful design on your part.
    Dave Patterson
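
    A minimal sketch of the accumulate-and-decode pattern described above, assuming the base64 payload lives in an element called <content>; the decoding step is left as a comment because the decoder to use depends on the JDK and libraries available (the original poster is on JDK 1.4.2).

    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Sketch: reuse one buffer to accumulate character chunks, decode on endElement.
    public class Base64Extractor extends DefaultHandler {
        private final StringBuffer buffer = new StringBuffer(16 * 1024 * 1024); // sized for the largest payload
        private boolean inContent;

        @Override
        public void startElement(String uri, String local, String qName, Attributes attrs) {
            if ("content".equals(qName)) { // hypothetical element holding the base64 data
                inContent = true;
                buffer.setLength(0);       // reuse the same buffer for every element
            }
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            if (inContent) {
                buffer.append(ch, start, length); // SAX may deliver the text in many small chunks
            }
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            if ("content".equals(qName)) {
                inContent = false;
                // decode buffer.toString() with whatever base-64 decoder is available
                // (e.g. commons-codec on older JDKs) and store or stream out the bytes
            }
        }
    }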

  • Does the parser work with large XML files?

    Is there a restriction on the XML file size that can be loaded into the parser?
    I am getting an out-of-memory exception reading in a large XML file (10 MB) using the following code:
    DOMParser parser = new DOMParser();
    URL url = createURL(argv[0]);
    parser.setErrorStream(System.err);
    parser.setValidationMode(true);
    parser.showWarnings(true);
    parser.parse(url);
    Win NT 4.0 Server
    Sun JDK 1.2.2
    ===================================
    Error output
    ===================================
    Exception in thread "main" java.lang.OutOfMemoryError
    at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
    at java.util.Hashtable.<init>(Unknown Source)
    at oracle.xml.parser.v2.DTDDecl.<init>(DTDDecl.java, Compiled Code)
    at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
    at oracle.xml.parser.v2.ValidatingParser.checkDefaultAttributes(ValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.NonValidatingParser.parseAttributes(NonValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.ValidatingParser.parseRootElement(ValidatingParser.java:97)
    at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:199)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:146)
    at TestLF.main(TestLF.java:40)
    null

    We have a number of test files of that size and it works without a problem. However, using the DOMParser does require significantly more memory than your document size.
    What is the memory configuration of the JVM that you are running with? Have you tried increasing it? Are you using our latest version, 2.0.2.6?
    Oracle XML Team

  • Problem with parsing large xml files

    Hello All,
    I am parsing a large xml file of 20 MB and I use DocumentBuilder.parse(File). This method works for small xml files of less than 20 MB, but the application hangs and doesn't throw any error message when parsing 20 MB xml files. Please let me know what I have to do at this point.
    Thanks & Regards,
    Kumar.

    Well... I can't agree.
    If you have such structure:
    <task>
      <task/>
      <task>
         <task>
            <task/>
         </task>
         <task/>
      </task>
    </task>
    ...you may always keep a stack of tasks (push at startElement, pop at endElement), so at every leaf of the tree you will have all the parents of that leaf.
    for such structure:
    <task id="1" parent="0"/>
    <task id="2" parent="1"/>
    <task id="3" parent="1"/>
    <task id="4" parent="2"/>
    <task id="5" parent="3"/>
    ...it will be much faster to go through the document with SAX several times to build the tree of tasks than to load the whole document into memory...
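
    A minimal sketch of the stack idea for the nested form, assuming the elements are literally named <task>: at any point the stack holds the chain of ancestors of the element currently being parsed.

    import java.util.Stack;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Sketch: push on startElement, pop on endElement, so every leaf knows its parents.
    public class TaskHandler extends DefaultHandler {
        private final Stack<String> openTasks = new Stack<String>();

        @Override
        public void startElement(String uri, String local, String qName, Attributes attrs) {
            if ("task".equals(qName)) {
                openTasks.push(attrs.getValue("id")); // id may be null in the purely nested form
                System.out.println("ancestors of current task: " + openTasks);
            }
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            if ("task".equals(qName)) {
                openTasks.pop();
            }
        }
    }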

  • Problem with parsing large XML files chunked over HTTP

    I'm trying to isolate a bug that was introduced when upgrading the JRE in use from Java 7u51 to 7u71 without changing any code. The problem appears to be very similar to: Bug ID: JDK-8027359 XML parser returns incorrect parsing results.
    Further investigation showed that it was also introduced in the same version (7u71) where that fix was applied. Unlike that bug, though, my XML is marked as version 1.0. It also appears to happen only with large XML files, on the order of 10MB or so.
    The closest I've been able to narrow it down to is that the code is using JAXB to unmarshal a stream that the debugger tells me is an org.apache.http.conn.EofSensorInputStream / org.apache.http.impl.io.ChunkedInputStream. The exception I get is not consistent, but it typically appears to come from chunks being overwritten or shuffled, resulting in letters appearing in attributes that are actually numbers, or, as in the following, an attribute "testAttribute" partially overwritten by the end of a timestamp that was in a different section of the XML.
    javax.xml.bind.UnmarshalException
    - with linked exception:
    [javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,98748]
    Message: Attribute name "testAttribu00Z" associated with an element type "testElement" must be followed by the ' = ' character.]
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.handleStreamException(UnmarshallerImpl.java:421)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:357)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
    Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,98748]
    Message: Attribute name "testAttribu00Z" associated with an element type "testElement" must be followed by the ' = ' character.
      at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:598)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:181)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
      ... 6 more
    Here's some code that seems to reproduce it if you can connect to an XML server that returns a large chunked XML file:
      SchemeRegistry registry = new SchemeRegistry();
      registry.register(new Scheme("http", 80, PlainSocketFactory.getSocketFactory()));
      HttpClient client = new DefaultHttpClient(new BasicClientConnectionManager(registry));
      String url = "http://someUrlReturningAlargeChunkedXML";
      HttpGet method = new HttpGet(url);
      HttpResponse response = client.execute(method);
      InputStream inputStream = response.getEntity().getContent();
      // factory and unmarshaller are created elsewhere in the real code, along these lines:
      XMLInputFactory factory = XMLInputFactory.newInstance();
      Unmarshaller unmarshaller = JAXBContext.newInstance(JaxBObjectOfResponse.class).createUnmarshaller();
      XMLStreamReader responseReader = factory.createXMLStreamReader(inputStream);
      JAXBElement<JaxBObjectOfResponse> wot = unmarshaller.unmarshal(responseReader, JaxBObjectOfResponse.class);
    If you connect to the same service using URL.openStream() there is no error. If I read the bytes directly and write them to a file, there is no error. The error only happens when I try to unmarshal it, and it's large, and I'm using Java 7u71 (or later). It can be consistently repeated with the JSP webapp that I'm using, but it didn't show the error when I used the same code with a Wikipedia dump XML file.
    How can I unmarshal in a different way to avoid this problem? Or how can I better isolate the bug so it can be reported to the appropriate bug tracker?

    Apparently, adding the Woodstox XML libraries avoids the bug (a snippet showing one way to select Woodstox follows at the end of this thread). Is there anyone who can reproduce this on another system? Were there any changes to the StAX implementation between u67 and u71 that may have introduced a bug like this?
    Edit: When setting the logging level to DEBUG, I once saw the overwritten buffer being logged as if that was what was received (as in the testAttribu00Z example above). I can't repeat that anymore, though, and very rarely it parses with no exception (though the data may still have been corrupted). Now the error seems to be consistently on one of the buffer boundaries, as in:
    17:08:09,705 DEBUG wire:63 - << "2000[\r][\n]"
    17:08:09,705 DEBUG wire:77 - << "trend>....OTHER XML...<trend hours=""
    17:08:09,705 DEBUG wire:77 - << "634.0972777777778" datetime="2013-05-21T00:43:48.350Z" t"
    17:08:09,705 DEBUG wire:63 - << "[\r][\n]"
    17:08:09,705 DEBUG wire:63 - << "2000[\r][\n]"
    17:08:09,705 DEBUG wire:77 - << "rend-mode="0">
    Exception in thread "main" java.lang.NumberFormatException: t34.0972777777778
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseDouble(DatatypeConverterImpl.java:213)
      at mypackage.Trend_JaxbXducedAccessor_hours.parse(TransducedAccessor_field_Double.java:48)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StructureLoader.startElement(StructureLoader.java:194)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:486)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:465)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.InterningXmlVisitor.startElement(InterningXmlVisitor.java:60)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleStartElement(StAXStreamConnector.java:231)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:165)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
    Or:
    17:19:12,563 DEBUG wire:63 - << "2000[\r][\n]"
    17:19:12,563 DEBUG wire:77 - << ...OTHER XML...<trend index="5"
    17:19:12,563 DEBUG wire:77 - << "" label="N"
    17:19:12,563 DEBUG wire:63 - << "[\r][\n]"
    Exception in thread "main" java.lang.NumberFormatException: Not a number: N
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseInt(DatatypeConverterImpl.java:106)
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseShort(DatatypeConverterImpl.java:118)

  • Problems with Large XML files

    I have tried increasing the memory pool using the -mx and -ms options. It doesn't work. I am using your latest XML parser for Java v2. Please let me know if there are specific options I should be using.
    Thanx,
    -Sameer

    You might try using a different JDK/JRE - either a 1.1.6+ or a 1.3 version, as 1.2 has the largest footprint in our experience. If this doesn't work, can you give us some details about your system configuration? Finally, you might try the SAX interface, as it does not need to load the entire DOM tree into memory.
    Oracle XML Team

  • Validating large xml files

    In our application we have a requirement to validate a large xml file (around 40 MB) against a simple schema file.
    I have tried all the options (SAX and DOM) mentioned in the article below on the Sun site -
    http://java.sun.com/developer/EJTechTips/2005/tt1025.html
    but with all of these approaches the validation process just hangs for hours.
    I am even running my test case for this validation with 1 GB of memory allocated, on a recent processor (Core 2 Duo), but it still just hangs and returns only after a very long time.
    If needed, I can provide a sample test case, schema file and xml file.
    Please help me find out whether it is a problem with the JAXP 1.3 Schema Validation Framework (SVF) handling large files, or whether there is any other efficient way of validating a large xml file.
    Regards
    Rahul

    You may want to split the XML into pieces of 10 MB each and parse them separately, or store the data in a database and read the XML using XQuery.
    I am sure there may be better ways to do this; let's see the experts' opinions here.
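
    For reference, the streaming form of JAXP 1.3 validation described in the Sun tech tip linked above looks like the sketch below; it reads the document as a stream, so memory use does not grow with the document size, and if even this form hangs the bottleneck is probably not memory. The file names are placeholders.

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    // Sketch: JAXP 1.3 streaming validation against a W3C XML Schema.
    public class StreamingValidation {
        public static void main(String[] args) throws Exception {
            SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new File("simple.xsd"));        // hypothetical schema file
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File("large.xml"))); // hypothetical document
            System.out.println("valid");
        }
    }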

  • Store large XML files

    Does anybody have experience with storing very large XML files (400 MB) as an XML type in Oracle?
    Is this feasible? Is it efficient (response time)? Can you recommend doing this?
    Or is it better (for performance) to parse the files and store the data in a db schema of our own?
    Thanks,
    -Bernhard

    Here is an Oracle ACE with experience storing large XML in the DB:
    [HOWTO: Load Really Big XML Files|http://www.liberidu.com/blog/?p=473]
    There are more examples on Marco's blog as well of different ways to load XML into the DB.
    Yes, it is feasible. Efficiency depends on the complexity of the XML (not so much the size) and how you are storing it in the DB (XMLType, XMLType associated with a schema, Object Relational, Hybrid, etc.).
    Mark Drake from the {forum:id=34} forum has seen good performance just using INSERT INTO ... VALUES ... (BFILENAME()) to load the XML as well. Some other suggestions are listed in the FAQ on that forum as well.
    Any future questions you have on this topic would probably be best answered by posting in that forum.

  • Is JAXB suitable for large XML files ?

    Hi,
    I have a very large XML file (~700 MB) (schema available). I need to unmarshal this into java objects and carry out some business validation rules on it. These business rules may involve validating data from content objects that correspond to different sections of this large XML file.
    I am uncertain whether JAXB will help me here (I have just started with it). Does JAXB build the entire content tree for the XML document during Unmarshaller.unmarshal? Is there any way of asking it to build content objects on demand, as opposed to building the whole content tree immediately?
    All help/suggestions appreciated.

    Forgot to add:
    after carrying out validation, the data is put into some RDBMS tables.
    One approach would be to convert the XML files into SQL*Loader-compatible flat files (using a tool), load these flat files into staging tables, perform the business validations on the staging table data, and then finally move the data into the main tables. All the validation logic could be either in stored procedures or in java code.
    The above is very long-winded. It would be great if JAXB could handle very large XML files (without loading the whole XML file into memory) so that the business validations could be done in java, without any intermediate format conversion.
    I hope the above is somewhat clear.
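
    JAXB does not have to build the whole content tree: a common pattern is to drive the unmarshaller from a StAX reader and unmarshal one repeated element at a time, so only that fragment is in memory while the business rules run. A minimal sketch, assuming a repeated element <order> mapped to a (normally schema-generated) class Order; the names are placeholders.

    import java.io.FileInputStream;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Unmarshaller;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    // Sketch: unmarshal one repeated element at a time instead of the whole document.
    public class PartialUnmarshal {

        // Stand-in for a class generated from the schema by xjc.
        public static class Order {
        }

        public static void main(String[] args) throws Exception {
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream("huge.xml")); // hypothetical file
            Unmarshaller unmarshaller = JAXBContext.newInstance(Order.class).createUnmarshaller();

            int event = reader.next(); // move off START_DOCUMENT
            while (reader.hasNext()) {
                if (event == XMLStreamConstants.START_ELEMENT && "order".equals(reader.getLocalName())) {
                    Order order = unmarshaller.unmarshal(reader, Order.class).getValue();
                    validate(order);               // apply the business rules, then let the object go
                    event = reader.getEventType(); // unmarshal left the cursor just past the element
                } else {
                    event = reader.next();
                }
            }
            reader.close();
        }

        private static void validate(Order order) { /* business validation rules go here */ }
    }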
