Problem with parsing large XML files

Hello All,
I am parsing a large XML file of 20MB with DocumentBuilder.parse(File). This method works for smaller XML files (less than 20MB), but the application hangs, without throwing any error message, when parsing the 20MB file. Please let me know what I should do at this point.
Thanks & Regards,
Kumar.

Well... I can't agree.
If you have a nested structure like this:
<task>
  <task/>
  <task>
     <task>
        <task/>
     </task>
     <task/>
  </task>
</task>
...you can always keep a stack of tasks (push at startElement, pop at endElement), so at every leaf of the tree you have all of that leaf's ancestors at hand.
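A minimal sketch of that stack-keeping handler, assuming the element is named task as above (in practice you would push whatever per-task data you need, not just the name):

import java.util.ArrayDeque;
import java.util.Deque;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class TaskStackHandler extends DefaultHandler {
    // Chain of enclosing <task> elements, innermost on top.
    private final Deque<String> ancestors = new ArrayDeque<String>();

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attrs) {
        if ("task".equals(qName)) {
            // At this point 'ancestors' holds every parent of this task.
            ancestors.push(qName);
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if ("task".equals(qName)) {
            ancestors.pop();
        }
    }
}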
For a flat structure like this:
<task id="1" parent="0"/>
<task id="2" parent="1"/>
<task id="3" parent="1"/>
<task id="4" parent="2"/>
<task id="5" parent="3"/>
...it will be much faster to go through the document with SAX several times to build the tree of tasks than to load the whole document into memory...
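A sketch of one such pass; a single pass suffices when, as above, a parent element appears before its children (otherwise, repeat the pass and link whatever can be linked each time):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class FlatTaskHandler extends DefaultHandler {
    static class Task {
        final String id;
        final List<Task> children = new ArrayList<Task>();
        Task(String id) { this.id = id; }
    }

    // All tasks seen so far, keyed by id.
    private final Map<String, Task> byId = new HashMap<String, Task>();

    @Override
    public void startElement(String uri, String localName,
                             String qName, Attributes attrs) {
        if (!"task".equals(qName)) return;
        Task task = new Task(attrs.getValue("id"));
        byId.put(task.id, task);
        Task parent = byId.get(attrs.getValue("parent"));
        if (parent != null) {
            parent.children.add(task); // no task has id "0", so task 1 stays a root
        }
    }
}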

Similar Messages

  • Problem with parsing large XML files chunked over HTTP

    I'm trying to isolate a bug that was introduced when upgrading the JRE in use from Java 7u51 to 7u71, without changing any code. The problem appears to be very similar to Bug ID: JDK-8027359, "XML parser returns incorrect parsing results".
    Further investigation showed that it was also introduced in the same version (7u71) where that fix was applied. Unlike that bug, though, my XML is marked as version 1.0. It also appears to affect only large XML files, on the order of 10MB or so.
    The closest I've been able to narrow it down to: the code is using JAXB to unmarshal a stream that the debugger tells me is an org.apache.http.conn.EofSensorInputStream / org.apache.http.impl.io.ChunkedInputStream. The exception I get is not consistent, but it typically appears to come from chunks being overwritten or shuffled, resulting in letters appearing in attributes that should be numbers, or, as in the following, an attribute "testAttribute" partially overwritten by the end of a timestamp from a different section of the XML.
    javax.xml.bind.UnmarshalException
    - with linked exception:
    [javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,98748]
    Message: Attribute name "testAttribu00Z" associated with an element type "testElement" must be followed by the ' = ' character.]
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.handleStreamException(UnmarshallerImpl.java:421)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:357)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
    Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,98748]
    Message: Attribute name "testAttribu00Z" associated with an element type "testElement" must be followed by the ' = ' character.
      at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:598)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:181)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
      ... 6 more
    Here's some code that seems to reproduce it, if you can connect to an XML server that returns a large chunked XML file (factory is a javax.xml.stream.XMLInputFactory and unmarshaller a JAXB Unmarshaller; their declarations are filled in below):
      SchemeRegistry registry = new SchemeRegistry();
      registry.register(
                    new Scheme("http", 80, PlainSocketFactory.getSocketFactory()));
      HttpClient client = new DefaultHttpClient(new BasicClientConnectionManager(registry));
      String url = "http://someUrlReturningAlargeChunkedXML";
      HttpGet method = new HttpGet(url);
      HttpResponse response = client.execute(method);
      InputStream inputStream = response.getEntity().getContent();
      XMLInputFactory factory = XMLInputFactory.newInstance();
      Unmarshaller unmarshaller =
              JAXBContext.newInstance(JaxBObjectOfResponse.class).createUnmarshaller();
      XMLStreamReader responseReader = factory.createXMLStreamReader(inputStream);
      JAXBElement<JaxBObjectOfResponse> wot = unmarshaller.unmarshal(responseReader, JaxBObjectOfResponse.class);
    If you connect to the same service using URL.openStream(), there is no error. If I read the bytes directly and write them to a file, there is no error. The error only happens when I try to unmarshal, the response is large, and I'm using Java 7u71 (or later). It can be consistently repeated with the JSP webapp that I'm using, but didn't show up when I used the same code with a Wikipedia dump XML file.
    How can I unmarshal in a different way to avoid this problem? Or, how can I better isolate the bug so it can be posted to the appropriate bug system?
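    One way to unmarshal differently, given that reading the raw bytes shows no corruption, is to buffer the entire response in memory first and unmarshal from the copy, taking the chunked stream out of the StAX reader's hands. A sketch reusing the names from the snippet above (plus java.io.ByteArrayInputStream/ByteArrayOutputStream):
      // Drain the chunked HTTP stream completely before any XML parsing starts.
      ByteArrayOutputStream buffered = new ByteArrayOutputStream();
      byte[] chunk = new byte[8192];
      int n;
      while ((n = inputStream.read(chunk)) != -1) {
          buffered.write(chunk, 0, n);
      }
      JAXBElement<JaxBObjectOfResponse> wot = unmarshaller.unmarshal(
              factory.createXMLStreamReader(new ByteArrayInputStream(buffered.toByteArray())),
              JaxBObjectOfResponse.class);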

    Apparently, adding the Woodstox XML libraries avoids the bug. Is there anyone who can reproduce this on another system? Were there any changes to the StAX implementation between u67 and u71 that may have introduced a bug like this?
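    For reference, the standard way to switch StAX implementations without touching the unmarshalling code is the javax.xml.stream.XMLInputFactory lookup property; a sketch, assuming the Woodstox jars are on the classpath:
      // com.ctc.wstx.stax.WstxInputFactory is Woodstox's XMLInputFactory implementation.
      System.setProperty("javax.xml.stream.XMLInputFactory",
                         "com.ctc.wstx.stax.WstxInputFactory");
      XMLInputFactory factory = XMLInputFactory.newInstance();
      XMLStreamReader responseReader = factory.createXMLStreamReader(inputStream);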
    Edit: When setting the logging level to DEBUG, I once saw the overwritten buffer being logged as if that were what was received (as in the testAttribu00Z example above). I can't repeat that anymore, though, and very rarely it parses with no exception (though the result may still have been corrupted). Now the error seems to fall consistently on one of the buffer boundaries, as in:
    17:08:09,705 DEBUG wire:63 - << "2000[\r][\n]"
    17:08:09,705 DEBUG wire:77 - << "trend>....OTHER XML...<trend hours=""
    17:08:09,705 DEBUG wire:77 - << "634.0972777777778" datetime="2013-05-21T00:43:48.350Z" t"
    17:08:09,705 DEBUG wire:63 - << "[\r][\n]"
    17:08:09,705 DEBUG wire:63 - << "2000[\r][\n]"
    17:08:09,705 DEBUG wire:77 - << "rend-mode="0">
    Exception in thread "main" java.lang.NumberFormatException: t34.0972777777778
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseDouble(DatatypeConverterImpl.java:213)
      at mypackage.Trend_JaxbXducedAccessor_hours.parse(TransducedAccessor_field_Double.java:48)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StructureLoader.startElement(StructureLoader.java:194)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:486)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:465)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.InterningXmlVisitor.startElement(InterningXmlVisitor.java:60)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleStartElement(StAXStreamConnector.java:231)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:165)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
    Or:
    17:19:12,563 DEBUG wire:63 - << "2000[\r][\n]"
    17:19:12,563 DEBUG wire:77 - << ...OTHER XML...<trend index="5"
    17:19:12,563 DEBUG wire:77 - << "" label="N"
    17:19:12,563 DEBUG wire:63 - << "[\r][\n]"
    Exception in thread "main" java.lang.NumberFormatException: Not a number: N
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseInt(DatatypeConverterImpl.java:106)
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseShort(DatatypeConverterImpl.java:118)

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file(15 MB) and facing a clear performance problem. A Simple XML Validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
      tempCLOB,
      targetFile,
      DBMS_LOB.getLength(targetFile),
      dest_offset,
      src_offset,
      nls_charset_id(CONSTANT_CHARSET),
      lang_context,
      conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 minutes on an HP-UX machine (4GB RAM, 2 CPUs; Oracle version 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the subject too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 minutes to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobFromFile(
      tempCLOB,
      targetFile,
      DBMS_LOB.getLength(targetFile),
      dest_offset,
      src_offset,
      nls_charset_id(CONSTANT_CHARSET),
      lang_context,
      conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is :
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse")
    STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML Schema.
    Thanks,
    Vineet

  • How to parse a large XML file

    I need to parse a large XML file which contains the following tags. The size of the file is 10MB-50MB or more.
    <departments>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_253">
            <bss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
              </attributes>
            </bss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_254">
            <mss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="124">
          <b_depart id="Bss_254">
            <mss_depart id="255">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
      <department>
        <a_depart id="125">
          <b_depart id="Bss_254">
            <mss_depart id="253">
              <attributes>
                <name_one>abc</name_one>
                <name_two>xyz</name_two>
              </attributes>
            </mss_depart>
          </b_depart>
        </a_depart>
      </department>
    </departments>
    I want to get the information from that XML file, like mss_depart id=233, building an XPath dynamically for every id and loading it using dom4j, which is very, very slow.
    Is there any other solution, to read the data using a SAX parser only?
    I want to execute the XPaths in the following way:
    //a_depart/@id ------> all the ids of a_depart tags; if it returns 3 values, say 123,124,125,
    after that I want to execute
    //a_depart[@id='123']/b_depart/@id, and so on... to retrieve the values at all the levels...
    I am executing the following XPaths for every unique id at all levels:
         List l = doc.selectNodes(xPathForID);
         List l1 = doc.selectNodes(xPathForAttributes + attributes.get(j) + "/text()");
    But it is very slow and takes a lot of time.
    Is there any other way to solve this problem? If so, please mail me; it is urgent.
    I am using jdk1.4 and jdk1.5.
    Is there any support in jdk1.5 for executing XPath with a SAX parser directly, without using dom4j?
    Thanks in advance....

    I doubt you will find a preexisting solution to your problem.
    SAX is usually recommended for processing big files (where "big" is undefined). It works on big files by avoiding the messy problem of storing the data -- that is left as an exercise for you.
    DOM (and its variants) works by building a Document object as the head of a tree of objects holding the entire contents. With DOM you can then use XPath, because there is something to search that is already in memory. To use XPath you seem to have two choices: build a DOM-ish tree, or find an XPath processor (I'm not sure one exists) that can process the XML file directly. The latter will be slow, though, since you are looking for "all" occurrences of an attribute, and that means reading the entire file each time.
    It might be worth exploring a hybrid approach -- use SAX to get some information, and build your own objects to store the data, maybe with a HashMap as the main index. But that will keep you from using XPath, since you will not have the data structures it expects.
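    A minimal sketch of that hybrid approach against the departments sample above, indexing each a_depart id to the ids of its b_depart children while streaming (element and attribute names come from the posted XML):
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class DepartmentIndexHandler extends DefaultHandler {
        // Main index: a_depart id -> ids of its b_depart children.
        private final Map<String, List<String>> index = new HashMap<String, List<String>>();
        private String currentA;

        @Override
        public void startElement(String uri, String localName,
                                 String qName, Attributes attrs) {
            if ("a_depart".equals(qName)) {
                currentA = attrs.getValue("id");
                if (!index.containsKey(currentA)) {
                    index.put(currentA, new ArrayList<String>());
                }
            } else if ("b_depart".equals(qName) && currentA != null) {
                index.get(currentA).add(attrs.getValue("id"));
            }
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            if ("a_depart".equals(qName)) currentA = null;
        }
    }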
    A third alternative would be to look at JAXB. It builds Java code from a schema of your data, and then when you import the data it creates the necessary objects and fills in the values. But I don't think XPath will work there either.
    Dave Patterson

  • Can someone help me with a problem of parsing an XML file?

    Hello,
    I'm having some problems parsing an XML file. I get a SAXNotSupportedException when setting a property value.
    Here is the piece of code where I have the problem:
    SAXParserFactory spf = SAXParserFactory.newInstance();
    spf.setNamespaceAware(true);
    SAXParser saxParser = spf.newSAXParser();
    XMLReader xmlReader = saxParser.getXMLReader();
    DefaultHandler defHandler = new DefaultHandler();
    xmlReader.setProperty("http://xml.org/sax/properties/lexical-handler", defHandler);
    and the log is:
    Problem with the parser org.xml.sax.SAXNotSupportedException: PAR012 For propertyID "http://xml.org/sax/properties/lexical-handler", the value "org.xml.sax.helpers.DefaultHandler@4ff4f74a" cannot be cast to LexicalHandler.
    http://xml.org/sax/properties/lexical-handler org.xml.sax.helpers.DefaultHandler@4ff4f74a LexicalHandler
    I've been working on this problem but I can't find the error.
    Does anyone have an idea of what to do to solve it?
    Thanx in advance,
    M@G
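    For what it's worth, the exception message points at the cause: the value of the lexical-handler property must implement org.xml.sax.ext.LexicalHandler, and DefaultHandler does not. A sketch of a likely fix using org.xml.sax.ext.DefaultHandler2, which does implement it:
    SAXParserFactory spf = SAXParserFactory.newInstance();
    spf.setNamespaceAware(true);
    SAXParser saxParser = spf.newSAXParser();
    XMLReader xmlReader = saxParser.getXMLReader();
    // DefaultHandler2 implements LexicalHandler, so the cast succeeds.
    DefaultHandler2 defHandler = new DefaultHandler2();
    xmlReader.setProperty("http://xml.org/sax/properties/lexical-handler", defHandler);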

    Before deciding which XML technology to use, you should see whether your application fits one of the categories below.
    Use SAX if:
    1. The XML file is rather large (30 or 40+ MB).
    2. You don't need the XML document in memory; you will parse the document and store the data in your own objects.
    Use DOM or JDOM if:
    1. The XML file is relatively small (less than 30 MB), or you can increase the runtime memory for larger XML files.
    2. You will need to walk up and down the XML document tree several times.
    3. Your application is in Java and is not going to be rewritten in C++, etc. (use JDOM).
    NOTE:
    JDOM is rather easier to use (for a Java developer), but it's not a W3C-standardized XML parser.
    Personally, I like JDOM for traversing the DOM.

  • Parsing Large XML File

    Is there a restriction on the XML file size that can be loaded into the parser?
    I am getting an out-of-memory exception reading in a large XML file (10MB) using the commands
    DOMParser parser = new DOMParser();
    URL url = createURL(argv[0]);
    parser.setErrorStream(System.err);
    parser.setValidationMode(true);
    parser.showWarnings(true);
    parser.parse(url);
    Win NT 4.0 Server
    Sun JDK 1.2.2
    ===================================
    Error output
    ===================================
    Exception in thread "main" java.lang.OutOfMemoryError
      at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
      at java.util.Hashtable.<init>(Unknown Source)
      at oracle.xml.parser.v2.DTDDecl.<init>(DTDDecl.java, Compiled Code)
      at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
      at oracle.xml.parser.v2.ValidatingParser.checkDefaultAttributes(ValidatingParser.java, Compiled Code)
      at oracle.xml.parser.v2.NonValidatingParser.parseAttributes(NonValidatingParser.java, Compiled Code)
      at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java, Compiled Code)
      at oracle.xml.parser.v2.ValidatingParser.parseRootElement(ValidatingParser.java:97)
      at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:199)
      at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:146)
      at TestLF.main(TestLF.java:40)
    null

    Hi,
    I think you can use StAX, a Java library. I don't have much knowledge about it myself, but you can find it on the Internet.
    Thanks
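    A minimal StAX sketch (the file name large.xml is a placeholder); the parser pulls one event at a time, so memory use stays flat regardless of file size:
    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class StaxExample {
        public static void main(String[] args) throws Exception {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            XMLStreamReader reader =
                    factory.createXMLStreamReader(new FileInputStream("large.xml"));
            // Walk the document one event at a time; nothing is accumulated.
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    System.out.println(reader.getLocalName());
                }
            }
            reader.close();
        }
    }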

  • Parsing a large XML file and displaying it using Swing

    Hi all,
    I want to read a large XML file and display it graphically in Swing as a tree structure.
    I implemented this, and it works fine for files up to 5MB after increasing the JVM heap size (-Xmx). If the file is larger than 5MB, it throws an out-of-memory error. I'm creating a custom data structure from the XML, and I'm using SAX parsing.
    After the data structure is displayed, the user can do some operations on it, like search etc.
    Can any of you suggest a method to support larger files? What I'm looking for is to create the data structure in the file system rather than in memory.
    Any other tips for memory management would be greatly appreciated
    Thanks in Advance.
    Nisha

    Use a memory-mapped file?
    http://javaalmanac.com/egs/java.nio/CreateMemMap.html
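    A minimal sketch of that idea (the file name large.xml is a placeholder); the mapping lives outside the Java heap, so the file does not have to fit in -Xmx:
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MemMapExample {
        public static void main(String[] args) throws Exception {
            RandomAccessFile raf = new RandomAccessFile("large.xml", "r");
            FileChannel channel = raf.getChannel();
            // The OS pages the file in on demand; nothing is copied to the heap up front.
            MappedByteBuffer buf =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            while (buf.hasRemaining()) {
                byte b = buf.get(); // process the bytes here
            }
            channel.close();
            raf.close();
        }
    }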

  • Problem with sending large HTML files as attachment using JavaMail 1.2

    Hi dear fellows, I am currently working on posting emails with attachments using JavaMail 1.2. I have succeeded in sending many MIME types of files as attachments, except for .htm and .html files. When large HTML files (say, >100 kB) serve as attachments, the mail is posted to the mail server, but not properly: only the first small part of the file is written to the mail server, and the latter part of the attachment is missing.
    Is that a bug in JavaMail? Have any of you encountered a similar problem? Any suggestions for how to proceed? Hopefully I made myself clear...
    Many thanks in advance,
    Fatty

    I've sort of found the cause: when the stream is written to the mail server, unfortunately there is a "." on a line by itself, so the server refuses to take any more input.
    Do I have to remove all the "." in the file manually to avoid this disaster? Any suggestions?
    Fatty

  • Help : Parsing large XML files

    Hi,
    someone please help. I am trying to parse XML files of about 60 MB; I have to parse through 120 of them, search for a particular node, and print it. I am using jdk1.3.x with JDOM.
    On the sample files that are available (114KB) I am able to run my code and get the result, but as soon as the large files are used I get the following error:
    Exception in thread "main" java.lang.OutOfMemoryError
    <<no stack trace available>>
    thanks

    I guess you are using a DOM parser, which builds a complete tree of the document. For what you are trying to do this is probably not necessary, so a SAX parser may be better. If JDOM doesn't have one, try using Xerces from Apache.
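    A sketch of that SAX route for the stated task (stream each file, find a particular node, and print its text); the element name target is a placeholder for the node being searched:
    import java.io.File;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class NodePrinter extends DefaultHandler {
        private StringBuilder text; // non-null only while inside a match

        @Override
        public void startElement(String uri, String localName,
                                 String qName, Attributes attrs) {
            if ("target".equals(qName)) text = new StringBuilder();
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            if (text != null) text.append(ch, start, length);
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            if ("target".equals(qName)) {
                System.out.println(text);
                text = null;
            }
        }

        public static void main(String[] args) throws Exception {
            for (String file : args) { // e.g. the 120 files from the post
                SAXParserFactory.newInstance().newSAXParser()
                        .parse(new File(file), new NodePrinter());
            }
        }
    }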

  • Query in a large XML file

    Hello,
    I'm trying to work with very large XML files which are created from CSV files. These files may be very large -- up to 1 GB! Until now I have managed to do several validations on these big XML files, and the only thing that works for me is a SAX parser; DOM is out of the question because it fills up memory.
    My next task is to run queries on these files, something like:
    select field1,field2 from file.xml
    where field3 = 'A'
    and (fileld4>'B' or field1='C')
    order by field2.
    I searched the net to find out how to run queries on XML files (I have never done queries on XML before), but I couldn't find which "query language" is best for large files. If I use XPath (XSLT), won't that cause memory problems, because XSLT represents the file as an in-memory object?
    My idea is to parse the file with SAX, check every row against the where condition, and write matching rows immediately to a result XML file. But evaluating the where statement can be very complicated without using some tool. The order by clause is another problematic issue.
    Does anyone have more intelligent ideas about how I can do this? Please help! :(
    The xml file looks like this:
    <doc>
    <row id ="1">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    <row id ="M">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    </doc>
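    A sketch of the filter-while-parsing idea against that structure, with the where clause from the query above hard-coded (a real tool would compile the condition); order by would still need a second step, e.g. sorting the much smaller result file:
    import java.util.HashMap;
    import java.util.Map;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class RowFilterHandler extends DefaultHandler {
        private Map<String, String> row; // column name -> value for the current row
        private String column;
        private StringBuilder value;

        @Override
        public void startElement(String uri, String localName,
                                 String qName, Attributes attrs) {
            if ("row".equals(qName)) {
                row = new HashMap<String, String>();
            } else if ("column".equals(qName)) {
                column = attrs.getValue("name");
                value = new StringBuilder();
            }
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            if (value != null) value.append(ch, start, length);
        }

        @Override
        public void endElement(String uri, String localName, String qName) {
            if ("column".equals(qName)) {
                row.put(column, value.toString());
                value = null;
            } else if ("row".equals(qName) && matches(row)) {
                // Write the selected fields out immediately; nothing accumulates.
                System.out.println(row.get("field1") + "," + row.get("field2"));
            }
        }

        // where field3 = 'A' and (field4 > 'B' or field1 = 'C')
        private boolean matches(Map<String, String> r) {
            return "A".equals(r.get("field3"))
                    && ("C".equals(r.get("field1"))
                        || (r.get("field4") != null && r.get("field4").compareTo("B") > 0));
        }
    }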

    Hi all,
    Thank you very much for your replies.
    First, Saxon didn't work because it uses an in-memory parser, and that is what I was trying to avoid.
    A different database is also out of the question, because the customer insists on XML, and also there are some files that can never be converted to a database table, because eventually, with some transformations, they are changed and no longer completely match the standard CSV format.
    I think that maybe http://exist.sourceforge.net is the right solution for me, but I will probably try it in the next version of my project.
    For now I have managed to build the project with only SAXParser and a lot of back-end programming, and it works OK, although it was very hard to make and will be harder to maintain, so I will look at the eXist project.
    Thanks everyone for the help.

  • Does the parser work with large XML files?

    Is there a restriction on the XML file size that can be loaded into the parser?
    I am getting an out-of-memory exception reading in a large XML file (10MB) using the commands
    DOMParser parser = new DOMParser();
    URL url = createURL(argv[0]);
    parser.setErrorStream(System.err);
    parser.setValidationMode(true);
    parser.showWarnings(true);
    parser.parse(url);
    Win NT 4.0 Server
    Sun JDK 1.2.2
    ===================================
    Error output
    ===================================
    Exception in thread "main" java.lang.OutOfMemoryError
      at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
      at java.util.Hashtable.<init>(Unknown Source)
      at oracle.xml.parser.v2.DTDDecl.<init>(DTDDecl.java, Compiled Code)
      at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
      at oracle.xml.parser.v2.ValidatingParser.checkDefaultAttributes(ValidatingParser.java, Compiled Code)
      at oracle.xml.parser.v2.NonValidatingParser.parseAttributes(NonValidatingParser.java, Compiled Code)
      at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java, Compiled Code)
      at oracle.xml.parser.v2.ValidatingParser.parseRootElement(ValidatingParser.java:97)
      at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:199)
      at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:146)
      at TestLF.main(TestLF.java:40)
    null

    We have a number of test files of that size, and they parse without a problem. However, using the DOMParser does require significantly more memory than your doc size.
    What is the memory configuration of the JVM that you are running with? Have you tried increasing it? Are you using our latest version 2.0.2.6?
    Oracle XML Team

  • Problems with Large XML files

    I have tried increasing the memory pool using the -mx and -ms options; it doesn't work. I am using your latest XML parser for Java v2. Please let me know if there are specific options I should be using.
    Thanx,
    -Sameer

    You might try using a different JDK/JRE -- either a 1.1.6+ or a 1.3 version, as 1.2 in our experience has the largest footprint. If this doesn't work, can you give us some details about your system configuration? Finally, you might try the SAX interface, as it does not need to load the entire DOM tree into memory.
    Oracle XML Team

  • Loading a large raw XML file in Firefox and parsing it with DOM makes Firefox very slow and sometimes crashes it. Is there any option to increase DOM handling memory in Firefox?

    Actually, I am using an offline form to load a very large XML file, and Firefox to load that form. But it takes a long time to load, and sometimes the browser crashes while DOM-parsing this XML file into my form. Is there any option to increase the DOM handler size in Firefox?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.

  • OSB - Iterating over large XML files with content streaming

    Hi @ll
    I have to iterate over all the items in large XML files and insert them into an Oracle database.
    Each file is about 200 MB and contains around 500'000 items, and I am using OSB 10gR3.
    The XML structure is something like this:
    <allItems>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    </allItems>
    Actually I thought about using a proxy service with content streaming enabled and a "for each" action for iterating over all the items. But for this the whole XML structure has to be materialized into a variable, otherwise it is not possible!
    More about streaming large files can be found here:
    [http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#large_messages]
    There it says: "When you enable streaming for large message processing, you cannot use the ... for each...".
    For accessing single items you should use an assign action with an XPath like "$body/allItems/item[1]"; this works fine, and the whole XML stream does not have to be materialized.
    So my idea was to use the "for each" action and process all the items sequentially with an XPath like:
    $body/allItems/item[$counter]
    But the "for each" action only allows iterating over a sequence of XML items by defining a selection XPath and the variable that contains all the items. I would like a "repeat until" construct that iterates as long as $body/allItems/item[$counter] returns a non-null result. Or can I use the "for each" action differently?
    Does the OSB provide any other iterating mechanism? I know there is the split-join construct that supports different looping techniques, but as far as I know it does not support content streaming; is this correct?
    Did I miss something?
    Thanks a lot for helping!
    Cheers
    Dani
    Edited by: user10095731 on 29.07.2009 06:41

    Hi Dani,
    Yes, in my view this would be the best approach. You can use content streaming to pass this large XML to an EJB, and once it is passed successfully the EJB should operate on it. If you want any result back (for further routing), you can get it back from the EJB.
    An EJB gives you the power of Java to process this file, and from a Java perspective 150 MB is not very large data. Ensure that you are using buffering. Check out this link for an explanation of Java IO streams and, in particular, buffered streams -
    http://java.sun.com/developer/technicalArticles/Streams/ProgIOStreams/
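    For example, a buffered wrapper around the file stream (the file name and the 64 KB buffer size are just illustrative choices):
    // Each read() now usually hits the in-memory buffer instead of the disk.
    InputStream in = new BufferedInputStream(new FileInputStream("items.xml"), 64 * 1024);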
    Try dom4j with the XPP (XML Pull Parser) parser in case you have a parsing requirement. We have worked with a 1.2GB file using this technique.
    Regards,
    Anuj

  • Parse existing XML file and recreate another XML file with a different structure

    Is it possible in Java, using a DOM parser, to parse an existing XML file and create a new XML file from the data obtained by parsing the old one?
    I checked the old forum threads, and everywhere either parsing an XML file is explained or creating a new XML file from scratch is shown.
    Any examples/guidance will be appreciated....

    The general process is:
    Document dom1 = ... // the parsed document
    Document dom2 = ... // new document constructed on the fly
    Node nD1 = ... // some random node found in dom1
    // copy the node from dom1 and associate it with dom2
    Node nD2 = dom2.importNode(nD1, true);
    // nD2 can now be inserted into dom2
    Node otherD2 = ... // some other node already in dom2
    otherD2.appendChild(nD2);
    Note: in this example the nD1 node is copied (including any sub-nodes) into a new node, nD2. You can alternatively move the node from one Document to another using Document.adoptNode(); however, this may fail (see the javadoc).
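    A self-contained sketch of the whole round trip (the file names and the newRoot element are placeholders): parse the old file, copy its root into a newly built document, and serialize the result:
    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;

    public class RestructureXml {
        public static void main(String[] args) throws Exception {
            DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document dom1 = builder.parse(new File("old.xml"));
            Document dom2 = builder.newDocument();

            Element newRoot = dom2.createElement("newRoot");
            dom2.appendChild(newRoot);

            // Deep-copy the old root so it belongs to dom2, then attach it.
            Node copied = dom2.importNode(dom1.getDocumentElement(), true);
            newRoot.appendChild(copied);

            // Serialize dom2 to the new file.
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(dom2), new StreamResult(new File("new.xml")));
        }
    }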
