Large XML File in JMS Message

Hi Experts,
I have to send large XML files (20 MB+) to a message listener for processing. Would you please comment on whether this is a good approach, or suggest a tested and proven solution for this purpose? A prompt response will be highly appreciated.
Regards
Shahzad mahmood

You can use the following algorithm to send large messages (see the sketch after this list):
- use BytesMessage to send the large message
- break the message into chunks (the chunk size can be configurable)
- in the header of the first message, add a property to indicate that it is the first message
- in the header of each message, add a property for the size of that chunk
- in the header of the last message, add a property to indicate that it is the last message
on the receiver side
- inspect the header properties of each message
- if it is the first message, open a stream (maybe a file stream)
- keep writing messages to this stream until you receive the last message (identified by the header property)
- close the stream after processing the last message.
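For illustration, here is a minimal Java sketch of this scheme against the JMS 1.1 API. The property names (IS_FIRST, CHUNK_SIZE, IS_LAST), the 1 MB chunk size, and the use of FileInputStream.available() to detect the final chunk are illustrative assumptions, not part of any standard:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import javax.jms.BytesMessage;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class ChunkedSender {
        private static final int CHUNK_SIZE = 1024 * 1024; // make configurable as needed

        public static void send(Session session, MessageProducer producer, String path)
                throws JMSException, IOException {
            byte[] buffer = new byte[CHUNK_SIZE];
            FileInputStream in = new FileInputStream(path);
            try {
                boolean first = true;
                int read;
                while ((read = in.read(buffer)) != -1) {
                    BytesMessage msg = session.createBytesMessage();
                    msg.writeBytes(buffer, 0, read);
                    msg.setBooleanProperty("IS_FIRST", first);
                    msg.setIntProperty("CHUNK_SIZE", read);
                    // available() == 0 is a rough end-of-file check; fine for a file stream
                    msg.setBooleanProperty("IS_LAST", in.available() == 0);
                    producer.send(msg);
                    first = false;
                }
            } finally {
                in.close();
            }
        }

        // Receiver side: append each chunk to the stream until the last-chunk marker.
        public static void onMessage(Message m, OutputStream out)
                throws JMSException, IOException {
            BytesMessage bm = (BytesMessage) m;
            byte[] buf = new byte[bm.getIntProperty("CHUNK_SIZE")];
            bm.readBytes(buf);
            out.write(buf);
            if (bm.getBooleanProperty("IS_LAST")) {
                out.close(); // the complete file can be processed here
            }
        }
    }

Note that reassembly relies on JMS preserving the order of messages sent from a single session to the same destination.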
BTW some JMS products like FioranoMQ (www.fiorano.com) provide built-in support for large messages.
You might want to look into such products.
Thanks
Bhuvan

Similar Messages

  • How to set SAXParser at command-line interface to create a large XML file

    Hi,
    I am trying to create a large XML file (more than 50 MB) by selecting from an Oracle database, but it failed with an "out of memory" error. According to the "Oracle XML Developer's Guide", we should use SAXParser for parsing a large XML file, but there is no example showing how to invoke SAXParser at the command line.
    Following is what I use to get xml files. It works only when the file is small.
    java OracleXML getXML -DateFormat -withDTD -rowsetTag PO_HDR -conn
    "jdbc:oracle:oci8:@server_name" -user "ID/password" "select * from table_name"
    When I set SAXParser at the way below,
    java oracle.xml.parser.v2.SAXParser OracleXML getXML -DateFormat -withDTD -rowsetTag PO_HDR -conn
    "jdbc:oracle:oci8:@server_name" -user "ID/password" "select * from table_name"
    it failed with the error message: "In class oracle.xml.parser.v2.SAXParser: void main(String argv[]) is not defined"
    Does anyone know how to solve the problem? I'd appreciate your help very much.
    Yi

    Here are my ideas:
    register the XML schema;
    using xmldom, generate the desired XML output and return it as XMLType;
    then you can use something like this to check it:
    declare
    xmldoc xmltype;
    begin
       -- populate xmldoc from your xmldom function
       -- validate against the registered XML schema
       if xmldoc.isSchemaValid(schema_url, root_element) = 1 then
            null; --valid schema
       else
            null; --invalid
       end if;
    end;

  • OSB - Iterating over large XML files with content streaming

    Hi @ll
    I have to iterate over all items in a large XML file and insert them into an Oracle database.
    The file is about 200 MB and contains around 500,000 items, and I am using OSB 10gR3.
    The XML structure is something like this:
    <allItems>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    </allItems>
    Actually I thought about using a proxy service with content streaming enabled and a "for each" action for iterating
    over all items. But for this the whole XML structure has to be materialized into a variable, otherwise it is not possible!
    More about streaming large files can be found here:
    [http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#large_messages]
    There it is written: "When you enable streaming for large message processing, you cannot use the ... for each...".
    And for accessing single items you should use an assign action with an XPath like "$body/allItems/item[1]";
    this works fine, and the whole XML stream does not have to be materialized.
    So my idea was to use the "for each" action and process all items sequentially with an XPath like:
    $body/allItems/item[$counter]
    But the "for each" action only allows iterating over a sequence of XML items by defining a selection XPath
    and the variable that contains all items. I would like to have a "repeat until" construct that iterates as long as
    $body/allItems/item[$counter] returns a non-null result. Or can I use the "for each" action differently?
    Does OSB provide any other iteration mechanism? I know there is the split-join construct that supports
    different looping techniques, but as far as I know it does not support content streaming; is this correct?
    Did I miss something?
    Thanks a lot for helping!
    Cheers
    Dani
    Edited by: user10095731 on 29.07.2009 06:41

    Hi Dani,
    Yes, in my opinion this would be the best approach. You can use content streaming to pass this large XML to an EJB, and once it is passed successfully the EJB should operate on it. If you want any result back (for further routing), you can get it back from the EJB.
    EJB gives you the power of Java to process this file, and from a Java perspective 150 MB is not very large. Ensure that you are using buffering. Check out this link for an explanation of Java I/O streams and, in particular, buffered streams -
    http://java.sun.com/developer/technicalArticles/Streams/ProgIOStreams/
    Try dom4j with the XPP (XML Pull Parser) parser in case you have a parsing requirement; see the sketch below. We have worked with a 1.2 GB file using this technique.
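    A minimal sketch of that streaming idea, assuming dom4j 1.x on the classpath. (The reply mentions XPP; dom4j's SAXReader with a pruning ElementHandler achieves the same effect of never holding the whole document in memory, and the database call is left as a placeholder.)

        import org.dom4j.Element;
        import org.dom4j.ElementHandler;
        import org.dom4j.ElementPath;
        import org.dom4j.io.SAXReader;

        public class ItemStreamer {
            public static void main(String[] args) throws Exception {
                SAXReader reader = new SAXReader();
                // Invoke the handler for every /allItems/item, then prune the
                // subtree so the full document is never held in memory.
                reader.addHandler("/allItems/item", new ElementHandler() {
                    public void onStart(ElementPath path) { }
                    public void onEnd(ElementPath path) {
                        Element item = path.getCurrent();
                        // insert the item into the database here (e.g. batched JDBC)
                        item.detach(); // free the subtree
                    }
                });
                reader.read(new java.io.File("allItems.xml"));
            }
        }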
    Regards,
    Anuj

  • How to retrive data from .XML file to JMS

    Hi friends, this is Vamsi from India. I have been working with JSP, JavaBeans, XML, JMS, message-driven beans, EJB, and databases for the last couple of days, but I am not able to retrieve data from fields in an XML file into JMS. Can anyone help me out with retrieving the data from XML to JMS?
    thanking you,
    vamsi.

    "Database but we are not able to retrieve the values from XML to JMS."
    "We are not able to" is too vague a statement. An answer to this could well be "Did you turn your computer on?"
    So again, please post formatted code showing what you have done and where you are stuck,
    perhaps with an error message and/or a stack trace if appropriate.

  • What are the best tools for opening very large XML files and examining the tree and confirming they are valid?

    I am generating some very large XML files (600,000+ lines, 50MB+ characters). I finally have them all being valid XML and valid UTF-8.
    But the files are so large Safari and Chrome will often not open them. FireFox will though.
    Instead of these browsers, I was wondering if there are any other recommended apps for the Mac for opening and viewing the XML, getting an error message if the files are not valid for some reason, and examining the XML tree?
    I opened the file in the default app for XML which is Xcode, but that is just like opening it in a plain text editor. You can't expand/collapse the XML tree like you can with a browser, and it doesn't report errors.
    Thanks,
    Doug

    Hi Tom,
    I had not seen that list. I'll look it over.
    I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
    For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
    By the by, some of those editors are quite pricey!
    doug

  • Problem with parsing large xml files

    Hello All,
    I am parsing a large XML file of 20 MB using DocumentBuilder.parse(File). This method works for smaller XML files, but the application hangs and doesn't throw any error message when parsing the 20 MB file. Please let me know what I have to do at this point.
    Thanks & Regards,
    Kumar.

    Well... I can't agree.
    If you have such structure:
    <task>
      <task/>
      <task>
         <task>
            <task/>
         </task>
         <task/>
      </task>
    </task>
    ...you may always keep a stack of tasks (at startElement push onto the stack, and at endElement pop), so at every leaf of the tree you will have all parents of that leaf.
    for such structure:
    <task id="1" parent="0"/>
    <task id="2" parent="1"/>
    <task id="3" parent="1"/>
    <task id="4" parent="2"/>
    <task id="5" parent="3"/>
    ...it will be much faster to go through the document with SAX several times to build the tree of tasks than to load the whole document into memory... A sketch of the stack approach for the first structure follows.
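    A minimal sketch of that stack approach in plain SAX; the "task" element name comes from the example above, the id handling and printed output are just for illustration:

        import java.util.ArrayDeque;
        import java.util.Deque;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.helpers.DefaultHandler;

        public class TaskStackHandler extends DefaultHandler {
            // Stack of currently open <task> elements: at any leaf, the stack
            // holds that leaf's full ancestor chain.
            private final Deque<String> stack = new ArrayDeque<String>();

            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                if ("task".equals(qName)) {
                    String id = atts.getValue("id"); // may be absent in the nested form
                    stack.push(id != null ? id : "task");
                    System.out.println("open task, ancestors: " + stack);
                }
            }

            @Override
            public void endElement(String uri, String local, String qName) {
                if ("task".equals(qName)) {
                    stack.pop();
                }
            }

            public static void main(String[] args) throws Exception {
                SAXParserFactory.newInstance().newSAXParser()
                        .parse(new java.io.File("tasks.xml"), new TaskStackHandler());
            }
        }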

  • Problem with parsing large XML files chunked over HTTP

    I'm trying to isolate a bug that was introduced when upgrading the JRE in use from Java 7u51 to 7u71 without changing any code. The problem appears to be very similar to: Bug ID: JDK-8027359 XML parser returns incorrect parsing results.
    Further investigation showed that it was also introduced in the same version (7u71) where that fix was applied. Unlike that bug, though, my XML is declared as version 1.0. It also appears to happen only with large XML files, on the order of 10 MB or so.
    The closest I've been able to narrow it down to: the code uses JAXB to unmarshal a stream that the debugger tells me is an org.apache.http.conn.EofSensorInputStream / org.apache.http.impl.io.ChunkedInputStream. The exception I get is not consistent, but it typically appears to come from chunks being overwritten or shuffled, resulting in letters appearing in attributes that are actually numbers, or, as in the following, an attribute "testAttribute" getting partially overwritten by the end of a timestamp that was in a different section of the XML.
    javax.xml.bind.UnmarshalException
    - with linked exception:
    [javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,98748]
    Message: Attribute name "testAttribu00Z" associated with an element type "testElement" must be followed by the ' = ' character.]
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.handleStreamException(UnmarshallerImpl.java:421)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:357)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
    Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,98748]
    Message: Attribute name "testAttribu00Z" associated with an element type "testElement" must be followed by the ' = ' character.
      at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:598)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:181)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
      ... 6 more
    Here's some code that seems to reproduce it if you can connect to an XML server that returns a large chunked XML file:
      // Assumes Apache HttpClient 4.x; 'factory' and 'unmarshaller' were not
      // shown in the original post, so plausible definitions are added here.
      SchemeRegistry registry = new SchemeRegistry();
      registry.register(
                    new Scheme("http", 80, PlainSocketFactory.getSocketFactory()));
      HttpClient client = new DefaultHttpClient(new BasicClientConnectionManager(registry));
      String url = "http://someUrlReturningAlargeChunkedXML";
      HttpGet method = new HttpGet(url);
      HttpResponse response = client.execute(method);
      InputStream inputStream = response.getEntity().getContent();
      XMLInputFactory factory = XMLInputFactory.newInstance();
      XMLStreamReader responseReader = factory.createXMLStreamReader(inputStream);
      Unmarshaller unmarshaller =
                    JAXBContext.newInstance(JaxBObjectOfResponse.class).createUnmarshaller();
      JAXBElement<JaxBObjectOfResponse> wot = unmarshaller.unmarshal(responseReader, JaxBObjectOfResponse.class);
    If you connect using URL.openStream() to the same service, there is no error. If I read bytes directly and write them to a file, there is no error. The error only happens when I try to unmarshal the stream, it's large, and I'm using Java 7u71 (or later). It can be consistently repeated with the JSP webapp that I'm using, but it didn't show up when I used the same code with a Wikipedia dump XML file.
    How can I unmarshal in a different way to avoid this problem? Or how can I better isolate the bug so it can be posted to the appropriate bug tracker?

    Apparently, adding the Woodstox XML libraries avoids the bug. Is there anyone who can reproduce this on another system? Were there any changes to the StAX implementation between u67 and u71 that may have introduced a bug like this?
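    For reference, a minimal sketch of that workaround, assuming woodstox-core (and its stax2-api dependency) is on the classpath; instantiating the Woodstox factory directly replaces the factory line in the snippet above and bypasses the JDK's built-in StAX implementation that shows the corruption:

        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamReader;

        // drop-in replacement for the two reader lines in the snippet above
        XMLInputFactory factory = new com.ctc.wstx.stax.WstxInputFactory();
        XMLStreamReader responseReader = factory.createXMLStreamReader(inputStream);
        // hand responseReader to the JAXB unmarshaller exactly as before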
    Edit: When setting the logging level to DEBUG, I once saw the overwritten buffer being logged as if that was what was received (as in the testAttribu00Z example above). I can't repeat that anymore, though, and very rarely it parses with no exception (though the data may still have been corrupted). Now the error seems to land consistently on one of the buffer boundaries, as in:
    17:08:09,705 DEBUG wire:63 - << "2000[\r][\n]"
    17:08:09,705 DEBUG wire:77 - << "trend>....OTHER XML...<trend hours=""
    17:08:09,705 DEBUG wire:77 - << "634.0972777777778" datetime="2013-05-21T00:43:48.350Z" t"
    17:08:09,705 DEBUG wire:63 - << "[\r][\n]"
    17:08:09,705 DEBUG wire:63 - << "2000[\r][\n]"
    17:08:09,705 DEBUG wire:77 - << "rend-mode="0">
    Exception in thread "main" java.lang.NumberFormatException: t34.0972777777778
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseDouble(DatatypeConverterImpl.java:213)
      at mypackage.Trend_JaxbXducedAccessor_hours.parse(TransducedAccessor_field_Double.java:48)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StructureLoader.startElement(StructureLoader.java:194)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext._startElement(UnmarshallingContext.java:486)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallingContext.startElement(UnmarshallingContext.java:465)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.InterningXmlVisitor.startElement(InterningXmlVisitor.java:60)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleStartElement(StAXStreamConnector.java:231)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:165)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:355)
      at com.sun.xml.internal.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:334)
    Or:
    17:19:12,563 DEBUG wire:63 - << "2000[\r][\n]"
    17:19:12,563 DEBUG wire:77 - << ...OTHER XML...<trend index="5"
    17:19:12,563 DEBUG wire:77 - << "" label="N"
    17:19:12,563 DEBUG wire:63 - << "[\r][\n]"
    Exception in thread "main" java.lang.NumberFormatException: Not a number: N
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseInt(DatatypeConverterImpl.java:106)
      at com.sun.xml.internal.bind.DatatypeConverterImpl._parseShort(DatatypeConverterImpl.java:118)

  • Is there a way to import large XML files into HANA efficiently, and are there any data services provided to do this?

    1. Is there a way to import large XML files into HANA efficiently?
    2. Will it process it node by node or the entire file at a time?
    3. Are there any data services provided to do this?
    This is for a project use case. I also have a requirement to process bulk XML files; please suggest how to accomplish this task.

    Hi Patrick,
    I am addressing a similar issue: "Getting data from huge XMLs into HANA."
    Using OData services, can we handle huge data (i.e. create the schema / load into HANA) on the fly?
    In my scenario,
    I get a folder of different complex XML files which are to be loaded into the HANA database.
    Then I have to transform & cleanse the data.
    Can I use OData services to transform and cleanse the data?
    If so, how can I create OData services dynamically?
    Any help is highly appreciated.
    Thank you.
    Regards,
    Alekhya

  • Performance Problem in parsing large XML file (15MB)

    Hi,
    I'm trying to parse a large XML file (15 MB) and am facing a clear performance problem. A simple XML validation using the following code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
    tempCLOB,
    targetFile,
    DBMS_LOB.getLength(targetFile),
    dest_offset,
    src_offset,
    nls_charset_id(CONSTANT_CHARSET),
    lang_context,
    conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    p_xml_document.schemaValidate();
    is taking 30 minutes on an HP-UX machine (4 GB RAM, 2 CPUs; Oracle version 9.2.0.4).
    Please explain what could be going wrong.
    Thanks In Advance,
    Vineet

    Thanks Mark,
    I'll open a TAR and also upload the schema and instance XML.
    If I'm not changing the subject too much :-) one more thing in continuation:
    If I skip the schema validation step and directly insert the instance document into a schema-linked XMLType table, what does Oracle XDB do in such a case?
    I'm getting a severe performance hit here too... the same file as above takes almost 40 minutes to insert.
    code snippet:
    DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
    DBMS_LOB.loadClobfromFile(
    tempCLOB,
    targetFile,
    DBMS_LOB.getLength(targetFile),
    dest_offset,
    src_offset,
    nls_charset_id(CONSTANT_CHARSET),
    lang_context,
    conv_warning
    );
    DBMS_LOB.fileclose(targetFile);
    p_xml_document := XMLType(tempCLOB, p_schema_url, 0, 0);
    -- p_xml_document.schemaValidate();
    insert into INCOMING_XML values(p_xml_document);
    Here table INCOMING_XML is :
    TABLE of SYS.XMLTYPE(XMLSchema "http://INCOMING_XML.xsd" Element "MatchingResponse") STORAGE Object-relational TYPE "XDBTYPE_MATCHING_RESPONSE"
    This table and type XDBTYPE_MATCHING_RESPONSE were created using the mapping provided in the registered XML Schema.
    Thanks,
    Vineet

  • I want to load a large raw XML file in Firefox and parse it by DOM, but for large XML files Firefox is very slow and sometimes crashes. Is there any option to increase DOM handling memory in Firefox?

    Actually I am using an offline form to load a very large XML file, and using Firefox to load that form. But it takes a long time to load, and sometimes the browser crashes. I am parsing this XML file into my form through DOM. Is there any option to increase the DOM handler size in Firefox?


  • Query in a large xml file

    Hello,
    I'm trying to work with very large XML files which are created from CSV files. These files may be very large - up to 1 GB! Until now I have managed to do several validations on these big XML files, and the only thing that works for me is the SAX parser; DOM is out of the question because it fills up memory.
    My next task is to do queries on these files, something like:
    select field1,field2 from file.xml
    where field3 = 'A'
    and (fileld4>'B' or field1='C')
    order by field2.
    I searched the net to find out how to run queries on XML files (since I have never done queries on XML before), but I couldn't find which "query language" is best for large files. If I use XPath (XSLT), won't that cause me memory problems, because XSLT represents the file as an in-memory object?
    My idea is to parse the file with SAX, check every row against the where condition, and then write it immediately to a result file; see the sketch after the sample document below. But evaluating the where statement can be very complicated without using some tool. Also, the order by statement is another problematic issue.
    Does anyone have some more intelligent ideas about how I can do this? Please help! :(
    The xml file looks like this:
    <doc>
    <row id ="1">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    <row id ="M">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    </doc>
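    A minimal sketch of that SAX filtering idea for the document above. The mapping of field1..field4 to the column names, and the hard-coded where-condition, are hypothetical; order by would still need a separate pass or an external sort:

        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.helpers.DefaultHandler;

        public class RowFilter extends DefaultHandler {
            private final java.util.Map<String, String> row = new java.util.HashMap<String, String>();
            private String currentColumn;
            private final StringBuilder text = new StringBuilder();

            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                if ("row".equals(qName)) {
                    row.clear();
                } else if ("column".equals(qName)) {
                    currentColumn = atts.getValue("name");
                    text.setLength(0);
                }
            }

            @Override
            public void characters(char[] ch, int start, int len) {
                text.append(ch, start, len);
            }

            @Override
            public void endElement(String uri, String local, String qName) {
                if ("column".equals(qName)) {
                    row.put(currentColumn, text.toString());
                } else if ("row".equals(qName) && matches()) {
                    // write field1, field2 to the result file here
                    System.out.println(row.get("column1") + "," + row.get("column2"));
                }
            }

            // where field3 = 'A' and (field4 > 'B' or field1 = 'C')
            private boolean matches() {
                String f4 = row.get("column4");
                return "A".equals(row.get("column3"))
                        && ("C".equals(row.get("column1"))
                            || (f4 != null && f4.compareTo("B") > 0));
            }

            public static void main(String[] args) throws Exception {
                SAXParserFactory.newInstance().newSAXParser()
                        .parse(new java.io.File("file.xml"), new RowFilter());
            }
        }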

    Hi all,
    Thank you very much for your replies.
    First, Saxon didn't work because it uses an in-memory parser, and that is what I was trying to avoid.
    A different database is also out of the question, because the customer insists on XML, and also there are some files that can never be converted to a database table, because eventually, with some transformations, they are changed and are no longer completely like the standard CSV format.
    I think that maybe http://exist.sourceforge.net is the right solution for me, but I will probably try it in the next version of my project.
    For now I have managed to build the project with only SAXParser and a lot of back-end programming, and it works OK, although it was very hard to make and will be harder to maintain, so I will look at the eXist project.
    Thanks everyone for the help.

  • Large XML file Loading

    I have a large XML file that I am converting to an ArrayCollection to use as a data provider for a DataGrid. It takes some time to fully load. Is there any way to load a partial list while the rest of the list is loading? Or does anyone know a way to speed up this process?
    Thanks

    I'd try to modify the autoComplete component.
    You could break this processing up into smaller chunks. For it to work, you need some outside counter or indexer that keeps track of where you are. Have the conversion function process, say, nodes 0-500, then end. Then, using callLater, call that function again to process the next batch of nodes.
    This process will allow the UI to update between iteration batches. If you need more responsiveness, you could try monitoring mouse movement and pausing the conversion until the mouse is inactive again. That is just brainstorming; I have not tried the mouse-move part, but I know the iterator method works to allow the UI to update.
    Tracy

  • ESB/File Adapter - XML files containing multiple messages

    Hi,
    In all the examples on file adapters I read, if files contain multiple messages, it always concerns non-XML files, such as CSV files.
    In our case, we have an XML file containing multiple messages, which we want to process separately (not in a batch). We selected "Files contain Multiple Messages" and set "Publish Messages in Batches of" to 1.
    However, the OC4J log files show the following error:
    ORABPEL-12505
    Payload Record Element is not DOM source.
    The Resource Adapter sent a Message to the Adapter Framework which could not be converted to a org.w3c.dom.Element.
    Does anyone know whether it's possible to do this for XML files?
    Regards, Ronald

    Maybe I need to give a little bit more background info.
    Ideally, one would only read/pick-up small XML documents in which every XML document forms a single message. In that way they can be processed individually.
    However, in our case an external party supplies multiple messages in a single batch-file, which is in XML format. I want to "work" on individual messages as soon as possible and not put a huge batch file through our ESB and BPEL processes. Unfortunately we can not influence the way the XML file is supplied, since we are not the only subscriber to it.
    So yes, we can use XPath to extract all individual messages from the XML batch file and start an ESB process instance for each individual message (a sketch of this follows below). But that would require the creation of another ESB or BPEL process whose only task is to "chop up" the batch file and start the original ESB process for each message.
    I was hoping that the batch option in the File adapter could also do this for XML content and not only for e.g. CSV content. That way it will not require an additional process and manual coding.
    Can anyone confirm this is not supported in ESB?
    Regards,
    Ronald
    Message was edited by:
    Ronald van Luttikhuizen
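    A hedged sketch of that "chop up" pre-processing step, using StAX with an identity transform so the batch file is never fully materialized. The element name "message" and the file names are hypothetical:

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stax.StAXSource;
        import javax.xml.transform.stream.StreamResult;

        public class BatchSplitter {
            public static void main(String[] args) throws Exception {
                XMLStreamReader reader = XMLInputFactory.newInstance()
                        .createXMLStreamReader(new FileInputStream("batch.xml"));
                Transformer copier = TransformerFactory.newInstance().newTransformer();
                int i = 0;
                while (reader.hasNext()) {
                    if (reader.next() == XMLStreamConstants.START_ELEMENT
                            && "message".equals(reader.getLocalName())) {
                        // Serialize the current element and its subtree to its own file;
                        // the transform consumes events up to the matching end element.
                        FileOutputStream out = new FileOutputStream("message-" + (i++) + ".xml");
                        copier.transform(new StAXSource(reader), new StreamResult(out));
                        out.close();
                    }
                }
                reader.close();
            }
        }

    Each output file could then be dropped where the original ESB process picks up single messages.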

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to go to nodes at any depth and access the child elements or text content of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and hence is space-consuming. Nor does SAX work for me, as every time there is a click on any of the nodes, my SAX parser parses the whole document for that node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to use for very large XML files?

    "Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't the XML store more efficient here? Please reply."
    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it, then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.

  • Transform Large XML files with XSL

    HELP, LARGE XML FILES
    I have a 30-50 MB XML file that I would like to transform with XSLT, but I get an OutOfMemoryError.
    I tried to find a solution on the Java site, but I didn't find one.
    I cannot split my XML file. I hope for some help.
    I have tried really everything.
    Thanks a lot

    What is your machine configuration ?
    The above 2 suggestions would help, but it does depend on how your software is written.
    Please post more info about your environment and object design.
    Chintan
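    For reference, a minimal sketch of the plain JAXP transform being discussed; the file names are placeholders. Since XSLT builds an in-memory tree, the usual first remedy for a 30-50 MB input is simply running the JVM with a larger heap:

        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class Transform {
            public static void main(String[] args) throws Exception {
                // Run with a larger heap, e.g. "java -Xmx512m Transform",
                // if the default heap overflows on a 30-50 MB document.
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource("stylesheet.xsl"));
                t.transform(new StreamSource("input.xml"), new StreamResult("output.xml"));
            }
        }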
