Very large XML String parameters

Hi !
I'm using Axis 1.x on WebSphere 5. The problem is: when I call the web service with an XML (String) parameter of up to 10 KB-400 KB, it works fine.
But my application can generate very large XML, around 900 KB-1000 KB or even more. When this large XML is sent as a String parameter, no reply is received back.
Can somebody shed some light on what is going wrong, and which approach should be followed?
Thanks a lot
@mit

Maybe this example on the XDB forum will be helpful...
XMLType view of Relational Content
XML type questions are best asked in that forum.
;)
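One thing worth ruling out with payloads this size is a plain client-side timeout: the server may still be processing when Axis gives up waiting. A minimal sketch of raising the timeout on a dynamic Axis 1.x call (the endpoint URL, namespace, and operation name are hypothetical placeholders):

import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

public class LargeXmlClient {
    public static void main(String[] args) throws Exception {
        String largeXml = args[0]; // the big payload, however it is produced
        Call call = (Call) new Service().createCall();
        call.setTargetEndpointAddress("http://example.com/services/MyService"); // placeholder endpoint
        call.setOperationName(new QName("http://example.com/ns", "processXml")); // placeholder operation
        call.setTimeout(new Integer(10 * 60 * 1000)); // allow ten minutes instead of the default
        String reply = (String) call.invoke(new Object[] { largeXml });
        System.out.println(reply == null ? "no reply" : "reply length: " + reply.length());
    }
}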

Similar Messages

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, possibly as a tree structure. Visitors should be able to go to nodes at any depth and access the child elements or text content of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and is hence space-consuming. Neither does SAX work for me, as every time there is a click on any of the nodes, my SAX parser parses the whole document for the node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to use for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here?

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives; the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.

  • What are the best tools for opening very large XML files and examining the tree and confirming they are valid?

    I am generating some very large XML files (600,000+ lines, 50 MB+ of characters). I finally have them all being valid XML and valid UTF-8.
    But the files are so large that Safari and Chrome will often not open them. Firefox will, though.
    Instead of these browsers, I was wondering whether there are any other recommended Mac apps for opening and viewing the XML, getting an error message if it is invalid for some reason, and examining the XML tree.
    I opened the file in the default app for XML, which is Xcode, but that is just like opening it in a plain text editor. You can't expand/collapse the XML tree like you can in a browser, and it doesn't report errors.
    Thanks,
    Doug

    Hi Tom,
    I had not seen that list. I'll look it over.
    I'm also in touch with the developer of BBEdit (they are quite responsive) and they are willing to look at the file in question and see why it is not reporting UTF-8 errors while Chrome is.
    For now I have all the invalid characters quashed and things are working. But it would be useful in the future.
    By the by, some of those editors are quite pricey!
    doug

  • Best parser for handling very large XML document

    Which is the best parser when reading and extracting information from a very large XML document?

    Any SAX parser, since a DOM would use about six times the file size in main memory.
    The Xerces SAX parser is in my experience the fastest.
    Gil
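    For reference, a minimal SAX sketch of the streaming approach (the element name "item" and the file name are placeholders):

    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class ItemCounter extends DefaultHandler {
        private int count;

        @Override
        public void startElement(String uri, String localName, String qName, Attributes attrs) {
            // React to each element as it streams past; nothing is kept in memory
            if ("item".equals(qName)) {
                count++;
            }
        }

        public static void main(String[] args) throws Exception {
            ItemCounter handler = new ItemCounter();
            SAXParserFactory.newInstance().newSAXParser().parse(new java.io.File("big.xml"), handler);
            System.out.println("items: " + handler.count);
        }
    }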

  • Writing a large xml string to a file

    Hi,
    I get a large XML file into a String (it might be around 8 MB). After this, I need to write it to a file. Which Java method can I use for this?
    I am getting confused by the various classes for writing a file.
    I want the best, most efficient way of doing this. Can anyone please help?
    public static void writeStringToFile(String fileName, String content,
            boolean append) throws FileNotFoundException, IOException {
        FileOutputStream fos = null;
        File file = new File(fileName);
        try {
            fos = new FileOutputStream(file, append);
            // Encode explicitly; XML defaults to UTF-8 unless the prolog says otherwise
            fos.write(content.getBytes("UTF-8"));
        } finally {
            try {
                if (fos != null)
                    fos.close();
            } catch (Exception _ex) {
            }
        }
    }

    hi_all wrote:
    But is there any particular reason for using the encoding specific to XML?

    Yes, because that's what the XML specification says you're supposed to do. If you don't do that then you may produce malformed XML.

    Will there be any performance degradation or truncation of the large string while writing to the file, if we use out.write(sigUpdateString)?

    Compared to what? You're just writing the data out to a file; the quickest way to do that is to just write the data out. The two things you should do are: (1) use a BufferedWriter, (2) stop obsessing about performance.
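    Putting that advice together, a minimal sketch of the suggested approach, assuming the document's declared encoding is UTF-8 (the file name is whatever you pass in):

    import java.io.BufferedWriter;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class XmlFileWriter {
        public static void writeXml(String fileName, String xml) throws IOException {
            // Buffered, and encoded explicitly to match the XML declaration
            Writer out = new BufferedWriter(new OutputStreamWriter(
                    new FileOutputStream(fileName), "UTF-8"));
            try {
                out.write(xml);
            } finally {
                out.close();
            }
        }
    }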

  • How to process a large XML string passed to a LONG variable?

    I am attempting to extract and loop through some XML that is stored in a variable (v_xml_string) defined with the LONG data type. However, I am receiving an ORA-01460: unimplemented or unreasonable conversion requested when the string value exceeds 20 records in the XML layout below. When I performed a LENGTHB on a sample XML string containing 19 records (just below the error threshold), I got 3895, which I assume is in bytes; this is nowhere near the 32,760-byte limit of PL/SQL variables defined as LONG. I suppose my other alternative is to use the CLOB datatype instead of LONG.
    XML layout:
    <?xml version="1.0"?>
    <DocumentElement>
      <tblElections>
        <EmpID>872G4</EmpID>
        <MgrNTID>JohnDoe</MgrNTID>
        <Entity>050595</Entity>
        <AddlAmt>1000</AddlAmt>
        <CheckedForSave>Y</CheckedForSave>   
      </tblElections>
    </DocumentElement>
    Sample of the code where the error appears to be occurring:
    DECLARE
      v_xml_string LONG;
    BEGIN
        FOR v_xml_rec IN (SELECT t.COLUMN_VALUE.extract('/tblElections/EmpID/text()') .getStringVal() EmpID,
                                 t.COLUMN_VALUE.extract('/tblElections/MgrNTID/text()') .getStringVal() MgrNTID,
                                 t.COLUMN_VALUE.extract('/tblElections/FAEntity/text()') .getStringVal() FAEntity,
                                 t.COLUMN_VALUE.extract('/tblElections/AddlSLEAAmt/text()') .getStringVal() AddlSLEAAmt,
                                 t.COLUMN_VALUE.extract('/tblElections/CheckedForSave/text()') .getStringVal() CheckedForSave
                            FROM TABLE(xmlsequence(XMLTYPE(v_xml_string) .extract('/DocumentElement/tblElections'))) t)
        LOOP
    ... <do some stuff here>
       END LOOP;
    END;

    Strings in SQL are limited to 4000 bytes in length. The LONG variable will work up to 32K as long as it is not used in SQL; if it goes over 4000 you will get the error.
    SQL> declare
      2    l long;
      3  begin
      4    l := rpad('x',32000,'x');
      5    dbms_output.put_line('length is : ' || to_char(length(l)));
      6  end;
      7  /
    length is : 32000
    PL/SQL procedure successfully completed.
    SQL> edi
    Wrote file afiedt.sql
      1  declare
      2    l long;
      3    n number;
      4  begin
      5    l := rpad('x',32000,'x');
      6    select length(l) into n from dual;
      7    dbms_output.put_line('length is : ' || to_char(n));
      8* end;
    SQL> /
    declare
    ERROR at line 1:
    ORA-01460: unimplemented or unreasonable conversion requested
    ORA-06512: at line 6
    SQL> edi
    Wrote file afiedt.sql
      1  declare
      2    l long;
      3    n number;
      4  begin
      5    l := rpad('x',4000,'x');
      6    select length(l) into n from dual;
      7    dbms_output.put_line('length is : ' || to_char(n));
      8* end;
    SQL> /
    length is : 4000
    PL/SQL procedure successfully completed.
    SQL> edi
    Wrote file afiedt.sql
      1  declare
      2    l long;
      3    n number;
      4  begin
      5    l := rpad('x',4001,'x');
      6    select length(l) into n from dual;
      7    dbms_output.put_line('length is : ' || to_char(n));
      8* end;
    SQL> /
    declare
    ERROR at line 1:
    ORA-01460: unimplemented or unreasonable conversion requested
    ORA-06512: at line 6
    SQL>

  • I want to load a large raw XML file in Firefox and parse it with DOM, but for a large XML file Firefox is very slow and sometimes crashes. Is there any option to increase DOM handling memory in Firefox?

    Actually I am using an offline form to load a very large XML file, and I am using Firefox to load that form. But it takes a long time to load, and sometimes the browser crashes. I am parsing this XML file into my form through DOM. Is there any option to increase the DOM handler size in Firefox?


  • Query in a large xml file

    Hello,
    I'm trying to work with very large XML files which are created from CSV files. These files may be very large, up to 1 GB! Until now I have managed to do several validations on these big XML files, and the only thing that works for me is a SAX parser; DOM is out of the question because it fills up memory.
    My next task is to do queries on these files, something like:
    select field1, field2 from file.xml
    where field3 = 'A'
    and (field4 > 'B' or field1 = 'C')
    order by field2.
    I searched the net to find out how to run queries on XML files (since I have never done queries on XML before), but I couldn't find which "query language" is best for large files. If I use XPath (XSLT), won't that cause me memory problems, since XSLT represents the file as an in-memory object?
    My idea is to parse the file with SAX, check every row against the where condition, and write matching rows immediately to a result XML file. But evaluating the where clause can be very complicated without using some tool, and the order by clause is another problematic issue.
    Does anyone have more intelligent ideas about how I can do this? Please help! :(
    The xml file looks like this:
    <doc>
    <row id ="1">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    <row id ="M">
    <column id="1" name="column1">value</column>
    <column id="N" name="columnN">value</column>
    </row>
    </doc>

    Hi all,
    Thank you very much for your replies.
    First, Saxon didn't work because it uses an in-memory parser, and that is what I was trying to avoid.
    A different database is also out of the question, because the customer insists on XML, and also there are some files that can never be converted to a database table, because after some transformations they no longer match the standard CSV format.
    I think that maybe http://exist.sourceforge.net is the right solution for me, but I will probably try it in the next version of my project.
    For now I have managed to build the project with only SAXParser and a lot of back-end programming, and it works OK, although it was very hard to write and will be harder to maintain, so I will look at the eXist project.
    Thanks everyone for the help.
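    The SAX-filter idea described above can look roughly like this; a minimal sketch assuming the <doc>/<row>/<column> layout shown earlier, with one sample where condition hard-coded (file names are placeholders; the order by would still need a second pass or an external sort):

    import java.io.FileWriter;
    import java.io.Writer;
    import java.util.HashMap;
    import java.util.Map;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class RowFilter extends DefaultHandler {
        private final Writer out;
        private Map<String, String> row;          // column name -> value for the current <row>
        private String currentColumn;
        private final StringBuilder text = new StringBuilder();

        RowFilter(Writer out) { this.out = out; }

        @Override
        public void startElement(String uri, String local, String qName, Attributes a) {
            if ("row".equals(qName)) {
                row = new HashMap<String, String>();
            } else if ("column".equals(qName)) {
                currentColumn = a.getValue("name");
                text.setLength(0);
            }
        }

        @Override
        public void characters(char[] ch, int start, int len) {
            text.append(ch, start, len);
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            try {
                if ("column".equals(qName)) {
                    row.put(currentColumn, text.toString());
                } else if ("row".equals(qName) && matches(row)) {
                    // Write the matching row out immediately; nothing accumulates in memory
                    out.write("<row><column1>" + row.get("column1")
                            + "</column1><column2>" + row.get("column2") + "</column2></row>\n");
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        // Sample where clause: field3 = 'A' and (field4 > 'B' or field1 = 'C')
        private boolean matches(Map<String, String> r) {
            String f1 = r.get("column1"), f3 = r.get("column3"), f4 = r.get("column4");
            return "A".equals(f3) && ("C".equals(f1) || (f4 != null && f4.compareTo("B") > 0));
        }

        public static void main(String[] args) throws Exception {
            Writer out = new FileWriter("result.xml");
            out.write("<doc>\n");
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new java.io.File("file.xml"), new RowFilter(out));
            out.write("</doc>\n");
            out.close();
        }
    }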

  • Parsing large XML (1 GB) using a SAX parser

    We have very large XMLs being generated by processes. These XMLs should be validated first and then parsed. We have an implementation that works for small files. My question is:
    how can we validate and parse large XMLs with a SAX parser?

    The same way as parsing a small XML file, no? Why don't you try it? Then if you have problems that you can't solve, ask about them here.
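    For what it's worth, validation and parsing can happen in a single streaming pass; a minimal sketch, assuming an XSD is available (both file names are placeholders):

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import javax.xml.validation.SchemaFactory;
    import org.xml.sax.SAXParseException;
    import org.xml.sax.helpers.DefaultHandler;

    public class ValidatingStream {
        public static void main(String[] args) throws Exception {
            SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            SAXParserFactory spf = SAXParserFactory.newInstance();
            spf.setNamespaceAware(true);
            // Attach the schema so validation happens while the document streams through
            spf.setSchema(sf.newSchema(new File("schema.xsd")));
            SAXParser parser = spf.newSAXParser();
            parser.parse(new File("big.xml"), new DefaultHandler() {
                @Override
                public void error(SAXParseException e) throws SAXParseException {
                    throw e; // surface validation errors instead of silently ignoring them
                }
                // the usual startElement()/characters() callbacks extract data here
            });
        }
    }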

  • Bulk Loader Program to load large xml document

    I am looking for a bulk loader database program that will load a very large XML document. The simple bulk loader application available on the Oracle site will not load this document due to its size, which is approximately 20 MB. Please advise. Thank you.

    From the above document:
    Storing XML Data Across Tables
    Question
    Can the XML-SQL Utility store XML data across tables?
    Answer
    Currently the XML-SQL Utility (XSU) can only store to a single table. It maps a canonical representation of an XML document into any table/view. But of course there is a way to store XML with the XSU across tables. One can do this by using XSLT to transform any document into multiple documents and insert them separately. Another way is to define views over multiple tables (object views if needed) and then do the inserts into the view. If the view is inherently non-updatable (because of complex joins, ...), then one can use INSTEAD-OF triggers over the views to do the inserts.
    -- I've tried this; it works fine.
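    If you drive the XSU from Java, the insert-through-a-view idea above might look like this; a rough sketch using the XDK's OracleXMLSave class, with a hypothetical view name and placeholder connection details:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import oracle.xml.sql.dml.OracleXMLSave;

    public class ViewLoader {
        public static void main(String[] args) throws Exception {
            // Placeholder connect string and credentials
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@host:1521:orcl", "scott", "tiger");
            // Insert through a view over several tables; INSTEAD-OF triggers do the real inserts
            OracleXMLSave save = new OracleXMLSave(conn, "my_multi_table_view"); // hypothetical view
            save.insertXML(args[0]); // expects the canonical <ROWSET><ROW>...</ROW></ROWSET> form
            save.close();
            conn.close();
        }
    }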

  • Inserting large xml data into xmltype

    Hi all,
    In my project I need to insert very large XML data into an XMLType column.
    My table:
    CREATE TABLE TransDetailstblCLOB ( id number, data_xml XMLType) XmlType data_xml STORE AS CLOB;
    I am using the JDBC approach to insert values. It works fine for data less than 4000 bytes when using preparedStatement.setString(1, xmlData). As I have to insert large XML data (>4000 bytes), I am now using the preparedStatement.setClob() methods.
    My code works fine for a table whose column is declared as CLOB explicitly. But for TransDetailstblCLOB, where the column is declared as XMLTYPE with CLOB storage, I am getting the error: "ORA-01461: can bind a LONG value only for insert into a LONG column".
    This error means that there is a mismatch between my setClob() and the column, which means I am not storing into a CLOB column.
    I read on the Oracle site that:
    "When you create an XMLType column without any XML schema specification, a hidden CLOB column is automatically created to store the XML data. The XMLType column itself becomes a virtual column over this hidden CLOB column. It is not possible to directly access the CLOB column; however, you can set the storage characteristics for the column using the XMLType storage clause."
    I don't understand: it's stated here that it is a hidden CLOB column, so why can't I use setClob()? It worked fine for a pure CLOB column (another table), so why is it giving such an error for the XMLTYPE table?
    I have been stuck on this for three days. Can anyone help me please?
    My code snippet:
    query = "INSERT INTO po_xml_tab VALUES (?,XMLType(?)) ";
              //query = "INSERT INTO test VALUES (?,?) ";
         // Get the statement Object
         pstmt =(OraclePreparedStatement) conn.prepareStatement(query);
         // pstmt = conn.prepareStatement(query);
         //xmlData="test";
    //      If the temporary CLOB has not yet been created, create new
         temporaryClob = oracle.sql.CLOB.createTemporary(conn, true, CLOB.DURATION_SESSION);
         // Open the temporary CLOB in readwrite mode to enable writing
         temporaryClob.open(CLOB.MODE_READWRITE);
         log.debug("tempClob opened"+"size bef writing data"+"length "+temporaryClob.getLength()+
                   "buffer size "+temporaryClob.getBufferSize()+"chunk size "+temporaryClob.getChunkSize());
         OutputStream out = temporaryClob.getAsciiOutputStream();
         InputStream in = new StringBufferInputStream(xmlData);
    int length = -1;
    int wrote = 0;
    int chunkSize = temporaryClob.getChunkSize();
    chunkSize=xmlData.length();
    byte[] buf = new byte[chunkSize];
    while ((length = in.read(buf)) != -1) {
    out.write(buf, 0, length);
    wrote += length;
    temporaryClob.setBytes(buf);
    log.debug("Wrote lenght"+wrote);
         // Bind this CLOB with the prepared Statement
         pstmt.setInt(1,100);
         pstmt.setStringForClob(2, xmlData);
         int i =pstmt.executeUpdate();
         if (i == 1) {
         log.debug("Record Successfully inserted!");
         }

    Try this; it works in adodb:
    declare poXML CLOB;
    BEGIN
    poXML := '<OIDS><OID>large text</OID></OIDS>';
    UPDATE a_po_xml_tab set podoc=XMLType(poXML) WHERE poid = 102;
    END;

  • Store large XML files

    Does anybody have experience with storing very large XML files (400 MB) as XMLType in Oracle?
    Is this feasible? Is it efficient (response time)? Can you recommend doing this?
    Or is it better (for performance) to parse the files and store the data in your own DB schema?
    Thanks,
    -Bernhard

    Here is an Oracle ACE with experience in storing large XML in the DB:
    [HOWTO: Load Really Big XML Files|http://www.liberidu.com/blog/?p=473]
    There are more examples on Marco's blog as well of different ways to load XML into the DB.
    Yes, it is feasible. Efficiency depends on the complexity of the XML (not so much its size) and how you are storing it in the DB (XMLType, XMLType associated with a schema, object-relational, hybrid, etc.).
    Mark Drake from the XML DB forum has seen good performance just using INSERT INTO ... VALUES ... (BFILENAME()) to load the XML as well. Some other suggestions are listed in the FAQ on that forum.
    Any future questions you have on this topic would probably be best answered by posting in that forum.

  • Is JAXB suitable for large XML files ?

    Hi,
    I have a very large XML file (~700 MB, schema available). I need to unmarshal this into Java objects and carry out some business validation rules on it. These business rules may involve validating data from content objects that correspond to different sections of this large XML file.
    I am uncertain whether JAXB will help me here (I've just started on it). Does JAXB build the entire content tree for the XML document during Unmarshaller.unmarshal? Is there any way of asking it to build content objects on demand, as opposed to building the whole content tree immediately?
    All help/suggestions appreciated.

    Forgot to add:
    After carrying out validation, the data is put into some RDBMS tables.
    One approach would be to convert the XML files into SQL*Loader-compatible flat files (using a tool), load these flat files into staging tables, perform business validations on the staging table data, and then finally move the data into the main tables. All the validating logic could be either in stored procedures or in Java code.
    The above is very long-winded. It would be great if JAXB could handle very large XML files (without loading the whole XML file into memory) so that business validations could be done in Java, without any intermediate format conversion.
    I hope the above is somewhat clear.
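    For the record, Unmarshaller.unmarshal does build the whole content tree by default, but it can be handed one element at a time from a StAX cursor; a minimal sketch, assuming a JAXB-generated class Record bound to a repeating <record> element (both names are hypothetical, as is the file name):

    import java.io.FileInputStream;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Unmarshaller;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class StreamingUnmarshal {
        public static void main(String[] args) throws Exception {
            XMLStreamReader xsr = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream("big.xml"));
            Unmarshaller um = JAXBContext.newInstance(Record.class).createUnmarshaller();
            while (xsr.hasNext()) {
                if (xsr.getEventType() == XMLStreamConstants.START_ELEMENT
                        && "record".equals(xsr.getLocalName())) {
                    // Only one Record is in memory at a time
                    Record r = um.unmarshal(xsr, Record.class).getValue();
                    // validate r, write it to the RDBMS, then let it be garbage collected
                } else {
                    xsr.next(); // unmarshal() already advanced the cursor past the element
                }
            }
            xsr.close();
        }
    }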

  • Large xml file

    I have a very large XML file (i.e. test.xml) which I need to import into our Oracle database.
    We have Oracle9i Release 9.2.0.6.0.
    Test.xml is contained in a directory on our server, i.e. "xmldirectory".
    I have looked on the web and in forums (including this one) left, right and centre; I also looked at this link http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14259/xdb03usg.htm#sthref246 but was unable to get anything to work.
    Can you please provide examples?
    Thanks

    well, you could
    do this
    http://www.oracle-base.com/articles/9i/ParseXMLDocuments9i.php
    or this
    http://www.devx.com/xml/Article/32046
    or this
    http://www.oracle.com/technology/pub/articles/quinlan-xml.html
    or this
    XMLType view of Relational Content

  • XML Report comes up as blank when a very large sequence is run

    Forum,
    We have multiple test sequences which we mix and match to do testing on different products in our product line. We have no issues when we are working with small sequences (small sequences: those which generate reports of up to 12-50 MB). However, when the sequences become large (~100-200 tests in one go), we get a blank report with the following text:
    Begin Sequence:
    No Sequence Results Found
    We notice this typically for report sizes of 60 MB and more. Is there a limit to how much TestStand's result collection memory can store?
    My predominant report options:
    -- On-the-Fly reporting disabled
    -- XML - expand.xsl selected
    The same settings make no difference for smaller reports, but give the error for larger sequences, so I suspect it is something to do with the size of the report being generated!

    I created a simple sequence file which had 100 Pass/Fail steps using the None Adapter. These 100 steps, in the form of 10 SequenceCall steps each performing 10 Pass/Fail steps, were placed in a For loop in the MainSequence. The loop was set to run 1000 iterations. The ReportOptions were set to XML horizontal, On-the-Fly enabled.
    This I set running. Explorer was open at the folder where the result file was being stored. Two files were generated: the actual result file and a temporary file. I also had Task Manager open to monitor the performance.
    As the result file got larger, at about 20 MB, I noticed that the size of the file was first set to zero and then the data was written to the file (it seemed like the file was deleted and regenerated for each step result). I also noticed that as the file got larger and larger, the storing of the step results had an effect on the performance of the test sequence execution.
    I left this running overnight, and some time later the execution crashed (see attached image). Before closing the dialog, I checked Explorer to see what the state of the result file was. Both files were empty.
    I repeated the run, but this time the number of iterations was set to 500. Again the execution crashed, but this time the result file did have some data, rather a lot of data, over 50 MB.
    I tried to open the file to check the contents, but unfortunately my PC didn't seem to be able to handle a file of that size, as it was taking a long time to load, so I killed the process.
    I don't think changing the iterations from 1000 to 500 had anything to do with getting the results on the second run; I just think the point where it crashed was slightly different, allowing the results to be transferred back to the file.
    It would be interesting to find out what's going on in the On-the-Fly routine.
    It also seems that On-the-Fly is no better than normal report generation. It seems pointless generating a very large file of results; generating smaller files would be the better way to go. Using HTML or XML, a top-level report file could be used to link all the smaller files together.
    Regards
    Ray Farmer
