PostMethod with large XML passed in setRequestEntity is truncated

Hi,
I use PostMethod to transfer large XML to a servlet.
CharArrayWriter chWriter = new CharArrayWriter();
post.marshal(chWriter);
//The chWriter length is 120KB
HttpClient httpClient = new HttpClient();
PostMethod postMethod = new PostMethod(urlStr);
postMethod.setRequestEntity(new StringRequestEntity(chWriter.toString(), null, null));
postMethod.setRequestHeader("Content-type", "text/xml");
int responseCode = httpClient.executeMethod(postMethod);
String responseBody = postMethod.getResponseBodyAsString();
postMethod.releaseConnection();
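For comparison, the same request can declare the content type and charset explicitly via StringRequestEntity's three-argument constructor (UTF-8 here is only an assumption about the document's encoding):
HttpClient httpClient = new HttpClient();
PostMethod postMethod = new PostMethod(urlStr);
// Declaring type and charset here (instead of null, null) lets
// StringRequestEntity compute Content-Length from the encoded bytes;
// note this constructor can throw UnsupportedEncodingException.
postMethod.setRequestEntity(
    new StringRequestEntity(chWriter.toString(), "text/xml", "UTF-8"));
int responseCode = httpClient.executeMethod(postMethod);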
When I read the request in the doPost method of the servlet:
Reader inpReader = request.getReader();
char[] chars = MiscUtils.ReaderToChars(inpReader);
inpReader = new CharArrayReader(chars);
//The data is truncated (in chars[]) ???
static public char[] ReaderToChars(Reader r) throws IOException, InterruptedException {
    //IlientConf.logger.debug("Start reading from stream");
    BufferedReader br = new BufferedReader(r);
    StringBuffer fileData = new StringBuffer();
    char[] buf = new char[1024];
    int numRead = 0;
    while ((numRead = br.read(buf)) != -1) {
        String readData = String.valueOf(buf, 0, numRead);
        fileData.append(readData);
        // no need to reallocate buf on each pass; it is overwritten by read()
    }
    return fileData.toString().toCharArray();
}
Any ideas what the problem could be?
Lior.

Hi,
I use the same code and have 2 tomcats running with the same servlet on each of them.
One running on Apache/2.0.52 (CentOS) Server and the second running on Apache/2.0.52 (windows XP) Server.
I managed to post large XML from (CentOS) Server to (windows XP) Server successfully and failed to post large XML
from (windows XP) Server to (CentOS) Server.
I saw something called mod_isapi that might be disabling posting large XML files.
Can anyone help me get around that limitation?
Thanks,
Lior.

Similar Messages

  • SQL Adapter Crashes with large XML set returned by SQL stored procedure

    Hello everyone. I'm running BizTalk Server 2009 32 bit on Windows Server 2008 R2 with 8 GB of memory.
    I have a Receive Port with the Transport Type being SQL and the Receive Pipeline being XML Receive.
    I have a Send Port which processes the XML from this Receive Port and creates an HIPAA 834 file.
    Once a large file is created (approximately 1.6 GB in XML format, 32 MB in EDI form), a second file of 1.7 GB fails to create.
    I get the following error in the Event Viewer:
    Event Type: Warning
    Event Source: BizTalk Server 2009
    Event Category: (1)
    Event ID: 5740
    Date:  10/28/2014
    Time:  7:15:31 PM
    User:  N/A
    The adapter "SQL" raised an error message. Details "HRESULT="0x80004005" Description="Unspecified error"
    Is there a way to change some BizTalk server settings to help in the processing of this large XML set without the SQL adapter crashing?
    Paul

    Could you check SQL Profiler to trace or determine if you are facing a deadlock?
    Is your Adapter running under 64 bits?
    Have you considered the possibility of using the SqlBulkInsert Adapter?
    http://blogs.objectsharp.com/post/2005/10/23/Processing-a-Large-Flat-File-Message-with-BizTalk-and-the-SqlBulkInsert-Adapter.aspx

  • ArrayIndexOutOfBoundsException with large XML

    Hello,
    I have some java code that queries the DB and displays the XML in a browser. I am using the oracle.jbo.ViewObject object with the writeXML() method. Everything works well until I try to process a large XML file, then I get the following error:
    java.lang.ArrayIndexOutOfBoundsException: 16388
         at oracle.xml.io.XMLObjectOutput.writeUTF(XMLObjectOutput.java:215)
         at oracle.xml.parser.v2.XMLText.writeExternal(XMLText.java:354)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1414)
         at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1267)
         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1245)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:278)
         at com.evermind.server.ejb.EJBUtils.cloneSerialize(EJBUtils.java:409)
         at com.evermind.server.ejb.EJBUtils.cloneObject(EJBUtils.java:396)
    etc...
    I can put in the query to only allow a specific size to be displayed, but the users need to be able to access the larger XML files also. Has anyone else run into this issue?
    Oracle 10g
    Any help or pointers are greatly appreciated.
    Thank you.
    S

    No. We are not limiting the size in our code. Here is a snip of the offending code. The exception occurs on the "results = batchInterfaceView.writeXML(0, 0);" line, but only with larger files.
    <pre>
    try {
        // Request and response helper classes
        XMLHelper request = new XMLHelper(inputXML);
        response = new ResponseXMLHelper();
        if (request.doesValueExist(APP_ERROR_ID)) {
            // get input parameter
            strAppErrorId = request.getValue(APP_ERROR_ID);
            appErrorId = NumberConverter.toBigDecimal(strAppErrorId);
            // get Pos location view
            ViewObject batchInterfaceView =
                findViewObject(GET_ERROR_VIEW, PACKAGE_NAME);
            // get data for selected BatchInterface
            batchInterfaceView.setWhereClauseParam(0, appErrorId);
            batchInterfaceView.executeQuery();
            results = batchInterfaceView.writeXML(0, 0);
            response.addView(results);
        }
    } catch (JboException e) {
        // error handling omitted in the snippet
    }
    </pre>
    Thank you again for any help.
    S

  • Does the parser work with large XML files?

    Is there a restriction on the XML file size that can be loaded into the parser?
    I am getting an out of memory exception reading in a large XML file (10MB) using the commands
    DOMParser parser = new DOMParser();
    URL url = createURL(argv[0]);
    parser.setErrorStream(System.err);
    parser.setValidationMode(true);
    parser.showWarnings(true);
    parser.parse(url);
    Win NT 4.0 Server
    Sun JDK 1.2.2
    ===================================
    Error output
    ===================================
    Exception in thread "main" java.lang.OutOfMemoryError
    at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
    at java.util.Hashtable.<init>(Unknown Source)
    at oracle.xml.parser.v2.DTDDecl.<init>(DTDDecl.java, Compiled Code)
    at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
    at oracle.xml.parser.v2.ValidatingParser.checkDefaultAttributes(ValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.NonValidatingParser.parseAttributes(NonValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.ValidatingParser.parseRootElement(ValidatingParser.java:97)
    at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:199)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:146)
    at TestLF.main(TestLF.java:40)
    null

    We have a number of test files that are that size and it works without a problem. However using the DOMParser does require significantly more memory than your doc size.
    What is the memory configuration of the JVM that you are running with? Have you tried increasing it? Are you using our latest version 2.0.2.6?
    Oracle XML Team

  • Problems with Large XML files

    I have tried increasing the memory pool using the -mx and -ms options. It doesn't work. I am using your latest XML parser for Java v2. Please let me know if there are some specific options I should be using.
    Thanx,
    -Sameer

    You might try using a different JDK/JRE - either a 1.1.6+ or 1.3 version, as 1.2 in our experience has the largest footprint. If this doesn't work, can you give us some details about your system configuration? Finally, you might try the SAX interface as it does not need to load the entire DOM tree into memory.
    Oracle XML Team
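    To illustrate that last suggestion, here is a minimal generic JAXP SAX sketch (not specific to the Oracle parser; the class name and file argument are placeholders). Each element is handled in a callback as it is read, so no tree is built in memory:
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class CountElements extends DefaultHandler {
        private int count;

        public void startElement(String uri, String local, String qName, Attributes atts) {
            count++; // react to each element here instead of walking a DOM afterwards
        }

        public static void main(String[] args) throws Exception {
            CountElements handler = new CountElements();
            SAXParserFactory.newInstance().newSAXParser()
                            .parse(new java.io.File(args[0]), handler);
            System.out.println(handler.count + " elements");
        }
    }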

  • Xml query hangs with large xml response from utl_http request

    We are having a very sensitive problem in our Production environment with an XML query.
    When it receives a small or medium size xml, the query shown below works. But when the xml is large or very large, it hangs and slows down the whole database.
    We are on Oracle 11gR2.
    We are using a clob to parse the response from the http request.
    What could be the problem or the solution? Please help. It's urgent...
    SELECT opciones_obj (x.strindice,
    x.nombrecompleto,
    x.nombre,
    x.strtipodato,
    x.codigoopcion,
    x.floatval,
    x.strtipo,
    x.strval)
    BULK COLLECT INTO t_opciones
    FROM XMLTABLE (
    xmlnamespaces (
    'http://schemas.xmlsoap.org/soap/envelope/' AS "env",
    'http://wsevaluarreglacondicioncomercial/' AS "ns0",
    'http://wsevaluarreglacondicioncomercial/types/' AS "ns1",
    'http://www.oracle.com/webservices/internal/literal' AS "ns2"),
    '/env:Envelope/env:Body/ns0:listarOpcionesAtributoEventoResponseElement/ns0:result/ns1:listaVariables/ns2:item/ns2:item'
    PASSING rsp_xml
    COLUMNS strindice VARCHAR2 (4000)
    PATH 'ns2:mapEntry[ns2:key="strIndice"]/ns2:value',
    nombrecompleto VARCHAR2 (4000)
    PATH 'ns2:mapEntry[ns2:key="nombreCompleto"]/ns2:value',
    nombre VARCHAR2 (4000)
    PATH 'ns2:mapEntry[ns2:key="nombre"]/ns2:value',
    strtipodato VARCHAR2 (4000)
    PATH 'ns2:mapEntry[ns2:key="strTipoDato"]/ns2:value',
    codigoopcion NUMBER
    PATH 'ns2:mapEntry[ns2:key="codigoOpcion"]/ns2:value',
    floatval FLOAT
    PATH 'ns2:mapEntry[ns2:key="floatVal"]/ns2:value',
    strtipo VARCHAR2 (4000)
    PATH 'ns2:mapEntry[ns2:key="strTipo"]/ns2:value',
    strval VARCHAR2 (4000)
    PATH 'ns2:mapEntry[ns2:key="strVal"]/ns2:value') x;

    "What could be the problem or the solutions?"
    1) Create an XMLType table (could be temporary) using binary XML storage:
    create table tmp_xml of xmltype
    xmltype store as securefile binary xml;
    2) In your procedure, load the XMLType containing the response (rsp_xml) into the table:
    insert into tmp_xml values (rsp_xml);
    3) Then, execute the query directly from the table:
    SELECT opciones_obj ( ... )
    BULK COLLECT INTO t_opciones
    FROM tmp_xml t
       , XMLTABLE (
             xmlnamespaces ( ... ),
             '/env:Envelope/env:Body/...'
             PASSING t.object_value
             COLUMNS ...
    4) At the end of the procedure, delete (or truncate) the table, or simply let the table delete itself when the session ends (in case you created it TEMPORARY).

  • MII Performance with Large XML

    HI,
    We are facing a performance issue while parsing XML of large size, around 50MB.
    Initially XMII was crashing due to an "out of heap memory" error. However, with some changes that problem has been removed. But
    now we are facing an issue with the time taken by the payload to execute. It takes more than half an hour to execute the transaction.
    The solution tried so far has decreased the time to half an hour; earlier it was taking more than one and a half hours to process.
    We have tried parallel processing by asynchronous calls to the transactions.
    Is there any way to further reduce the processing time?
    Further, is it possible to find out which parser MII uses internally for XML, SAX or DOM?
    Thanks
    Amit

    Hi Amit,
    Just some tips to improve performance of BLS.
    1. Use XPath whenever possible.
    2. Remove unnecessary repeaters performing on BLS.
    3. Check the link to Optimizing BLS Performance:
    Optimizing BLS Performance for XML Handling in SAP MII
    If you are storing the data in a database, just pass the whole xml to the query and insert the data using bulk insert.
    Thanks
    Anshul

  • TransformerHandler throws OutOfMemoryError with large xml files

    I'm using TransformerHandler to convert any content to SAX events and transform it using XSLT into an XML file.
    The problem is that for a large amount of content I get an OutOfMemoryError.
    It seems that the content is kept in memory and only flushed when I call handler.endDocument();
    I tried using auto-flush writers as the Result, and calling the flush() method myself, but nothing helped.
    Here is the example - please help!
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.sax.SAXTransformerFactory;
    import javax.xml.transform.sax.TransformerHandler;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import org.xml.sax.helpers.AttributesImpl;

    public class Test {
         /**
          * test handler memory usage
          * @param loops no of loops - when large enough - OutOfMemoryError !!!
          * @param xsltFilePath xslt file
          * @param targetXmlFile output xml file
          * @throws Exception
          */
         public static void testHandlerMemUsage(int loops, String xsltFilePath, String targetXmlFile) throws Exception {
              // verify SAX support
              TransformerFactory factory = TransformerFactory.newInstance();
              if (!factory.getFeature(SAXTransformerFactory.FEATURE))
                   throw new UnsupportedOperationException("SAX transformations not supported");
              TransformerHandler handler =
                   ((SAXTransformerFactory) factory).newTransformerHandler(new StreamSource(xsltFilePath));
              handler.setResult(new StreamResult(targetXmlFile));
              handler.startDocument();
              handler.startElement(null, "root", "root", new AttributesImpl());
              // loop
              for (int i = 0; i < loops; i++) {
                   handler.startElement(null, "el-" + i, "el-" + i, new AttributesImpl());
                   handler.characters("value".toCharArray(), 0, "value".length());
                   handler.endElement(null, "el-" + i, "el-" + i);
              }
              handler.endElement(null, "root", "root");
              //System.out.println("end document");
              // only after endDocument() does output start to print..
              handler.endDocument();
              //System.out.println("ended document");
         }

         public static void main(String[] args) throws Exception {
              System.out.println("--starting..");
              testHandlerMemUsage(500000, "/copy.xslt", "/testHandlerMemUsage.xml");
              System.out.println("--we are still here -- increase loops..");
         }
    }

    Did you try increasing memory when starting java with the -Xmx parameter (e.g. java -Xmx256m Test)? You know that java uses only 64MB by default, so you might need to increase it to e.g. 256MB for your XML to work.

  • Performance Issues with large XML (1-1.5MB) files

    Hi,
    I'm using an XML Schema based Object relational storage for my XML documents which are typically 1-1.5 MB in size and having serious performance issues with XPath Query.
    When I do XPath query against an element of SQLType varchar2, I get a good performance. But when I do a similar XPath query against an element of SQLType Collection (Varray of varchar2), I get a very ordinary performance.
    I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. Also, I have tried all sorts of storage options available for Collections, i.e. Varrays, Nested Tables, IOTs, LOBs, Inline, etc., and all of these gave me the same bad performance.
    I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
    I guess I'm running out of options and patience as well.;)
    I would appreciate any ideas/suggestions, please help.....
    Thanks;
    Ramakrishna Chinta

    Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0

  • Large XML and DBMS_XMLSTORE

    Oracle DB = 10.2.0.3
    I have XML messages (size range 300KB - 15MB) that are inserted into a table with an XMLType column throughout the day.
    The volume will vary...say 10 transactions per minute. A single transaction can contain any number of entities/objects.
    I have no need to preserve the XML - aside from assigning it a sequence number, date etc. and storing it for debugging (if needed).
    Currently, I have a trigger defined on the table that calls various database packages
    that shred the data via XMLTable (i.e. insert into tab1...select...xmltable..).
    The problem is that I potentially need to insert into 50+ tables out of 200+ tables per transaction. So, I'm passing through the single instance document 200+ times.
    The number of tables that are needed to represent the XML will continue to grow.
    A 300KB XML message takes approx 10 seconds to shred. Obviously, I'm concerned about the larger messages.
         So, I was wondering if doing the following would be any better:
    1. Have a XSLT transform the original message into "pseudo" Oracle canonical format.
    At this point, all of the data contained in the message would need to be shredded and all unwanted data from the original message would be filtered..i.e. lowering the message size a little.
    The message to be inserted would then look something like this:
    <Transaction>
         <Table>
         <Name>TAB1</Name>
         <ROWSET>
              <ROW>
              <COL1>
              <COLn>
              </ROW>
    </ROWSET>
    </Table>
         <Table>
         <Name>TAB2</Name>
         <ROWSET>
              <ROW>
              <COL1>
              <COLn>
              </ROW>
    </ROWSET>
    </Table>
    </Transaction>
    2. Next, define a trigger on the table like this:
    CREATE OR REPLACE TRIGGER tr_ai_my_table
    AFTER INSERT
    ON my_table
    FOR EACH ROW
    DECLARE
        CURSOR table_cur
        IS
            SELECT a.table_nm
                  ,a.xml_fragment
            FROM XMLTABLE ('/Transaction/Table[Name]'
                     PASSING :NEW.xml_document
                     COLUMNS
                         table_nm     VARCHAR2(30) PATH 'Name'
                        ,xml_fragment XMLTYPE      PATH 'ROWSET'
                 ) a;
        v_context DBMS_XMLSTORE.ctxtype;
        v_rows    NUMBER;
        -- XML Date format: 2007-08-02
        -- XML Datetime format: 2007-08-02T11:58:28.229-04:00
        -- ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD';
        -- ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD-HH24.MI.SS.FF';
        -- ALTER SESSION SET NLS_TIMESTAMP_TZ_FORMAT = 'YYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM';
    BEGIN
        FOR table_rec IN table_cur
        LOOP
            v_context := DBMS_XMLSTORE.newcontext (table_rec.table_nm);
            v_rows    := DBMS_XMLSTORE.insertxml (v_context, table_rec.xml_fragment);
            DBMS_XMLSTORE.closecontext (v_context);
        END LOOP;
    END;
    I know this will get me out of maintaining 200+ insert statements/xpaths. Also, I wouldn't be coding the XSLT either, and the XSLT may end up running on some specialized hardware for optimized performance.
    But do I gain a significant performance boost using the Oracle canonical format
    and DBMS_XMLSTORE even though I'm still building the DOM tree to build the cursor?

    I saw a problem with the XMLBeans implementation with WLI 8.1.
    We had memory issues with large XML messages, as they seemed to stay in memory even though the JPD had finished.
    To solve this we inserted code at the end of a process to assign the top-level XMLBeans document class to a tiny version of the same message type. This is valid when the document object is a top-level variable of the JPD.
    This was done to stop XMLBeans caching the large message behind the scenes.
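    A rough sketch of that workaround in plain Java, assuming an XMLBeans document held in a long-lived variable; XmlObject stands in for the generated document type, and the element names are made up:
    import org.apache.xmlbeans.XmlObject;

    public class ReleaseLargeDoc {
        // Stands in for the JPD's top-level document variable (an assumption).
        static XmlObject topLevelDoc;

        public static void main(String[] args) throws Exception {
            // Imagine this held a multi-megabyte message after processing.
            topLevelDoc = XmlObject.Factory.parse("<msg>large payload...</msg>");

            // Workaround: repoint the variable at a tiny document of the same
            // shape so the large backing store becomes garbage-collectable.
            topLevelDoc = XmlObject.Factory.parse("<msg/>");
        }
    }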

  • Best method for encrypting/decrypting large XML files ( 100MB)

    I am in need of encrypting XML for large part files that can get upwards of 100MB.
    I found some articles and code, but the only example I was successful in getting to work used XMLCipher, which takes a Document, parses it, and then encrypts it.
    Obviously, 100MB files do not cooperate well with DOM, so I want to find a better method for encryption/decryption of these files.
    I found some articles using CipherInputStream and CipherOutputStream, but am not clear if this is the way to go and if this will avoid memory errors.
    import java.io.*;
    import java.security.spec.AlgorithmParameterSpec;
    import javax.crypto.*;
    import javax.crypto.spec.IvParameterSpec;

    public class DesEncrypter {
        Cipher ecipher;
        Cipher dcipher;

        public DesEncrypter(SecretKey key) {
            // Create an 8-byte initialization vector
            byte[] iv = new byte[]{
                (byte)0x8E, 0x12, 0x39, (byte)0x9C,
                0x07, 0x72, 0x6F, 0x5A
            };
            AlgorithmParameterSpec paramSpec = new IvParameterSpec(iv);
            try {
                ecipher = Cipher.getInstance("DES/CBC/PKCS5Padding");
                dcipher = Cipher.getInstance("DES/CBC/PKCS5Padding");
                // CBC requires an initialization vector
                ecipher.init(Cipher.ENCRYPT_MODE, key, paramSpec);
                dcipher.init(Cipher.DECRYPT_MODE, key, paramSpec);
            } catch (java.security.InvalidAlgorithmParameterException e) {
            } catch (javax.crypto.NoSuchPaddingException e) {
            } catch (java.security.NoSuchAlgorithmException e) {
            } catch (java.security.InvalidKeyException e) {
            }
        }

        // Buffer used to transport the bytes from one stream to another
        byte[] buf = new byte[1024];

        public void encrypt(InputStream in, OutputStream out) {
            try {
                // Bytes written to out will be encrypted
                out = new CipherOutputStream(out, ecipher);
                // Read in the cleartext bytes and write to out to encrypt
                int numRead = 0;
                while ((numRead = in.read(buf)) >= 0) {
                    out.write(buf, 0, numRead);
                }
                out.close();
            } catch (java.io.IOException e) {
            }
        }

        public void decrypt(InputStream in, OutputStream out) {
            try {
                // Bytes read from in will be decrypted
                in = new CipherInputStream(in, dcipher);
                // Read in the decrypted bytes and write the cleartext to out
                int numRead = 0;
                while ((numRead = in.read(buf)) >= 0) {
                    out.write(buf, 0, numRead);
                }
                out.close();
            } catch (java.io.IOException e) {
            }
        }
    }
    This looks like it might fit, but there is one more twist: I am using a persistence manager and xml encoding to accomplish that, so I am not sure how (where) to implement this method without affecting persistence.
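    For reference, one hypothetical way to drive the DesEncrypter class above on files (the file names and DES key generation are illustrative, not from the original post):
    import java.io.*;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class DesEncrypterDemo {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("DES").generateKey();
            DesEncrypter encrypter = new DesEncrypter(key);
            InputStream in = new FileInputStream("part.xml");
            OutputStream out = new FileOutputStream("part.xml.enc");
            // The 1 KB buffer inside encrypt() means the whole file is never
            // held in memory; encrypt() also closes the output stream.
            encrypter.encrypt(in, out);
            in.close();
        }
    }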
    Any guidance on what would work best in this situation would be appreciated.
    Regards,
    vbplayr2000

    I can give some general guidelines that might help, having done much similar work:
    You have 2 different issues, at least from my reading of your problem:
    1) How to deal with large XML docs that most parsers will not handle without memory issues
    2) Where to hide or "black box" the encrypt/decrypt routines
    #1: Check into XPP3/XmlPull. Yes, it's different than the other XML parsers you are used to using, and more work is involved, but it is blazing fast and can be used to parse a stream as it is being read. You can populate beans and process as needed, since there is really not much "inversion of control" involved compared to parsers that go on to finish the entire document or load it all into memory.
    #2: Implement Serializable and write your own readObject/writeObject methods. Place the encrypt/decrypt in there as appropriate. That will "hide" the implementation and should be something any persistence manager can deal with.
    Regards,
    antarti
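    As a rough illustration of #1, a minimal XmlPull loop (the file name and element handling are placeholders) that walks a document as a stream:
    import java.io.FileReader;
    import org.xmlpull.v1.XmlPullParser;
    import org.xmlpull.v1.XmlPullParserFactory;

    public class PullScan {
        public static void main(String[] args) throws Exception {
            XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
            parser.setInput(new FileReader("part.xml"));
            // The parser advances event by event; only a small window of the
            // document is in memory at any time.
            for (int ev = parser.getEventType();
                 ev != XmlPullParser.END_DOCUMENT;
                 ev = parser.next()) {
                if (ev == XmlPullParser.START_TAG) {
                    // populate beans / process the element here
                }
            }
        }
    }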

  • Problem converting XPS document with large images

    Hi,
    When I try to convert XPS documents to PDF in Acrobat 9.0 Pro, I have problems with "large images". They are either truncated or scaled down. The only images that convert correctly are small images that are not stretched to fill a large area. Attached to this message is a simple example. Does anyone know a way to correct this problem? By the way, the page is slightly larger than the standard Letter size.
    Thanks!

    Thanks for your suggestion! In fact, I am the programmer who creates the XPS files from a .NET document. Our company used the method "Postscript + PDF" during a few months and it worked quite well. Recently, we introduced semi-transparent objects and fonts in our documents and the resulting PS prints look really bad. Since I can create a perfect XPS document, and since Acrobat can convert it to PDF, I gave it a try. It solved all the problems that I had with PS printing but there is that thing with images... Anyway, since Acrobat converts XPS to PDF, I guess they should try to solve the bug with the images. As for me, I will see if I can get a component to create PDF documents directly from our application.

  • OSB - Iterating over large XML files with content streaming

    Hi @ll
    I have to iterate over all item in large XML files and insert into a oracle database.
    The file is about 200 MB and contains around 500'000 items, and I am using OSB 10gR3.
    The XML structure is something like this:
    <allItems>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    </allItems>
    Actually I thought about using a proxy service with content streaming enabled and a "for each" action for iterating
    over all items. But for this the whole XML structure has to be materialized into a variable, otherwise it is not possible!
    More about streaming large files can be found here:
    [http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#large_messages]
    There it is written: "When you enable streaming for large message processing, you cannot use the ... for each...".
    And for accessing single items you should use an assign action with an xpath like "$body/allItems/item[1]";
    this works fine, and the whole XML stream does not have to be materialized.
    So my idea was to use the "for each" action and sequentially process all items with an xpath like:
    $body/allItems/item[$counter]
    But the "for each" action just allows iterating over a sequence of xml items by defining a selection xpath
    and the variable that contains all items. I would like a "repeat until" construct that iterates as long
    as $body/allItems/item[$counter] returns a non-empty result. Or can I use the "for each" action differently?
    Does the OSB provide any other iterating mechanism? I know there is this split-join construct that supports
    different looping techniques, but as far as I know it does not support content streaming; is this correct?
    Did I miss something?
    Thanks a lot for helping!
    Cheers
    Dani
    Edited by: user10095731 on 29.07.2009 06:41

    Hi Dani,
    Yes, in my opinion this would be the best approach. You can use content streaming to pass this large xml to an EJB, and once it passes successfully the EJB should operate on it. If you want any result back (for further routing), you can get it back from the EJB.
    EJB gives you the power of java to process this file, and from a java perspective 150 MB is not very LARGE data. Ensure that you are using buffering. Check out this link for an explanation of Java IO Streams and, in particular, buffered streams -
    http://java.sun.com/developer/technicalArticles/Streams/ProgIOStreams/
    Try dom4j with the xpp (XML Pull Parser) parser in case you have a parsing requirement. We have worked with a 1.2GB file using this technique.
    Regards,
    Anuj
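    Outside OSB, the same one-item-at-a-time idea can be sketched in plain Java with StAX (the file and element names are assumptions); each <item> is visited as the stream is read, so the 200 MB document is never materialized:
    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class ItemStreamer {
        public static void main(String[] args) throws Exception {
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new BufferedInputStream(
                            new FileInputStream("allItems.xml")));
            int count = 0;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "item".equals(reader.getLocalName())) {
                    count++; // insert this item into the database here
                }
            }
            reader.close();
            System.out.println(count + " items processed");
        }
    }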

  • Transform Large XML files with XSL

    HELP, LARGE XML FILES
    I have got a 30 - 50 MB large xml file, and I would like to transform it
    with xslt; I tried, but I got an OutOfMemory Exception.
    I tried to find a solution on the JAVA site, but I didn't find one.
    I cannot split my xml file. I hope for some help.
    I tried really everything.
    Thanks a lot

    What is your machine configuration?
    The above 2 suggestions would help, but it does depend on how your software is written.
    Please post more info about your environment and object design.
    Chintan

  • XMLAGG giving ORA-19011 when creating CDATA with large embedded XML

    What I'm trying to achieve is to embed XML (XMLTYPE return type) inside a CDATA block. However, I'm receiving "ORA-19011: Character string buffer too small" when generating large amounts of information within the CDATA block using XMLCDATA within an XMLAGG function.
    Allow me to give a step by step explanation through the thought process.
    Creating the inner XML element
    For example, suppose I have the subquery below:
    select
        XMLELEMENT("InnerElement",DUMMY) as RESULT
    from dual;
    I would get the following:
    RESULT
    <InnerElement>X</InnerElement>
    Creating outer XML element, embedding inner XML element in CDATA
    Now, if my desire were to embed XML inside a CDATA block that's within another XML element, I can achieve it by doing so:
    select
        XMLELEMENT("OuterElement",
            XMLCDATA(XML_RESULT)
        ) XML_IN_CDATA_RESULT
    FROM
    (select
        XMLELEMENT("InnerElement",DUMMY) as XML_RESULT
    from dual);
    This gets exactly what I want, embedding XML into a CDATA block, with the CDATA block in an XML element.
    XML_IN_CDATA_RESULT
    <OuterElement><![CDATA[<InnerElement>X</InnerElement>]]></OuterElement>
    So far so good. But the real-world dataset naturally isn't that tiny. We'd have more than one record. For reporting, I'd like to put all the <OuterElement> under an XML root.
    Now, I want to put that data in an XML root element called <Root>, and aggregate all the <OuterElement> under it.
    select
        XMLELEMENT("Root",
            XMLAGG(
                XMLELEMENT("OuterElement",
                    XMLCDATA(INNER_XML_RESULT)
                )
            )
        )
    FROM
        (select
             XMLELEMENT("InnerElement",DUMMY) as INNER_XML_RESULT
         from dual);
    And to my excitement, I get what I want:
    <Root>
        <OuterElement><![CDATA[<InnerElement>X</InnerElement>]]></OuterElement>
        <OuterElement><![CDATA[<InnerElement>Y</InnerElement>]]></OuterElement>
        <OuterElement><![CDATA[<InnerElement>Z</InnerElement>]]></OuterElement>
    </Root>
    But... like the real world again... the content of <InnerElement> isn't always so small and simple.
    The problem comes when <InnerElement> contains lots and lots of data.
    When attempting to generate large XML, XMLAGG complains with the following:
    ORA-19011: Character string buffer too small
    The challenge is to keep the XML formatting of <InnerElement> within CDATA. A particular testing tool I'm using parses XML out of a CDATA block. I'm hoping to use [Oracle] SQL to generate a test suite to be imported into the testing tool.
    I would appreciate any help and insight I could receive, and hopefully overcome this roadblock.
    Edited by: user6068303 on Jan 11, 2013 12:33 PM
    Edited by: user6068303 on Jan 11, 2013 12:34 PM

    That's an expected error.
    XMLCDATA takes a string as input, but you're passing it an XMLType instance, therefore an implicit conversion occurs from XMLType to VARCHAR2 which is, as you know, limited to 4000 bytes.
    This indeed gives an error:
    SQL> select xmlelement("OuterElement", xmlcdata(inner_xml))
      2  from (
      3    select xmlelement("InnerElement", rpad(to_clob('X'),8000,'X')) as inner_xml
      4    from dual
      5  ) ;
    ERROR:
    ORA-19011: Character string buffer too small
    no rows selected
    The solution is to serialize the XMLType to CLOB before passing it to XMLCDATA:
    SQL> select xmlelement("OuterElement",
      2           xmlcdata(xmlserialize(document inner_xml))
      3         )
      4  from (
      5    select xmlelement("InnerElement", rpad(to_clob('X'),8000,'X')) as inner_xml
      6    from dual
      7  ) ;
    XMLELEMENT("OUTERELEMENT",XMLCDATA(XMLSERIALIZE(DOCUMENTINNER_XML)))
    <OuterElement><![CDATA[<InnerElement>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    (use the getClobVal method if your version doesn't support XMLSerialize)
