Big XML Problem

Hello,
I am using an ObjectOutputStream to write an XML file to disk; however, this causes me problems, as it inserts some header characters at the beginning of the file it writes. These extra characters prevent my SAX parser from correctly reading the file. When I manually open the file and remove the characters in Notepad, the problem is solved. How can I save XML files without the extra 'header' characters?
Please help, I've been stuck on this problem for hours!

How can you use ObjectOutputStream to write XML?
ObjectOutputStream mandates its own object stream format. You can customize the serialization process by implementing readObject and writeObject, but you can't change the overall format (headers etc.) specified in http://java.sun.com/j2se/1.4.2/docs/guide/serialization/spec/serialTOC.html
Maybe you could look into using XMLEncoder and XMLDecoder? http://java.sun.com/j2se/1.4.2/docs/api/java/beans/XMLEncoder.html
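For illustration, here is a minimal sketch of writing a bean graph with XMLEncoder; the output is plain XML text with no serialization header bytes, so a SAX parser can read it back. The JButton is just a stand-in for whatever object is being saved. (If the data is already XML text, a plain FileWriter or OutputStreamWriter is all that's needed; ObjectOutputStream is only for Java's binary serialization format.)

    import java.beans.XMLEncoder;
    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;

    public class SaveAsXml {
        public static void main(String[] args) throws Exception {
            // XMLEncoder writes a plain-text XML description of the bean,
            // with no binary stream header.
            XMLEncoder encoder = new XMLEncoder(
                    new BufferedOutputStream(new FileOutputStream("config.xml")));
            encoder.writeObject(new javax.swing.JButton("Hello"));
            encoder.close();
        }
    }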

Similar Messages

  • Loading big XML files using JDBC gives errors

    Hi,
I've created an XMLType table using binary storage, with the constraint that stored documents may use any schema:
    CREATE TABLE XMLBIN OF XMLTYPE
    XMLTYPE STORE AS BINARY XML
ALLOW ANYSCHEMA;
Then I use JDBC to store a relatively large document using the following code:
    Class.forName("oracle.jdbc.driver.OracleDriver").newInstance();
    String connectionString = "jdbc:oracle:thin:@host:1521:sid";
    File f = new File("c:\\temp\\big.xml");
    Connection conn = DriverManager.getConnection(connectionString, "username", "password");
    XMLType xml = XMLType.createXML(conn,new FileInputStream(f));
    String statementText = "INSERT INTO xmlbin VALUES (?)";
    OracleResultSet resultSet = null;
    OracleCallableStatement statement = (OracleCallableStatement)conn.prepareCall(statementText);
    statement.setObject(1,xml);
    statement.execute();
    statement.close();
    conn.commit();
conn.close();
Loading a file of 61 MiB (63.9 MB, if 1 MB = 10^6 bytes) or less doesn't give any errors; loading a bigger file gives the following error:
    java.sql.SQLRecoverableException: Io exception: Software caused connection abort: socket write error
            at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:101)
            at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:112)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:173)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:229)
            at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:458)
            at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:960)
            at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222)
            at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3381)
            at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3482)
            at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:3856)
            at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1373)
        at jdbctest.Tester.main(Tester.java:60)
A successful insert of a 63 MiB file takes about 23 seconds to execute. The 70 MiB file already fails after a few seconds, so I'm ruling out any timeouts.
I'm guessing there are some buffers that need to be enlarged, but I don't have a clue which ones.
Does anyone have any idea what might cause the problem and how to resolve it?
My server runs Oracle 11g on Win32. The client is Windows running Sun Java 1.6, using ojdbc6.jar, with the Oracle 11g Client installed.
    Cheers,
    Harald

    Hi Mark,
    The trace log in the OEM shows me:
Errors in file d:\oracle11g\app\helium\diag\rdbms\helium\helium\trace\helium_ora_6948.trc (incident=7510): ORA-07445: exception encountered: core dump [__intel_new_memcpy()+613] [ACCESS_VIOLATION] [ADDR:0x0] [PC:0x6104B045] [UNABLE_TO_WRITE] []
If needed I can post the full contents (if I find out how, am still a novice :-))
    Cheers,
    Harald
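One client-side variation that may be worth testing (a sketch only: ORA-07445 is a server-side crash, so this may simply move the failure, and it assumes the driver binds the character stream as a CLOB): stream the file and let the server construct the XMLTYPE, instead of materializing the whole document in the client with XMLType.createXML.

import java.io.File;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class StreamingInsert {
    public static void main(String[] args) throws Exception {
        File f = new File("c:\\temp\\big.xml");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@host:1521:sid", "username", "password");
        conn.setAutoCommit(false);
        // The XMLTYPE is built server-side from a streamed CLOB bind.
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO xmlbin VALUES (XMLTYPE(?))");
        // Length is in characters; exact only for single-byte encodings.
        ps.setCharacterStream(1, new FileReader(f), (int) f.length());
        ps.execute();
        ps.close();
        conn.commit();
        conn.close();
    }
}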

  • Slow extraction from big XML files with PL/SQL

    Hello,
I have a performance problem extracting attributes from big XML files; I tested with a size of ~30 MB.
The XML file is the response of a web service. The response includes some metadata of a document and the document itself; the document is embedded inline, Base64-encoded. Here is an example of an XML file I want to analyse:
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
       <soap:Body>
          <ns2:GetDocumentByIDResponse xmlns:ns2="***">
             <ArchivedDocument>
                <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
                   <Metadata archiveDate="2013-08-01+02:00" documentID="123">
                      <Descriptor type="Integer" name="fachlicheId">
                      <Value>123</Value>
                      </Descriptor>
                      <Descriptor type="String" name="user">
                         <Value>***</Value>
                      </Descriptor>
                      <InternalDescriptor type="Date" ID="DocumentDate">
                         <Value>2013-08-01+02:00</Value>
                      </InternalDescriptor>
                      <!-- Here some more InternalDescriptor Nodes -->
                   </Metadata>
                   <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
                      <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
                   </RepresentationDescription>
                </ArchivedDocumentDescription>
                <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
                   <Data fileName="20mb.test">
                      <BinaryData>
                        <!-- Here is the BASE64 converted document -->
                      </BinaryData>
                   </Data>
                </DocumentPart>
             </ArchivedDocument>
          </ns2:GetDocumentByIDResponse>
       </soap:Body>
    </soap:Envelope>
Now I want to extract the filename and the Base64-encoded document from this XML response.
For the extraction of the filename I use the following command:
v_filename := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
For the extraction of the binary data I use the following command:
v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
My problem is the performance of this extraction. Here is a summary of the start and end times for the two commands:

Command:    v_filename_bcm := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
Start time: 10.09.13 - 15:46:11,402668000
End time:   10.09.13 - 15:47:21,407895000
Difference: 00:01:10,005227

Command:    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
Start time: 10.09.13 - 15:47:21,407895000
End time:   10.09.13 - 15:47:22,336786000
Difference: 00:00:00,928891
As you can see, the extraction of the filename is slower than the document extraction: the filename extraction alone needs about 70 seconds.
I wondered about this and started some tests.
I tried using an exact, non-dynamic filename, so I have these commands:
    v_filename := '20mb_1.test';
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
Under these conditions the time for the document extraction soared, as you can see in the following table:
Command:    v_filename_bcm := '20mb_1.test';
Start time: 10.09.13 - 16:02:33,212035000
End time:   10.09.13 - 16:02:33,212542000
Difference: 00:00:00,000507

Command:    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
Start time: 10.09.13 - 16:02:33,212542000
End time:   10.09.13 - 16:03:40,342396000
Difference: 00:01:07,129854
So I'm looking for a faster extraction from the XML file. Do you have any ideas? If you need more information, please ask.
Thank you,
Matthias
PS: I use Oracle 11.2.0.2.0

Although using an XML schema is good advice for an XML-centric application, I think it's a little overkill in this situation.
Here are two approaches you can test:
    Using the DOM interface over your XMLType variable, for example :
    DECLARE
      v_xml    xmltype := xmltype('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> 
           <soap:Body> 
              <ns2:GetDocumentByIDResponse xmlns:ns2="***"> 
                 <ArchivedDocument> 
                    <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***"> 
                       <Metadata archiveDate="2013-08-01+02:00" documentID="123"> 
                          <Descriptor type="Integer" name="fachlicheId"> 
                             <Value>123</Value> 
                          </Descriptor> 
                          <Descriptor type="String" name="user"> 
                             <Value>***</Value> 
                          </Descriptor> 
                          <InternalDescriptor type="Date" ID="DocumentDate"> 
                             <Value>2013-08-01+02:00</Value> 
                          </InternalDescriptor> 
                          <!-- Here some more InternalDescriptor Nodes --> 
                       </Metadata> 
                       <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream"> 
                          <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/> 
                       </RepresentationDescription> 
                    </ArchivedDocumentDescription> 
                    <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0"> 
                       <Data fileName="20mb.test"> 
                          <BinaryData> 
                            ABC123 
                          </BinaryData> 
                       </Data> 
                    </DocumentPart> 
                 </ArchivedDocument> 
              </ns2:GetDocumentByIDResponse> 
           </soap:Body> 
        </soap:Envelope>');
      domDoc    dbms_xmldom.DOMDocument;
      docNode   dbms_xmldom.DOMNode;
      node      dbms_xmldom.DOMNode;
      nsmap     varchar2(2000) := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns2="***"';
      xpath_pfx varchar2(2000) := '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/';
      istream   sys.utl_characterinputstream;
      buf       varchar2(32767);
      numRead   pls_integer := 1;
      filename       varchar2(30);
      base64clob     clob;
    BEGIN
      domDoc := dbms_xmldom.newDOMDocument(v_xml);
      docNode := dbms_xmldom.makeNode(domdoc);
  filename := dbms_xslprocessor.valueOf(
                docNode
              , xpath_pfx || 'ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName'
              , nsmap
              );
  node := dbms_xslprocessor.selectSingleNode(
            docNode
          , xpath_pfx || 'ArchivedDocument/DocumentPart/Data/BinaryData/text()'
          , nsmap
          );
      --create an input stream to read the node content :
      istream := dbms_xmldom.getNodeValueAsCharacterStream(node);
      dbms_lob.createtemporary(base64clob, false);
      -- read the content in 32k chunk and append data to the CLOB :
      loop
        istream.read(buf, numRead);
        exit when numRead = 0;
        dbms_lob.writeappend(base64clob, numRead, buf);
      end loop;
      -- free resources :
      istream.close();
      dbms_xmldom.freeDocument(domDoc);
    END;
    Using a temporary XMLType storage (binary XML) :
    create table tmp_xml of xmltype
    xmltype store as securefile binary xml;
    insert into tmp_xml values( v_xml );
select x.*
from tmp_xml t
   , xmltable(
       xmlnamespaces(
         'http://schemas.xmlsoap.org/soap/envelope/' as "soap"
       , '***' as "ns2"
       )
     , '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/ArchivedDocument/DocumentPart/Data'
       passing t.object_value
       columns filename    varchar2(30) path '@fileName'
             , base64clob  clob         path 'BinaryData'
     ) x;

  • How to Canonicalize a Big XML?

Hello, I want to canonicalize a big XML file (1 GB or more). The program uses DOM, but DOM can't manage big XML documents, so I was thinking of doing it with StAX. How can I do that?
The (DOM) code that I have is:
File file = new File("big-1gb.xml");
org.apache.xml.security.Init.init();
DocumentBuilderFactory dfactory = DocumentBuilderFactory.newInstance();
DocumentBuilder documentBuilder = dfactory.newDocumentBuilder();
Document doc = documentBuilder.parse(file);
Canonicalizer c14n = Canonicalizer.getInstance("http://www.w3.org/TR/2001/REC-xml-c14n-20010315");
byte[] outputBytes = c14n.canonicalizeSubtree(doc.getElementsByTagName("SomeTag").item(0));
The idea is to do the same as the code above with StAX...
    Thx :)

Yes, I'm trying to create an XMLSignature :)
Well, I solved the problem; here is the solution:
XOM has an example that uses low memory (see nu.xom.samples.MinimalNodeFactory in the XOM source code), so I used this code to canonicalize a 200 MB XML file:
File file = new File("big-200mb.xml");
ByteArrayOutputStream bytestream = new ByteArrayOutputStream();
// The 'false' parameter avoids a ValidationException thrown by XOM
nu.xom.Builder builder = new nu.xom.Builder(false, new nu.xom.samples.MinimalNodeFactory());
try {
    // Write the canonical form directly to the byte stream; an ObjectOutputStream
    // wrapper would prepend serialization header bytes and corrupt the digest.
    nu.xom.canonical.Canonicalizer outputter = new nu.xom.canonical.Canonicalizer(bytestream);
    nu.xom.Document input = builder.build(file);
    outputter.write(input);
} catch (Exception ex) {
    System.err.println(ex);
    ex.printStackTrace();
}
bytestream.close();
MessageDigest sha1 = MessageDigest.getInstance("SHA1");
sha1.reset();
sha1.update(java.nio.ByteBuffer.wrap(bytestream.toByteArray()));
byte[] salidasha1 = sha1.digest();
String tagDigestValue = new String(Base64.encodeBase64(salidasha1));
/* Rest of digitalsig program */
It takes about 7 minutes to do the XML signature, and the final XML is about 2 GB :)
I hope that anyone who has the same problem can use this :)
And a final tip: AVOID DOM. With DOM I would need RAM of about 10x the size of the XML file (in this case around 20 GB of RAM).
PS: Some other day I will post how long it takes to process 1 GB XML files :)
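A further refinement worth considering (a sketch, not tested at this scale): since only the SHA-1 digest is needed, the canonical output can be piped through a java.security.DigestOutputStream into a no-op sink, so the multi-gigabyte canonical form never has to be buffered in a ByteArrayOutputStream at all.

import java.io.File;
import java.io.OutputStream;
import java.security.DigestOutputStream;
import java.security.MessageDigest;
import org.apache.commons.codec.binary.Base64;

public class StreamedCanonicalDigest {
    public static void main(String[] args) throws Exception {
        File file = new File("big-1gb.xml");
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        // No-op sink: bytes are hashed as they pass through, then discarded.
        OutputStream devNull = new OutputStream() { public void write(int b) {} };
        OutputStream sink = new DigestOutputStream(devNull, sha1);
        nu.xom.Builder builder =
                new nu.xom.Builder(false, new nu.xom.samples.MinimalNodeFactory());
        nu.xom.canonical.Canonicalizer outputter =
                new nu.xom.canonical.Canonicalizer(sink);
        outputter.write(builder.build(file));
        sink.close();
        System.out.println(new String(Base64.encodeBase64(sha1.digest())));
    }
}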

  • Building big XML file from scratch - Urgent

Oracle 8.1.7.3 on Windows NT platform
What is the best way to generate a quite big XML file from multiple tables?
I have information stored in many relational tables from which I need to generate an XML flat file, either stored in a CLOB field or in a text file in a system directory. This XML file will then be used
as input to generate a report, either in HTML using XSLT, in a PDF file using apache-fop, or in an MS Excel file using SoftArtisan ExcelWriter.
My XML file has many levels in its structure: it is composed of one root element with 2 children, each child has 3 children, one of these 3 children has 2 children, and so on. Actually there are more or less 10 nested levels.
To generate this XML file, I tried to use XSU for PL/SQL with the nested cursor() feature plus XSLT to transform the raw XML file to my requirements.
But obviously there are some limitations with this method. In particular, if the inner cursor returns an empty set I get an "exhausted resultset" Java error... A TAR confirmed that limitation.
So I had to give up this method and use basic nested PL/SQL cursors. Each fetched row is then inserted into a table (varchar2) with a sequence number, so that with a cursor like "select xml_chunk from my_table order by sequence" I get the whole XML file, which I save either in a flat file or in a CLOB (using the append method).
This method works fine, but it takes time and it's not flexible at all, as I have to construct each XML tag. I guess this way of proceeding is not the most efficient...
Using the DOM method won't be better, as I still need PL/SQL cursors to select each level of my XML structure, and in addition I would surely encounter memory problems.
So what solutions would you suggest to generate this XML file? It must be quite fast. The XML file can be up to 2 MB big. My system is actually a kind of on-the-fly report generation: the XML file needs to be created with up-to-date data many times during working hours!
Quick answers or suggestions would be greatly appreciated. It's very urgent!!
Thanks

It looks like the best way is to use SAX processing for your application. Do you know the DTD or XML schema of your output XML document?
Would you send me sample code for the method "using XSU for PL/SQL with the nested cursor() feature plus XSLT to transform the raw XML file" so I can reproduce the problem?

  • Breaking BIG XML files into 4 different XML files

    Hi:
Will there be a problem if I break the BIG XML file into a number of different XML files?
    My main reason is to create XMLVIEW.
    Please help
    ALI_2

    Would Suite Spot Studios "AA Translator" achieve this?  If you navigate to his web site you can test the app by submitting a test project to him and he/they will attempt to "convert" it for you.  (Only one of you, you or your buddy, would need to buy and have the app installed).
    I realise this is something of a "long shot" since "suitespot" does not use a Mac and this "new" session format (.sesx) may not be readable by his app.
    Jeff

  • Too big XML file to parse

I have this big XML file (13 MB) which I am trying to parse in Java. The problem is that I get the message
java.lang.OutOfMemoryError: Java heap space
I am loading from the XML file as follows:
    DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory.newInstance();
    DocumentBuilder docBuilder = docBuilderFactory.newDocumentBuilder();
Document doc = docBuilder.parse(new File("xmlGeneSynonyms.xml"));  // Here I get the problem!!!
Do you have any suggestions how I can work this out?
    thanx

I agree with the poster who said SAX may be a better choice. 13 MB, though (probably) smaller than the memory space of your machine, is still going to take a while to parse and build the nodes in memory. Unless you are going to keep the parsed XML tree in memory for a LONG time and do a LOT with it, SAX is likely a better choice.
With SAX, you just parse a little at a time until you find what you're looking for, then quit. This would be a better way to go, for example, if you want only a few parts of the 13 MB document.
And how many 13 MB documents do you have? If more than just one or two, you're going to take a huge performance hit by trying to parse each one and stuff the whole thing into memory if you're only interested in a small part of it.
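To make the SAX suggestion concrete, here is a minimal sketch; the element name "synonym" and attribute "name" are hypothetical, since the structure of xmlGeneSynonyms.xml wasn't posted. Throwing an exception from the handler is the usual way to stop a SAX parse early once the wanted data has been found.

import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class GeneSynonymScan {
    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        try {
            parser.parse(new File("xmlGeneSynonyms.xml"), new DefaultHandler() {
                @Override
                public void startElement(String uri, String localName,
                        String qName, Attributes atts) throws SAXException {
                    // Placeholder names; adjust to the real document structure.
                    if ("synonym".equals(qName)) {
                        System.out.println(atts.getValue("name"));
                        throw new SAXException("done"); // stop parsing early
                    }
                }
            });
        } catch (SAXException done) {
            // Thrown deliberately above to end the parse; not an error.
        }
    }
}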

  • Generating one PDF from a big XML file

    Hi all,
We're running into an issue with our batch run in Documaker Studio 11. One of the invoice extracts generated for Documaker reached 200,000 lines (the body has a repeating section that generates several thousand entries).
I've run our .BAT file on it as well as the test scenario, and it takes forever to generate the PDF. Now we are wondering if there is a better way of generating such PDFs, or do we resort to cutting the single extract into multiple PDFs? The template we use generates smaller PDFs quickly (6000 PDFs in less than 20 minutes).
The next question is: is this a Documaker limitation, or is there a way for us to restructure the XML and our template so that we can generate this monstrous PDF in one go at an acceptable rate?
    Thanks, Simon.

It doesn't appear yet that the blame falls on the PDF. Is this a GenData with single-step print situation, or are you using a two-step GenData then GenPrint executable?
The most likely scenario is related to memory use. The size of this run likely hits some level where thrashing starts to occur within the memory management of the operating system. You didn't mention, but this is likely running on a Windows box, yes? What Windows version and what memory specifications do you have?
It will require much more memory to do a single-step GenData with a print rule, as that means the entire transaction generation is still in memory as well as the PDF that is being built. Unlike a regular print stream, PDFs are not typically spooled out in the same manner, and they consume memory while pages are completed.
A secondary question when you are talking about generating thousands of sections: does this job rely upon some type of group and/or subform pagination? That might be the case if you have "stay together" or "min or max on page" requirements. If this is your situation, the time may actually be spent building the overall form set and not in the actual print itself.
More information is required before establishing that this is a "big PDF" problem. It may be that it is a "big/complex transaction" problem.

  • Email and big wifi problems.

    Email and big wifi problems.
Why does it take 6 minutes to send a photo attachment when my iPhone 4 will do it in 30 to 40 seconds?
Why will my email account be recognised and log on only half the time?
Why does the wifi keep dropping out?
Why do the speed and signal vary when I'm sitting two feet away from my wifi router?
All settings on my phone have been reset twice, as per advice from Vodafone.

    Thanks very much, jjgraphics. I will grit my teeth and try India once more, as you suggest, and then get in touch with the moderators.
    Karen.

  • Best practice for optimizing processing of big XMLs?

    All,
What is the best practice when dealing with large XML files (say a couple of MB)?
Instead of having to read the file from the file system every time a static method is run, what would be the best way for the program to read the file once and then keep it in memory, so that the next time it would not have to read and parse it all over again?
Currently my code just reads the file in the static method, like...
public static String doOperation(String path, ...) throws Exception {
    try {
        String masterFile = path + "configfile.xml";
        Document theFile = (Document) getDocument(masterFile);
        Element root = theFile.getDocumentElement();
        NodeList nl = root.getChildNodes();
        // ... operations on file
    Optimization tips and tricks most appreciated :-)
    Thanks,
    David

The best practice for multi-megabyte XML files is not to have them at all.
However, if you must: presumably you don't need all of the information in your XML repeatedly. Or do you? If you need a little bit of it here, then another little bit there, then yet another little bit later, then you shouldn't have stored your data in one big XML file.
Sorry if that sounds unhelpful, but I'm having trouble imagining a scenario where you need all the data in an XML document repeatedly. Perhaps you could expand on your design?
PC²
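As a sketch of the caching the question asks about (assuming configfile.xml does not change while the program runs, and remembering that a shared DOM must only be read, never modified, by concurrent callers):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public final class ConfigCache {
    private static volatile Document cached;

    // Parses configfile.xml on first use; later calls reuse the same DOM.
    public static Document get(String path) throws Exception {
        Document local = cached;
        if (local == null) {
            synchronized (ConfigCache.class) {
                local = cached;
                if (local == null) {
                    local = DocumentBuilderFactory.newInstance()
                            .newDocumentBuilder()
                            .parse(new File(path + "configfile.xml"));
                    cached = local;
                }
            }
        }
        return local;
    }
}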

  • Have Operating System 10.6.8, Mail Program 4.6. How can I prevent the next email in the queue from automatically opening after I act on the previous email?

Have Operating System 10.6.8, Mail Program 4.6.
How can I prevent the next email in the queue from automatically opening after I act on the previous email? It creates big organizational problems for me. My computer changes this mode from self-opening to manually opening every few months with no action from me.
Help


  • How to split big XML messages with the file inbound adapter

    Hello,
we have big XML messages in our filesystem which are processed by XI 3.0 (SP11). We are using the file inbound adapter. Now we want to split these big XML messages into smaller messages.
Is there a function corresponding to "xml.recordsetsPerMessage" that works with XML files?
    Thanks!
    Regards
    Stefan

Hi,
maybe you can split the message in BPM with a 1:N mapping?
I don't think anything like "recordsetsPerMessage" is possible for XML messages.
Regards,
Michal

  • IOError in IE but not in Firefox (possible crossdomain.xml problem)

Yesterday I (hopefully) debugged a problem that occurs for our application in IE but not in Firefox.
It has to do with accessing remote content from a separate domain.
In every respect it APPEARS to be a crossdomain.xml issue, but the fact that this issue only arises in IE is what prompted me to post here.
    We have a solution in the works (bureaucratically speaking) but I want to double check here.
Our application is on domain "a.domain".
It accesses an XML file on "b.domain/xml/".
And finally (this is the tricky part) it also accesses an XML file at "b.domain/forwardingPath/", which is actually forwarded to "c.domain/xml/".
The crossdomain.xml is located at "b.domain/crossdomain.xml".
The request for "b.domain/xml/anXMLFile.xml" works without any problem.
The request for "b.domain/forwardingPath/anotherXMLFile.xml" succeeds in Firefox but not in IE (remember, the ACTUAL request is forwarded to "c.domain/xml/anotherXMLFile.xml").
In IE I get an IOError.
I believe we also need an appropriate crossdomain.xml file located at "c.domain/crossdomain.xml" and have put in that request. What I want to confirm is whether this understanding is correct. I am not a server-side person at all; it's all elves and fairies to me. And then finally, why the hell is this behavior inconsistent between IE and Firefox? Is the Firefox version of Flash Player violating its own security standards?!
I am cross-posting this at Stack Overflow: http://stackoverflow.com/questions/7395931/ioerror-in-ie-but-not-in-firefox-possible-crossdomain-xml-problem

    I've pinged our developers about this and here's what they have to say:
    "We did some work for the plugin around redirects andhence the correct behavior on Firefox.
    AFAIK, on IE we don't get notified of the redirect and can't participate in making security decisions during redirect scenarios. This behavior is out of our control.
There is a workaround documented in the AS3 docs here: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/system/LoaderContext.html#checkPolicyFile
    Here is the pertinent paragraph:
Be careful with checkPolicyFile if you are downloading an object from a URL that may use server-side HTTP redirects. Policy files are always retrieved from the corresponding initial URL that you specify in URLRequest.url. If the final object comes from a different URL because of HTTP redirects, then the initially downloaded policy files might not be applicable to the object's final URL, which is the URL that matters in security decisions. If you find yourself in this situation, you can examine the value of LoaderInfo.url after you have received a ProgressEvent.PROGRESS or Event.COMPLETE event, which tells you the object's final URL. Then call the Security.loadPolicyFile() method with a policy file URL based on the object's final URL. Then poll the value of LoaderInfo.childAllowsParent until it becomes true."
    Chris
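For reference, the policy file the poster believes is needed at "c.domain/crossdomain.xml" might minimally look like the following (the domain is a placeholder taken from the question; real deployments should scope access as narrowly as possible):

<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="a.domain"/>
</cross-domain-policy>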

  • Big XML Files

    Hello,
does anybody have experience with large XML documents in Oracle XML DB? I mean bigger than the purchaseOrder example, for example a whole book. I am looking for a storage method for documents at least 10 MB large.
    Thanks
    Krisztian

Sometimes the chapters are as big as a whole book (ca. 200-300 pages, legislation commentaries). What I'm looking for is a way to store big XML files and access them flexibly at different levels. E.g. a law can have 50 articles, and sometimes even 2400 articles. If I need to share the editing work, an editor can get the whole document, but sometimes only fragments. Even better would be if more than one editor could work on one document, even on different fragments. But the fragments must be created dynamically.

  • How quickly parse big XML file (60 MB) ???

    How quickly parse big XML file (60 MB) ???

I assume you mean load it into XML DB? Fundamentally, your document is at about the upper limit for 9.2.x. I would strongly recommend trying to break it up into a set of smaller documents using a SAX parser before trying to load it into XML DB; in 10g it should be possible to load much bigger documents than this. A sketch of such a splitter follows.
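Here is that pre-splitting step sketched with StAX rather than SAX (it assumes each direct child of the root element is one self-contained record, and that the records don't depend on namespace declarations made on the root, since those are not copied into the fragments):

import java.io.FileInputStream;
import java.io.FileWriter;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.events.XMLEvent;

public class XmlSplitter {
    public static void main(String[] args) throws Exception {
        XMLEventReader reader = XMLInputFactory.newInstance()
                .createXMLEventReader(new FileInputStream("big.xml"));
        XMLOutputFactory outFactory = XMLOutputFactory.newInstance();
        XMLEventWriter writer = null;
        FileWriter out = null;
        int depth = 0, part = 0;
        while (reader.hasNext()) {
            XMLEvent event = reader.nextEvent();
            if (event.isStartElement()) {
                depth++;
                if (depth == 2) { // a new record: open the next output file
                    out = new FileWriter("part" + (++part) + ".xml");
                    writer = outFactory.createXMLEventWriter(out);
                }
            }
            if (writer != null) writer.add(event);
            if (event.isEndElement()) {
                if (depth == 2) { // record finished: close its file
                    writer.close();
                    out.close();
                    writer = null;
                }
                depth--;
            }
        }
        reader.close();
    }
}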

Maybe you are looking for

  • Sims 3 crashes all the time on new MacBook Pro, what do I do?

    Got a brand new MacBook Pro 13inch (not retina) for Christmas. I was advised when buying the Mac to update the ram if I wanted to use it for playing the Sims 3 (which I did). Previsoulsy I had a Sony VAIO (top spec) in which the Sims 3 worked on for

  • No item category exists (Table T184L HOD V)

    Hi, When I'm trying to reverse a goods receipt for a PO I'm getting the error as follows. No item category exists (Table T184L HOD V) Message no. VL320 Diagnosis There is no item category available in item category determination in the delivery (tabl

  • Error while installing SAP NetWeaver 2004s SR2

    Hi, We are installing SAP NW2004s SR2 on  Linux/Oracle. In the 18th step of "Execute Service"("Create Secure Store"), we are getting an error <i>CJS 30050 - Cannot create the secure store. See output of log file SecureStoreCreate.log. Exception in th

  • How do I control how the name of my site comes up in Google?

    Hi - Question is in the title, I'm not sure how I put my own copy into the blue link area in each item Google brings up after a search. Does anyone know? Many thanks

  • SAP HR SHARED SERVICES FRAMEWORK CERTIFICATION

    hI Pls Is there a certification/ training for SAP HR shared Services Framework in London/. I am interested in exploring the possibility of certification in SAP HR SSF. Thanks you Message was edited by: Colleen Lee moving from training and education t