Problem Loading Large XML Doc

I'm running 10.2.0.3 on a Linux box and I'm having problems loading a large XML document (about 100 MB). In the past, I would simply load the XML file into an XMLType column like this:
INSERT INTO foo VALUES (XMLType(bfilename('XMLDIR', 'test.xml'), nls_charset_id('AL32UTF8')));
But when I try this with a large file, it runs for 10 minutes and then returns an ORA-03113. I'm assuming the file is just too large for this technique. I spoke to Mark Drake when I was at OpenWorld and he suggested I use Oracle XML DB, so I created and registered a schema and tried using sqlldr to load the doc, but it ran for 2 1/2 hours before returning:
Parse Error on row 1 in table FOO
OCI-31038: Invalid integer value: "129"
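For reference, a variant of the single-INSERT approach that is often suggested for larger files is to stage the document in a temporary CLOB first. This is only a minimal sketch: it reuses the XMLDIR directory object and the FOO table from the statement above, and everything else is illustrative.
DECLARE
  v_bfile BFILE   := BFILENAME('XMLDIR', 'test.xml');
  v_clob  CLOB;
  v_dest  INTEGER := 1;
  v_src   INTEGER := 1;
  v_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  v_warn  INTEGER;
BEGIN
  -- Stage the file in a temporary CLOB, then build the XMLType from the CLOB.
  DBMS_LOB.CREATETEMPORARY(v_clob, TRUE);
  DBMS_LOB.FILEOPEN(v_bfile, DBMS_LOB.FILE_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(v_clob, v_bfile, DBMS_LOB.LOBMAXSIZE,
                            v_dest, v_src,
                            NLS_CHARSET_ID('AL32UTF8'), v_lang, v_warn);
  DBMS_LOB.FILECLOSE(v_bfile);
  INSERT INTO foo VALUES (XMLTYPE(v_clob));
  COMMIT;
  DBMS_LOB.FREETEMPORARY(v_clob);
END;
/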
I tried simplifying both the XML file and schema to just the following:
<schedules>
<s s="2009-09-21T04:00:00" d="21600" p="335975" c="19672"/>
<s s="2009-09-21T04:00:00" d="21600" p="335975" c="15387"/>
<s s="2009-09-21T04:00:00" d="25200" p="335975" c="5256"/>
<s s="2009-09-21T04:00:00" d="86400" p="335975" c="26198">
<k id="5" v="2009-09-21 09:00:00.000"/>
<k id="6" v="2009-09-22 03:59:59.000"/>
<k id="26" v="0.00"/><k id="27" v="US"/>
</s>
<s s="2009-09-21T04:00:00" d="21600" p="335975" c="11678"/>
<s s="2009-09-21T04:00:00" d="21600" p="335975" c="26697"/>
<s s="2009-09-21T04:00:00" d="21600" p="335975" c="25343"/>
<s s="2009-09-21T04:00:00" d="21600" p="335975" c="25269"/>
<s s="2009-09-21T04:00:00" d="86400" p="335975" c="26200">
<k id="5" v="2009-09-21 09:00:00.000"/>
<k id="6" v="2009-09-22 03:59:59.000"/>
<k id="26" v="0.00"/><k id="27" v="US"/>
</s>
</schedules>
And the corresponding schema:
<?xml version="1.0" encoding="UTF-8"?>
<!--W3C Schema generated by XMLSpy v2008 sp1 (http://www.altova.com)-->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
     xmlns:xdb="http://xmlns.oracle.com/xdb"
    version="1.0"
    xdb:storeVarrayAsTable="true">
     <xs:element name="schedules" xdb:defaultTable="SCHEDULES">
          <xs:complexType xdb:SQLType="SCHEDULES_T">
               <xs:sequence>
                    <xs:element ref="s" maxOccurs="unbounded"/>
               </xs:sequence>
          </xs:complexType>
     </xs:element>
     <xs:element name="s">
          <xs:complexType>
               <xs:choice minOccurs="0">
                    <xs:element ref="f" maxOccurs="unbounded"/>
                    <xs:element ref="k" maxOccurs="unbounded"/>
               </xs:choice>
               <xs:attribute name="s" use="required" type="xs:dateTime"/>
               <xs:attribute name="p" use="required" type="xs:int"/>
               <xs:attribute name="d" use="required" type="xs:int"/>
               <xs:attribute name="c" use="required" type="xs:short"/>
          </xs:complexType>
     </xs:element>
     <xs:element name="k">
          <xs:complexType>
               <xs:attribute name="v" use="required" type="xs:string"/>
               <xs:attribute name="id" use="required" type="xs:byte"/>
          </xs:complexType>
     </xs:element>
     <xs:element name="f">
          <xs:complexType>
               <xs:attribute name="id" use="required" type="xs:byte"/>
          </xs:complexType>
     </xs:element>
</xs:schema>
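For reference, the registration call for a schema like the one above typically looks something like the sketch below. The URL is a placeholder, it assumes the XSD has been saved as schedules.xsd in XMLDIR, and gentables is left on so the annotated default table SCHEDULES gets created.
BEGIN
  DBMS_XMLSCHEMA.REGISTERSCHEMA(
    schemaurl => 'schedules.xsd',   -- placeholder URL
    schemadoc => XMLTYPE(BFILENAME('XMLDIR', 'schedules.xsd'),
                         NLS_CHARSET_ID('AL32UTF8')),
    local     => TRUE,
    gentypes  => TRUE,
    gentables => TRUE);   -- creates the xdb:defaultTable SCHEDULES
END;
/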
Keep in mind both the actual XML file and corresponding XSD are much more complex, but this particular section is about 70 MB, so I wanted to see if I could just load that. I used the following sqlldr script:
LOAD DATA
INFILE *
INTO TABLE schedules_tmp TRUNCATE
XMLType(xml_doc)(
     lobfn FILLER CHAR TERMINATED BY ',',
     xml_doc LOBFILE(lobfn) TERMINATED BY EOF
)
BEGINDATA
/tmp/schedules.xml
This worked fine on a small doc - loaded correctly and I could query it fine - but when I tried using the 70 MB file it ran for a couple of hours before dying with a memory problem.
So what am I doing wrong? Is there a better way to load a large file?
Thanks for the help.
Pete
Edited by: mdrake on Nov 9, 2009 8:46 PM

Mark:
Answers to your questions:
Do you use direct load ? -> Yes. I tried again using UNRECOVERABLE LOAD DATA to try to speed up performance, but it still ran for a couple of hours before dying.
Which DB Release are you working with ? -> 10.2.0.3
Can you see if you can upload via FTP ? -> I added noNamespaceSchemaLocation to my XML file and ftp'd it to my XML directory, but it wasn't recognized. Is there something else I have to do?
The change for unsignedInt should have gotten rid of the issue with the value 129. Did it ? -> I didn't try it again on the whole XML file (I'm just working with the schedules section), so I haven't verified this.
I'm still stumped as to why sqlldr takes so long. I could write something to parse the XML file into a flat file and then use sqlldr to load it into a relational table, and the load would only take a few minutes. But then I wouldn't be using XML DB, which I thought would be faster. What am I doing wrong?
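For comparison, the shred-to-relational idea above can also be done inside the database once the document is in schedules_tmp; a minimal sketch with XMLTable (the target table schedules_rel and its columns are made up for illustration):
INSERT /*+ APPEND */ INTO schedules_rel (s_start, s_dur, s_p, s_c)
SELECT x.s_start, x.s_dur, x.s_p, x.s_c
FROM   schedules_tmp t,
       XMLTABLE('/schedules/s' PASSING t.xml_doc
                COLUMNS s_start VARCHAR2(30) PATH '@s',   -- convert later with TO_TIMESTAMP if needed
                        s_dur   NUMBER       PATH '@d',
                        s_p     NUMBER       PATH '@p',
                        s_c     NUMBER       PATH '@c') x;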
Pete

Similar Messages

  • Loading Large XML files  using plsql

    I have a process where there is a need to load large xml files (i.e. easily over 500k or more) into Oracle via an interface. Preference would be to use plsql or some plsql based utility if possible. I am looking for any suggestions on the best method to accomplish this. Currently running on 9.2.0.6. Thanks in advance.

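    One PL/SQL-only option on 9.2, if the documents can be transformed into XSU's canonical <ROWSET>/<ROW> shape, is DBMS_XMLSAVE. A minimal sketch; the target table name is a placeholder and v_xml is assumed to be filled from the file first (e.g. via DBMS_LOB.LOADCLOBFROMFILE):
    DECLARE
      v_ctx  DBMS_XMLSAVE.ctxType;
      v_xml  CLOB;          -- populate this from the file first
      v_rows NUMBER;
    BEGIN
      v_ctx  := DBMS_XMLSAVE.newContext('TARGET_TABLE');   -- placeholder table name
      v_rows := DBMS_XMLSAVE.insertXML(v_ctx, v_xml);
      DBMS_XMLSAVE.closeContext(v_ctx);
      DBMS_OUTPUT.PUT_LINE(v_rows || ' rows inserted');
    END;
    /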

  • Problems with Large XML files

    I have tried increasing the memory pool using the -mx and -ms options. It doesn't work. I am using your latest XML parser for Java v2. Please let me know if there are some specific options I should be using.
    Thanx,
    -Sameer
    We have a number of test files that are that size and it works without a problem. However using the DOMParser does require significantly more memory than your doc size.
    What is the memory configuration of the JVM that you are running with? Have you tried increasing it? Are you using our latest version 2.0.2.6?
    Oracle XML Team
    Is there a restriction on the XML file size that can be loaded into the parser?
    I am getting an out of memory exception reading in a large XML file (10 MB) using the commands
    DOMParser parser = new DOMParser();
    URL url = createURL(argv[0]);
    parser.setErrorStream(System.err);
    parser.setValidationMode(true);
    parser.showWarnings(true);
    parser.parse(url);
    Win NT 4.0 Server
    Sun JDK 1.2.2
    ===================================
    Error output
    ===================================
    Exception in thread "main" java.lang.OutOfMemoryError
    at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
    at java.util.Hashtable.<init>(Unknown Source)
    at oracle.xml.parser.v2.DTDDecl.<init>(DTDDecl.java, Compiled Code)
    at oracle.xml.parser.v2.ElementDecl.getAttrDecls(ElementDecl.java, Compiled Code)
    at oracle.xml.parser.v2.ValidatingParser.checkDefaultAttributes(ValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.NonValidatingParser.parseAttributes(NonValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java, Compiled Code)
    at oracle.xml.parser.v2.ValidatingParser.parseRootElement(ValidatingParser.java:97)
    at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:199)
    at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:146)
    at TestLF.main(TestLF.java:40)
    null

    You might try using a different JDK/JRE - either a 1.1.6+ or 1.3 version, as 1.2 in our experience has the largest footprint. If this doesn't work, can you give us some details about your system configuration? Finally, you might try the SAX interface, as it does not need to load the entire DOM tree into memory.
    Oracle XML Team

  • Performance problems loading an XML file into oracle database

    Hello ODI Guru's,
    I am trying to load an XML file into the database after doing simple business validations. But the interface takes hours to complete.
    1. The XML files are large in size, >200 MB. We have an XSD file for the schema definition instead of a DTD.
    2. We used the external database feature for loading these files into the database.
    The following configuration was used in the XML Data Server:
    jdbc:snps:xml?f=D:\CustomerMasterData1\CustomerMasterInitialLoad1.xml&d=D:\CustomerMasterData1\CustomerMasterInitialLoad1.xsd&re=initialLoad&s=CM&db_props=oracle&ro=true
    3. We then reverse engineered the XML files and created models using ODI Designer.
    4. A similar thing was done for the target, i.e. an Oracle database table, as well.
    5. Next we created a simple interface with one-to-one mapping from the XSD schema to the Oracle database table and executed the interface. This execution takes more than one hour to complete.
    6. We are running ODI client on Windows XP Professional SP2.
    7. The Oracle database server (Oracle 10g 10.2.0.3) for the target schema, as well as the ODI master and work repositories, are on the same machine.
    8. I tried changing the following properties, but it is not making much visible difference:
    use_prepared_statements=Y
    use_batch_update=Y
    batch_update_size=510
    commit_periodically=Y
    num_inserts_before_commit=30000
    I have another problem: when I set batch_update_size to a value greater than 510, I get the following error:
    java.sql.SQLException: class org.xml.sax.SAXException
    class java.lang.ArrayIndexOutOfBoundsException said -32413
    at com.sunopsis.jdbc.driver.xml.v.a(v.java)
    The main concern is why the interface takes so long to execute.
    Please send suggestions to resolve the problem.
    Thanks in advance,
    Best Regards,
    Nikunj

    Approximately how many rows are you trying to insert?
    One of the techniques which I found improved performance for this scenario was to extract from the xml to a flat file, then to use SQL*LOADER or external tables to load the data into Oracle.
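    As a rough illustration of the flat-file route, an external table over the extracted file keeps the load set-based (directory, file and column names below are made up):
    CREATE TABLE customer_ext (
      customer_name VARCHAR2(100),
      customer_city VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('customers.csv')
    );
    -- then: INSERT /*+ APPEND */ INTO customer_master SELECT * FROM customer_ext;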

  • Loading large XML documents

    Hi everyone again :) Just sitting here trying to load a large XML document (it's ~13 MB). I know that's a massive XML document, but that's the way it is. The problem that I am having is that when I try to load the document I get an out of memory exception.
    Frankly, I'm not surprised, but is there a remedy? Any thoughts/ideas/solutions would be greatly appreciated. :)
    Ben

    Sounds like you are using DOM. The DOM parser you are using must be loading the whole Document tree right away. DOM really eats up memory. There are two possible solutions:
    1. Look into using a SAX parser. I don't know what you are doing with the xml, so I can't say whether or not that will work for you.
    2. Configure the DOM parser to defer loading nodes until they are requested, or if that option is not available with your parser, get a parser that will defer node loading.
    If option 2 sounds like what you need, then I suggest looking into the Apache Xerces parser. I am pretty sure it defers loading. You shouldn't have to change your code to work with the Xerces parser, you just have to make sure you set the proper system properties so that Java will automatically use the Xerces parser.

  • Problems while writing xml doc to a  file

    Hi all, in my project (of distributed xml databases) I need to write the xml files from the main server to the clients.
    These xml files I had formed by fragmenting one xml doc, and I did the fragmentation using ....
    TransformerFactory tf = TransformerFactory.newInstance();
              Transformer transformer = tf.newTransformer();
              transformer.transform(new DOMSource(root),
              new StreamResult(new FileOutputStream(outputFile)));
    Now the problem is that on the client where these fragments reside, I'm not able to index the document properly, i.e.
    some extra text nodes with no values are coming in the index...
    I don't know how to deal with the extra nodes that I'm getting after parsing the file and creating an index for the same;
    maybe it's because of the transformer function I'm using... I don't know (???)
    Note: I'm fragmenting the xml files into text files using the above function and then sending them to the client via sockets.
    Also, after fragmenting I am getting something like
    <?xml version="1.0" encoding="UTF-8"?>
    in all the files... is this a source of any problem?
    Please reply soon...

    You have not described how you "index" the files and what you mean by that.
    Are you processing them with SAX or DOM, or one of the variations of those means?
    Is there a chance that the "extra" nodes are simply text nodes with newlines ("\n")? There are usually a lot of extra text nodes in a file each containing only one newline.
    If you are using SAX, there is no requirement for the parser to collect all of the text inside an element into a single block before calling the characters method. You may get several calls to characters between the start of an element and the end. If you change parsers, you may even get a different number of calls, but the character data will always be the same.
    Dave Patterson
    As to the <?xml version="1.0" encoding="UTF-8"?> line, that is perfectly fine. It means that your file thinks it is valid XML. Whether or not it REALLY is valid depends on a validation of the file.

  • Problem loading large DNG images with Camera RAW!

    Hello,
    When loading large DNGs with Camera RAW 4.4 (for example, DNG dimensions are 9984x6656) CS3 says "cannot open the image because this is not a right kind of a document". EXACTLY THE SAME file is loaded without a problem using Camera RAW 3.7 and 4.0 (did not test other versions). Have you specially prevented loading of large files? Wouldn't it be better to provide this as an option? It's a real disappointment - new version of plugin CANNOT do the thing that older versions CAN. (Simply using older version is not an option, as I need to load RAWs from new cameras, like Canon 450D).
    Dear Adobe, please correct this problem :)

    It might be wise to give some insight like how the DNG was created, by what version of what software, if ACR is hosted in Photoshop or Bridge... What platform you are using, how much RAM, memory allocation, etc.

  • Bulk Loader Program to load large xml document

    I am looking for a bulk loader database program that will load a very large xml document. The simple bulk loader application available on the oracle site will not load this document due to its size, which is approximately 20 MB. Please advise asap. Thank you.

    From the above document:
    Storing XML Data Across Tables
    Question
    Can XML- SQL Utility store XML data across tables?
    Answer
    Currently XML-SQL Utility (XSU) can only store to a single table. It maps a canonical representation of an XML document into any table/view. But of course there is a way to store XML with the XSU across tables. One can do this using XSLT to transform any document into multiple documents and insert them separately. Another way is to define views over multiple tables (object views if needed) and then do the inserts ... into the view. If the view is inherently non-updatable (because of complex joins, ...), then one can use INSTEAD-OF triggers over the views to do the inserts.
    -- I've tried this, works fine.
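    A minimal sketch of the view-plus-trigger variant mentioned above (all table, column and sequence names are made up): the INSTEAD OF trigger splits each canonical row that XSU inserts into the view across the two underlying tables.
    CREATE OR REPLACE VIEW order_full_v AS
      SELECT o.order_id, o.order_date, c.customer_name
      FROM   orders o JOIN customers c ON c.customer_id = o.customer_id;

    CREATE OR REPLACE TRIGGER order_full_v_ins
    INSTEAD OF INSERT ON order_full_v
    FOR EACH ROW
    BEGIN
      INSERT INTO customers (customer_id, customer_name)
      VALUES (customers_seq.NEXTVAL, :NEW.customer_name);
      INSERT INTO orders (order_id, order_date, customer_id)
      VALUES (:NEW.order_id, :NEW.order_date, customers_seq.CURRVAL);
    END;
    /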

  • Loading Large XML into Oracle Database

    Hi,
    I am fairly new to XML DB. I have been successful in registering a schema to a table in the database. Now, I have to load the appropriate XML into that table. I am using the Simple Bulk Loader program found on this oracle site; however, when I load my XML file I get the following error: ORA-21700: object does not exist or is marked for delete.
    So, I figured maybe the simple bulk loader cannot handle large files? So I reduced my XML file and loaded it with the program and it worked. However, does anyone know how I can load large files into my registered schema table?
    Thanks,
    Prerna :o)

    Did you specify genTables true or false when registering the XML Schema ?
    Does your XML schema contain a recursive definition?
    Is it possible that after reducing the size of the document you no longer have nodes that contain recursive structures?

  • Using dbms_xmldom to load large xml

    Hi,
    I am using dbms_xmldom to load an xml of size 600 MB. It's taking around 7 hours to load the xml.
    Please provide the tips or tricks to tune the dbms_xmldom code. Quick help will be appreciated. Thanks !!

    55e6744f-71c3-4d9d-a1a9-772c14ab90f9 wrote:
    Please help. Its urgent.
    No it's not. And it's very rude of you to assume so.
    Read: Re: 2. How do I ask a question on the forums?
    In answer to your question, I agree with Odie, you don't want to be using XML DOM for this.  See Mark Drake's (mdrake) answers on this thread over in the XMLDB forum...
    Re: XML file processing into oracle

  • ORA-00911:invalid character when loading an XML doc to the XMLType column

    We have followed this code snippet: http://www.oracle.com/technology/sample_code/tech/java/codesnippet/xmldb/Example_Code.html#createclob
    using the CLOB object to load an XML Document which has more than 4000 characters to the XMLType column, we got the error -- ORA-00911: invalid character.
    Sounds like an encoding issue? Any suggestions would be appreciated.
    BTW, we're able to use the TopLink's mapping -- Direct-to-Field to insert an XML String (<4000 characters) to the XMLType column.
    Thanks!

    Try removing the semi-colon at the end of your statement. That is the cause of ORA-911 with DBMS_SQL.PARSE and EXECUTE IMMEDIATE; I don't see why ADO should be any different.
    Cheers, APC
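    In other words, the terminator belongs to the client tool, not to the statement text itself; a tiny sketch (the table xml_tab is made up):
    BEGIN
      EXECUTE IMMEDIATE
        'INSERT INTO xml_tab (doc) VALUES (XMLTYPE(:1))'   -- note: no ";" inside the string
        USING '<note>ok</note>';
    END;
    /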

  • Build large XML Docs within PLSQL

    Hi..
    I wondered what would be the fastest way to make an XML Document from a selection resultset and put it in a CLOB.
    I think XSU isn't suitable to retrieve large XML (?)
    Maybe I should build the XML within PLSQL manually by adding all Tag- and Element-Strings to one CLOB.
    Can you please give me some recommendations.
    Al

    If you are using 9iR2 you might want to look at the new SQL/XML operators that provide an industry-standard mechanism for generating XML from a SQL query. Using these operators allows significant optimizations to take place in the database kernel, as the optimizer is aware that the result set will be presented as an XML document. The typical output of these operators is an XMLType, which can be stored as a column in the database. There is no need to use a CLOB as such; for non-schema-based XMLType, the underlying storage mechanism is a CLOB anyway.
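    A minimal sketch of what that looks like (using the stock EMP table purely for illustration); the inner query builds the XMLType and the outer call turns it into a CLOB only if one is really needed:
    SELECT x.doc.getClobVal() AS xml_clob
    FROM  (SELECT XMLELEMENT("employees",
                    XMLAGG(
                      XMLELEMENT("employee",
                        XMLATTRIBUTES(e.empno AS "id"),
                        XMLFOREST(e.ename AS "name", e.sal AS "salary")))) AS doc
           FROM emp e) x;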

  • Xerces: Creating large XML docs

    Hi
    I am using the Xerces DOM parser to create large XML files. However, I need to flush the output every now and then to a file since I don't want the memory to keep growing. I tried doing the following:
    _domWriter->writeNode(fileFormatTarget, domDocument);
    fileFormatTarget->flush();
    But if I do a flush many times, the whole XML file is written again to the file.
    Is there a way to keep flushing the memory to the file and releasing the memory after flushing?
    I tried:
    _domWriter->release()
    But I can't use the DOMWriter any longer.
    Thanks
    Bijal

    Bijal,
    XML in general and DOM in particular is not suitable for processing large amounts of data. For your case the only feasible option could be using a custom SAX handler, IMHO.
    Regards,
    Slava Imeshev

  • Problem loading external XML

    Hi, I'm having a problem with an Applet.
    I wrote a program that accesses an external XML file located on my webspace, but when I converted this program to run in a web browser I get console errors when trying to access the file.
    Here's my code:
    package javaapplication;
    import java.applet.Applet;
    import java.awt.BorderLayout;
    import java.awt.Canvas;
    import java.util.ArrayList;
    import javaapplication7.parsing.XmlParser;
    public class Main extends Applet {
        protected Thread gameThread = null;
        private Canvas display_parent = null;
        private boolean running = false;
        private XmlParser parser = new XmlParser();
        public void destroy() {
            remove(display_parent);
            super.destroy();
            System.out.println("Clear Up");
        }
        public void start() {
            gameThread = new Thread() {
                public void run() {
                    running = true;
                    System.out.println("Entering Gameloop");
                    gameLoop();
                }
            };
            gameThread.setDaemon(true);
            gameThread.start();
        }
        public void stop() {
        }
        public void init() {
            setLayout(new BorderLayout());
            try {
                display_parent = new Canvas() {
                    public final void removeNotify() {
                        super.removeNotify();
                    }
                };
                display_parent.setIgnoreRepaint(true);
                display_parent.setSize(getWidth(), getHeight());
                add(display_parent);
                display_parent.setFocusable(true);
                display_parent.requestFocus();
                setVisible(true);
            }
            catch (Exception e) {
                System.err.println(e);
                throw new RuntimeException("Unable to create display");
            }
        }
        private void gameLoop() {
            ArrayList<String> temp = null;
            while (running) {
                temp = parser.SearchXml("http://users.telenet.be/decoy/FacebookApp/xml/BulletType.xml", "BulletList", 2);
                for (int i = 0; i < temp.size(); ++i) {
                    System.out.println(temp.get(i));
                }
            }
        }
    }
    My html file:
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <html>
      <head>
        <title>AppletLoader</title>
      </head>
      <body>
      <applet code="javaaplication.Main" archive="JavaApplication7.jar" codebase="." width="800px" height="600px">
        <!-- The following tags are mandatory -->
        <!-- Name of Applet, will be used as name of directory it is saved in, and will uniquely identify it in cache -->
        <param name="al_title" value="appletloadertest">
        <!-- Main Applet Class -->
        <param name="al_main" value="javaapplication.Main">
        <!-- logo to paint while loading, will be centered -->
        <param name="al_logo" value="appletlogo.png">
        <!-- progressbar to paint while loading. Will be painted on top of logo, width clipped to percentage done -->
        <param name="al_progressbar" value="appletprogress.gif">
        <!-- List of Jars to add to classpath -->
        <param name="al_jars" value="JavaApplication7.jar">
        <!-- signed windows natives jar in a jar -->
        <param name="al_windows" value="lib/windows_natives.jar.lzma">
        <!-- signed linux natives jar in a jar -->
        <param name="al_linux" value="lib/linux_natives.jar.lzma">
        <!-- signed mac osx natives jar in a jar -->
        <param name="al_mac" value="lib/macosx_natives.jar.lzma">
        <!-- signed solaris natives jar in a jar -->
        <param name="al_solaris" value="lib/solaris_natives.jar.lzma">
        <!-- Tags under here are optional -->
        <!-- Version of Applet, important otherwise applet won't be cached, version change will update applet, must be int or float -->
        <!-- <param name="al_version" value="0.1"> -->
        <!-- background color to paint with, defaults to white -->
        <!-- <param name="al_bgcolor" value="000000"> -->
        <!-- foreground color to paint with, defaults to black -->
        <!-- <param name="al_fgcolor" value="ffffff"> -->
        <!-- error color to paint with, defaults to red -->
        <!-- <param name="al_errorcolor" value="ff0000"> -->
        <!-- whether to run in debug mode -->
        <!-- <param name="al_debug" value="true"> -->
        <!-- whether to prepend host to cache path - defaults to true -->
        <param name="al_prepend_host" value="false">
        <!-- main applet specific params -->
        <param name="test" value="test">
      </applet>
      <p>
        if <code>al_debug</code> is true the applet will load and extract resources with a delay, to be able to see the loader process.
      </p>
      </body>
    </html>
    Console errors:
    network: Connecting http://users.telenet.be/decoy/FacebookApp/xml/BulletType.xml with proxy=DIRECT
    network: Connecting http://users.telenet.be/crossdomain.xml with proxy=DIRECT
    network: Connecting http://users.telenet.be:80/ with proxy=DIRECT
    network: Connecting http://users.telenet.be/crossdomain.xml with cookie "__utmz=226366239.1223468593.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); st8id_wlf_%2Etelenet%2Ebe_%2F=TE5HVExNX0ZMQVNI?81c37b0684bbf41f4c42c63db814f879; __utma=226366239.605535372.1223468593.1223556298.1224764225.3"
    network: Connecting http://www.zita.be/users_error/ with proxy=DIRECT
    network: Connecting http://www.zita.be:80/ with proxy=DIRECT
    java.security.PrivilegedActionException: java.io.FileNotFoundException: http://www.zita.be/users_error/
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sun.deploy.net.CrossDomainXML.check(Unknown Source)
         at com.sun.deploy.net.CrossDomainXML.check(Unknown Source)
         at sun.plugin2.applet.Applet2SecurityManager.checkConnect(Unknown Source)
         at sun.net.www.http.HttpClient.openServer(Unknown Source)
         at sun.net.www.http.HttpClient.<init>(Unknown Source)
         at sun.net.www.http.HttpClient.New(Unknown Source)
         at sun.net.www.http.HttpClient.New(Unknown Source)
         at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source)
         at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
         at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
         at java.net.URL.openStream(Unknown Source)
         at facebookgame.parsing.XmlParser.SearchXml(XmlParser.java:38)
         at facebookgame.entity.ShotEntity.<init>(ShotEntity.java:62)
         at facebookgame.InGameState.Enter(InGameState.java:75)
         at facebookgame.FacebookApp.changeToState(FacebookApp.java:258)
         at facebookgame.MenuState.CheckPlayerInput(MenuState.java:106)
         at facebookgame.MenuState.StateCycle(MenuState.java:139)
         at facebookgame.FacebookApp.gameLoop(FacebookApp.java:212)
         at facebookgame.FacebookApp.access$200(FacebookApp.java:17)
         at facebookgame.FacebookApp$1.run(FacebookApp.java:60)
    Caused by: java.io.FileNotFoundException: http://www.zita.be/users_error/
         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
         at com.sun.deploy.net.CrossDomainXML$2.run(Unknown Source)
         ... 22 more
    Hopefully someone can help me :).
    Edited by: Veko on Dec 7, 2008 2:51 PM

    But why is it possible then to successfully access the .xml file with a non-web applet?
    This code reads the file perfectly off my webspace...
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.URL;
    public class XMLHandler {
        public static void SearchXml() {
            try {
                InputStream filesource = new URL("http://users.telenet.be/decoy/FacebookApp/xml/BulletType.xml").openStream();
                System.out.println("found input source");
                byte[] readIn = new byte[8];
                int bytesRead = filesource.read(readIn);
                OutputStream stream = System.out;
                while (bytesRead != -1) {
                    stream.write(readIn);
                    bytesRead = filesource.read(readIn);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        public static void main(String[] args) {
            SearchXml();
        }
    }
    It returns the contents of the xml perfectly:
    <?xml version="1.0"?>
    <bulletList>
    <bullet type="1">
         <speed>1.0f</speed>
         <size>0.65f</size>
         <damage>1</damage>
         <texture>"res/shot.png"</texture>
         <sound>"blah.ogg"</sound>
    </bullet>
    <bullet type="2">
         <speed>1.5f</speed>
         <size>0.65f</size>
         <damage>1</damage>
         <texture>"res/shot.png"</texture>
         <sound>"blah.ogg"</sound>
    </bullet>
    </bulletList>
    Does an applet in a browser have some kind of weird restriction that I haven't heard of yet?
    Edited by: Veko on Dec 8, 2008 2:28 AM
    Edited by: Veko on Dec 8, 2008 2:29 AM

  • Load different xml docs to same database table

    I have 2 different xml documents that contains the information about rows which should be loaded into a database table.
    File 1:
    <customer>
    <name>Company1</name>
    <city>Helsinki</city>
    </customer>
    File 2:
    <client>
    <clientname>Company2</clientname>
    <clientcity>Helsinki</clientcity>
    </client>
    The data in the 2 different files should be read and inserted as a row into the same database table.
    Instead of creating a java program for each xml file to insert the content into the database, I would like to first "translate" the content of the xml files and create a file for each xml file which uses the same tags, so that my java programs only read the new "translated" files and insert the data into the database.
    This means that when I receive a new xml file with different tags but information about a row that should be inserted into the same table, I just have to "translate" this file and use the same java program to insert the new file data into the database.
    I would like to know which technology should be used to perform this task.

    XSLT is quite commonly used for this.
    Or, for a more formal approach, google 'xml "architectural forms"'.
    Pete
