XML efficiency

I'm trying to redesign a Flash web app so that others can use
it for their own purposes and update it themselves.
So, the question is: if I change the app to read all content
from an XML file and load JPG backgrounds as dictated by variables in
the XML file, will it slow the loading time down significantly
(compared to the original app, in which the FLA was built in the
standard fashion with all string values and images embedded)?
Thanks, Nathan

Yes, but that shouldn't be a major issue unless the XML file
is large. And if it is large, you can always display a
preloader so users are entertained and/or informed while the XML
file loads.

Similar Messages

  • Question to Steve - document('somedocument.xml') efficiency.

    Hi.
    I have a question about how efficient it is to use external documents in XSLT with the Oracle XML parser and XSLT processor. Will somedocument.xml be reloaded every time I access something like document('somedocument.xml')/root/node/whatever?
    Will this document be reloaded every time if I use
    <xsl:variable name="t" select="document('somedocumentorURL.xml')/root/set"/>
    and then use
    <xsl:value-of select="$t/node1"/>
    <xsl:value-of select="$t/node2"/>
    and so on.
    Is there some cache for external documents implemented? Or what is the preferred way of accessing information in external documents?
    Thank you in advance for the answer.
    Bye.

    The XSLT 1.0 spec requires a processor to fetch the resource only once per transformation "run" for each distinct URI that you pass to the document() function.

  • Typo in Developer XML/Parsing XML Efficiently ?

    Hi !
    On this web page :
    http://otn.oracle.com/oramag/oracle/03-sep/o53devxml.html
    If you look at the "Figure 1: A DOM Tree" for the book example from an XML piece above on that page you'll see that in a second from the bottom line, box, second from the left has element named "author" and then text box underneath says "XML in a Nutshell".
    Shoouldn't that element be a "title" ? And then level five in that tree should look like :
    Attr | Element | Element | Element
    id = 121 | title | author | Price
    Thank you !
    Vladimir Bogak,
    DB Developer, LucasFilm, CA

    You could try re-installing JWSDP 1.3, but this time include Tomcat and Ant, and then develop your application with the help of the JWSDP Tutorial[1]. Then, porting your app to your external Tomcat shouldn't be hard.
    Regards,
    -- Ed
    [1] http://java.sun.com/webservices/docs/1.3/tutorial/doc/index.html

  • Advantages/disadvantages,Capabilities,Failures of Types of mapping

    Dear all,
    Can you kindly explain the advantages/disadvantages, capabilities, and failure modes of the various types of mapping?
    What parser does XSLT use?
    What is the difference between SAX and DOM parsers?
    Thanks,
    Srinivasa

    Hi Srinivasa,
    The following websites give you good documentation on mapping:
    Excellent PDF Document on Mapping
    http://help.sap.com/bp_bpmv130/Documentation/Operation/MappingXI30.pdf
    Mapping Development with the ABAP Workbench
    http://help.sap.com/saphelp_nw04/helpdata/en/10/5abb2d9df242f6a62e22e027a6c382/content.htm
    ABAP Mappings
    http://help.sap.com/saphelp_nw04/helpdata/en/ba/e18b1a0fc14f1faf884ae50cece51b/content.htm
    how to create a flat file out of an IDoc-XML by means of an ABAP mapping program and the J2EE File Adapter.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/46759682-0401-0010-1791-bd1972bc0b8a
    How to Use ABAP Mapping in XI 3.0
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e3ead790-0201-0010-64bb-9e4d67a466b4
    SAXParser
    Both DOM and SAX approaches have advantages and disadvantages, and your choice of technique must depend on the type of data being parsed, the requirements of your application, and the constraints under which you are operating.
    The SAX approach is linear: It processes XML structures as it finds them, generating events and leaving the event handlers to decide what to do with each structure. The advantage of this approach is that it is resource-friendly; because SAX does not build a tree representation of the document in memory, it can parse XML data in chunks, processing large amounts of data with very little impact on memory. This also translates into better performance; if your document structure is simple, and the event handlers don't have anything too complicated to do, SAX-based applications will generally offer a speed advantage over DOM-based ones.
    The downside, though, is an inability to move around the document in a non-linear manner. SAX does not maintain any internal record of the relationships between the different nodes of an XML document (as the DOM does), making it difficult to create customized node collections or to traverse the document in a non-sequential manner. The only way around this with SAX is to create your own custom object model, and map the document elements into your own custom structures: a process that adds to complexity and can possibly degrade performance.
    Where SAX flounders, though, the DOM shines. The DOM creates a tree representation of the document in memory, making it possible to easily travel from one node to another, or even access the same node repeatedly (again, not something you can do easily in SAX). This tree representation is based on a standard, easy-to-understand model, making it easier to write code to interact with it.
    This flexibility does, however, come with an important caveat. Because the DOM builds a tree in memory, DOM processing cannot begin until the document has been fully parsed (SAX, on the other hand, can begin parsing a document even if it's not all available immediately). This reduces a developer's ability to "manage" the parsing process by feeding data to the parser in chunks, and also implies greater memory consumption and consequent performance degradation.
    Consequently, the choice of technique depends significantly on the type of constraints the application will be performing under, and the type of processing it will be expected to carry out. For systems with limited memory, SAX is a far more efficient approach. On the other hand, complex data-processing requirements can benefit from the standard object model and API of the DOM.
    After using DOM to parse XML documents for any length of time, you will probably begin to notice that performance tends to suffer when you’re dealing with large documents. This problem is endemic to DOM's tree-based structure: larger trees demand more memory, and DOM must load an entire document into memory before it can allow you to parse it. For situations where performance is problematic with DOM, you have an alternative in the Simple API for XML (SAX). In this fifth installment in our Remedial XML series, I'll introduce you to the SAX API, and provide some links to SAX implementations in several different languages.
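    To make the SAX model concrete, here is a minimal Java sketch (the sample document, class name, and element names are illustrative only): it counts elements purely from parse events, without ever building a tree in memory.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class CountElements extends DefaultHandler {
    int count = 0; // incremented once per start-tag event

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        // SAX pushes one event per structure it encounters; we just react.
        count++;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<books><book/><book/></books>";
        CountElements handler = new CountElements();
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        System.out.println(handler.count); // books, book, book -> 3
    }
}
```

    Because nothing but a counter is kept, memory use stays flat no matter how large the input is; that is the essence of the SAX trade-off described above.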
    Further differences can be seen in the following websites
    http://articles.techrepublic.com.com/5100-22-1044823.html
    Parsing XML Efficiently : DOM Parsing ,SAX Parsing
    http://www.oracle.com/technology/oramag/oracle/03-sep/o53devxml.html
    SAX Example
    http://www.informit.com/articles/article.aspx?p=26351&seqNum=6&rl=1
    Event or DOM parsing?
    http://discuss.joelonsoftware.com/default.asp?design.4.156750.12
    SAX samples
    http://xerces.apache.org/xerces2-j/samples-sax.html
    Set up a SAX parser
    http://www.ibm.com/developerworks/xml/library/x-tipsaxp.html
    Simple API for XML (SAX)
    http://en.wikipedia.org/wiki/Simple_API_for_XML
    SAX Parser Benchmarks
    http://piccolo.sourceforge.net/bench.html
    XML Parsers: DOM and SAX Put to the Test
    http://www.devx.com/xml/Article/16922/1954?pf=true
    High-Performance XML Parsing With SAX
    http://www.xml.com/pub/a/2001/02/14/perlsax.html
    Using the SAX Parser
    http://www.javacommerce.com/displaypage.jsp?name=saxparser1.sql&id=18232
    Class SAXParser
    http://java.sun.com/j2se/1.4.2/docs/api/javax/xml/parsers/SAXParser.html
    Class SAXParser
    http://people.apache.org/~andyc/neko/doc/html/javadoc/org/cyberneko/html/parsers/SAXParser.html
    SAX Parsers
    http://webdesign.about.com/od/saxparsers/SAX_Parsers.htm
    Interface SAXParser
    http://excalibur.apache.org/apidocs/org/apache/excalibur/xml/sax/SAXParser.html
    Class SAXParser
    http://xerces.apache.org/xerces2-j/javadocs/xerces2/org/apache/xerces/parsers/SAXParser.html
    DOM Parsing
    Excellent website showing how to use the Document Object Model (DOM)
    http://www.w3.org/DOM/
    Document Object Model (DOM) Parsing
    http://www.xml.com/lpt/a/1597
    Overview of DOM, DOM Level 3 core,DOM Level 3 Load & Save
    http://www.softwaresummit.com/2004/speakers/GrahamJAXP1.pdf
    Sample java program using DOM
    http://mail-archives.apache.org/mod_mbox/cocoon-cvs/200305.mbox/%[email protected]%3E
    Easy RFC lookup from XSLT mappings using a Java helper class
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/05a3d62e-0a01-0010-14bc-adc8efd4ee14
    XML Transformation Using the TrAX APIs in JAXP
    http://www.xml.com/pub/a/2005/07/06/jaxp.html?page=last
    cheers!
    gyanaraj
    **** Please reward points if you find this helpful

  • XDK -Performance best practices etc

    All ,
    Am looking for some best practices with specific emphasis on performance
    for the Oracle XDK ..
    can any one share any such doc or point me to white papers etc ..
    Thanks

    The following article discusses how to choose the most performant parsing strategy based on your application requirements.
    Parsing XML Efficiently
    http://www.oracle.com/technology/oramag/oracle/03-sep/o53devxml.html
    -Blaise

  • Where can I find Oracle Magazines that are prior to  2006?

    I tried to find an article named "Parsing XML Efficiently", linked at "http://www.oracle.com/technology/oramag/oracle/03-sep/o53devxml.html", but I cannot actually find it. I also found that all the magazines prior to 2006 are gone.
    Can anyone help me?

    I think you can report the broken link in the {forum:id=165} forum, maybe someone could help you there.
    (though I'm pretty confident you can find more up-to-date materials on the subject)

  • Is there a way to import large XML files into HANA efficiently? Are there any data services provided to do this?

    1. Is there a way to import large XML files into HANA efficiently?
    2. Will it process it node by node or the entire file at a time?
    3. Are there any data services provided to do this?
    This is for a project use case. I also have a requirement to process bulk XML files; please suggest how to accomplish this task.

    Hi Patrick,
         I am addressing a similar issue: getting data from huge XMLs into HANA.
    Can we handle huge data using OData services (i.e. create the schema and load it into HANA) on the fly?
    In my scenario, I get a folder of different complex XML files which are to be loaded into the HANA database.
    Then I need to transform and cleanse the data.
    Can I use OData services to transform and cleanse the data?
    If so, how can I create OData services dynamically?
    Any help is highly appreciated.
    Thank you.
    Regards,
    Alekhya

  • Efficient way for Continous Creation of XML Content?

    Hi
    I have a requirement to create XML content from data extracted from a UDP packet.
    As each packet arrives, I have to generate the appropriate XML content and keep it in the same single XML file.
    Problem:
    Since an XML file is not a flat file, I can't just append the new content at the end. So if I have to write to the XML file, each and every time a packet arrives I have to parse the file and insert the new content under the appropriate parent. I don't think this is the most efficient way.
    Parsing the file every time will cost CPU time, and as the file grows in size, memory will also become a constraint.
    Other options I could think of:
    * Hold the XML Document object in memory until a certain event (like a timeout waiting for packets) and write it to the XML file in one shot.
    * Serialize the objects containing the extracted packet content to a temp file and, after some event, parse it and create the XML file in one shot.
    Which is the more efficient way, or is there a design pattern to handle this situation? I am worried about the memory footprint and performance under peak loads.
    I am planning to use JDOM / SAXBuilder for XML creation.
    Thank you...

    Lot's of "maybe" and "I think" and "I'm worried about" in that question, and no "I have found" or "it is the case that". In short, you're worrying too much about problems you don't even know you have. XML is a verbose format anyway, efficiency isn't paramount when dealing with it. Even modestly powered machines can deal with quite a lot of disk I/O these days without noticeable impact. The most efficient thing you can do here is write something that works, and see if you can live with the performance

  • Most efficient way to load XML file data into tables

    I have a complex XML file running into MBs. I want to load its data into 7-8 tables.
    Which way will be better:
    1) Use SQL*Loader to load directly into the 7-8 tables by modifying the control file. Is this really possible and feasible? I am not even sure about it.
    2) Load the data as XMLType in a table and register it, then extract from there to load into the various tables.
    Please help. I have to find the most efficient way of doing it.
    Regards,
    Sudhir

    Yes, it is possible to use SQL*Loader to parse and load XML, but that is not what it was designed for, and so it is not recommended. You also don't need to register a schema just to load/store/parse XML in the DB.
    So where does that leave you?
    Some options
    {thread:id=410714} (see page 2)
    {thread:id=1090681}
    {thread:id=1070213}
    Those talk some about storage options, reading XML in from disk, and parsing XML. They should also give you options to consider.
    Without knowing more about your requirements for the effort, it is difficult to give specific advice. Maybe your 7-8 tables don't exist, and so using Object Relational Storage for the XML would be the best solution, as you can query/update tables that Oracle creates based off the schema associated with the XML. Maybe an External Table definition works better for reading the XML into the system, because this process will happen just once. Maybe using WebDAV makes more sense for loading XML to be parsed (I don't have much experience with this, I just know it is possible from what I've read on the forums).
    Also, your version makes a difference, as you have different options available depending upon the version of Oracle.
    Hope all that helps as a starter.
    Edited by: A_Non on Jul 8, 2010 4:31 PM
    A great example, see the answers by mdrake in {thread:id=1096784}

  • Efficient searching in a large XML file for specific elements

    Hi
    How can I search a large XML file for a specific element efficiently (fast and memory-savvy)? I have a large XML file (approximately 32 MB, with about 140,000 main elements) and I have to search through it for specific elements. What stable and production-ready open source tools are available for such tasks? I think PDOM is a solution, but I can't find any well-known and stable implementations on the web.
    Thanks in advance,
    Behrang Saeedzadeh.

    The problem with DOM parsers is that the whole document needs to be parsed!
    So with large documents this uses up a lot of memory.
    I suggest you look at something like a pull parser (Piccolo or MPX1), which is a fast parser that is program-driven rather than event-driven like SAX. This has the advantage of not needing to remember your state between events.
    I have used Piccolo to extract events from large xml based log files.
    Carl.
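    Piccolo's API is its own, but the pull style itself can be sketched with the standard StAX API (javax.xml.stream) that ships with modern JDKs; the document and element names below are made up for illustration. The program asks for the next token when it is ready, rather than having events pushed at it as with SAX.

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class PullDemo {
    // Collect the id attribute of every <event> element by pulling
    // tokens from the stream one at a time.
    static List<String> eventIds(String xml) throws Exception {
        List<String> ids = new ArrayList<>();
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT
                    && r.getLocalName().equals("event")) {
                ids.add(r.getAttributeValue(null, "id"));
            }
        }
        return ids;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(eventIds("<log><event id=\"1\"/><event id=\"2\"/></log>"));
    }
}
```

    Because the loop owns the control flow, there is no state to carry between callbacks, which is exactly the advantage claimed for pull parsing above.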

  • Oracle Report vs JSP efficiency and Excel XML in Web Source Questions

    I have used Oracle Reports in the past 6i, but haven't used them in about 4 years. We are now using 9.0.4 reports and I am trying to generate Excel XML from an Oracle report by manipulating the web source of the report. Basically copying and pasting the Excel XML into the web source of the Oracle Report instead of html as suggested by this Metalink Doc ID 240819.1. I do have a JSP working example that generates Excel XML created in JDeveloper and successfully spawns off a very nice looking Excel spreadsheet.
    However, a statement was made here by our app server guy that an Oracle Report is more efficient at getting data from the database than a JSP. The app server section does not want anyone using JSPs to generate a report of any kind. I guess small web pages are OK, since our Java guys use them all the time. This rule for a Reports-only environment seems a little restrictive to me. Is there any truth to the statement that Oracle Reports bulk-collects data in one chunk, as opposed to a JSP making multiple trips to the database from the middle tier?
    Secondly, if an Oracle Report is more efficient in the way that it collects large record sets from the db, and if I save an Oracle Report as a JSP not an rdf, and use the reports_tld.jar with the rw tags instead of the typical jsp jstl tags, will I still get the same improvement from using an Oracle report? I guess the improvement is in the library used correct?
    Thirdly, although not part of this forum, I also am assuming that if we have single sign on that when I call a JSP from an Oracle Form I will not have to log in again correct?
    Thanks...
    Message was edited by:
    Mark Reichman

    It could be the configuration issue. Reconfigure the engine.

  • Edit and display XML in an efficient way

    Hello,
    for quite some time now I searched the internet, this forum and some other forums but I couldn't really find an answer.
    My first question is: which 'technology' should one use to build an XML editor, i.e. how can XML files be efficiently loaded, displayed (in a JTree or JTable), and edited (i.e. elements added and removed, and the XML file saved again)?
    The only 'technology' I can think of is DOM/JDOM (that's what I've read on many websites), because one needs the XML file in memory to edit it. So SAX doesn't help here, does it? But DOM/JDOM is said to be quite inefficient, because the DOM tree has to be built and is quite big in memory, and I have XML files which are several MB in size. I also want to display more than one file in the application (e.g. with a JTabbedPane) so that they can be displayed and edited simultaneously. But that means the tree for each file has to be in memory, which I thought would consume too much memory. And I don't think it's a good idea to write a file back whenever it's not displayed.
    Another thing is that I want to be able to search the XML files, which, I think, is only possible with DOM/JDOM because you can use XPath. I don't know how to search an XML file quickly with another 'technology'.
    If the XML Schema is known, I could use JAXB, but then I don't really know how to search the XML.
    Which 'technology' do the 'professional' XML editors use?
    My second question is whether there is a library that already implements editing and displaying XML files, which one could use in an application.
    I'd really appreciate any ideas.

    What you could do is use a SAX parser to parse the XML document and build the JTree, then treat the JTree like an XML tree. When the user wants to save, write the tree back out to an XML file. There is no need to keep a DOM for the XML; however, to give the user the ability to perform XPath queries, you would have to write that code yourself.
    14 MB is not a lot, so I would stick with using a JTree and have each TreeNode associated with an Element in the DOM. Parsers are pretty fast; it should probably not take more than one or two seconds to parse a 14 MB XML file. Now, if the user has a 100+ MB file, then it may become a problem. In all, your application will probably require at least 100 MB just to render the XML tree.
    You could also write a class that parses the XML document and serializes the DOM to a file. You can break the DOM up into several document objects (and store them in a file) to allow faster traversal. Your application would read in the necessary document and work with it. There are two problems here:
    1. I/O operations are very expensive. However, most of the time the application will not be reading the document file: it takes longer for the user to type than for the application to save the update, so during this time a thread can be spawned to update the document (synchronization required).
    2. The code is complex: you have to make sure the serialized documents stay up to date, and you have to come up with an algorithm (design) for fetching these document objects, etc.
    Many XML editors I have seen use the simpler approach of loading the whole XML document into a DOM, so a 100 MB document will require 100+ MB of memory. This approach is easy and fast, but requires memory.
    There is a trade-off: speed and simplicity vs. memory.
    It's up to you to determine which path you want to go; you can't have everything.
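    As a rough sketch of the first suggestion (parse, then mirror the document structure in a Swing tree), assuming a simple element-only mapping and made-up names:

```java
import java.io.ByteArrayInputStream;
import javax.swing.tree.DefaultMutableTreeNode;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class DomToJTree {
    // Recursively mirror DOM elements as Swing tree nodes; each node's
    // user object is the element's tag name.
    static DefaultMutableTreeNode build(Element e) {
        DefaultMutableTreeNode node = new DefaultMutableTreeNode(e.getTagName());
        NodeList children = e.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            if (children.item(i) instanceof Element) {
                node.add(build((Element) children.item(i)));
            }
        }
        return node;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<root><a><b/></a><c/></root>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        DefaultMutableTreeNode root = build(doc.getDocumentElement());
        // The result can be handed to new JTree(root) inside a JScrollPane.
        System.out.println(root.getChildCount()); // <root> has children <a> and <c>
    }
}
```

    To save, the inverse walk (tree node back to element) writes the file out again; keeping a reference to the backing Element in each node's user object, as the reply suggests, makes that round trip straightforward.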

  • Efficient way of searching multiple xml files for multiple entries

    As I'm quite new to using XML in Java, I can't figure out how to solve my problem.
    I've got about 20 XML files, each about 500-1000 kB. Each file contains about 500 questions, each with a unique ID.
    A user has to be able to enter any number of comma-separated IDs, and the program needs to show the user the corresponding questions.
    Using a SQL server this would be easy, but in this situation I can't. As this has to be a small program, I can't add a 10 MB jar file either, nor can I ask the users to install an additional program.
    Creating a brute-force search would be easy, but searching 20 MB of XML files multiple times will be hard even for a modern PC.
    So my question is: what would be the most efficient way of searching these files?
    Hope that someone will be kind enough to respond :)
    Rick

    I'd still go with a database. There are databases that are significantly more light-weight than MS SQL Server.
    More concretely there are databases that run completely in memory. HSQLDB is one, Java DB (formerly Derby) is another one.
    I'd parse the XML files once, add them to the database and query from there later on.
    If even that is too complicated for you, then you could simply parse the XML files once and put the questions into a HashMap with the ID as the key.
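    A sketch of that last suggestion, parsing once into a HashMap keyed by ID (the document layout and element names here are hypothetical, since the actual file format wasn't given):

```java
import java.io.ByteArrayInputStream;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class QuestionIndex {
    // Parse once, then answer any number of ID lookups from the map.
    static Map<String, String> index(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        Map<String, String> byId = new HashMap<>();
        NodeList list = doc.getElementsByTagName("question");
        for (int i = 0; i < list.getLength(); i++) {
            Element q = (Element) list.item(i);
            byId.put(q.getAttribute("id"), q.getTextContent());
        }
        return byId;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<questions>"
                + "<question id=\"q1\">What is SAX?</question>"
                + "<question id=\"q2\">What is DOM?</question>"
                + "</questions>";
        Map<String, String> byId = index(xml);
        // A comma-separated ID list then becomes a series of O(1) lookups:
        for (String id : "q2,q1".split(",")) {
            System.out.println(id + ": " + byId.get(id));
        }
    }
}
```

    The 20 files are parsed exactly once at startup; every user query after that never touches the XML again.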

  • Efficient XML parsing?

    Is there a simpler and more efficient way to parse XML than with SAX? I'm currently using it, but I really don't like having to hard-code everything (or at least that's the way I'm doing it), and I'd rather not write my own parser (which I started doing, extending HandlerBase, etc., until I realized I'd probably be better off simply hard-coding everything).
    This is generally the code I'm using. If someone knows of a better parser or a way to simplify this, that's what I'm looking for.
    class Parser extends HandlerBase {
        private boolean isSomeElement = false;

        public void startElement(String name, AttributeList attrs) {
            if (name.equals("someElement")) {
                isSomeElement = true;
            }
        }

        public void endElement(String name) {
            if (name.equals("someElement")) {
                isSomeElement = false;
            }
        }

        public void characters(char[] ch, int start, int length) {
            if (isSomeElement) {
                // process the data in the element...
            }
        }
    }
    Thanks!

    You need to distinguish between the parser (which just deciphers XML) and the code that you put next to it (which is what you are asking about). I'm assuming you used the word "efficiency" to refer to the perceived ugliness of your code rather than speed or anything like that.
    In any case you are going to have code that says "For an ABC element, do this, for a PUTZ element do that". There are ways you can write the code so it doesn't look so much like hard-coding, for example you could have a Map for which "ABC" is the key and the value is some object that processes the content of the element. That way it looks more object-oriented, and it does have the advantage of getting all the ABC-processing code into one place.
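    The Map-dispatch idea can be sketched like this (the names are illustrative; in a real handler the lookup would happen inside the SAX callbacks):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ElementDispatch {
    // One handler object per element name, instead of an if/else chain
    // on name.equals(...) inside the SAX callbacks.
    static final Map<String, Consumer<String>> HANDLERS = new HashMap<>();
    static final StringBuilder out = new StringBuilder();

    static {
        HANDLERS.put("ABC", text -> out.append("ABC got: ").append(text));
        HANDLERS.put("PUTZ", text -> out.append("PUTZ got: ").append(text));
    }

    // Called from characters()/endElement() with the current element name.
    static void dispatch(String element, String text) {
        Consumer<String> h = HANDLERS.get(element);
        if (h != null) {
            h.accept(text);
        }
    }

    public static void main(String[] args) {
        dispatch("ABC", "some character data");
        System.out.println(out);
    }
}
```

    All processing for one element name now lives in one place, which is the main advantage the reply points out.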

  • XML Generation using a sql query in an efficient way -Help needed urgently

    Hi
    I am facing the following issue while generating XML using a SQL query. I get the table below using a query.
         CODE   ID     MARK
    ==============================
    1    4      2331   809
    2    4      1772   802
    3    4      2331   845
    4    5      2331   804
    5    5      2331   800
    6    5      2210   801
    I need to generate the below given xml using a query
    <data>
    <CODE>4</CODE>
    <IDS>
    <ID>2331</ID>
    <ID>1772</ID>
    </IDS>
    <MARKS>
    <MARK>809</MARK>
    <MARK>802</MARK>
    <MARK>845</MARK>
    </MARKS>
    </data>
    <data>
    <CODE>5</CODE>
    <IDS>
    <ID>2331</ID>
    <ID>2210</ID>
    </IDS>
    <MARKS>
    <MARK>804</MARK>
    <MARK>800</MARK>
    <MARK>801</MARK>
    </MARKS>
    </data>
    Can anyone help me with some idea to generate the above given CLOB message

    not sure if this is the right way to do it but
    /* Formatted on 10/12/2011 12:52:28 PM (QP5 v5.149.1003.31008) */
    WITH data AS (SELECT 4 code, 2331 id, 809 mark FROM DUAL
                  UNION
                  SELECT 4, 1772, 802 FROM DUAL
                  UNION
                  SELECT 4, 2331, 845 FROM DUAL
                  UNION
                  SELECT 5, 2331, 804 FROM DUAL
                  UNION
                  SELECT 5, 2331, 800 FROM DUAL
                  UNION
                  SELECT 5, 2210, 801 FROM DUAL)
    SELECT TO_CLOB (
                 '<DATA>'
              || listagg (xml, '</DATA><DATA>') WITHIN GROUP (ORDER BY xml)
              || '</DATA>')
              xml
      FROM (  SELECT    '<CODE>'
                     || code
                     || '</CODE><IDS><ID>'
                     || LISTAGG (id, '</ID><ID>') WITHIN GROUP (ORDER BY id)
                     || '</ID></IDS><MARKS><MARK>'
                     || LISTAGG (mark, '</MARK><MARK>') WITHIN GROUP (ORDER BY id)
                     || '</MARK></MARKS>'
                        xml
                FROM data
            GROUP BY code)
