Which is more of a performance hit: a trip to the DB, or traversing an XML DOM in memory?

Which is more of a performance hit:
     1. Making a trip to the DB, getting data, and coming back
          a. Includes making a trip to the DB, altering something, and coming back
     2. Using a DOM parser to get data from XML (the XML is in memory)
          a. Includes altering the DOM Document that is in memory

The trip to the DB depends on network latency, sometimes (but not often) network throughput, and of course the complexity of doing the update on the DB (how many indexes change, what triggers fire, what integrity constraints must be validated, etc.).
The XML parsing is generally done once, and from then on you manipulate the tree directly - that latter part is efficient. Of course the cost depends on your technique and on the hit taken to parse the XML in the first place (which parser? validating?).
The first approach will probably consume less CPU on the host running the Java program (assuming the DB is remote) but take more time in total.
The latter may (or may not) consume more local CPU but should be faster.
Your mileage may vary. Test it.
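For example, a rough timing harness along these lines gives you hard numbers on your own setup. This is only a sketch: the JDBC URL, credentials, table, XML file and element names are all hypothetical placeholders, and the XML is deliberately parsed once, outside the timed loop:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class RoundTripVsDom {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@dbhost:1521:orcl"; // hypothetical
        int n = 1000;

        // Time n database round trips.
        long t0 = System.nanoTime();
        try (Connection c = DriverManager.getConnection(url, "user", "pw");
             PreparedStatement ps = c.prepareStatement(
                     "SELECT val FROM config_tab WHERE name = ?")) { // hypothetical table
            for (int i = 0; i < n; i++) {
                ps.setString(1, "someKey");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rs.getString(1);
                    }
                }
            }
        }
        long dbMillis = (System.nanoTime() - t0) / 1_000_000;

        // Pay the parse cost once...
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("config.xml"); // hypothetical file

        // ...then time n traversals of the in-memory tree.
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            NodeList nodes = doc.getElementsByTagName("entry"); // hypothetical element
            for (int j = 0; j < nodes.getLength(); j++) {
                nodes.item(j).getTextContent();
            }
        }
        long domMillis = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("DB round trips: " + dbMillis
                + " ms, DOM traversals: " + domMillis + " ms");
    }
}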
Chuck.

Similar Messages

  • Which tables are hit when a query runs - time dependent objects performance

    Hello all,
    We are trying to see what are the effects of time dependent master data objects in query. We will have a key date as variable so user can see the data as a particular point.
    I am trying to see what are all the tables hit when a query is executed and how the time dependent info objects affect performance. Basically we are trying to see is - does the query hit the P or Q or Y tables of the infoobject. Is there any tcode or program that I can use to see which tables were hit and how much time the query took to execute.
    Also if the time dependent attribute is in free characteristic does it directly effect the query.
    If you can share some more experience with time dependent master data objects in query and its effects on performance that will be great.
    Thanks all in advance

    Hello Siggi, thanks for the inputs.
    That is what I actually did before posting the message here; the tables that are hit are /BI0/S... or /BI0/R... and /BI0/T.... I never see the /BI0/P or /BI0/Q tables hit. I have a key date on the variable screen, so when I put in a future or past date the /BI0/SDATE table is hit - does that sound about right to you?
    Is the /S table hit the most because the data is being read via the SIDs that were generated?
    Can you share your thoughts.
    Thanks again,
    Have assigned you points.

  • File Adapter or File Transport - which one will give more performance?

    Hi all,
    File Adapter or File Transport - which one gives more performance in OSB?
    Which one should I select? Has anyone done a performance analysis?
    Thanks
    Phani

    Why don't you just go read some benchmarks?
    http://www.barefeats.com/mbpp18.html

  • How can I run two independent LabVIEW applications from the same computer without taking a performance hit?

    I have two identical but independent test stations, both feeding data back to a data acquisition computer running LabVIEW 6.1. Everything is duplicated at the computer as well, with two E-series multifunction I/O cards (one for each test station) and two instances of the same LabVIEW program for acquiring and analysing the data. The DAQ computer has a Celeron processor with an 850 MHz clock and 512 MB of memory, and is running Windows NT.
    I have noticed that when I run both applications simultaneously, I take a substantial performance hit in terms of processing speed (as opposed to running just one program). Why does this happen and how can I prevent it? (In this particular case it may be possible to combine both tests into one program since they are identical, but independent, simultaneous control of two different LabVIEW programs is a concept I need to prove out.)
    Thanks in advance for any tips, hints and spoon feedings (!)....

    Depending on your application, you may or may not be able to improve things.
    Firstly, each task requires CPU time, so a certain performance difference is guaranteed. Making sure you have a "wait until ms" in every while loop helps in all but the most CPU-intensive programs (see the sketch after this reply).
    Secondly, if you are
    1) streaming data to disk,
    2) acquiring lots of data over the PCI bus, or
    3) sending lots of data over the network,
    you can have bottlenecks elsewhere than in your program (limited disk, PCI or network bandwidth).
    Also avoid displaying data which doesn't need to be displayed. An array indicator which only shows one element still needs a lot of processing time if the array itself is large... Best is to set the indicator invisible in that case.
    I think it would be best if you could give some more information about the amount of data being acquired, processed and sent. Then maybe it will be more obvious where you can optimise things. If you are running W2000, try activating the Task Manager while the program(s) are running to see where the bottleneck is.
    Shane
    Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
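    Shane's first point - put a wait in every loop - applies beyond LabVIEW. As a rough textual-language analog (Java here, with placeholder methods standing in for the real acquisition work), the only difference between a loop that pegs a CPU core and one that leaves time for the other application is the sleep per iteration:

    public class PollingLoop {
         public static void main(String[] args) throws InterruptedException {
              while (acquisitionRunning()) {
                   readAndProcessSample(); // placeholder for the real work
                   Thread.sleep(10);       // analogous to LabVIEW's "Wait Until Next ms Multiple"
              }
         }
         private static boolean acquisitionRunning() { return true; } // placeholder
         private static void readAndProcessSample() { }               // placeholder
    }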

  • Internet Performance Hit Using BitTorrent with a Linksys Router

    I have a Linksys WRT54G V2.2. I noticed that when I'm using a BitTorrent client with about 6 incoming and 6 outgoing connections, my internet performance goes down significantly. During this time I would be using about 150KB per second downstream and about 30KB per sec upstream. I know I have plenty of downstream bandwidth available, but it appears that the router is choking on all the simultaneous connections being used by the BitTorrent client.
    Using the speed tests provided by www.dslreports.com, I tested my connection before using BitTorrent and had about 600KB per sec downstream and about 60KB per sec upstream. After running the BitTorrent client with about a half dozen connections incoming and outgoing, using only 150KB downstream and 30KB upstream, the speed test shows that I only have about 50KB downstream throughput and only about 6KB upstream throughput. It doesn't make sense that only that small amount of bandwidth is available when I'm not even using nearly that much.
    I also tried the same test with a couple of large downloads via HTTP, and my bandwidth shows the performance hit that I expect, taking into account the actual bandwidth that I'm using.
    Can anyone shed some light on what is going on here?

    If you're utilizing BitTorrent, I assume you're using Azureus or a similar client for your connections? Either way, by nature, BitTorrent files are transmitted via multiple connections across a wide span of users. Even if you aren't downloading your full bandwidth's worth, your client is continuously scanning for more connections to increase the speed at which your transfers run. The numbers may add up to much less than your bandwidth, but there's quite a lot of background work that those clients do without listing it in the display. Even if that utilizes half of your bandwidth, 1/2 is quite a bit. I have the same issue, and have resorted to just running my torrents while I sleep or am at work.

  • Performance hit using "where" clause in the query

    Hi All,
    I am facing a huge performance hit in the Java code when using a "where" clause in queries. Following are the details:
    1. SELECT * FROM Employee
    2. SELECT * FROM Employee where employeeid in (26,200,330,571,618,945)
    There is no difference in query execution time between the two queries.
    Business logic time is huge in the second case as compared to the first (ratio - 1:20).
    More rows are returned in the first case than in the second (ratio - 1:4).
    The business logic is the same in both cases: I iterate through the ResultSet, get the objects, and set them in a data structure.
    Does anybody know the reason for the unexpected time difference in the business logic in the second case?
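    For reference, here is a self-contained sketch of the kind of loop being described (the Employee shape and the "name" column are assumptions, not taken from the post). One thing worth knowing: a JDBC driver normally pulls rows from the server in batches during rs.next(), so a timer around this loop measures network fetch time as well as object construction:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class FetchTiming {
         // Hypothetical shape of the row object; adjust to the real class.
         static class Employee {
              final int id;
              final String name;
              Employee(int id, String name) { this.id = id; this.name = name; }
         }

         static List<Employee> fetch(Connection conn, String sql) throws SQLException {
              List<Employee> result = new ArrayList<Employee>();
              long t0 = System.currentTimeMillis();
              try (Statement st = conn.createStatement();
                   ResultSet rs = st.executeQuery(sql)) {
                   // Rows arrive from the server in batches during next(),
                   // so this loop's time includes fetch/network time.
                   while (rs.next()) {
                        result.add(new Employee(rs.getInt("employeeid"),
                                                rs.getString("name")));
                   }
              }
              System.out.println(sql + " -> " + result.size() + " rows in "
                        + (System.currentTimeMillis() - t0) + " ms");
              return result;
         }
    }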

    Since you're mentioning clustering your index, I'll assume you are using Oracle. Knowing what database you are using makes it a lot easier to suggest things.
    Since you are using Oracle, you can get the database to tell you what execution plan it is using for each of the 2 SQL statements, and figure out why they have similar times (if they do).
    First, you need to be able to run SQL*Plus; that comes as part of a standard database installation and as part of the Oracle client installation - getting it set up and running is outside the scope of this forum.
    Second, you may need your DBA to enable autotracing, if it's not already:
    http://asktom.oracle.com/~tkyte/article1/autotrace.html
    http://www.samoratech.com/tips/swenableautotrace.htm
    Once it's all set up, you can log in to your database using SQL*Plus, issue "SET AUTOTRACE ON", issue queries, and get execution plan information back.
    For example:
    SQL> set autotrace on
    SQL> select count(*) from it.ticket where ticket_number between 10 and 20;
      COUNT(*)
            11
    Execution Plan
    Plan hash value: 2983758974
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            |     1 |     4 |     1   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE   |            |     1 |     4 |            |          |
    |*  2 |   INDEX RANGE SCAN| TICKET_N10 |    12 |    48 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("TICKET_NUMBER">=10 AND "TICKET_NUMBER"<=20)
    Statistics
              0  recursive calls
              0  db block gets
              1  consistent gets
              0  physical reads
              0  redo size
            515  bytes sent via SQL*Net to client
            469  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    This tells me that this query used an INDEX RANGE SCAN on index TICKET_N10; the query can't do much better than that logically. In fact, the statistic "1 consistent gets" tells me that Oracle had to examine only one data block to get the answer - it can't do better than that either. The statistic "0 physical reads" tells me that the one data block used was already cached in Oracle's memory.
    The above is from Oracle 10g; autotrace is available back to at least 8i, but they've been adding information to the output with each release.
    If you have questions about SQL*Plus, check the forums at asktom.oracle.com or http://forums.oracle.com/forums/category.jspa?categoryID=18
    since SQL*Plus is not a JDBC thing...
    Oh, and SQL*Plus can also give you easier access to timing information, with "set timing on".

  • Performance hit when using a FirePro GPU?

    Hey there!
    I'm interested in purchasing a cheap ATI FirePro V4900 or similar GPU in order to take advantage of my 10-bit TFT when using Photoshop. I was wondering if I have to expect a performance hit when using PS 6.0 with such a card, as opposed to a normal "gamer" GPU like the NVIDIA GTX 670, when:
    - performing normal file handling: opening PSD files, panning, zooming, brushing, etc.
    - applying some of the newer GPU-enhanced filters like Liquify, Oil Paint, Iris Blur, 3D enhancements, etc.
    - actually applying/rendering a more demanding filter such as Iris Blur?
    Does anyone know about this? I'm afraid I could not find any benchmarks at all except for the one on Tomshardware regarding OCL, but that one does not include professional GPUs...
    Thanks for any info in advance!

    > There will not be synchronization when the method of A is being called. The method 1) certainly saves memory space, but will the performance be hurt since there will only be one object accessed by multiple threads? Or maybe it doesn't matter?
    If there is no synchronization, it will not matter. Threads execute methods; methods do not run on objects. The object is just data that is implicitly linked to the method.
    Just make sure it's safe to keep the method unsynchronized.
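    A short illustration of that point (a hypothetical class A, not code from the original thread): a method that touches only its parameters and locals is safe to call from many threads without synchronization, while a method that mutates shared state is not.

    public class A {
         private int counter = 0; // shared state

         // Safe without synchronization: uses only the parameter and a local,
         // so concurrent calls cannot interfere with one another.
         public int square(int x) {
              int result = x * x;
              return result;
         }

         // Not safe without synchronization: ++ is a read-modify-write on
         // shared state, so updates can be lost unless the method stays
         // synchronized (or counter becomes an AtomicInteger).
         public synchronized int next() {
              return ++counter;
         }
    }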

  • How bad is the performance hit with RTMPT?

    In a conversation with engineers at a CDN recently, it was suggested to me that streaming all video over RTMPT only was a viable solution to the barriers posed by firewalls and proxies blocking port 1935.  They indicated that they had seen no significant performance degradation with tunneling and that many of their clients were making the switch from "rollover"-type connection models to connecting via RTMPT only.
    This ran counter to my notion of the process.  I have always thought the packet overhead was significant.
    Is it?  How bad is the performance hit for streaming live h.264 video?

    Hmmm...  That's quite a hit.
    I'm trying to determine the best strategy for reaching the most people with the least performance hit.  A guy lays out his strategy here:
    http://www.kensodev.com/2010/02/19/rtmp-being-blocked-by-firewalls-flash-media-server/
    He basically says ditch 1935.  Never use it.  Always use 80.  Like this:
    rtmp://your_ip_address:80/app_name
    If that fails, do this:
    rtmpt://your_ip_address:80/app_name
    Does that seem valid?  Does that first option avoid the performance hit of tunneling while getting you more connections?  If so, it makes me think there is no benefit at all to connecting via 1935.

  • Any point to disabling dock reflections on my Macbook 2009? Will I gain more performance or not really?

    I know that when I shut the translucent menu bar off on my MacBook (2009), for some odd reason my MacBook's CPU runs a lot cooler, and when I play 720p HD videos I don't hear the CPU fan spinning up. Was the translucent bar really taking up that much of my CPU or NVIDIA 9400M integrated GPU performance? I'm wondering if I can gain even more performance if I shut dock reflections off, or would that be pointless because the reflections probably don't take up much performance? Also, I think the only way to shut off reflections in the Dock is through Terminal, which is kind of odd - they don't give you a feature in System Preferences to turn it off.

    No change in performance. Disabling it is pointless.
    For the reflection, you can make changes in System Preferences > Dock
    (No need for Terminal)
    Select or deselect whichever you want:   Show indicator lights for open applications

  • Performance Hit Due to NVL() Function

    Hi,
    I am from a dev project team; we are facing a performance hit due to the NVL() function. Please suggest a way to resolve this issue.
    Below is the function I created to calculate some efforts.
    create or replace function check_function (
         -- The original post omitted the parameter datatypes; NUMBER is assumed here.
         v_deal_detail  number,
         v_tower        number,
         v_subtower     number,
         v_location     number,
         v_client_role  number,
         v_emp_category number,
         v_year         number,
         v_state        number
    ) return number
    is
         v_trans_offshore_efforts number(30,8) default 0;
         v_stdy_offshore_efforts  number(30,8) default 0;
    begin
         if v_state = 1 or v_state is null then
              begin
                   select nvl(sum(decode(d.loc_type_id,
                                         crmuat_global_constant_pkg.GLB_OFFSHORE,
                                         s.trans_efforts,
                                         0)), 0)
                     into v_trans_offshore_efforts
                     from prc_calc_trans_fte_dtls_t s, prc_deal_dtl_loc_dtls_t d
                    where s.deal_detail_id  = d.deal_detail_id
                      and s.tower_id        = d.tower_id
                      and s.location_id     = d.location_id
                      and s.deal_detail_id  = v_deal_detail
                      and s.client_role_id  = nvl(v_client_role, s.client_role_id)
                      and s.emp_category_id = nvl(v_emp_category, s.emp_category_id)
                      and s.tower_id        = v_tower
                      and s.subtower_id     = nvl(v_subtower, s.subtower_id)
                      and s.location_id     = nvl(v_location, s.location_id)
                      and s.year_no         = v_year;
              exception
                   when no_data_found then
                        v_trans_offshore_efforts := 0;
              end;
         end if;
         if v_state = 1 then
              return v_trans_offshore_efforts;
         end if;
         -- The original fell off the end without a RETURN when v_state <> 1,
         -- which raises ORA-06503 at runtime; return the default value instead.
         return v_trans_offshore_efforts;
    end;
    Please give me a solution.
    Regards,
    Shinu


  • Which is better performance-wise: MOVE or MOVE-CORRESPONDING?

    Hi SAP ABAP Experts,
    Which is better performance-wise:
    MOVE or MOVE-CORRESPONDING?
    Regards: Rajneesh

    > A 
    >
    > * access path and indexes
    Indexes, and hence access paths, are defined when you design the data model. They are part of the model design.
    > * too large numbers of records or executions
    consider a data warehouse environment - you have to deal with huge loads of data. A million records are considered "small" here. Terms like "small" or "large" depend on the context you are working in.
    If you have never heard of star transformation, partitioning and parallel query, you will get lost here!
    OLTP is different: you have short transactions, but a huge number of concurrent users.
    You would not even consider bitmap indexes in an OLTP environment - but maybe a design that evenly distributes data blocks over several files to avoid hot spots on heavily used tables.
    > * processing of internal tables => no quadratic coding
    >
    > these are the main performance issues!
    >
    > > Performance is defined at design time
    > partly yes, but more is determined during runtime, you must check everything at least once. Many things can go wrong and will go wrong.
    sorry, it's all about the data model design - sure, you have to tune later in the development, but you really can't tune successfully on a BAD data model... you have to redesign.
    If the model is good, there is a chance the developer chooses the worst access to it, but then you have the potential to tune with success, because your model allows for a better access strategy.
    The decisions you make in the design phase determine the potential for tuning later.
    >
    > * database does not what you expect
    I call this the black-box view: the developer is not interested in the underlying database.
    Why would we have different DB vendors if they all behaved the same way? E.g., compare the concurrency
    and consistency implementations in various DBs - totally different. You can't simply apply your working knowledge of one database to another DB product. I learned that the hard way while implementing on INFORMIX and ORACLE...

  • Performance hit when running in ARCHIVELOG mode.

    What is the performance hit when running in ARCHIVELOG mode?
    Thank you,
    David

    I am not one to disagree with Tom Kyte (unless I think he is wrong :) ), and I am not going to disagree here. I do, however, caution against the simplistic answer that the hit is negligible, and I commend the respondent who qualified that answer with a discussion of I/O.
    I have come across more than one situation where archive logging was a performance hit because of the associated I/O. Many want to put archive logs on cheaper storage and do not recognize that not only can this slow a system but that it could become a major issue resulting in a system that hangs until the logs are written. A better solution for these folks is to write to fast storage and have a secondary process that offloads those logs to the slower storage.
    Let us also not assume that the archive location is local disk. It might be that an archive location is remote, such as with log shipping or NFS. Network latency can become an issue.
    There are many things to consider, as there always are. I suppose any answer, however simple, can be spun with some obscure situation that makes it inappropriate. Having seen some people get burned by this issue, I chose to elaborate, and I appreciate your indulgence.
    Chris

  • Performance Hit After Oracle Database Upgrade to 10.2.0.4

    We have a couple dozen workbooks that took this performance hit after the upgrade of the database and migration to a new server. Worksheets that executed in the ten-second range are now running for hours or simply not finishing. We took the new-server factor out of the equation by rolling back the database to 10.2.0.3, where a test EUL resides, and the problem was resolved. Has anyone seen this issue? Does anyone have any suggestions? An early reply would be greatly appreciated.
    Thanks,
    Jerre

    Rod,
    Thanks for the quick reply. We are looking at the different plans and modifying the optimizer settings, switching back and forth, as we speak. We are now starting with the hints. Currently our server 'optimizer_mode' parameter is ALL_ROWS; we are planning to change it to CHOOSE and see what happens. The workbooks that are impacted are in our oldest business areas, Finance and HR. The former setup was borrowed from another school for a quick, low-cost start-up; the latter was thrown together by novices. Our true data marts, developed by knowledgeable personnel with star schemas, are not impacted. Of course we are planning on redoing the older business areas, but time, personnel and money matters slow things down. It is the workbooks on the older business areas that are greatly affected by the migrations and upgrade. We eventually get things to settle down, but past actions do not always have the same resolution with newer and better servers and upgrades.
    Thanks,
    Jerre

  • Does a new layout on SAP NetWeaver Portal need more performance?

    Hello,
    we want to create a new layout for SAP NetWeaver Portal in the corporate design of our company. Somebody told me that a change from the standard layout costs a lot more performance on the server. Is that true?
    At present our server already works very slowly with the standard layout. Is it useful to change the layout, or do we need a better server for that?
    Thanks in advance!
    Best regards,
    Ulrike Arndt

    Hi,
    It depends on how you design your layout.
    Use the standard light framework and design your layout without heavyweight components.
    That improves performance as well (a little bit).
    Thanks and Regards,
    gopal.sattiraju

  • How to Recursively traverse a Dom Tree

    Hi there, I'm new to Java and XML and would like to see some sample code for recursively traversing a DOM tree in Java, printing out all Element, Text, etc. nodes to the console window.
    Please help

    Use this DomRead.java at your own risk. Caveat: this gets screwed up if the attributes are multi-valued; you can use XPath to get around that, but I am still struggling with the proper XPath expressions.
    import org.xml.sax.*;
    import org.w3c.dom.*;
    import java.util.*;
    /**
     * version 1.0
     */
    public class DomRead implements ErrorHandler {
         private static final String CRLF = System.getProperty("line.separator");
         private static String key = "";
         private static String value = "";
         private Hashtable elements = new Hashtable();
         /**
          * This constructor has to be used to pass in the DOM document which needs to
          * be read, so that this class can generate the hashtable with the attributes
          * as keys and their corresponding values.
          */
         public DomRead(Document rootDoc) {
              process(rootDoc);
         }
         private void processChild(NodeList root) {
              for (int i = 0; i < root.getLength(); i++) {
                   process(root.item(i));
              }
         }
         private void printAttrib(Node root) {
              NamedNodeMap attrib = root.getAttributes();
              int len = attrib.getLength();
              if (len == 0) return;
              for (int i = 0; i < len; i++) {
                   Attr attribute = (Attr) attrib.item(i);
                   // Only the last attribute's value survives as the key - this is
                   // the multi-valued-attribute caveat mentioned above.
                   key = attribute.getNodeValue();
              }
         }
         private void process(Node root) {
              switch (root.getNodeType()) {
                   case Node.DOCUMENT_NODE:
                        Document doc = (Document) root;
                        processChild(doc.getChildNodes());
                        break;
                   case Node.ELEMENT_NODE:
                        printAttrib(root);
                        processChild(root.getChildNodes());
                        break;
                   case Node.TEXT_NODE:
                        Text text = (Text) root;
                        value = text.getNodeValue().trim();
                        //Log("Value: " + value + CRLF);
                        if (!value.equalsIgnoreCase("")) {
                             elements.put(key, value);
                        }
                        break;
              }
         }
         /**
          * Use this method if you intend to print out the contents of the generated
          * hashtable.
          */
         public void printElements() {
              // "enum" is a reserved word since Java 5, so the original loop
              // variable has been renamed.
              for (Enumeration keys = elements.keys(); keys.hasMoreElements();) {
                   String tKey = (String) keys.nextElement();
                   Log(tKey + "::::" + (String) elements.get(tKey));
              }
         }
         /**
          * This method returns the Hashtable with the attributes that have non-empty
          * values.
          */
         public Hashtable getElements() {
              return elements;
         }
         public void error(SAXParseException e) {
              e.printStackTrace();
         }
         public void warning(SAXParseException e) {
              System.out.println(e.toString());
         }
         public void fatalError(SAXParseException e) {
              e.printStackTrace();
         }
         private static void Log(String log) {
              System.out.print(log + CRLF);
         }
    }
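    A short driver for the class above (the input file name is hypothetical) parses the XML with a standard DocumentBuilder and dumps the collected key/value pairs:

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class DomReadDemo {
         public static void main(String[] args) throws Exception {
              DocumentBuilder builder =
                        DocumentBuilderFactory.newInstance().newDocumentBuilder();
              Document doc = builder.parse("sample.xml"); // hypothetical input file
              DomRead reader = new DomRead(doc);
              reader.printElements(); // prints key::::value pairs
         }
    }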
