Which approach gives better performance in terms of time?

For a large amount of data drawn from more than two different, related tables,
which of the following two approaches in Oracle gives better performance in terms of time (i.e. which takes less time)?
1. A single complex query
2. Bunch of simple queries

Because there is a relationship between each of the tables, if you adopt the simple-queries approach you will still have to JOIN the results in some way, probably via a FOR LOOP in PL/SQL.
In my experience, a single complex SQL statement is the best way to go: join in the database and return just the set of data required.
SQL rules!
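
To make the contrast concrete, here is a minimal sketch, assuming made-up customers and orders tables (none of these names come from the original post):

-- Approach 1: a single complex query, joined in the database
SELECT c.customer_name, o.order_id, o.amount
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id
WHERE o.order_date > SYSDATE - 30;

-- Approach 2: a bunch of simple queries stitched together in PL/SQL,
-- issuing one extra query per customer (row-by-row processing)
BEGIN
  FOR c IN (SELECT customer_id, customer_name FROM customers) LOOP
    FOR o IN (SELECT order_id, amount
              FROM orders
              WHERE customer_id = c.customer_id
              AND order_date > SYSDATE - 30) LOOP
      NULL; -- process one row at a time
    END LOOP;
  END LOOP;
END;
/

The first form lets the optimizer choose a join strategy and return one result set; the second pays a round trip per parent row, which is usually where the time goes.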

Similar Messages

  • Which one will get better performance when traversing an ArrayList, iterators, or index (get(i))?

    hi, everyone,
    Which one will get better performance when traversing an ArrayList, iterators, or index(get(i))?
    Any reply would be valuable.
    Thank you in advance.

    Use the iterator, or a foreach loop, which is just syntactic sugar over an iterator. The cases where there is a noticeable difference will be extremely rare. You would only use get() if you actually measured a bottleneck, changed it, re-tested, and found significant improvement. An iterator will give O(n) time for iterating over any collection. Using get() only works on Lists, and for a LinkedList, gives O(n^2).

  • Which method has better performance?

    Hello!
    I'm using Entity Framework, and I have several cases where I need to run a query that returns some parent items, after which I display these parents and their related children in one report.
    I want to know which of these methods has the better performance (or is there another, better method?).
    Method 1 (the child collections are loaded later, using lazy loading):
    Dim lista As List(Of MyObj) = (From t In context.MyObjs Where (..condition..) Select t).ToList()
    Method 2:
    Dim lista As List(Of MyObj) = (From t In context.MyObjs Where (..condition..) Select t) _
        .Include(Function(t2) t2.Childs1) _
        .Include(Function(t2) t2.Childs2) _
        .Include(Function(t2) t2.Childs2.Child22) _
        .Include(Function(t2) t2.Childs1.Childs11) _
        .Include(Function(t2) t2.Childs1.Childs12) _
        .ToList()
    Method 3:
    Dim lst = (From t2 In context.MyObjs Where (..condition..) Select New With { _
        .Parent = t2, _
        .ch1 = t2.Childs1, _
        .ch2 = t2.Childs2, _
        .ch21 = t2.Childs2.Child21, _
        .ch11 = t2.Childs1.Childs11, _
        .ch12 = t2.Childs1.Childs12 _
    }).ToList()
    Dim lista = lst.Select(Function(t2) t2.Parent)
    I noticed that the first method causes the report to open very slowly. Also, I read somewhere that Include() causes parent items to be repeated?
    But anyway, I would like a professional opinion on the three methods in general.
    Thank you !

    Hello,
    As far as I know, the Entity Framework offers two ways to load related data after the fact. The first is called lazy loading and, with the appropriate settings, it happens automatically. In your case, your first method uses lazy loading, while the second and third are actually the same: both use eager loading. (In VB, if you log the generated SQL, e.g. with DbContext.Database.Log = Sub(val) Diagnostics.Trace.WriteLine(val), you can see that the second and third queries both generate JOIN syntax.) Since you mention that lazy loading performs poorly, you could use either the second or the third one.
    >>Also I read somewhere that Include() cause repeat of parent items?
    For this, I am not sure if the worry is that it would first use lazy loading and then eager loading; in my test, I do not see this behavior, and the Entity Framework seems to be smart enough to use one loading mode at a time. You could also disable lazy loading when using eager loading:
    context.ContextOptions.LazyLoadingEnabled = False
    Regards.

  • Which clause will yield better performance?

    I've always wondered, from a performance standpoint, whether it is better to use a WITH clause within a cursor or a subquery. So if, say, I could either place my subquery in the FROM clause of my cursor (which would be the case in this example) or use a WITH clause, which would yield better performance? (I'm using Oracle 11g.)

    Check this link.
    http://jonathanlewis.wordpress.com/2010/09/13/subquery-factoring-4/
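    For reference, the two forms being compared look like this; a minimal sketch with invented table names, since the original cursor query was not posted:

    WITH dept_totals AS (
      SELECT department_id, SUM(salary) AS total_sal
      FROM employees
      GROUP BY department_id
    )
    SELECT d.department_name, t.total_sal
    FROM departments d
    JOIN dept_totals t ON t.department_id = d.department_id;

    -- the same query with the subquery placed inline in the FROM clause
    SELECT d.department_name, t.total_sal
    FROM departments d
    JOIN (SELECT department_id, SUM(salary) AS total_sal
          FROM employees
          GROUP BY department_id) t
      ON t.department_id = d.department_id;

    As the linked article discusses, Oracle may either materialize the factored subquery or merge it back inline, so the two forms can end up with identical plans; the only reliable answer is to compare execution plans for your actual query.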
    Regards
    Raj

  • Which provides better performance?

    I have an ATI HD3200 video chip built into my motherboard. I also have an old Nvidia 6600 GT 128 MB video card. Which will give me better performance?
    I have the desktop effects enabled in KDE 4.2 and I also like to play the occasional 3D game.

    For desktop use with compositing I'd agree with Draje. For games, you can easily enough benchmark each card and see which is best. If you go this route I strongly recommend using a lighter-weight WM just when you play games. I know my own computer went up 40 fps or so when playing Nexuiz under Fluxbox as opposed to KDE 3.5.x.

  • Difference between temp table and table variable, and which one is better performance-wise?

    Hello,
    Could anyone explain what the difference is between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
    Which one is recommended for better performance?
    Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
    In my case, 1-2 days of transactional data comes to more than 3-4 million rows. I tried using both a # temp table and a table variable, and found the table variable faster.
    Does a table variable use memory or disk space?
    Thanks Shiven:) If Answer is Helpful, Please Vote

    Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
    Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
    Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records, table variables are well suited.
    On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
    But it also depends on the specific scenario you are dealing with; can you share yours?
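    As a minimal T-SQL sketch of the two options described above (the table and column names are made up for illustration):

    -- Temp table: explicit indexes are allowed, which helps with millions of rows
    CREATE TABLE #emp (emp_id INT, score INT);
    CREATE NONCLUSTERED INDEX ix_emp_score ON #emp (score);

    -- Table variable: no explicit CREATE INDEX, but declaring a PRIMARY KEY
    -- implicitly creates a clustered index
    DECLARE @emp TABLE (emp_id INT PRIMARY KEY, score INT);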
    ~manoj | email: http://scr.im/m22g
    http://sqlwithmanoj.wordpress.com
    MCCA 2011 | My FB Page

  • Which gives better performance in WebI: display attributes or navigational attributes?

    Hello all,
    We are using a BEx query as the datasource for our universes, and the end user is using Web Intelligence as the reporting tool (Rich Client and InfoView). We have Employee as one of the InfoObjects in the cube.
    Now, Employee has a lot of attributes which the user wants to use for reporting (the delivered Employee InfoObject has quite a few attributes). We are making some of them navigational attributes, like Org Unit, since they are time dependent and the end user will need to enter a key date to bring in the employees from the right org unit.
    We have enhanced the Employee attributes to hold all the address information of the employee (Z fields), and we have made those navigational attributes in RSD1.
    So my question is: should we make the address Z fields navigational attributes in the cube as well and use those objects in WebI, or can we use the objects in WebI which fall under Employee as details (green icons) rather than separate objects?
    Please let me know which will give better performance and what the best practice is.
    Thank you in advance; I appreciate everyone's help.
    Edited by: Cathy on Jun 16, 2011 7:35 AM

    Hi,
    BEx Query Design Recommendations:
    "Reduce usage of Navigational Attributes as much as possible. Also, if simply displaying a Characteristic's Attribute, DO NOT use the Navigational Attribute – rather, utilize the Characteristic Attribute for display in the report. This avoids unneeded JOINs and also reduces the total number of rows transferred to WebI."
    Source : SAP Document
    Thanks,
    Amit

  • Which of the queries below gives better performance?

    Hi All,
    Which of the two queries below gives better performance?
    The requirement is: if all 3 score columns are null then I need to assign a negative value (-9999), else some positive value (2).
    1)
    select case when count(CUST_score1) + count(CUST_score2) + count(CUST_score3) = 0 then -111111
           else 11 end
    from customer
    where subscriber_id = 1050
    and cust_system_code = '1882484'
    2)
    select case when CUST_score1 is null and CUST_score2 is null and CUST_score3 is null then '-9999'
           else '11' end
    from customer
    where subscriber_id = 1050
    and cust_system_code = '1882484'
    Please help; we have a lot of data in the customer table, so I need to confirm which is better.
    Regards,
    Chanda

    The two statements aren't equivalent. If you know that your WHERE condition restricts the result to a single row, then there is no point in doing a COUNT, as that introduces an additional aggregate function that isn't required for a single row. If you are dealing with multiple rows from the WHERE condition, then the second query will return multiple rows whereas the first query returns one row, so they don't do the same thing anyway.
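    Assuming the WHERE clause really does match exactly one row, a per-row CASE with no aggregate states the poster's requirement (-9999 if all three scores are null, else a positive value) directly; a minimal sketch:

    select case
             when CUST_score1 is null
              and CUST_score2 is null
              and CUST_score3 is null then -9999
             else 2
           end as score_flag
    from customer
    where subscriber_id = 1050
    and cust_system_code = '1882484';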

  • Agents: Method or Rule - WF more robust vs. better performance

    Hi all,
    we are in ECC 6.0 and building several workflows to cater for HR processes. These workflows will be implemented globally.
    Earlier this year, this thread talked a bit about this and I am using some of the statements from it in this post:
    Responsable agents: What's better? Role or expression (variable)
    We are writing a function module to Find Manager. What I am trying to determine is the best way to use this function module. I can either create a method to call it, or I can create a rule (called 'role' up to 4.6) to call it or I can create a virtual attribute to call it.
    This function module will be called a lot as most of the workflows will involve the employee's Manager.
    If implemented as a method, an RFC is used and I will need 2 steps in the WF - but I will be able to 'trap' any errors returned by the function module, e.g. manager not found, and use the returned exceptions within the workflow. The method can be implemented in a generic WF_UTILITY class/BOR, it doesn't need to be linked to a particular class/BOR.
    If implemented as a rule, it is 1 step instead of 2 - less logs, better performance. But if the rule fails, the workflow goes into error. I do not think there is a way to avoid the workflow going into error.
    I might be able to create a virtual attribute for it, but one of the parameters for the function module is the workflow that is calling it & it will also mean that I would have to make sure that every workflow has an instance of the object that I implement the virtual attribute.
    Is it worthwhile to 'trap' the errors and deal with them within the workflow? Or is it better to let the workflow go into error?
    Please let me know your thoughts on this one.
    Much thanks and regards,
    Cristiana

    I agree with Raja that you should choose the approach with rules. In your version you can also use tools to re-evaluate rules for active workflows to redetermine the agents, an option you lose if you implement it as a virtual attribute.
    Let the rule fail (flag HRS1203-ENFORCE set, the checkbox at the bottom of the rule definition screen) if no agent is found. Don't hardcode sending it to anyone if no agent is found; that just gives you less flexibility. Whether the workflow administrator receives a work item in the inbox or sees it in the administrator transactions shouldn't make much difference to an administrator.
    If you want to avoid the workflow going into error (sending it to an administrator is not better than letting it go into error, it is just an error handling strategy) you must as in all programming have defined rules for handling the known problems. This could e.g. be a table which specifies who will receive the workflow (with as many parameters as you like for finding the most relevant person) if the proper agent can not be found. I have implemented solutions along those lines, but it always boils down to finding someone who will accept having responsibility for handling errors.

  • I need a clarification: Can I use EJBs instead of helper classes for better performance and less network traffic?

    My application was designed based on the MVC architecture, but I made some changes to it based on my requirements. A servlet invokes helper classes, and the helper classes use EJBs to communicate with the database. JSPs also use EJBs to fetch the results.
    I have two EJBs (stateless), one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller, and all database transactions are done through the EJBs only. The helper classes contain the business logic. Based on the request, the relevant helper class is invoked by the servlet, and all database transactions are done through the EJBs. Session scope is 'page' only.
    Now I am planning to use EJBs (for the business logic) instead of the helper classes. But before doing that I need some clarification regarding network traffic and better usage of container resources.
    Please suggest which method (helper classes or EJBs) is preferable
    1) to get better performance,
    2) for less network traffic, and
    3) for better container resource utilization.
    I thought that if I use EJBs, the network traffic would increase, because every call to an EJB may be a remote call.
    Please give a detailed explanation.
    thank you,
    sudheer

    >>Please suggest which method (helper classes or EJBs) is preferable: 1) to get better performance
    EJBs have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB, often considerably. If you plan on making your 70 helper classes EJBs, you should expect to see a dramatic decrease in maximum throughput.
    >>2) for less network traffic
    There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJBs and JSPs are co-located, there won't be any other additional overhead there either. (You are co-locating your JSPs and EJBs, aren't you?)
    >>3) for better container resource utilization
    Again, the EJB version will consume a lot more container resources.

  • I need better performance from my computer. Could someone point me in the right direction?

    I do a lot of video editing for work. I am currently using the Creative Cloud, and the programs I use most frequently are Premiere Pro CS6, Photoshop CS6, and Encore. My issue is that when I am rendering video in Premiere Pro, and most importantly, transcoding in Encore for BluRay discs, I am getting severe lag from my computer. It basically uses the majority of my computer's resources and doesn't allow me to do much else. This means, that I can't do other work while stuff is rendering or transcoding. I had this computer built specifically for video editing and need to know which direction to go in for an upgrade to get some better performance, and allow me to do other work.
    For the record, I do have MPE GPU Acceleration turned ON, and I have 12 GB of RAM allotted to Adobe in Premiere Pro's settings, with 4 GB left for "other".
    Here is my computer:
    - Dell Precision T7600
    - Windows 7 Professional, 64-bit
    - DUAL Intel Xeon E5-2620 2.0 GHz 6-core processors
    - 16GBs of RAM
    - 256GB SSD as my primary drive. This is where the majority of my work is performed.
    - Three 2TB secondary drives in a RAID5 configuration. This is solely for backing up data after I have worked on it. I don't really use this to work off of.
    - nVidia Quadro 4000 2GB video card
    When I am rendering or transcoding, my processor(s) performance fluctuates between 50%-70%, with all 12 cores active and being used. My physical memory is basically ALL used up while this is happening.
    Here is where I am at on the issue. I put in a request for more RAM (32GBs), this way I can allot around 25GBs of RAM to the Adobe suite, leaving more than enough to do other things. I was told that this was not the right direction to go in. I was told that since my CPUs are working around 50-70%, it means that my video card isn't pulling enough weight. I was told that the first step in upgrading this machine that we should take, is to replace my 2GB video card with a 4GB video card, and that will fix these performance issues that I am having, not RAM.
    This is the first machine that has been built over here for this purpose, so it is a learning process for us. I was hoping someone here could give a little insight to my situation.
    Thanks for any help.

    You have a couple of issues with this system:
    Slow E5-2620's. You would be much better off with E5-2687W's
    Limited memory. 32 GB is around bare minimum for a dual processor system.
    Outdated Quadro 4000 card, which is very slow in comparison to newer cards and is generally not used when transcoding.
    Far insufficient disk setup. You need way more disks.
    A software raid5 carries a lot of overhead.
    The crippled BIOS of Dell does not allow overclocking.
    The SSD may suffer from severe 'steady state' performance degradation, reducing performance even more.
    You would not be the first to leave a Dell in the configuration it came in. If that is the case, you need a lot of tuning to get it to run decently.
    Second thing to consider is what source material are you transcoding to what destination format? If you start with AVCHD material and your destination is MPEG2-DVD, the internal workings of PR may look like this:
    Convert AVCHD material to an internal intermediate, which is solely CPU bound. No GPU involvement.
    Rescale the internal intermediate to DVD dimensions, which is MPE accelerated, so heavy GPU involvement.
    Adjust the frame rate from 29.97 to 23.976, which again is MPE accelerated, so GPU bound.
    Recode the rescaled and frame-blended internal intermediate to MPEG2-DVD codec, which is again solely CPU bound.
    Apply effects to the MPEG2-DVD encoded material, which can be CPU bound for non-accelerated effects and GPU bound for accelerated effects.
    Write the end result to disk, which is disk performance related.
    If you export AVCHD to H.264-BR the GPU is out of the game altogether, since all transcoding is purely CPU based, assuming there is no frame blending going on. Then all the limitations of the Dell show up, as you noticed.

  • Is there a difference in Premiere and After Effects performance in terms of Intel or AMD?

    Is there a difference in Premiere and After Effects performance in terms of Intel or AMD? Forget the speed issue; assume that the processors are of comparable speed, and also assume that the system is built beyond the recommended requirements. When it comes to reliability and performance of either processor working with and managing data in CS5, is there a difference? I am looking to build a computer with multiple CPUs, so the i7 is out as well (unless you can convince me that having only one CPU is better than building with multiple). Thanks for reading, and I'm looking forward to any help you may give me! Bow.

    See Harm, this is why I am a BIG fan of your posts!!! Also, thank you John for responding as well. I have been wrestling with purchasing a new computer for months now, looking for a CS5 CPU and trying to find the most economical purchase. The last computer I bought was specifically for CS2, and I spent over $8,500 on it in 2006: a dual Xeon 2.80 with Hyper-Threading. Back then I thought I had a great machine, but now, looking at what I paid for, I feel much wiser.
    I have seen the benchmark test and studied it closely, but until this post, I didn't know how the results translated into real-world results. I figured out pretty quickly that AMD is nowhere near Intel currently.
    I am looking to edit primarily with AVCHD and will be using After Effects extensively.  So of course, I want processor speed and plenty of RAM.  But it gets expensive.  Sandy Bridge looks like an option but there is a RAM limitation and the recent problems with it.  i7 processors look like a good option but they are also 24 gig limitation.  Of course dual CPU Xeon gives me unlimited RAM practically, BUT i have a limited budget.
    So in looking at the Benchmark Test, (which I love, but can't equate to real world applications - i.e. Time Savings between results - because i don't know what the length of the footage is) it has been hard to gauge a cost/gain benefit when choosing my next machine.
    I noticed your results are 65.0 to ADK's 35.0 under the h.264 CPU performance.  I guess I am asking what is that in a real world time crunch difference?
    P.S. I do realize that the ADK results aren't average. If you wish, you may reference the #1 system with the top average of 45.0 under the h.264 CPU performance to give me an idea for comparison. Also, I noticed your machine is overclocked. Does overclocking make CS5 any less stable?
    Thanks for all you do Harm, I appreciate your dedication to us Adobe followers.  You to John!
    (sorry it has taken some time to respond)
    Also, I noticed you are using Areca ARC-1680iX-12.  Nice!

  • Which one is the better cover for protection on the iPad mini with Retina

    I bought an iPad mini with Retina; my 5-year-old will be using it too. Which one is the better cover for protection?

    You'll have to think in terms of a shell rather than just a cover. The main problems are dropping it and liquid getting inside. Plus, a shell can add a bit of extra heft and "grippability", aiding little hands in holding on to the slippery aluminum back. You can add some sort of screen protector to prevent scratches, though that will reduce sensitivity to an extent; however, a direct blow will inevitably shatter the Gorilla Glass.
    That being said, I am very happy with the Lifeproof Nüüd case that has protected mine from the get-go. I decided to forgo screen protectors in favor of better touch performance; almost 2 years down the road the screen is still scratch-free.
    iPad 4th Gen, iOS 7.1.2, 64GB, White

  • Designing Web applications for better performance

    Hi,
    Let me explain the scenario.
    We are following the MVC architecture: our JSP pages interact with a controller servlet, which decides what action is to be performed. It uses the request dispatcher method to forward the request object to JSP pages.
    Most of our JSP pages are entry screens. We are not using JavaBeans, fearing the complexity of master-detail forms (we don't know how to use bean technology in the case of detail entry screens).
    A form can have at most 50 rows, with each row having 10 columns. When a particular page needs to be displayed for editing (possibly with already existing data), the servlet populates objects and the JSP accesses data from these objects. Once the data is accessed, these objects are removed from the session.
    Since we are removing the objects from the session (not keeping them around for a certain amount of time either), if the servlet or another JSP requires some information, we are forced to use hidden variables. In the current scenario, we have at least 40-50 hidden variables passing from a JSP page to the servlet.
    We decided not to use session objects, fearing that they would cause a lot of problems for the server. But now this is making our client "fat".
    Even though our application is big, we are not using any database-side coding like triggers, stored procedures, etc. Does this create performance problems?
    When our application runs, we often find that the request does not get processed correctly in JRun server, or we get a blank window, or the JRun default server hangs.
    We are using JRun 3.0 and Solaris_JDK_1.2.2_05.
    Please see my questions below:
    1. Is this the right way to do things (hidden variables)?
    2. Is creating session variables going to hamper the performance of the system?

    Dear sandhyavk,
    From my experience, you would be better off putting your hidden values into a JavaBean, simply a data object which is saved in the session; after you have finished with it, you simply invalidate the session. You can find more details at the following link from javaworld.com; I have adopted and amended this approach for my eIPO system, and it is flexible and easy to maintain and understand.
    * http://www.javaworld.com/javaworld/jw-01-2001/jw-0119-jspframe.html
    I do hope that it can give you some ideas.
    Best regards,
    Anthony Lai

  • For my game's better performance, should I use Starling?

    I heard that using Starling gives better performance than just using Flash Pro natively (GPU mode?) when playing Flash games on smartphones.
    However, according to this article, there was not much difference between GPU mode and Starling, although it was recorded in late 2012:
    http://esdot.ca/site/2012/runnermark-scores-july-18-2012
    My game is a tile-matching game that uses vectors and many different tile pictures; up to 64 tiles can be present at the same time.
    I don't know how much more performance Starling would provide, but even if it would give more, I don't know whether it's worth the time and effort to learn Starling and change my current code over to it.

    That is a test between multiple frameworks that all use Stage3D, which is basically the means to get any hardware benefit from the GPU.
    These frameworks do nothing more than help streamline your game development and do some optimizing (object pooling etc.) under the hood.
    The basic concept is to have sprite sheets (for 2D), also called "texture atlases", instead of the "old" method of having separate MovieClips/Sprites.
    If you don't use this method in your game, then you will indeed see no benefit from Starling or any other Stage3D framework.
    So if your game is coded "like in the old days", you would have to rewrite some parts of it and convert all MovieClips to sprite sheets to benefit from the GPU.
    The real performance comparison reads like this: CopyPixels (the pre-Stage3D method) had a performance gain of 500%, and sprite sheets (Stage3D) 4000%, compared to the "old way".
    It all depends on whether you're unhappy with your game's current performance on current mobile devices or not.
