Which method has better performance?

Hello!
I'm using Entity Framework, and I have several cases where I need to run a query that returns some parent items and then display those parents and their related children in one report.
I want to know which of these methods has the best performance (or is there another, better method?):
Method 1: (the child collections are loaded later, using lazy loading)
Dim lista As List(Of MyObj) = (From t In context.MyObjs Where (..condition..) Select t).ToList()
Method 2: (eager loading with Include)
Dim lista As List(Of MyObj) =
    (From t In context.MyObjs Where (..condition..) Select t).
        Include(Function(t2) t2.Childs1).
        Include(Function(t2) t2.Childs2).
        Include(Function(t2) t2.Childs2.Select(Function(c2) c2.Child22)).
        Include(Function(t2) t2.Childs1.Select(Function(c1) c1.Childs11)).
        Include(Function(t2) t2.Childs1.Select(Function(c1) c1.Childs12)).
        ToList()
' (the Select(...) form is for nested collection navigations; a plain dotted path works when the navigation is a single reference)
Method 3: (eager loading through an anonymous-type projection)
Dim lst = (From t2 In context.MyObjs
           Where (..condition..)
           Select New With {
               .Parent = t2,
               .Ch1 = t2.Childs1,
               .Ch2 = t2.Childs2,
               .Ch21 = t2.Childs2.Child21,
               .Ch11 = t2.Childs1.Childs11,
               .Ch12 = t2.Childs1.Childs12
           }).ToList()
Dim lista As IEnumerable(Of MyObj) = lst.Select(Function(x) x.Parent)
I noticed that the first method causes the report to open very slowly. Also, I read somewhere that Include() causes the parent rows to be repeated?
In any case, I would like a professional opinion on the three methods in general.
Thank you!

Hello,
As far as I know, Entity Framework offers two ways to load related data after the fact. The first is called lazy loading and, with the appropriate settings, it happens automatically. In your case, your first method uses lazy loading, while the second and third are actually the same thing: both are eager loading. (In VB you can set "DbContext.Database.Log = Sub(val) Diagnostics.Trace.WriteLine(val)" to see the SQL statements that are actually generated; you will see that the second and third queries both produce JOIN syntax.) Since you mention that lazy loading performs poorly here, you could use either the second or the third one.
>> Also I read somewhere that Include() causes the parent rows to be repeated?
If the worry is that it would first use lazy loading and then eager loading on top of it, I did not see that behavior in my tests; Entity Framework seems to be smart enough to use only one mode to load the data at a time. (At the SQL level an Include does repeat the parent's columns on every joined child row, but EF's identity resolution still materializes each parent only once.) You can also disable lazy loading while using eager loading:
context.Configuration.LazyLoadingEnabled = False ' with a DbContext (use context.ContextOptions.LazyLoadingEnabled on an ObjectContext)
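For reference, here is a minimal sketch of the eager-loading approach with SQL logging turned on. It assumes an EF6 DbContext named MyContext that exposes the MyObjs set and the Childs1/Childs2 collections from your question; the filter is only indicated by a comment because the actual condition was not posted:

Imports System.Collections.Generic
Imports System.Data.Entity
Imports System.Linq

Module EagerLoadingSketch
    Sub LoadReportData()
        Using context As New MyContext()
            ' Log the generated SQL so the JOINs produced by Include are visible.
            context.Database.Log = Sub(sql) System.Diagnostics.Trace.WriteLine(sql)

            ' Make sure navigation properties are only populated by Include.
            context.Configuration.LazyLoadingEnabled = False

            Dim parents As List(Of MyObj) =
                context.MyObjs.
                    Include(Function(p) p.Childs1).
                    Include(Function(p) p.Childs2).
                    ToList()   ' the Where filter from your question would go just before ToList()

            ' Bind the report to "parents"; the children are already loaded at this point.
        End Using
    End Sub
End Module

With lazy loading disabled, any navigation property you do not Include simply stays unloaded instead of triggering extra queries while the report renders, which is usually what you want for a report.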
Regards.

Similar Messages

  • Which one will get better performance when traversing an ArrayList, iterators or get(i)?

    hi, everyone,
    Which one will get better performance when traversing an ArrayList, iterators, or index(get(i))?
    Any reply would be valuable.
    Thank you in advance.

    Use the iterator, or a foreach loop, which is just syntactic sugar over an iterator. The cases where there is a noticeable difference will be extremely rare. You would only use get() if you actually measured a bottleneck, changed it, re-tested, and found significant improvement. An iterator will give O(n) time for iterating over any collection. Using get() only works on Lists, and for a LinkedList, gives O(n^2).
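    For what it's worth, the same trade-off can be sketched in VB.NET (the language of the main question above): List(Of T) is array-backed like ArrayList, while LinkedList(Of T) deliberately has no indexer, so per-index access has to go through Enumerable.ElementAt and degrades to O(n^2). A minimal, illustrative sketch:

    Imports System.Collections.Generic
    Imports System.Linq

    Module TraversalSketch
        Sub Main()
            Dim arrayBacked As New List(Of Integer)(Enumerable.Range(0, 100000))
            Dim linked As New LinkedList(Of Integer)(arrayBacked)

            ' O(n) overall: one enumerator walk, works the same way for any collection.
            Dim sum As Long = 0
            For Each value In linked
                sum += value
            Next

            ' O(n^2) overall: LinkedList(Of T) has no indexer, so ElementAt(i)
            ' walks the nodes from the head on every call.
            Dim slowSum As Long = 0
            For i As Integer = 0 To linked.Count - 1
                slowSum += linked.ElementAt(i)
            Next
        End Sub
    End Module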

  • Which approach gives better performance in terms of time?

    For a large amount of data from more than two different tables that are related, which of the following two approaches gives better performance in Oracle in terms of time (i.e. which takes less time)?
    1. A single complex query
    2. A bunch of simple queries

    Because there is a relationship between the tables, if you adopt the simple-queries approach you will still have to JOIN the results in some way, probably via a FOR LOOP in PL/SQL.
    In my experience, a single complex SQL statement is the best way to go, join in the database and return the set of data required.
    SQL rules!
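    The same trade-off also applies to the Entity Framework scenario in the main question above: one joined query (Include) versus a bunch of simple per-parent queries (the classic N+1 pattern). A rough VB.NET sketch, where MyContext, Child1 and the MyObjId/Id key columns are assumed names rather than anything from the original post:

    Imports System.Data.Entity
    Imports System.Linq

    Module SingleQueryVersusManySketch
        Sub Compare()
            Using context As New MyContext()
                ' Single complex query: the join is done in the database, one round trip.
                Dim parentsWithChildren = context.MyObjs.
                    Include(Function(p) p.Childs1).
                    ToList()

                ' Bunch of simple queries: one extra query per parent.
                Dim parents = context.MyObjs.ToList()
                For Each parent In parents
                    Dim children = context.Set(Of Child1)().
                        Where(Function(c) c.MyObjId = parent.Id).
                        ToList()
                Next
            End Using
        End Sub
    End Module

    The second form issues one round trip per parent, which is exactly why the single joined query normally wins.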

  • Which clause will yield better performance?

    I've always wondered, from a performance standpoint, whether it is better to use a WITH clause within a cursor or a subquery. So if, say, I could either place my subquery in the FROM clause of my cursor (which would be the case in this example) or use a WITH clause, which would yield better performance? (I'm using Oracle 11g)

    Check this link.
    http://jonathanlewis.wordpress.com/2010/09/13/subquery-factoring-4/
    Regards
    Raj

  • Which structure of itab has better performance?

    Hi experts,
    I have written an ABAP report and the performance was good...
    Then I had to implement "some" more functions that were not requested at the beginning,
    and now the performance is very bad!
    In this case I think a new report is a better way than refactoring the source code...
    Now I am not sure how to organize my data in itabs.
    I need a dynamic internal table because the number of key columns is only known at runtime...
    What do you think is the better and faster way:
    Alternative A:
    Structure of itab
    -  Assuming 5 key columns ( Key01, Key02, Key03, ... )
    -  4 further columns
    Estimated lines: 6,000,000 - much smaller line size than Alternative B
    Alternative B:
    Structure of itab
    -  Assuming 5 key columns ( Key01, Key02, Key03, ... )
    -  100 further columns
    Estimated lines: 80,000 - much bigger line size than Alternative A
    I think Alternative B would be faster, but I want to know your opinion...
    Regards,
    Oliver

    First of all you should check the total size of your internal table:
    number of lines * bytes per row = ?
    And of course you should use either a sorted table or a hashed table! Standard tables should definitely be avoided here.
    Hashed tables are only useful when you always access them with the complete unique table key, no other access!
    If a hashed table can be used, then A could actually be better than B!
    Use LOOP ... ASSIGNING <field_symbol> instead of copying rows into a work area!
    Kind regards, Siegfried

  • Which provides better performance?

    I have an ATI HD3200 video chipset built into my motherboard. I also have an old Nvidia 6600 GT 128 MB video card. Which will give me better performance?
    I have desktop effects enabled in KDE 4.2 and I also like to play the occasional 3D game.

    For desktop use with compositing I'd agree with Draje. For games, you can easily enough benchmark each card and see which is best. If you go this route I strongly recommend using a lighter-weight window manager just when you play games. I know my own machine went up 40 fps or so when playing Nexuiz under Fluxbox as opposed to KDE 3.5.x.

  • Which method is better for performance?

    Using SQL statements on the front end or stored procedures on the back end, which method is better for performance?

    jetq wrote:
    >> In my view, it may have other differences. For example, using a stored procedure you don't need to recompile the script every time it is executed, and you can reuse the existing execution plan.
    What if the procedure is called for the first time after the DB starts? PL/SQL does not have EXPLAIN PLAN; only SQL does.
    >> But using a SQL statement from the application layer may be different.
    Different from what, exactly?
    SQL is SQL and can only be executed by the SQL engine inside the DB.
    A SQL statement does not know or care how it got to the DB.
    The DB does not know or care where the SQL originated.

  • Agents: Method or Rule - WF more robust vs. better performance

    Hi all,
    we are in ECC 6.0 and building several workflows to cater for HR processes. These workflows will be implemented globally.
    Earlier this year, this thread talked a bit about this and I am using some of the statements from it in this post:
    Responsable agents: What's better? Role or expression (variable)
    We are writing a function module to Find Manager. What I am trying to determine is the best way to use this function module. I can either create a method to call it, or I can create a rule (called 'role' up to 4.6) to call it or I can create a virtual attribute to call it.
    This function module will be called a lot as most of the workflows will involve the employee's Manager.
    If implemented as a method, an RFC is used and I will need 2 steps in the WF - but I will be able to 'trap' any errors returned by the function module, e.g. manager not found, and use the returned exceptions within the workflow. The method can be implemented in a generic WF_UTILITY class/BOR, it doesn't need to be linked to a particular class/BOR.
    If implemented as a rule, it is 1 step instead of 2 - fewer logs, better performance. But if the rule fails, the workflow goes into error, and I do not think there is a way to avoid that.
    I might be able to create a virtual attribute for it, but one of the parameters of the function module is the workflow that is calling it, and it would also mean making sure that every workflow has an instance of the object on which I implement the virtual attribute.
    Is it worthwhile to 'trap' the errors and deal with them within the workflow? Or is it better to let the workflow go into error?
    Please let me know your thoughts on this one.
    Much thanks and regards,
    Cristiana

    I agree with Raja that you should choose the approach with rules. In your version you can also use tools to re-evaluate rules for active workflows to redetermine the agents, an option you lose if you implement it as a virtual attribute.
    Let the rule fail (flag HRS1203-ENFORCE set, the checkbox at the bottom of the rule definition screen) if no agent is found. Don't hardcode sending it to anyone if no agent is found; that just gives you less flexibility. Whether the workflow administrator receives a work item in the inbox or sees it in the administrator transactions shouldn't make much difference to an administrator.
    If you want to avoid the workflow going into error (sending it to an administrator is not better than letting it go into error, it is just an error-handling strategy), you must, as in all programming, have defined rules for handling the known problems. This could, for example, be a table that specifies who will receive the workflow (with as many parameters as you like for finding the most relevant person) if the proper agent cannot be found. I have implemented solutions along those lines, but it always boils down to finding someone who will accept responsibility for handling errors.

  • Which gives better performance in WebI: display attributes or navigational attributes?

    Hello all,
    We are using a BEx query as the data source for our universes and the end users are using Web Intelligence as the reporting tool (Rich Client and InfoView). We have employee as one of the InfoObjects in the cube.
    Now, employee has a lot of attributes that the users want to use for reporting (the delivered employee InfoObject has quite a few attributes). We are making some of them navigational attributes, like Org Unit, since they are time dependent and the end user will need to enter a key date to bring in the employees from the right org unit.
    We have enhanced the employee attributes with all of the employee's address information (Z fields) and have made those navigational attributes in RSD1.
    So my question is: should we make the address Z fields navigational attributes in the cube as well and use those objects in WebI, or can we use the objects in WebI that fall under employee as details (green icons) rather than as separate objects?
    Please let me know what will give better performance and what the best practice is.
    Thank you in advance; I appreciate everyone's help.

    Hi,
    BEx Query Design Recommendations:
    "Reduce Usage of Navigational Attributes as much as possible Also, if simply displaying a Characteristicu2019s Attribute, DO NOT use the Navigational Attribute u2013 rather utilize the Characteristic Attribute for display in the report  This avoids unneeded JOINS, and also reduces total number of rows transferred to WebI"
    Source : SAP Document
    Thanks,
    Amit

  • Which is the best method to improve performance?

    Hi,
    I have one scenario, and to meet the requirement I have two methods.
    My requirement is to get data from multiple tables, so I am developing a query based on joins, and this query may return approx 80,000 rows.
    I am not sure which of the two methods is better for performance.
    Method #1
    for i in <query>
    loop
      ...
    end loop;
    Here we retrieve the rows one by one (80,000 rows) from the database and apply our logic.
    Method #2
    Using BULK COLLECT we fetch all the rows (80,000 rows) into a PL/SQL table at once, and then the loop runs over the PL/SQL table:
    for i in 1..plsqltable.count
    loop
      ...
    end loop;
    Here we retrieve the rows one by one (80,000 rows) from the PL/SQL table instead of going to the database.
    Can anybody please suggest which is best for performance?

    Using BULK COLLECT will give you better performance with a large data set because there is reduced I/O with this method versus the traditional CURSOR FOR LOOP. Database Admins (DBAs) typically don't like BULK COLLECT because developers tend to forget to limit the number of rows returned by the BULK COLLECT operation (with the LIMIT clause), so it can use up too much memory. Take a look at DEVELOPER: PL/SQL Practices On BULK COLLECT by Steven Feuerstein for some great tips on using BULK COLLECT. Another good Feuerstein article is: Bulk Processing with BULK COLLECT and FORALL.
    As others have mentioned, you should have posted your question in the PL/SQL forum. ;)
    Hope this helps,
    Craig...

  • Which phone has the best performance?

    Which phone has the best performance (CPU speed, RAM)? (Nokia 5800 XpressMusic or Nokia N97)

    If you want a Nokia touchscreen, as things stand at the moment you are looking at the wrong two phones! Both the X6 and the N97 mini are better alternatives, and I would advise a little Google research to get independent reviews, as you will probably get 'slightly' prejudiced advice here due to the nature of the forum.

  • Difference between a temp table and a table variable, and which one is better performance-wise?

    Hello,
    Could anyone explain the difference between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
    Which one is recommended for better performance?
    Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
    In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried using both a # temp table and a table variable and found the table variable to be faster.
    Does a table variable use memory or disk space?
    Thanks, Shiven

    Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
    Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
    Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records then table variables are well suited.
    On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
    But it also depends on the specific scenario you are dealing with - can you share it?
    ~manoj

  • I'm trying to purchase an app which only costs 0.69p, and it's saying my payment method has been declined. How can I resolve this?

    I have money in my bank, and I'm trying to purchase WhatsApp which is only 0.69p, it then said I need to verify my payment method by entering my bank details... So I did (correctly). It then said my payment method has been declined and to use a different payment method. I don't have another payment method and really need this app. How can I resolve this?

    It's now not letting me download free apps either.

  • Which method is better

    Hi,
    When uploading data into SAP, which method should be used: the session method or the call transaction method? How can I analyse which method is suitable? Please explain briefly how to analyse the data.

    Hi,
    It is generally better to use the session method than the other methods for large uploads, because errors are logged and the session can be analysed and reprocessed in SM35.

  • Which of the queries below gives better performance?

    Hi All,
    Which of the two queries below gives better performance?
    The requirement is: if all 3 score columns are null then I need to assign a negative value (-9999), else some positive value (2).
    1)
    select case when count(CUST_score1) + count(CUST_score2) + count(CUST_score3) = 0 then -111111
           else 11 end
    from customer
    where subscriber_id = 1050
    and cust_system_code = '1882484'
    2)
    select case when CUST_score1 is null and CUST_score2 is null and CUST_score3 is null then '-9999'
           else '11' end
    from customer
    where subscriber_id = 1050
    and cust_system_code = '1882484'
    Please help, because we have a lot of data in the customer table, so I need to confirm which is better.
    Regards,
    Chanda

    The two statements aren't equivalent. If you know that your WHERE condition restricts the result to a single row, then there is no point in doing a COUNT, as that introduces an additional aggregate function that isn't required for a single row. If you are dealing with multiple rows from the WHERE condition, then the second query will return multiple rows whereas the first query returns one row, so they don't do the same thing anyway.
