Question related to query performance

Hi Experts,
Could anyone please explain: when BWA indexes or aggregates are created, query performance should improve because the query hits the aggregate tables, which we can see in the statistics table RSDDSTAT_DM.
But the problem is that even though aggregates have been created, the query sometimes hits the 'F' table, and in those cases the read time is very high.
In which cases will the query hit the 'F' table even though aggregates or BWA indexes are present?
Please explain.

A query will read data from BWA or aggregates only if the data required by the query definition is present there. If it doesn't find the data in BWA or the aggregates, it will read from the InfoCube instead.
Example: let's say you have an aggregate that holds data only for company code 1234.
However, a user runs the query asking for the details of company code 4567. In this case, the query will read the data from the F table, even though the aggregate is present.
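As a rough sketch of that fallback expressed in SQL (the table names below are simplified placeholders, not the real BW-generated names):

-- AGGR_COMPCODE: placeholder for an aggregate table built to hold
-- pre-summarized data for company code 1234 only.
-- F_FACT: placeholder for the InfoCube's full 'F' fact table.
-- A query inside the aggregate's data can be answered from the small table:
SELECT comp_code, SUM(amount)
  FROM AGGR_COMPCODE
 WHERE comp_code = '1234'
 GROUP BY comp_code;
-- A query outside it (company code 4567) is not covered by the aggregate,
-- so the OLAP processor falls back to the full F table; hence the high read time:
SELECT comp_code, SUM(amount)
  FROM F_FACT
 WHERE comp_code = '4567'
 GROUP BY comp_code;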
Rgds..
Shambhu

Similar Messages

  • Question related to project performance management

    Hi,
    We have implemented the Project Management and Project Accounting modules, together with the Grants module, for a semi-public-sector client. Now we are planning to enable project performance reporting.
    I assume the following:
    1. If Grants is implemented, project budgeting doesn't work.
    2. Project performance reporting is based on the Project Accounting, Project Budgeting, and Project Management modules.
    If any of you have handled a similar requirement, please let us know your approach.
    Thanks in advance
    Praveen Javvaji

    Duplicate post -- question related to project performance management

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes, both set up as real-time cubes (only the plan cube is planned against, not the actual cube), and both cubes are compressed once a day.
    We are planning to implement the BI Accelerator (BIA) and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as a real-time cube (whether or not data is being loaded/planned into that cube during the day)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From a performance point of view, standard (basic) cubes are relatively better than real-time cubes.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • A question about tempDB and performance

    Hello
    Can someone please help me with the following question.
    I am not a DBA, but I have a question related to database performance, as I read a related post and want to understand the relationships involved.
    Say I have a physical server with an 8-core CPU (possibly one CPU with 8 cores, or two CPUs with four cores each).
    I read that if you set your database up to have the same number of tempdb files as CPU cores (in this case 8), you will achieve better performance, as each tempdb file will have its own CPU thread and each thread will run on one core, therefore spreading the load across all cores at the same time. Is this correct? And if so, what about the MDF file: should this also be split into the same number of files as cores, or is that different, since the data is first written to tempdb before being applied to the database in the background?
    Please advise
    Thanks
    Ernest

    Adding files to tempdb should only be done if you are seeing tempdb contention. I have an environment with only two tempdb files even though there are more physical cores than that, and I don't see any contention, the reason being that tempdb is not heavily utilized.
    Read the post given by FZB, Paul's article; it says it is a myth to add one data file per core, and I agree with that: adding tempdb files should be a reactive, not a proactive, approach.
    You can monitor DDL bottlenecks in tempdb using this msdn blog.
    You can monitor I/O bottlenecks in tempdb using this msdn blog.
    Managing Tempdb and its configuration
    I am not saying it is always bad to have one tempdb data file per physical core, but don't take it as a benchmark; you must monitor tempdb performance and then take the necessary action.
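    As a sketch of that reactive approach (a common monitoring pattern; the file size and path below are placeholders, not recommendations):

    -- Look for PAGELATCH waits on pages in tempdb (database_id = 2);
    -- sustained waits on allocation pages suggest genuine tempdb contention.
    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGELATCH%'
      AND resource_description LIKE '2:%';  -- '2:' = database_id 2 (tempdb)

    -- Only if contention is confirmed, add another data file:
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\tempdb2.ndf',  -- placeholder path
              SIZE = 4096MB, FILEGROWTH = 512MB);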

  • Query performance for WEBI on top of BEx

    Hello Everyone,
    Is there a way to get the time taken by a WebI query execution on the BO side and on the BW side? Are these statistics stored somewhere?
    Also, what is the best approach for comparing performance between the original BEx query and the WebI report on top of that BEx query?
    Thanks for your help.
    Aashish

    Hi,
    I posted some links for the same question here:
    Poor query performance in WebI on top of BEx queries
    Maybe those can help you.
    Regards
    -Seb.

  • SQL query performance question

    So I had this long query that looked like this:
    SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
    FROM ideal d, ideal_term a,
         (select deal_key, deal_term_key, max(createdOn) maxdate
            from Ideal_term B
           where createdOn <= '03-OCT-12 10.03.00 AM'
           group by deal_key, deal_term_key) B
    WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
      and a.end_date >= '19-MAR-09 01.00.00 AM'
      and A.deal_key = b.deal_key
      and A.deal_term_key = b.deal_term_key
      and a.createdOn = b.maxdate
      and d.deal_key = a.deal_key
      and d.name like 'MVPP1 B'
    order by a.begin_date, a.deal_key, a.deal_term_key;
    This performed very poorly for a record in one of the tables that has 43,000+ revisions: it took about 1 minute and 40 seconds. I asked the database guy at my company for help with it, and he rewrote it like so:
    SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
    FROM ideal d
    INNER JOIN (SELECT deal_key,
    deal_term_key,
    MAX(createdOn) maxdate
    FROM Ideal_term B2
    WHERE '03-OCT-12 10.03.00 AM' >= createdOn
    GROUP BY deal_key, deal_term_key) B1
    ON d.deal_key = B1.deal_key
    INNER JOIN ideal_term a
    ON B1.deal_key = A.deal_key
    AND B1.deal_term_key = A.deal_term_key
    AND B1.maxdate = a.createdOn
    AND d.deal_key = a.deal_key + 0
    WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
    AND a.end_date >= '19-MAR-09 01.00.00 AM'
    AND d.name LIKE 'MVPP1 B'
    ORDER BY a.begin_date, a.deal_key, a.deal_term_key
    This works much better; it only takes 0.13 seconds. I've been trying to figure out why exactly his version performs so much better. His only explanation was that the "+ 0" in the join condition prevented Oracle from using an index on that column, which had produced a bad plan initially.
    I think there has to be more to it than that, though. Can someone give me a detailed explanation of why the second version of the query performs so much faster?
    Thanks.

    I used Autotrace in SQL developer. Is that sufficient? Here is the Autotrace and Explain for the slow query:
    and for the fast query:
    I said that I thought there was more to it because, when my team members and I looked at the reworked query the database guy sent us, our initial thought was that in the slow query some of the tables didn't have join conditions, so the query formed a Cartesian product, and this resulted in a huge 43,000+ row intermediate result.
    In his version all tables had joins properly defined, and in addition he had that +0, which told Oracle to ignore the index on the deal_key attribute of table ideal_term. I spoke with the database guy today and he confirmed our theory.
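    For anyone who wants to see the "+ 0" effect for themselves, here is a minimal, hypothetical sketch (the demo tables are invented for illustration; only the technique comes from the thread):

    -- A plain B-tree index on a numeric column can serve the join below.
    CREATE TABLE ideal_demo      (deal_key NUMBER PRIMARY KEY, name VARCHAR2(30));
    CREATE TABLE ideal_term_demo (deal_key NUMBER, createdOn DATE);
    CREATE INDEX ix_term_deal ON ideal_term_demo (deal_key);

    -- "a.deal_key + 0" turns the column into an expression, so the plain
    -- index on deal_key can no longer be used for this join predicate,
    -- steering the optimizer toward a different plan. Compare the plans
    -- with and without the "+ 0":
    EXPLAIN PLAN FOR
      SELECT d.name
        FROM ideal_demo d JOIN ideal_term_demo a ON d.deal_key = a.deal_key + 0;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);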

  • Query Performance Related Issue

    Hi all,
    We have a problem when we are executing the Query in BI 7.0. that is
    When we execute the query the first time, it takes 2 minutes.
    When we execute the same query a second time (without logging off the Query Designer), it takes 5 seconds.
    Please give me some input: why does it take more time the first time?
    How can we reduce the query execution time for the first run?
    We have already tried all the query performance options.
    Thanks & regards,
    Kiran Manyam.

    Hi Kiran,
    As the others pointed, when you have executed the query for the 2nd time, it might have read from the cache.
    Do the following to investigate the performance issue:
    1) Go to RSRT and run your query in "Execute in debug mode"
    2) Select "Display Statistics Data", check "Do not use cache"
    3) After running the query, press the back button. The system will show the statistics summary.
    4) Analyze which activity has the highest runtime.
    5) If the Data Manager time is >= 30% of the query time, and the ratio of records selected to records transferred is greater than about 10 (e.g. 1,000,000 records selected but only 50,000 transferred), build an aggregate to help the query runtime. This sacrifices data load time but will improve query execution time.
    To know which aggregate will help, run the query in debug mode again and choose the option "Display aggregates found". If the query did not hit an aggregate, the names of the cubes will appear. Use the data in the suggestions column (marked with "S:") as a basis for your aggregate.
    Hope this helps.

  • Question related to changing query from Production to Development

    Hello friends, I have a question.
    I designed a query and transported it to production.
    Now a lot of modifications need to be made to it.
    Can I change the query in production, or not?
    If I have to make the modifications, what should I do?
    If I do it in development again, how do I transport it back to production?
    What precautions do I need to take?
    thanks

    Hi Kartikey,
    Follow the steps below.
    1. Go to the Transport Connection in RSA1 on your development system.
    2. Click on the BEx icon to create a BEx request.
    3. Edit your query; it will be assigned to the created BEx request.
    4. Transport it to production.
    Your previous query in production will be overwritten by the new query.
    Madhu.

  • Disappointing query performance with object-relational storage

    Hello,
    after some frustrating days trying to improve query performance on an XMLType table, I'm at my wits' end. I have tried all possible combinations of indexes, added scopes, tried out-of-line and inline storage, removed the recursive type definition from the schema, and tried the examples from the forum thread Setting Attribute SQLInline to false for Out-of-Line Storage (which shows the same problems), and I still have no clue. I have prepared a stripped-down example of my schema which exhibits the same problems as the real one. I'm using 10.2.0.4.0:
    SQL> select * from v$version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Linux: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    You can find the script at http://www.grmblfrz.de/xmldbproblem.sql (I tried including it here but got an internal server error). The results are at http://www.grmblfrz.de/xmldbtest.lst . I have no idea how to improve the performance (and if query rewrite does not work even with this simple schema, how can Oracle XML DB be feasible for more complex structures?). I must have made a mistake somewhere; hopefully someone can spot it.
    Thanks in advance.
    --Swen

    Marc,
    thanks, I did not know that it is possible to use "varray ... store as table" for the reference tables. I have tried your example. I can create the nested table, the scope, and the indexes, but I get a different result: a full table scan on t_element. With the original table I get an index scan. On the original table there is a trigger (t_element$xd) which is missing on the new table. I have tried the same with an XMLType table (drop table t_element; create table t_element of xmltype ...) with the same result. My script is on [google groups|http://groups.google.com/group/oracle-xmldb-temporary-group/browse_thread/thread/f30c3cf0f3dbcafc] (internal server error while trying to include it here). Here is the plan of the query:
    select rt.object_value
    from t_element rt
    where existsnode(rt.object_value,'/mddbelement/group[attribute[@name="an27"]="99"]') = 1;
    Execution Plan
    Plan hash value: 4104484998
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 40 | 2505 (1)| 00:00:38 |
    | 1 | TABLE ACCESS BY INDEX ROWID | NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
    |* 3 | FILTER | | | | | |
    | 4 | TABLE ACCESS FULL | T_ELEMENT | 1000 | 40000 | 4 (0)| 00:00:01 |
    | 5 | NESTED LOOPS SEMI | | 1 | 88 | 5 (0)| 00:00:01 |
    | 6 | NESTED LOOPS | | 1 | 59 | 4 (0)| 00:00:01 |
    | 7 | TABLE ACCESS BY INDEX ROWID| NT_GROUP | 1 | 20 | 3 (0)| 00:00:01 |
    |* 8 | INDEX RANGE SCAN | SYS_C0082879 | 1 | | 2 (0)| 00:00:01 |
    |* 9 | TABLE ACCESS BY INDEX ROWID| T_GROUP | 1 | 39 | 1 (0)| 00:00:01 |
    |* 10 | INDEX UNIQUE SCAN | SYS_C0082878 | 1 | | 0 (0)| 00:00:01 |
    |* 11 | INDEX RANGE SCAN | SYS_IOT_TOP_184789 | 1 | 29 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("NESTED_TABLE_ID"=:B1)
    3 - filter( EXISTS (SELECT /*+ ???)
    8 - access("NESTED_TABLE_ID"=:B1)
    9 - filter("T_GROUP"."SYS_NC0001300014$" IS NOT NULL AND
    SYS_CHECKACL("ACLOID","OWNERID",xmltype('<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-insta
    nce" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-properties
    /><read-contents/></privilege>'))=1)
    10 - access("SYS_ALIAS_3"."COLUMN_VALUE"="T_GROUP"."SYS_NC_OID$")
    11 - access("NESTED_TABLE_ID"="T_GROUP"."SYS_NC0001300014$")
    filter("SYS_XDBBODY$"='99' AND "NAME"='an27')

  • Reg: Process Chain, query performance tuning steps

    Hi All,
    I came across a question like this: there is a process chain of 20 processes; 5 processes have completed when an error occurs at the 6th step, and it cannot be rectified. I should start the chain again from the 7th step. If I go to a particular step, I can run that particular step, but how can I restart the entire chain from step 7? I know that I need to use a function module, but I don't know the name of the FM. Please, somebody, help me out.
    Please also let me know the steps involved in query performance tuning and aggregate tuning.
    Thanks & Regards
    Omkar.K

    Hi,
    Process Chain
    Method 1 (when it fails in a step/request)
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    How is it possible to restart a process chain at a failed step/request?
    Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
    You need to set the failed request/step to green in the database, and you also need to raise the event that forces the process chain to run to the end from the next request/step on.
    Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
    In the opened popup click on the tab 'Chain'.
    In a parallel session, go to transaction SE16 for table rspcprocesslog and display the entries with the following selections:
    1. copy the variant from the popup to the variante of table rspcprocesslog
    2. copy the instance from the popup to the instance of table rspcprocesslog
    3. copy the start date from the popup to the batchdate of table rspcprocesslog
    Press F8 to display the entries of table rspcprocesslog.
    Now open another session and go to transaction SE37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the FM in test mode.
    Now copy the entries of table rspcprocesslog to the input parameters of the function module like described as follows:
    1. rspcprocesslog-log_id -> i_logid
    2. rspcprocesslog-type -> i_type
    3. rspcprocesslog-variante -> i_variant
    4. rspcprocesslog-instance -> i_instance
    5. enter 'G' for parameter i_state (sets the status to green).
    Now press F8 to run the fm.
    Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
    Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
    Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
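    As a sketch, the lookup described above corresponds to a selection like this on table rspcprocesslog (the field names are the ones listed in the steps above; the selection values are placeholders copied from the failed step's message popup):

    -- Find the log entry of the failed step.
    SELECT log_id, type, variante, instance, state
      FROM rspcprocesslog
     WHERE variante  = '<variant from the popup>'
       AND instance  = '<instance from the popup>'
       AND batchdate = '<start date from the popup>';
    -- The returned LOG_ID, TYPE, VARIANTE and INSTANCE are then passed to
    -- function module RSPC_PROCESS_FINISH in SE37, with I_STATE = 'G'.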
    Query performance tuning
    General tips
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid restricted and calculated key figures (RKFs and CKFs) where you can.
    3. Avoid too many characteristics in the rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table RSDDSTAT to get the statistics.
    Using the cache will decrease the runtime of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use the different views in ST03 to see the two important figures: the aggregation ratio, and the number of records transferred to the front end vs. the number of records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
    3. The +/- signs are the valuation of the aggregate design and usage. The more plus signs, the better the evaluation of the aggregate: its compression is good and it is accessed frequently, so it is useful and satisfies many queries. The more minus signs, the worse the evaluation: the compression ratio and access frequency are poor.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI admin cockpit to work.
    Implement the BW Statistics Business Content: you need to install it and feed it data, and it provides ready-made reports for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the report RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Thanks,
    JituK

  • Question about bad query

    Hi all ,
    I have a small question related to bad queries.
    What happens to the database if there are a lot of bad queries? I know about the performance impact; what are the other effects, e.g. on archive log generation, database time, I/O, and the TEMP tablespace? Please give me information about this.
    thanks & regards.

    user11969912 wrote:
    I have small question related to bad query .
    What do you consider a bad query? A bad query can be the result of poorly written and illogical SQL. It can be due to not using bind variables. It can be due to a non-optimal execution plan generated by the CBO. It can be due to poorly designed code that uses what seems to be a "good query" badly and in the wrong way (using bulk collection when native SQL alone suffices, or hitting the very same data multiple times, etc.).
    what is the affecting happens to database if there is lot of bad query
    Each of these has a different impact on the database. A "bad query" can cause a snapshot-too-old error. A deadlock error. A shared pool memory allocation error. It can cause no error and simply increase I/O. Or increase CPU. Etc.
    It is a lot more complex than what you seem to think, given your question.
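    To illustrate the bind variable point above, here is a minimal, hypothetical sketch (the emp table is just a stand-in):

    -- With literals, every distinct value is a brand-new statement that
    -- must be hard-parsed, bloating the shared pool and burning CPU:
    SELECT ename FROM emp WHERE empno = 7369;
    SELECT ename FROM emp WHERE empno = 7499;

    -- With a bind variable, one shared cursor is parsed once and reused
    -- (shown here with SQL*Plus bind-variable syntax):
    VARIABLE v_empno NUMBER
    EXEC :v_empno := 7369
    SELECT ename FROM emp WHERE empno = :v_empno;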

  • Question related to concept of PCK

    Hi All.
    I am quite a newbie to XI and am learning it only through the online help. I have one conceptual question related to the PCK.
    My understanding is that the PCK is needed by a small business to communicate with a larger partner which already has XI running. Correct me if I am wrong on that point itself.
    Now if the larger business partner already has XI, why does the smaller one need a PCK at all? The XI instance at the larger partner will have all the necessary adapters to understand any format sent by the partner. So even if the (smaller) partner sends any format, be it IDoc/HTTP/FTP, the XI instance at the larger partner will have its adapters ready to perform the conversion.
    Then why is the PCK needed at all?
    Thanks in advance. I hope my query is clear.
    Regards.
    Samant

    Hi Samant,
    Your question is a very good one. Though I have not worked with the PCK, I can share my ideas based on some brainstorming sessions I had with our colleagues and business partners.
    While executing projects, there are technical and operational issues. For example, when you access any HR-related data of a UK-based organisation, everyone who works on that project has to take a data security pledge. Likewise there are many constraints on data and system access, which vary across organisations.
    When you use the PCK, irrespective of the different systems the small vendors have, you communicate with your big company's XI system only via XI's messaging protocol, HTTP(S)/SOAP. This leaves a fair amount of ownership of data/access-related issues with the small partners.
    -> You go for the PCK when there is no need for the small vendors to get XI themselves.
    Hope this is of some help. As Michal said, there might be many more (or even better) reasons.
    Michal, when you say "all of the mappings and transformations have to be on the small company side", what exactly do you mean by this? Can you please elaborate?
    Thanks & Regards
    Vishnu

  • Query performance to increase

    Hi, I'm using CTEs as defined below, plus a temp table to take the columns from the CTEs. When I execute this logic in the SP, it takes 7 minutes for 3 lakh (300,000) records. Can anyone please help me improve my query performance?
    The code I'm using is:
    create temtable as(cols);
    WITH Datematrix (AllocationDate) /*cte*/
    AS (
    SELECT @StartDate AS AllocationDate
    UNION ALL
    SELECT DATEADD(D,1,AllocationDate) AS AllocationDate
    FROM Datematrix WHERE AllocationDate < @EndDate
    ), /*cte*/ Allocation (Division,DivisionID,ResourceName,ResourceEmailID,ResourceID,Project
    ,ProjectID,Scope,ScopeID,WorkItem,TaskStartDate,TaskEndDate
    ,ProgramID ,Program,PortfolioID ,Portfolio,StatusID,Status,TaskID,EstimateHrs,ScopeEstimateHrs)
    AS (
    SELECT
    DIV.Division
    ,RES.DivisionID
    ,RES.ResourceName
    ,ResourceEmailID = STUFF((
    SELECT COALESCE( ', ' + CONVERT(VARCHAR,RES.Email1), '')
    FROM dbo.TasksResource TSKRES WITH(NOLOCK) LEFT OUTER JOIN
    dbo.tb_Resource RES WITH(NOLOCK) ON RES.UID = TSKRES.ResourceID
    WHERE TSKRES.TaskID = TSK.TaskID
    FOR XML PATH('')), 1, 1, '')
    ,RES.UID ResourceID
    ,PRJ.Project + ' (' + CONVERT(VARCHAR(15),PRJ.StartDate,101) +' - ' + CONVERT(VARCHAR(15),PRJ.EndDate,101) + ')' as Project
    ,PRJ.UID ProjectID
    ,SCP.Title Scope
    ,SCP.ScopeID
    ,TSK.Title WorkItem
    ,TSK.StartDate TaskStartDate
    ,TSK.EndDate TaskEndDate
    ,PRJ.ProgramID
    ,PR.Program
    ,PR.PortfolioID
    ,PF.Portfolio
    ,TSK.StatusID
    ,ST.Status
    ,TSK.TaskID
    ,TSK.EstimateHrs
    ,(isnull(SCP.EstimateARCH,0) + isnull(SCP.EstimateBA,0) + isnull(SCP.EstimateDev,0) + isnull(SCP.EstimatePM,0) + isnull(SCP.EstimateQA,0) + isnull(SCP.EstimateRM,0)) as ScopeEstimateHrs
    --SCP.EstimateARCH + SCP.EstimateBA +SCP.EstimateDev +SCP.EstimatePM +SCP.EstimateQA +SCP.EstimateRM as ScopeEstimateHrs
    FROM Tasks TSK WITH(NOLOCK)
    INNER JOIN dbo.Scope SCP WITH(NOLOCK) ON TSK.ScopeID = SCP.ScopeID
    INNER JOIN dbo.tb_Project PRJ WITH(NOLOCK)ON TSK.ProjectID = PRJ.UID
    INNER JOIN dbo.tb_Program PR WITH(NOLOCK) ON PR.UID=PRJ.ProgramID
    INNER JOIN dbo.tb_Portfolio PF WITH(NOLOCK)ON PF.UID=PR.PortfolioID
    LEFT OUTER JOIN dbo.TasksResource TSKRES WITH(NOLOCK)ON TSKRES.TaskID = TSK.TaskID
    LEFT OUTER JOIN dbo.tb_Resource RES WITH(NOLOCK) ON RES.UID = TSKRES.ResourceID
    LEFT JOIN dbo.tb_Division DIV WITH(NOLOCK) ON RES.DivisionID = DIV.UID
    LEFT JOIN dbo.tb_Status ST WITH(NOLOCK) ON TSK.StatusID=ST.UID /*relating with the high level work items */
    WHERE (PRJ.UID = @Project OR @Project = -1)
    AND (PRJ.ProgramID = @Program OR @Program = -1)
    AND (PRJ.PortfolioID =@Portfolio OR @Portfolio = -1)
    ), /* the columns from the two CTEs above are combined in MainData below */
    MainData (AllocationDate,Division,DivisionID,ResourceName,ResourceEmailID,ResourceID,Project,ProjectID
    ,Scope,ScopeID,WorkItem,TaskStartDate,TaskEndDate
    ,ProgramID ,Program,PortfolioID ,Portfolio,StatusID,Status,TaskID,EstimateHrs,ScopeEstimateHrs,Allocated)
    AS
    ( SELECT
    Datematrix.*
    ,Allocation.*
    ,CASE WHEN ISDATE(TaskStartDate)=1 THEN 1 ELSE 0 END AS Allocated
    FROM Datematrix FULL OUTER JOIN Allocation
    ON ( Datematrix.AllocationDate>= Allocation.TaskStartDate
    AND Datematrix.AllocationDate<=Allocation.TaskEndDate
    )
    )
    INSERT INTO #TempTable
    SELECT * FROM MainData
    OPTION (MAXRECURSION 0);
    That's the way the code goes. Please help, I need my query tuned! Thanks in advance.
    lucky

    When asking performance-related questions, it is usually a bad idea to post pseudo code. In this case, I am referring to the fact that your "temtable" declaration is invalid, and that it is unclear where the local variables and/or parameters originate from.
    In your query, you are using the "optional parameter" pattern for local variables and/or parameters @Project, @Program and @Portfolio. I therefore assume that we are talking about a stored procedure here.
    In stored procedures, optional parameters often have a negative effect on performance. You can cancel some of that effect by adding the OPTION(RECOMPILE) hint (assuming it is part of the stored procedure, and we are talking about parameters, not local variables).
    Also, you could consider using a calendar table instead of the Datematrix CTE. The reason is that the join with Allocation is probably expensive now, because the optimizer probably hasn't established that Datematrix is a unique range of dates, and it has no index. A calendar table with a unique index on the date can help.
    The rest is probably down to the indexes on the base tables. Since you did not post any DDL, I can't comment on that. A lack of proper indexes is usually the biggest cause of SELECT performance problems.
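    A minimal sketch of the two suggestions above (the procedure name and the column choices are hypothetical; only the table and parameter names that appear in the posted query are reused):

    -- A real calendar table can replace the recursive Datematrix CTE:
    CREATE TABLE dbo.Calendar (AllocationDate DATE NOT NULL PRIMARY KEY);

    -- Optional-parameter pattern with OPTION (RECOMPILE), so the plan is
    -- re-optimized for the actual parameter values on every execution:
    CREATE PROCEDURE dbo.GetAllocations
        @StartDate DATE, @EndDate DATE,
        @Project INT = -1, @Program INT = -1, @Portfolio INT = -1
    AS
    BEGIN
        SELECT c.AllocationDate, t.TaskID
        FROM dbo.Calendar c
        JOIN dbo.Tasks t
          ON c.AllocationDate BETWEEN t.StartDate AND t.EndDate
        JOIN dbo.tb_Project p
          ON t.ProjectID = p.UID
        WHERE c.AllocationDate BETWEEN @StartDate AND @EndDate
          AND (p.UID = @Project OR @Project = -1)
          AND (p.ProgramID = @Program OR @Program = -1)
          AND (p.PortfolioID = @Portfolio OR @Portfolio = -1)
        OPTION (RECOMPILE);  -- compile for the supplied values, not a generic plan
    END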
    Gert-Jan

  • Tuning SAP BW query performance

    Dear experts,
    I have a question regarding the performance of a BW query.
    It takes 10 minutes to display about 23 thousand lines.
    The query reads its data from an ODS object.
    Based on the "where" clause of the "select" statement, monitored via the Oracle session while the query was running, I created an index on this ODS object.
    After rerunning the query, I found that the index was picked up by Oracle when reading this table (the estimated cost dropped from about 3000 to 2).
    However, the query takes the same time as before.
    Is there any other reason, or other factors I should consider, in tuning the performance of this query?
    Thanks in advance

    check out tuning documents at [FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap]
