Query performance is BAD

My query looks like this:
SELECT s.cust_name,
       t.file_name,
       u.mail_name,
       v.address,
       COUNT(1)
FROM s, t, u, v
WHERE ...
GROUP BY s.cust_name,
         t.file_name,
         u.mail_name,
         v.address
Without the COUNT(1) and the GROUP BY, the query above returns around 5 million records.
But when I add the COUNT(1) to the SELECT as written above,
the performance is very BAD.
Is there any way to rewrite the query for better performance?
I have also tried analytic functions, but the performance is still not good.

Search this forum for "When your query takes too long ...".

Similar Messages

  • Bad query performance - how to analyze it?

    Hi all,
For the last 8 weeks we have observed bad query performance (roughly 30% worse than before) in our BW system. At the moment we use a BIA on revision 49 with 4 blades (16 GB).
I have already read note 1318214 and found that most of the time is spent on the virtual provider (over 80%!).
I've seen that a lot of time is spent in the "Data Manager":
For example, it takes 0.76 s to select 3.5 million items from the relative provider, and 78 s(!) to select 0 items from the virtual provider.
Information from RSDDSTATTREXSERV:
RFC Server   BIA Client   BIA Kernel   ABAP RFC
497          464          450          619
So it seems to be a problem on the BW side. What can we do to improve the performance, or to analyze the query performance further?
    Best Regards,
    Jens

    Hi Jens,
    A few checks you may consider doing.
BIA availability: check the BI connection with BIA.
Check if you need to rebuild the BIA indexes. SAP recommends doing this regularly, to repair degenerated indexes or to delete indexes that are no longer referenced (e.g. data in the cube was compressed or deleted and the indexes are no longer needed).
Check if BIA reorganization is required. This is done to ensure the indexes are evenly distributed across the BIA landscape.
Try to find out from the BI admin whether major administration work was done within these 8 weeks, e.g. copying a cube, restructuring dimensions, copying data to a copy cube, archiving, etc.
You can use the BIA monitor to perform checks and monitor alerts from the BIA servers:
BIA monitor: http://help.sap.com/saphelp_nw70/helpdata/en/43/7719d270d81a6ee10000000a11466f/content.htm
This link describes the overall status of the BIA and any actions required, and it has sublinks to other important transactions for BIA monitoring and maintenance.
To go to the BIA monitor: RSA1 -> BIA monitor icon.
Is your virtual provider reading data from R/3 or from BW?
Generally, virtual providers are used to read data from other systems, so I believe they would not have indexes in BIA, except for some applications like BCS where you may be reading data from BW itself.
    Hope this helps
Best regards,
    Sunmit.

  • Bad Query Performance

    Hi Experts,
I have a query built on a MultiProvider which has slow performance.
I think the main problem is that it selects too many records from the database: about 1.1 million rows to display roughly 500 rows in the result.
Another factor could be the complicated restricted and calculated key figures, which might spend a lot of time in the OLAP processor.
Here are the statistics of the query (times in seconds):
OLAP Initialization:    3.186906
Wait Time, User:       56.971169
OLAP: Settings:         0.983193
Read Cache:             0.015642
Delete Cache:           0.019030
Write Cache:            0.087655
Data Manager:         462.039167
OLAP: Data Selection:   0.671566
OLAP: Data Transfer:    1.257884
ST03 statistics:
%OLAP:     22.74
%DB:       77.18
OLAP Time: 29.2
DB Time:   99.1
It seems most of the time is consumed in the database.
Any suggestions to speed up this query's response time would be great.
    Thanks in advance.
    BR
    Srini.

    Hi,
You need to have standard query performance tuning done for the underlying cubes: better design, aggregates, etc.
    Improve Performance of Queries/Reports on Multi Cubes
    Refer SAP Note Number: 869487
    Performance optimization for MultiCubes
    How to Create Efficient Multi-Provider Queries
    Please see the How to Guide "How to Create Efficient MultiProvider Queries" at http://service.sap.com/bi > SAP NetWeaver 2004 - Release-Specific Information > How-to Guides > Business Intelligence
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/how%20to%20create%20efficient%20multiprovider%20queries.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    Performance of MultiProviders
    Multiprovider performance / aggregate question
    Query Performance
    Multicube performances
    Create Efficient MultiProvider Queries
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b03b7f4c-c270-2910-a8b8-91e0f6d77096
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/751be690-0201-0010-5e80-f4f92fb4e4ab
    Also try
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    tuning, short dumps
    Performance tuning in BW:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    Also notes
    0000903559 MultiProvider optimization is only partially active
    0000825396 Performance in reports with many selections
    multiprovider explanation i need
    Note 629541 - Multiprovider: Parallel Processing 
    Thanks,
    JituK

  • How to improve query performance built on a ODS

    Hi,
I've built a report on the FI_GL ODS (BW 3.5). Report execution takes almost 1 hour.
Is there any method to improve or optimize the performance of a query built on an ODS?
The ODS has a huge volume of data: ~300 million records for 2 years.
Thanks in advance,
Guru.

    Hi Raj,
Here are a few tips which will help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local, so as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
9. If the Alternative UOM solution is used, turn off the query cache.
10. Set the read mode of the query based on static or dynamic use. Reading data during navigation minimizes the impact on the R/3 database and application server resources, because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it is wise to select the "Read data during navigation and when expanding the hierarchy" option, to avoid reading data for hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Review the order of restrictions in formulas. Do as many restrictions as you can before calculations; try to avoid calculations before restrictions.
16. Check Sequential vs Parallel read on MultiProviders.
17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

  • Query Performance - Query very slow to run

I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is that the query takes ages to run - nearly ten minutes. It's built on a DSO, so I can't use aggregates. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local, so as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
9. If the Alternative UOM solution is used, turn off the query cache.
10. Set the read mode of the query based on static or dynamic use. Reading data during navigation minimizes the impact on the R/3 database and application server resources, because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it is wise to select the "Read data during navigation and when expanding the hierarchy" option, to avoid reading data for hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
25. If hierarchies are used, minimize the number of nodes included in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

How does index fragmentation and statistics affect SQL query performance

    Hi,
How does index fragmentation and statistics affect SQL query performance?
Thanks
Shashikala

How does index fragmentation and statistics affect SQL query performance?
Very simply: outdated statistics will lead the optimizer to create bad plans, which in turn require more resources, and this impacts performance. If an index is fragmented (mainly the clustered index, though this holds true for nonclustered indexes as well), the time spent finding a value will be higher, because the query has to search a fragmented index to look for the data, and the additional empty space increases search time.
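As a rough illustration of how you might inspect and fix both factors in SQL Server (the table and index names here are hypothetical):
-- check fragmentation of all indexes on one table
SELECT i.name, ips.avg_fragmentation_in_percent, ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- common rule of thumb: REORGANIZE for ~5-30% fragmentation, REBUILD above that
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
-- ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;

-- refresh statistics so the optimizer sees the current data distribution
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;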

  • SQL query performance question

    So I had this long query that looked like this:
SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
FROM ideal d, ideal_term a,
     (select deal_key, deal_term_key, max(createdOn) maxdate
      from Ideal_term B
      where createdOn <= '03-OCT-12 10.03.00 AM'
      group by deal_key, deal_term_key) B
WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
  and a.end_date >= '19-MAR-09 01.00.00 AM'
  and A.deal_key = b.deal_key
  and A.deal_term_key = b.deal_term_key
  and a.createdOn = b.maxdate
  and d.deal_key = a.deal_key
  and d.name like 'MVPP1 B'
order by a.begin_date, a.deal_key, a.deal_term_key;
This performed very poorly for a record in one of the tables that has 43,000+ revisions; it took about 1 minute and 40 seconds. I asked the database guy at my company for help with it, and he rewrote it like so:
    SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
    FROM ideal d
    INNER JOIN (SELECT deal_key,
    deal_term_key,
    MAX(createdOn) maxdate
    FROM Ideal_term B2
    WHERE '03-OCT-12 10.03.00 AM' >= createdOn
    GROUP BY deal_key, deal_term_key) B1
    ON d.deal_key = B1.deal_key
    INNER JOIN ideal_term a
    ON B1.deal_key = A.deal_key
    AND B1.deal_term_key = A.deal_term_key
    AND B1.maxdate = a.createdOn
    AND d.deal_key = a.deal_key + 0
    WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
    AND a.end_date >= '19-MAR-09 01.00.00 AM'
    AND d.name LIKE 'MVPP1 B'
    ORDER BY a.begin_date, a.deal_key, a.deal_term_key
this works much better; it only takes 0.13 seconds. I've been trying to figure out why exactly his version performs so much better. His only explanation was that the "+ 0" prevented Oracle from using an index for that column, which had created a bad plan initially.
I think there has to be more to it than that, though. Can someone give me a detailed explanation of why the second version of the query performs so much faster?
    Thanks.

I used Autotrace in SQL Developer. Is that sufficient? Here is the Autotrace and Explain output for the slow query:
    and for the fast query:
I said that I thought there was more to it because, when my team members and I looked at the reworked query the database guy sent us, our initial thought was that in the slow query some of the tables didn't have join conditions, and because of that the query formed a Cartesian product that resulted in a huge 43,000+ row intermediate result.
In his version all the tables had their joins properly defined, and in addition he had that +0, which told Oracle to ignore the index on the deal_key attribute of table ideal_term. I spoke with the database guy today and he confirmed our theory.
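For reference, the general pattern at work here, sketched against the classic demo table rather than the poster's schema: appending a no-op expression to an indexed column hides the index from the optimizer for that one predicate.
SELECT *
FROM emp e
WHERE e.deptno + 0 = 10;   -- deptno + 0 is an expression, so a plain B-tree
                           -- index on deptno cannot be used for this filter
In the rewritten query above, this presumably steered Oracle away from the index-driven join that had been producing the bad plan.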

  • Query performance improvement techniques(urgent)

    Hi experz,
I have a business requirement where I have to fill a mandatory variable with 59 days. The data comes from different regional cubes; these are all compressed cubes with almost 45 million records each. When I run with the default selection it picks one aggregate defined on the cube, but after running for some time it goes to a short dump. If report execution takes more than 10 minutes, I know its performance is very bad. Can you suggest how I can improve the performance of this report?
When I gave the same selections in the ListCube transaction, it reported an LSLVCF36 error (message_type_text error).
Can you guys help me with this? I'll give full points.
    thanks and regards,
    veeru

    Hi,
    Try these
Go to transaction ST03, switch to expert mode, and from the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
1. Use the different parameters in ST03 to see the two important figures: the aggregation ratio and the number of records transferred to the frontend versus records selected from the database.
2. Use the program SAP_INFOCUBE_DESIGNS to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try running RSRV checks on the cube and its aggregates.
3. The plus/minus signs are the valuation of the aggregate design and usage. '++' means that its compression is good and it is accessed frequently; in effect, performance is good, and if you check its compression ratio it should be good. '--' means the compression ratio is not so good and access is also not so good, so performance is not so good. The more plus signs, the more useful the aggregate and the more queries it satisfies; the greater the number of minus signs, the worse the evaluation of the aggregate. If the valuation is '-----', the aggregate is just overhead and can potentially be deleted; '+++++' means the aggregate is potentially very useful.
Refer:
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether the query hit any aggregates while running; if it shows none, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria, and since you have given a selection on only one InfoProvider, check whether you are selecting a huge amount of data in the report.
5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Thanks,
    JituK

  • Query performance to increase

Hi, I'm using CTEs as defined below, plus a temp table to receive the columns from them. When I execute this stored-procedure logic it takes 7 minutes for 3 lakh (300,000) records. Can anyone please help me improve my query performance?
The code I'm using is:
create temtable as(cols); /* sic: pseudo-code; the real #TempTable definition was not posted */
WITH Datematrix (AllocationDate) /* cte */
AS
(
    SELECT @StartDate AS AllocationDate
    UNION ALL
    SELECT DATEADD(D, 1, AllocationDate) AS AllocationDate
    FROM Datematrix
    WHERE AllocationDate < @EndDate
),
/* cte */ Allocation (Division, DivisionID, ResourceName, ResourceEmailID, ResourceID, Project
    , ProjectID, Scope, ScopeID, WorkItem, TaskStartDate, TaskEndDate
    , ProgramID, Program, PortfolioID, Portfolio, StatusID, Status, TaskID, EstimateHrs, ScopeEstimateHrs)
AS
(
    SELECT
        DIV.Division
        , RES.DivisionID
        , RES.ResourceName
        , ResourceEmailID = STUFF((
            SELECT COALESCE(', ' + CONVERT(VARCHAR, RES.Email1), '')
            FROM dbo.TasksResource TSKRES WITH (NOLOCK)
            LEFT OUTER JOIN dbo.tb_Resource RES WITH (NOLOCK) ON RES.UID = TSKRES.ResourceID
            WHERE TSKRES.TaskID = TSK.TaskID
            FOR XML PATH('')), 1, 1, '')
        , RES.UID ResourceID
        , PRJ.Project + ' (' + CONVERT(VARCHAR(15), PRJ.StartDate, 101) + ' - ' + CONVERT(VARCHAR(15), PRJ.EndDate, 101) + ')' AS Project
        , PRJ.UID ProjectID
        , SCP.Title Scope
        , SCP.ScopeID
        , TSK.Title WorkItem
        , TSK.StartDate TaskStartDate
        , TSK.EndDate TaskEndDate
        , PRJ.ProgramID
        , PR.Program
        , PR.PortfolioID
        , PF.Portfolio
        , TSK.StatusID
        , ST.Status
        , TSK.TaskID
        , TSK.EstimateHrs
        , (ISNULL(SCP.EstimateARCH, 0) + ISNULL(SCP.EstimateBA, 0) + ISNULL(SCP.EstimateDev, 0) + ISNULL(SCP.EstimatePM, 0) + ISNULL(SCP.EstimateQA, 0) + ISNULL(SCP.EstimateRM, 0)) AS ScopeEstimateHrs
    FROM Tasks TSK WITH (NOLOCK)
    INNER JOIN dbo.Scope SCP WITH (NOLOCK) ON TSK.ScopeID = SCP.ScopeID
    INNER JOIN dbo.tb_Project PRJ WITH (NOLOCK) ON TSK.ProjectID = PRJ.UID
    INNER JOIN dbo.tb_Program PR WITH (NOLOCK) ON PR.UID = PRJ.ProgramID
    INNER JOIN dbo.tb_Portfolio PF WITH (NOLOCK) ON PF.UID = PR.PortfolioID
    LEFT OUTER JOIN dbo.TasksResource TSKRES WITH (NOLOCK) ON TSKRES.TaskID = TSK.TaskID
    LEFT OUTER JOIN dbo.tb_Resource RES WITH (NOLOCK) ON RES.UID = TSKRES.ResourceID
    LEFT JOIN dbo.tb_Division DIV WITH (NOLOCK) ON RES.DivisionID = DIV.UID
    LEFT JOIN dbo.tb_Status ST WITH (NOLOCK) ON TSK.StatusID = ST.UID /* relating with the high-level work items */
    WHERE (PRJ.UID = @Project OR @Project = -1)
      AND (PRJ.ProgramID = @Program OR @Program = -1)
      AND (PRJ.PortfolioID = @Portfolio OR @Portfolio = -1)
),
/* columns used in the 2 CTEs above are taken into MainData below */
MainData (AllocationDate, Division, DivisionID, ResourceName, ResourceEmailID, ResourceID, Project, ProjectID
    , Scope, ScopeID, WorkItem, TaskStartDate, TaskEndDate
    , ProgramID, Program, PortfolioID, Portfolio, StatusID, Status, TaskID, EstimateHrs, ScopeEstimateHrs, Allocated)
AS
(
    SELECT
        Datematrix.*
        , Allocation.*
        , CASE WHEN ISDATE(TaskStartDate) = 1 THEN 1 ELSE 0 END AS Allocated
    FROM Datematrix
    FULL OUTER JOIN Allocation
        ON (Datematrix.AllocationDate >= Allocation.TaskStartDate
        AND Datematrix.AllocationDate <= Allocation.TaskEndDate)
)
INSERT INTO #TempTable
SELECT * FROM MainData
OPTION (MAXRECURSION 0);
That's the way the code goes. Please help, I need my query tuned! Thanks in advance,
    lucky

When asking performance-related questions, it is usually a bad idea to use pseudo code. In this case, I am referring to the fact that your "temtable" declaration is invalid, and that it is unclear where the local variables and/or parameters originate from.
In your query, you are using the "optional parameter" pattern for the local variables and/or parameters @Project, @Program and @Portfolio. I therefore assume that we are talking about a stored procedure here.
In stored procedures, optional parameters often have a negative effect on performance. You can cancel some of that effect by adding the OPTION (RECOMPILE) hint (assuming it is part of the stored procedure, and we are talking about parameters, not local variables).
Also, you could consider using a calendar table instead of the Datematrix CTE. The reason is that the join with Allocation is probably expensive now, because the optimizer has not established that Datematrix is a unique range of dates, and it has no index. A calendar table with a unique index on the date can help.
The rest is probably down to the indexes on the base tables. Since you did not post any DDL, I can't comment on that. Lack of proper indexes is usually the biggest cause of SELECT performance problems.
Gert-Jan
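A minimal sketch of that calendar-table idea (all object names hypothetical, date range arbitrary, with Allocation standing in for the poster's CTE output):
-- build once: one row per date, unique and indexed via the primary key
CREATE TABLE dbo.Calendar (AllocationDate date NOT NULL PRIMARY KEY);

;WITH n AS (
    SELECT TOP (10958) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS i
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.Calendar (AllocationDate)
SELECT DATEADD(DAY, i, '20000101') FROM n;   -- 2000-01-01 through 2029-12-31

-- the range join then runs against an indexed, provably unique set of dates
SELECT c.AllocationDate, a.TaskID
FROM dbo.Calendar c
JOIN Allocation a
    ON c.AllocationDate BETWEEN a.TaskStartDate AND a.TaskEndDate
WHERE c.AllocationDate BETWEEN @StartDate AND @EndDate;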

  • How to improve Query Performance

    Hi Friends...
I want to improve query performance, and I need the following things:
1. What is the process to find out the performance? Which transaction codes, and how do I use them?
2. How can I know whether a query is performing well or badly?
3. I want to see the values, i.e. how much time the query takes to run, and where the defect is.
4. How do I improve the query performance? And after I have done the needful to improve performance, I want to see the query execution time again, i.e. whether it runs faster or not.
E.g. 1: Need to create aggregates.
Solution: where should I create the aggregates? I am in the production system now, so where do I need to create them: in development, quality, or production?
Do I need to make any changes in development, given that I am in production?
So please tell me the solutions to my questions.
Thanks
Ganga

    hi ganga
please refer to OSS note 557870: Frequently asked questions on query performance
    also refer to
    Prakash's weblog
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    This is the oss notes of FAQ on query performance
    1. What kind of tools are available to monitor the overall Query Performance?
    1.     BW Statistics
    2.     BW Workload Analysis in ST03N (Use Export Mode!)
    3.     Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
      RSA1, choose Tools -> BW statistics for InfoCubes
      (Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyze a specific query in detail?
    1.     Transaction RSRT
    2.     Transaction RSRTRACE
4. Do I have an overall query performance problem?
i. Use ST03N -> BW System load values to recognize the problem. Use the number given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
Check:
1.     If the database statistics strategy is set up properly for your DB platform, above all for the BW-specific tables (see the sketch after this list)
2.     If the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
3.     If buffers, I/O, CPU, or memory on the database server are exhausted
4.     If cube compression is used regularly
5.     If database partitioning is used (not available on all DB platforms)
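By way of illustration for point 1: on Oracle, a manual statistics refresh for one suspect fact table might look like the sketch below. The schema and table names are hypothetical, and on a live BW system the scheduled statistics jobs would normally handle this.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SAPR3',             -- hypothetical schema
    tabname          => '/BIC/FZSALESCUBE',  -- hypothetical F-fact table
    estimate_percent => 10,
    cascade          => TRUE);               -- also gather index statistics
END;
/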
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1.     If the CPUs on the application server are exhausted
    2.     If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3.     If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,  Customizing default)
    7. What can I do if the client proportion is high for all queries?
Check whether most of your clients are connected via a WAN connection and whether the amount of data transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1.     Again you can use ST03N -> BW System Load
    2.     Depending on the time frame you select, you get historical data or current data.
3.     To get to a specific query you need to drill down using the InfoCube name.
4.     Use the Aggregation Query to get more runtime information about a single query. Use the tab 'All data' to get to the details (DB, OLAP, and Frontend time, plus selected/transferred records, plus the number of cells and formats).
9. What kind of query performance problems can I recognize using ST03N values for a specific query?
    (Use Details to get the runtime segments)
    1.     High Database Runtime
    2.     High OLAP Runtime
    3.     High Frontend Runtime
    10. What can I do if a query has a high database runtime?
1.     Check if an aggregate is suitable (use 'All data' to get the ratio of selected records to transferred records; a high number here is an indicator that an aggregate would improve query performance)
2.     Check if the database statistics are up to date for the cube/aggregate; use the TX RSRV output (use the database check for statistics and indexes)
3.     Check if the read mode of the query is unfavourable - recommended is (H)
    11. What can I do if a query has a high OLAP runtime?
1.     Check if a high number of cells is transferred to the OLAP processor (use 'All data' to get the value 'No. of Cells')
2.     Use the RSRT technical information to check if any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred
3.     Check if user exit usage is involved in the OLAP runtime
4.     Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the columns successor and predecessor to see which entry level of the hierarchy is used.
5.     Check if a proper index on the inclusion table exists
12. What can I do if a query has a high frontend runtime?
1.     Check if a very high number of cells and formats are transferred to the frontend (use 'All data' to get the value 'No. of Cells'), which causes high network and frontend (processing) runtime
2.     Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
3.     Check if the bandwidth of the WAN connection is sufficient
    CHEERS
    RAVI

Physical partitioning: query performance worse than without

    Hello experts,
I have copied InfoCube 0COPC_C04 to ZCOPC_C06 with a test query, and took over the data from 0COPC_C04 to test the repartitioning functionality (~4 million records).
I did the repartitioning following the instructions from note 1008833 for 0FISCPER (I used SE38 report RSDU_SET_FV_TO_FIX_VALUE to set 0FISCVARNT to a constant). The starting point is 001.2000 and the end point is 016.2010.
Indexes and statistics were re-created.
We have Oracle 10.2.0.2.0 and SAPKW70022 on BW 700.
So from my point of view everything is fine, BUT: query performance is worse than before, as shown in RSRV.
The variable in the query is on 0FISCYEAR and 0FISCPER3, NOT on 0FISCPER (a query test with 0FISCPER brought the same bad performance result).
Even though I read that with physical partitioning the query reads in PARALLEL, I cannot see anything parallel in SM50/SM66. I expected to see parallel reads for my query: I asked for results from 001.2006 - 12.2010, and 13 partitions were created (according to SE14).
What might be the problem?
Thanks for your answer,
Thomas

Note #1171650 is already implemented (since 14.12.2010). I asked my colleague from the Basis team to send me the report.
> I read about parallel query execution in the SAP Press book "SAP BW Performanceoptimierung" by Thomas Schröder (ISBN 3-89842-718-8), page 376. If you like, I can send this page as a PDF to the mail address stored in your business card.
    Would be interesting to see that excerpt, yes.
> What I missed was to compress the F-table.
>
> I'm wondering that:
> 1. Compressing 0COPC_C04 with 8 million records gave a dramatic improvement in query performance. No partitioning.
Sure; usually a lot less data needs to be read then.
> 2. After a new full load from 0COPC_C04 to ZCOPC_C06, with rebuilt indexes and statistics, I got query performance similar to 1.
Also easy to understand.
The F-facttable contains data per load, so data concerning the same business items will be in that table multiple times and needs to be aggregated on the fly during query execution.
The E-facttable, in contrast, contains just one entry per business item, which is the same situation you'll find when you have only a single data load request.
> 3. If I compress ZCOPC_C06, query performance drops dramatically (it needs double the time if I use 0FISCPER, and 8 times more in the query with 0FISCYEAR and 0FISCPER3).
>
> Additional information: I have merged the partitions of ZCOPC_C06 into one partition (checked via SE14, /BIC/FZCOPC_C06, storage parameters).
Hmm... OK, here we would really need to see what Oracle does with your query and the partitioning scheme.
What seems obvious is that as soon as you have just one partition, there is no way for Oracle to split up the work and leave uninteresting data aside (partition pruning).
> Conclusion: it seems to me that compression is enough; partitioning has no advantage in this dedicated scenario.
> Do I think right?
I think that compression is a must-do in 99% of all cases for SAP BI, and that the partitioning of the E-facttable should be reviewed on your system.
Up to this point there is too little information available to tell whether or not it could benefit your query execution time.
Another aspect you might consider with partitioning the E-facttable is your data retention policy (see the sketch below).
If you're about to throw away your data on, say, a yearly basis, then having the table partitioned that way can provide huge benefits.
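To illustrate why retention benefits so much from partitioning: dropping an old partition is a quick metadata operation instead of a mass DELETE. The statement below is a conceptual Oracle sketch only; the partition name is hypothetical, and on a BW system partition maintenance is performed by the BW tools rather than by manual DDL.
ALTER TABLE "/BIC/EZCOPC_C06"
  DROP PARTITION fiscper_2000        -- hypothetical partition holding year 2000
  UPDATE GLOBAL INDEXES;             -- keep global indexes usable afterwards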
My personal view on this: you already paid for the most expensive and feature-rich database engine available for SAP BI, so why not go and exploit its features where possible?
    best regards
    Lars

  • Query performance.

    Hi
I have created a procedure that accepts two bind variables from a report. The user will select one or the other, both, or neither of the variables. To return the appropriate results, I have created a view with the entire result set, and in the procedure a number of IF statements determine what to place in the WHERE clause when selecting from the view, depending on which variables are populated.
My concern is that the query that generates the view includes several joins, outputs around 150,000 records in total, and seems rather slow to run.
Would you recommend another solution, such as placing the query in the procedure itself, repeated for every IF statement?
Or should I work on the query performance?
What would be the most efficient solution for my problem?
    Any advice would be greatly appreciated.
    Thanks

When your query takes too long: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
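As an aside, a common way to fold the one/other/both/neither cases into a single statement is the optional-bind pattern sketched below (view and column names hypothetical). It can produce compromise execution plans, which is exactly why it is worth measuring against the IF-per-case approach.
SELECT v.*
FROM result_view v                              -- hypothetical view
WHERE (:p_first IS NULL OR v.first_col = :p_first)
  AND (:p_second IS NULL OR v.second_col = :p_second);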

  • How to improve the query performance in to report level and designer level

How can I improve query performance at the report level and at the designer level?
Please explain in detail.

First, it all depends on the design of the database, the universe, and the report.
At the universe level, check your contexts very well to get the optimal performance out of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
At the report level, try to make the reports as dynamic as you can (parameters, and so on).
And when you create a parameter, try to match it with the key fields in the database.
    good luck
    Amr

  • Report burst:To increase query performance in xcelsius

Is there any way to increase query performance in Xcelsius by using report bursting?

    Fremlin,
Report bursting is only for distributing your reports to your end users.
You can improve performance only by following the Xcelsius best practices: https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    -Anil

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

What query performance issues do we need to take care of? Please explain and let me know the transaction codes.
What data-loading performance issues do we need to take care of? Please explain and let me know the transaction codes.
Will reward full points.
Regards
Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage, with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9) Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time (see the sketch after this list). If your selection fields are not key fields on the table, the primary index is not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
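A hedged sketch of point 9 (the table and field names are hypothetical, and in practice you would define the index in the ABAP Dictionary via SE11 rather than with raw DDL):
-- secondary index covering the extractor's selection fields
CREATE INDEX "ZORDERS~Z01" ON zorders (bukrs, gjahr, budat);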
    Hope it Helps
    Chetan
    @CP..
