How to test query performance

Hi Xperts,
   I know that we can test query performance using the RSRT debug mode, but there are so many options in RSRT that I am confused.
Can anybody teach me, or send me docs on, how to test query performance using RSRT?
[email protected]

Hi Nrupal,
Check
http://help.sap.com/saphelp_nw70/helpdata/en/a0/2a183d30805c59e10000000a114084/frameset.htm
The PDF "BEx Monitor (RSRT)" is available in SDN:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2d589190-0201-0010-f19a-c74465ce6e0f
Also review the OSS notes and docs on query performance:
https://forums.sdn.sap.com/click.jspa?messageID=1182708
Hope this helps.

Similar Messages

  • How to test the performed SAP script

    Hi friends, can anyone help with
    how to test the performed SAP script? What are the necessary codes, or anything else, for performing it?

    Praveen,
    Can you be more clear with your query? What do you mean by "performing"? Do you mean how to execute it?
    Regards,
    Vinod.

  • How to improve the performance of a query built on an ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data: ~300 million records for 2 years.
    Thanks in advance,
    Guru.

    Hi Raj,
    Here are a few tips to help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on MultiProviders, restrict to the InfoProvider in the global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to queries against standard InfoCubes, so you should hardcode the InfoProvider in the global filter whenever possible to eliminate this overhead.
    8. Move all global calculated and restricted key figures to local ones so you can analyze any filters that can be removed and moved to the global definition of the query. Then you can change the calculated key figure back and return to using the global calculated key figure if desired.
    9. If the alternative UOM solution is used, turn off the query cache.
    10. Set the read mode of the query based on static or dynamic use. Reading data during navigation minimizes the impact on the R/3 database and application server resources, because only the data that the user requires is retrieved. For queries involving large hierarchies with many nodes, it is wise to select the "Read data during navigation and when expanding the hierarchy" option to avoid reading data for hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries, for instance when a majority of users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places a heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Apply as many restrictions as you can before calculations; try to avoid calculations before restrictions.
    16. Turn off warning messages on queries.
    17. Check whether performance improves by removing text display (use an ABAPer to test and validate that this generates better code).
    18. Check where currency conversions are happening, if they are used.
    19. Check aggregation and exception aggregation on calculated key figures. "Before aggregation" is generally slower and should not be used unless explicitly needed.
    20. Avoid Cell Editor use if at all possible.
    21. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    22. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

  • How to improve query performance using an InfoSet

    I created one InfoSet that includes 4 characteristics and 3 DSOs, all of which are time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows in BEx Analyzer. In that case I have to close BEx Analyzer first and then open it again, after which it shows real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!

    Hi
    As an InfoSet itself doesn't store any data, it helps performance.
    Also go through the tips below.
    Find the query runtime
    Where to find the query runtime:
    Note 557870 - 'FAQ BW Query Performance'
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in rows.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, in the system load history and distribution for a particular day, check the query execution time.
    Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
    How to read ST03N datasets from DB
    Try table RSDDSTAT to get the statistics.
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions retrieve the result faster from the OLAP cache.
    Also try:
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the ratio of records transferred to the frontend versus records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list in this report, try running RSRV checks on the cube and its aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
    It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates, see the Valuation and Usage columns for the aggregates.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The valuation is shown as plus and minus signs: the more plus signs, the better the evaluation, meaning the aggregate has a good compression ratio, is accessed often, and satisfies many queries; "+++++" means the aggregate is potentially very useful. The more minus signs, the worse the evaluation; "-----" means the aggregate is just overhead and can potentially be deleted.
    The Usage column tells you how often the aggregate has actually been used by queries.
    Thus you can check the performance of each aggregate.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether any aggregates were hit while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Query performance can also depend on the selection criteria; since you have given a selection on only one InfoProvider, check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed it data, and analyze through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to transaction DB20, which gives you all the performance-related information, like:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with statistics execution; you will get a STATUID. Copy this and check it in the table.
    This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, the aggregate is useless.
    6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • How to improve query performance when reporting on an ODS object?

    Hi,
    Can anybody give me the answer: how can I improve my query performance when reporting on an ODS object?
    Thanks in advance,
    Ravi Alakuntla.

    Hi Ravi,
    Check these links, which may cater to your requirement:
    Re: performance issues of ODS
    Which criteria to follow to pick InfoObj. as secondary index of ODS?
    PDF on BW performance tuning,
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Regards,
    Mani.

  • How to improve Query Performance

    Hi Friends...
    I want to improve query performance, and I need the following things:
    1. What is the process to find out the performance? Are there any transaction codes, and how do I use them?
    2. How can I know whether a query is running well or badly, i.e. from a performance perspective?
    3. I want to see the values, i.e. how much time it takes to run, and where the defect is.
    4. How do I improve the query performance? After I have done the needful things to improve performance, I want to see the query execution time, i.e. whether it is running fast or not.
    Example 1: I need to create aggregates.
    Solution: where can I create the aggregates? I am currently in the production system, so where do I need to create them: in development, quality, or production?
    Do I need to make any changes in development, given that I'm working in production?
    So please give me solutions for my questions.
    Thanks
    Ganga
    Message was edited by: Ganga N

    Hi Ganga,
    Please refer to OSS note 557870: Frequently asked questions on query performance.
    Also refer to
    Prakash's weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    Here is the OSS note's FAQ on query performance:
    1. What kind of tools are available to monitor overall query performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (use Expert Mode!)
    3. Content of table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (choose OLAP and WHM for your relevant cubes)
    3. What kind of tools are available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the number given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
    Check:
    1. If the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
    2. If the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
    3. If buffers, I/O, CPU, or memory on the database server are exhausted
    4. If cube compression is used regularly
    5. If database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory setup is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data being transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again, you can use ST03N -> BW System Load.
    2. Depending on the time frame you select, you get historical or current data.
    3. To get to a specific query you need to drill down using the InfoCube name.
    4. Use the aggregation 'Query' to get more runtime information about a single query. Use the "All data" tab to get to the details (DB, OLAP, and frontend time, plus selected/transferred records, plus the number of cells and formats).
    9. What kind of query performance problems can I recognize using ST03N values for a specific query?
    (Use Details to get the runtime segments)
    1. High database runtime
    2. High OLAP runtime
    3. High frontend runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use "All data" to get the ratio of selected records to transferred records; a high number here is an indicator that an aggregate could improve query performance).
    2. Check if the database statistics are up to date for the cube/aggregate; use TX RSRV output (use the database check for statistics and indexes).
    3. Check if the read mode of the query is unfavourable; recommended is (H).
    11. What can I do if a query has a high OLAP runtime?
    1. Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells").
    2. Use the RSRT technical information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred.
    3. Check whether a user exit is involved in the OLAP runtime.
    4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the successor and predecessor columns to see which entry level of the hierarchy is used.
    5. Check if a proper index on the inclusion table exists.
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formats is transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz).
    3. Check if the bandwidth of the WAN connection is sufficient.
    Rewarding points is the way of saying thanks in SDN.
    CHEERS
    RAVI

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table into filegroups, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations.
    For example, suppose our table has one hundred columns and some columns are not directly related to the table's subject: say there is a table named userinfo that stores user information, with columns address_street, address_zip, and address_province. We can create a new table named Address and add a foreign key in the userinfo table referencing the Address table. In this situation, by splitting a large table into smaller individual tables, queries that access only a fraction of the data can run faster because there is less data to scan.
    The other situation is when the table's records can be grouped easily: for example, there is a column named year that stores the product release date. Here we can partition the table into filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more detail, please refer to the following documents (a small sketch of both options follows the links):
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
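    A minimal T-SQL sketch of the two options described above. Every object name here (userinfo, Address, ProductRelease, pf_ByYear, ps_ByYear) is hypothetical, since the thread gives no concrete schema; treat this as an illustration of the pattern, not a ready-made design.
    -- 1) Vertical split: move the address columns out of userinfo.
    CREATE TABLE dbo.Address
    (
        AddressID        int IDENTITY(1,1) PRIMARY KEY,
        address_street   nvarchar(100),
        address_zip      nvarchar(20),
        address_province nvarchar(50)
    );
    ALTER TABLE dbo.userinfo
        ADD AddressID int NULL
            CONSTRAINT FK_userinfo_Address REFERENCES dbo.Address (AddressID);
    -- 2) Horizontal partitioning by year. RANGE RIGHT means each boundary value
    --    starts a new partition; ALL TO ([PRIMARY]) keeps the sketch simple,
    --    while a real design would map partitions to separate filegroups.
    CREATE PARTITION FUNCTION pf_ByYear (int)
        AS RANGE RIGHT FOR VALUES (2010, 2011, 2012);
    CREATE PARTITION SCHEME ps_ByYear
        AS PARTITION pf_ByYear ALL TO ([PRIMARY]);
    CREATE TABLE dbo.ProductRelease
    (
        ProductID   int NOT NULL,
        ReleaseYear int NOT NULL,
        CONSTRAINT PK_ProductRelease PRIMARY KEY (ProductID, ReleaseYear)
    ) ON ps_ByYear (ReleaseYear);
    Queries that filter on ReleaseYear then touch only the matching partition(s) instead of scanning the whole table.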
    Allen Li
    TechNet Community Support

  • How to improve query performance of an ODS with 320 million records

    Issue:
    The reports are giving time-outs during execution.
    Scenario:
    We have an ODS holding approximately 320 million records.
    The reports are based on
    the ODS and
    InfoSets based on this ODS.
    These reports are giving time-outs during execution.
    A few facts about this ODS:
    There are around 75 restricted and calculated key figures used in the query definition.
    We can't replace this ODS with a cube, as there is a requirement for an InfoSet on it.
    This is in a BW 3.5 environment.
    A few things we tried:
    Secondary indexes were created on the fields that appear on the selection screen of the reports. It has not helped.
    The restriction/calculation logic in the query definition could be moved to the backend. Will that make a difference?
    Question:
    Can you suggest ways to improve the query performance of this ODS?
    Your immediate response is highly appreciated. Thanks in advance.

    Hey!
    I think Oliver's questions are good. 320 million records are too much for an ODS. If you can get rid of the InfoSet, that would be helpful. Why exactly do you need it? If you don't need it, you could partition your ODS by a characteristic and report over a MultiProvider.
    Is there a way to delete some data from the ODS?
    Maybe you will upgrade to 7.0 soon? There you can use InfoSets on InfoCubes.
    You could also try precalculation, as Sam says. This is possible with the reporting agent or Information Broadcasting; then you have the result in your cache. Make sure your cache is large enough. Maybe you can use a table or something similar.
    Do you just need to run one or a few special reports at a special time? Maybe you can make an update into another ODS, writing just the result into it. For this you can use update rules, or maybe the Analysis Process Designer (transaction RSANWB) is the better way.
    Maybe it is also possible to increase the parameter for your dialog runtime, rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise look here: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61)
    Best regards,
    Peter

  • How to improve query performance when querying with MTART

    Hi all,
    I need to know the best way to query MATNR when I have values for MTART and WERKS.
    When a user selects an MTART and WERKS, a whole bunch of MATNRs related to them are displayed. How do I improve this query so that it does not take a long time?
    Thanks
    William Wilstroth

    Is this what you are looking for?
    * Selection from MARA joined to MARC; <itab> is your target internal table.
    select a~matnr a~mtart ... b~werks ...
           into table <itab>
           from mara as a
           inner join marc as b
           on a~matnr = b~matnr
           where a~mtart = p_mtart
           and   b~werks = p_werks.
    There is an index on MTART in table MARA:
    T - Material Type
    Kind Regards
    Eswar

  • How to improve query & loading performance.

    Hi All,
    How can I improve query and loading performance?
    Thanks in advance.
    Regards
    Shoba

    Hi Shoba,
    There are a lot of things you can do to improve query and loading performance.
    Please refer to OSS note 557870: Frequently asked questions on query performance.
    Also refer to these
    weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    Here is the OSS note's FAQ on query performance:
    1. What kind of tools are available to monitor overall query performance?
    1. BW Statistics
    2. BW Workload Analysis in ST03N (use Expert Mode!)
    3. Content of table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
    RSA1, choose Tools -> BW statistics for InfoCubes
    (choose OLAP and WHM for your relevant cubes)
    3. What kind of tools are available to analyze a specific query in detail?
    1. Transaction RSRT
    2. Transaction RSRTRACE
    4. Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the number given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
    Check:
    1. If the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
    2. If the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
    3. If buffers, I/O, CPU, or memory on the database server are exhausted
    4. If cube compression is used regularly
    5. If database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1. If the CPUs on the application server are exhausted
    2. If the SAP R/3 memory setup is done properly (use TX ST02 to find bottlenecks)
    3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data being transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1. Again, you can use ST03N -> BW System Load.
    2. Depending on the time frame you select, you get historical or current data.
    3. To get to a specific query you need to drill down using the InfoCube name.
    4. Use the aggregation 'Query' to get more runtime information about a single query. Use the "All data" tab to get to the details (DB, OLAP, and frontend time, plus selected/transferred records, plus the number of cells and formats).
    9. What kind of query performance problems can I recognize using ST03N
    values for a specific query?
    (Use Details to get the runtime segments)
    1. High database runtime
    2. High OLAP runtime
    3. High frontend runtime
    10. What can I do if a query has a high database runtime?
    1. Check if an aggregate is suitable (use "All data" to get the ratio of selected records to transferred records; a high number here is an indicator that an aggregate could improve query performance).
    2. Check if the database statistics are up to date for the cube/aggregate; use TX RSRV output (use the database check for statistics and indexes).
    3. Check if the read mode of the query is unfavourable; recommended is (H).
    11. What can I do if a query has a high OLAP runtime?
    1. Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells").
    2. Use the RSRT technical information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred.
    3. Check whether a user exit is involved in the OLAP runtime.
    4. Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the successor and predecessor columns to see which entry level of the hierarchy is used.
    5. Check if a proper index on the inclusion table exists.
    12. What can I do if a query has a high frontend runtime?
    1. Check if a very high number of cells and formats is transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz).
    3. Check if the bandwidth of the WAN connection is sufficient.
    and some threads:
    how can i increse query performance other than creating aggregates
    How to improve query performance ?
    Query performance - bench marking
    may be helpful
    Regards
    C.S.Ramesh
    [email protected]

  • Query Performance Issues on a cube sized 64GB.

    Hi,
    We have a non-time-based cube whose size is 64 GB. Effectively, I can't use a time dimension for partitioning. The transaction table has ~850 million records. We have 20+ dimensions, among which 2 of the dimensions have 50 million records.
    I have equally distributed the fact table records among 60 partitions. Each partition size is around 900 MB.
    The processing of the cube is not an issue, as it completes in 3.5 hours. The issue is the query performance of the cube.
    When an MDX query is submitted, unfortunately, in the majority of cases the storage engine has to scan all the partitions (as our cube is not time-dependent and we can't find a suitable dimension that fits the bill to partition the measure group on).
    I'm aware of the cache warming and usage-based aggregation (UBO) techniques.
    However, the cube is available for users to perform ad hoc queries, and hence the benefits of cache warming and UBO may cease to contribute to the performance gain, as there is a high probability that each user may look at the data from different perspectives (especially when we have 20+ dimensions) as days progress.
    Also, we have 15+ average calculations (calculated measures) in the cube. So the storage engine sends all the granular data that the formula engine might have requested (could be millions of rows) and then performs the average calculation.
    A look at the profiler suggested that a considerable amount of time is spent by the storage engine gathering the records (from 60 partitions).
    FYI - our server has 32 GB RAM and 8 cores, and it is exclusive to Analysis Services.
    I would appreciate comments from anyone who has worked on a large cube that is not time-dependent, and the steps they took to improve ad hoc query performance for users.
    Thanks
    CoolP

    Hello CoolP,
    Here is a good article on tuning query performance in SSAS:
    Analysis Services Query Performance Top 10 Best Practices:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    Hope you can find some helpful clues for tuning your SSAS server's query performance. Moreover, there are two ways to improve the query response time for an increasing number of end users:
    Adding more power to the existing server (scale up)
    Distributing the load among several small servers (scale out)
    For detailed information, please see:
    http://technet.microsoft.com/en-us/library/cc966449.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • How to test SQL query performance - reliably?

    I have certain queries and I want to test which one is faster, and how big the difference is.
    How can I do this reliably?
    The problem is, when I execute the queries, Oracle does its caching and execution planning and whatnot, and the results of the queries depend on the order in which I execute them.
    Example: query A and query B, supposed to return the same data.
    query A, run 1: 587 seconds
    query A, run 2: 509 seconds
    query B, run 1: 474 seconds
    query B, run 2: 451 seconds
    It would seem that B is somewhat faster than A, but if I change the order and execute B before A, the results are different.
    Also, I'm running the queries in SQL Developer, and it only fetches the first 100 rows. How can I remove this effect and simulate the real scenario where all rows are fetched?
    I can also use EXPLAIN PLAN and look at the costs, but I'm not sure how much I can trust those either. I understand they are only estimations, and even if cost(A) = 1.5 * cost(B), B could still end up executing faster in practice due to inaccuracies in the cost calculation... right? EDIT: actually, even if cost(A) = 5000 * cost(B), B can still execute faster. It seems query A's cost is 15836 and B's cost is 3, while A seems to be faster in practice.
    Edited by: user620914 on 19-Jan-2010 01:42

    user620914 wrote:
    I have to say I don't understand your point either :)
    What are you saying, that people should not test their SQL performance? That tools such as autotrace are useless?
    No... what I'm saying is that you need a baseline to make an informed decision about SQL performance.
    What does a 4-second SQL performance mean for query foo? Nothing, really. Wearing my DBA cap, I would point out that this is actually utterly useless for me to determine the impact of your query on production, or to use to determine how to scale it.
    If instead you tell me that it hits that table using an index range scan, I know what it is doing and have a far better idea what it will do to the production instance.
    Thus my questioning of this "elapsed time" measurement approach. I as a DBA cannot use it... and I'm not sure what benefit (wearing my developer hat) you will find in it either.
    user620914 wrote:
    You can form your SQL queries better or worse, or select your table structure / indexes better or worse. Some choices may end up executing orders of magnitude slower than others. Obviously you can't get exact measurements ("this query executes in 43123 ns") and there are a lot of unpredictable variables that affect the end performance. Still, it's often better to test your queries' / tables' performance before implementing them in the application than not.
    Exactly. I'm not questioning the fact that optimising your code (and ALL your code, not just SQL) is a Good Thing (tm) - but how you go about that optimisation process.
    For example, your PL/SQL code fires off a query. It returns on average 10,000 rows, hits a single partition (SQL enables partition pruning) and then uses a local bitmap index to identify the rows.
    An optimal query by the sounds of it, and one that will perform and scale well... even when the database instance needs to service 100 clients using your code and running this query.
    Only, the code does a single bulk collect of all the rows and stuffs them into dedicated process memory (PGA). Servicing 100 clients means that dedicated server memory is now needed for 100 x 10,000 rows; there's insufficient free memory, causing the kernel to start swapping pages in and out of memory heavily as all 100 client sessions are active and want to process the rows returned by the optimal query.
    What happens to scalability and performance now?
    Testing for performance is not simply measuring a query and then trying to use that, or extrapolate from that, to determine application performance and the impact on production.
    It starts with the design of the tables, the design of the application, and the writing of the code (application and SQL). It is not something that should be done after the fact, as in "okay, application all done, let's see how she performs!" - and especially not using time as the baseline for performance measurement.
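    As a footnote to the original question about SQL Developer fetching only the first 100 rows: one common workaround (a sketch, not taken from the thread) is to time the candidates in SQL*Plus with AUTOTRACE TRACEONLY, which executes the statement and fetches every row without displaying it. The STATISTICS option assumes the PLUSTRACE role has been granted; the query below is a placeholder.
    SET TIMING ON
    SET ARRAYSIZE 500                    -- larger fetch size cuts round-trip overhead
    SET AUTOTRACE TRACEONLY STATISTICS   -- execute and fetch all rows, discard output
    -- run each candidate several times and compare the later (cached) runs
    SELECT /* candidate A */ * FROM all_objects;
    SELECT /* candidate A */ * FROM all_objects;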

  • How to improve query performance at the report level and designer level

    How can I improve query performance at the report level and at the designer (universe) level?
    Please let me know in detail.

    First, it all depends on the design of the database, the universe, and the report.
    At the universe level, you have to check your contexts very well to get optimal performance out of the universe, and also your joins: keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on),
    and when you create a parameter, try to match it to the key fields in the database.
    Good luck
    Amr
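    To illustrate the "joins on key fields" advice above with a minimal sketch (the orders/customers schema below is hypothetical, not from the thread): the join compares indexed key columns directly, and the filter leaves the column unwrapped so an index can be used.
    SELECT o.order_id, c.customer_name
    FROM   orders o
    JOIN   customers c
      ON   c.customer_id = o.customer_id      -- key-to-key join on indexed columns
    WHERE  o.order_date >= DATE '2024-01-01'; -- no function around the column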

  • How to measure the performance of a SQL query?

    Hi Experts,
    How do I measure the performance, efficiency and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How do I identify whether I am writing an optimal query?
    I am using Oracle 9i...
    It will be useful for me to be able to write efficient queries....
    Thanks & Regards

    psram wrote:
    How do I measure the performance, efficiency and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How do I identify whether I am writing an optimal query?
    I am using Oracle 9i...
    You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, the number of sorts, etc.
    This gives you an indication of the effectiveness of your statement, so you can check how many logical I/Os (and physical reads) had to be performed.
    Note however that there are more things to consider, as you've already mentioned: the CPU cost is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is also credited only in a very limited way (number of sorts); for example, it doesn't cover any writes to temporary segments due to sort or hash operations spilling to disk, etc.
    You can use the following approach to get a deeper understanding of the operations performed by each row source:
    alter session set statistics_level=all;
    alter session set timed_statistics = true;
    select /* findme */ ... <your query here>
    SELECT
             SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
             OBJECT_NAME,
             CARDINALITY,
             LAST_OUTPUT_ROWS,
             LAST_CR_BUFFER_GETS,
             LAST_DISK_READS,
             LAST_DISK_WRITES
    FROM     V$SQL_PLAN_STATISTICS_ALL P,
             (SELECT *
              FROM   (SELECT   *
                      FROM     V$SQL
                      WHERE    SQL_TEXT LIKE '%findme%'
                               AND SQL_TEXT NOT LIKE '%V$SQL%'
                               AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
                      ORDER BY LAST_LOAD_TIME DESC)
              WHERE  ROWNUM < 2) S
    WHERE    S.HASH_VALUE = P.HASH_VALUE
             AND S.CHILD_NUMBER = P.CHILD_NUMBER
    ORDER BY ID
    /
    Check the V$SQL_PLAN_STATISTICS_ALL view for more available statistics. In 10g there is a convenient function, DBMS_XPLAN.DISPLAY_CURSOR, which can show this information with a single call, but in 9i you need to do it yourself.
    Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
    http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
    http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
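    For readers on 10g or later, a minimal sketch of the DBMS_XPLAN.DISPLAY_CURSOR shortcut mentioned above (the COUNT against DUAL is just a placeholder statement):
    -- The hint gathers row-source statistics for this one statement.
    SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM dual;
    -- NULL, NULL means "last statement executed in this session"; in SQL*Plus run
    -- with SET SERVEROUTPUT OFF so the last cursor is not a DBMS_OUTPUT call.
    -- 'ALLSTATS LAST' prints actual rows, buffer gets and time per plan step.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));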
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to improve the performance of one select query in a program

    Hi,
    I am facing a performance issue in one program. I have given part of the program's code below.
    It is taking a lot of time in the select query below. How can I improve the performance?
    A quick response is highly appreciated.
    Program code
    DATA: BEGIN OF t_dels_tvpod OCCURS 100,
    vbeln LIKE tvpod-vbeln,
    posnr LIKE tvpod-posnr,
    lfimg_diff LIKE tvpod-lfimg_diff,
    calcu LIKE tvpod-calcu,
    podmg LIKE tvpod-podmg,
    uecha LIKE lips-uecha,
    pstyv LIKE lips-pstyv,
    xchar LIKE lips-xchar,
    grund LIKE tvpod-grund,
    END OF t_dels_tvpod.
    DATA: l_tabix LIKE sy-tabix,
    lt_dels_tvpod LIKE t_dels_tvpod OCCURS 10 WITH HEADER LINE,
    ls_dels_tvpod LIKE t_dels_tvpod.
    SELECT vbeln INTO TABLE lt_dels_tvpod FROM likp
    FOR ALL ENTRIES IN t_dels_tvpod
    WHERE vbeln = t_dels_tvpod-vbeln
    AND erdat IN s_erdat
    AND bldat IN s_bldat
    AND podat IN s_podat
    AND ernam IN s_ernam
    AND kunnr IN s_kunnr
    AND vkorg IN s_vkorg
    AND vstel IN s_vstel
    AND lfart NOT IN r_del_types_exclude.
    Waiting for a quick response.
    Best regards,
    BDP

    Bansidhar,
    1) You need to add a check to make sure that the internal table t_dels_tvpod (used in the FOR ALL ENTRIES clause) is not blank. If it is blank, skip the SELECT statement.
    2) Check the performance with and without the clause 'AND lfart NOT IN r_del_types_exclude'. Sometimes NOT causes the select statement not to use an index. Instead of 'lfart NOT IN r_del_types_exclude' use 'lfart IN r_del_types_exclude' and build r_del_types_exclude using r_del_types_exclude-sign = 'E' instead of 'I'.
    3) Make sure that the table used in the FOR ALL ENTRIES clause has unique delivery numbers.
    Try doing something like this.
    TYPES: BEGIN OF ty_del_types_exclude,
             sign(1)   TYPE c,
             option(2) TYPE c,
             low       TYPE likp-lfart,
             high      TYPE likp-lfart,
           END OF ty_del_types_exclude.
    DATA: w_del_types_exclude TYPE          ty_del_types_exclude,
          t_del_types_exclude TYPE TABLE OF ty_del_types_exclude,
          t_dels_tvpod_tmp    LIKE TABLE OF t_dels_tvpod        .
    IF NOT t_dels_tvpod[] IS INITIAL.
    * Assuming that I would like to exclude delivery types 'LP' and 'LPP'
      CLEAR w_del_types_exclude.
      REFRESH t_del_types_exclude.
      w_del_types_exclude-sign = 'E'.
      w_del_types_exclude-option = 'EQ'.
      w_del_types_exclude-low = 'LP'.
      APPEND w_del_types_exclude TO t_del_types_exclude.
      w_del_types_exclude-low = 'LPP'.
      APPEND w_del_types_exclude TO t_del_types_exclude.
      t_dels_tvpod_tmp[] = t_dels_tvpod[].
      SORT t_dels_tvpod_tmp BY vbeln.
      DELETE ADJACENT DUPLICATES FROM t_dels_tvpod_tmp
        COMPARING
          vbeln.
      SELECT vbeln
        FROM likp
        INTO TABLE lt_dels_tvpod
        FOR ALL ENTRIES IN t_dels_tvpod_tmp
        WHERE vbeln EQ t_dels_tvpod_tmp-vbeln
        AND erdat IN s_erdat
        AND bldat IN s_bldat
        AND podat IN s_podat
        AND ernam IN s_ernam
        AND kunnr IN s_kunnr
        AND vkorg IN s_vkorg
        AND vstel IN s_vstel
        AND lfart IN t_del_types_exclude.
    ENDIF.
