Query Performance - Analyzer vs WAD

Hi experts,
We've been having user complaints regarding performance in BW.  The queries are accessed using the Analyzer/Excel interface.  As a test I've compared opening the same query through Analyzer and then through Web Application Designer.  With Excel my load from S900 was taking almost 2 minutes.  With the web browser my initial load was just 30 seconds!  This is great; however, the number is deceiving, because it's only showing me a subset of the data: the first 25% of the rows are shown, and if I want to view more I have to click 'next page' (labeled 'Next Rows' on screen).  The subsequent set of records (say 25-50%) then takes another 30 seconds.  So if you hit next page each time, you end up with 4 loads of 30 seconds each, i.e. the same performance as Excel (2 minutes total wait), although maybe less stressful to sit and watch.
QUESTION:
It seems like the subsequent loads take 30 seconds because they re-query the back-end data (the S900 cube) instead of paging through the result set already retrieved.  Is that true?  Do you see my problem?  Is there a better way to do this to improve performance?
Thanks for any help you can provide!
-Patrick

hi,
For query performance, please check the following; hope it gives you some ideas...
Prakash's weblogs on this topic:
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
OSS notes
557870 'FAQ BW Query Performance'
and 567746 'Composite note BW 3.x performance Query and Web'
some docs:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc

Similar Messages

  • Bad query performance - how to analyze it?

    Hi all,
    For the last 8 weeks we have seen bad query performance (roughly 30% worse than before) in our BW system. At the moment we use a BIA on revision 49 with 4 blades (16 GB).
    I have already read note 1318214 and found that most of the time is spent on the virtual provider (over 80%!).
    I've seen that a lot of time is spent in the "Data Manager".
    For example: it takes 0.76s to select 3.5 million records from the regular provider and 78s(!) to select 0 records from the virtual provider.
    Information from RSDDSTATTREXSERV:
    RFC Server: 497, BIA client: 464, BIA Kernel: 450, ABAP RFC: 619
    So it seems to be a problem on the BW side. What can we do to improve the performance, or analyze the query performance further?
    Best Regards,
    Jens

    Hi Jens,
    A few checks you may consider doing.
    BIA availability: check the BI connection with BIA.
    Check if you need to rebuild the BIA indexes. SAP recommends doing this regularly, to repair degenerated indexes or to delete indexes that are no longer referenced (e.g. data in the cube was compressed/deleted and the indexes are no longer needed).
    Check if a BIA reorganization is required - this is done to ensure the indexes are evenly distributed across the BIA landscape.
    Try to find out from the BI admin whether major administration work was done within these 8 weeks, e.g. copying a cube, dimension restructuring, copying data to a copy cube, archiving, etc.
    You can use the BIA monitor to perform checks and monitor alerts from the BIA servers:
    [ BIA monitor|http://help.sap.com/saphelp_nw70/helpdata/en/43/7719d270d81a6ee10000000a11466f/content.htm]
    This link tells you about the overall status of the BIA and any actions required.
    It also has sublinks to other important transactions for BIA monitoring and maintenance.
    To go to the BIA monitor: RSA1 -> BIA monitor icon.
    Is your virtual provider reading data from R/3 or BW?
    Virtual providers are generally used to read data from other systems, so they would not have indexes in BIA, if this is the case - except for some applications like BCS, where you may be reading data from BW itself.
    Hope this helps
    Best regards,
    Sunmit.

  • Display Query performance in WAD

    All,
    Is there a way to display query performance statistics in the report result for a web report created in the WAD (is there a standard template/web object)?
    E.g. display the number of records read and the time taken for the query result.
    I am sure I have seen this in SAP demos but don't know if it is a standard content offering.
    Thanks
    Gidon

    Thanks, very helpful.
    I'm using BI 7.0.
    However, when I do as you suggested, I still get the query name, not the query view name.
    The query view should be selected dynamically from the view that is displayed at that moment.
    Also, can you incorporate saved views from the Bex Portfolio and display their names?
    Edited by: Max Cohen on Sep 11, 2008 5:16 PM

  • Urgent: WAD performance is much slower than query performance

    Hi,
       If I execute the report (a query with variables) covering 850,000 records in WAD, it takes more than 900 seconds and ends with 'Connection timed out'.
      If I execute the same query from Query Designer in a web browser, it takes 400 seconds to show all the data in hierarchy or tabular view.
      I've done tuning using RSRT on query read mode, persistence mode, etc.
      Can you please help me?
    Thanks in advance. Points will be given...
    Reg,
      Varun

    Varun,
    Did you ever solve the performance issue of the WAD report?  We are having the same issue: WAD performance is a lot slower than executing it just as a query.
    Thanks

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What query performance issues do we need to take care of? Please explain and let me know the T-codes. Urgent, please.
    What data-loading performance issues do we need to take care of? Please explain and let me know the T-codes. Urgent, please.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves extraction performance, because it reduces the read times of those tables (a minimal index sketch follows this list).
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading, reducing extraction time. If your selection fields are not key fields of the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
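    To make tips 3), 6) and 9) concrete, here is a minimal sketch of the kind of secondary index you might add after an ST05 trace shows high selection time on a DataSource table. The table and field names below are hypothetical placeholders; take the real ones from the trace output, and note that in practice you would define the index in the ABAP Dictionary (SE11) rather than directly on the database:
    -- Hypothetical: ST05 showed an expensive full scan on a DataSource/setup
    -- table, filtered on posting date and sales organization (not key fields).
    CREATE INDEX zsetup_i1 ON zsetup_table (posting_date, sales_org);
    -- Re-run the ST05 trace afterwards to confirm the statement uses the index.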
    Hope it Helps
    Chetan
    @CP..

  • How to improve query performance built on a ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data: ~300 million records covering 2 years.
    Thanks in advance,
    Guru.

    Hi Raj,
    Here are a few tips to help you improve your query performance (a secondary-index sketch follows the checklist).
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local, so as to analyze any filters that can be removed and moved to the global definition of the query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Do as many restrictions as you can before calculations; try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on MultiProviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
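    For an ODS of this size, a secondary index on the active table covering your main restriction fields is often the single biggest win. A minimal sketch with hypothetical names: the active table of an ODS is /BIC/A<ODS name>00, and the two fields below stand in for whatever your query actually restricts on. In BW 3.5 you would normally define such an index in the ODS object maintenance or SE11 rather than directly on the database:
    -- Hypothetical active table of the FI_GL ODS; secondary indexes help when
    -- the query's restrictions are not leading fields of the ODS key.
    CREATE INDEX "/BIC/AZFIGL00~Z01" ON "/BIC/AZFIGL00" (fiscper, comp_code);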

  • Oracle 10g vs Oracle 11g query performance

    Hi everyone,
    We are moving from Oracle 10g to Oracle 11g database.
    I have a query which in Oracle 10g takes 85 seconds to run, but when I run the same query in an Oracle 11g database, it takes 635 seconds.
    I have confirmed that all indexes on tables involved are enabled.
    Does anyone have any pointers on what I should look into? I have compared explain plans and they are clearly different; Oracle 11g is taking a different approach than Oracle 10g.
    Thanks

    Pl post details of OS versions, exact database versions (to 4 digits) and init.ora parameters of the 10g and 11g databases. Have statistics been gathered after the upgrade ?
    For posting tuning requests, pl see these threads
    HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long ...
    Pl see if the SQL Performance Analyzer can help - MOS Doc 562899.1 (TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER)
    HTH
    Srini
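    If statistics were not regathered after the upgrade, that alone can explain a plan change. A minimal sketch of the two usual first steps; the schema name is a placeholder, and the OPTIMIZER_FEATURES_ENABLE value should match your exact 10g patch level:
    -- Regather optimizer statistics for the application schema after the upgrade.
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_OWNER');
    -- Temporarily fall back to 10g optimizer behavior for this session to see
    -- whether the old plan (and the 85-second runtime) comes back.
    ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
    -- Then re-run EXPLAIN PLAN on the slow query and compare:
    -- EXPLAIN PLAN FOR <your query>;  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);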

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO so I can't aggregate it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local, so as to analyze any filters that can be removed and moved to the global definition of the query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • How to improve query performance using infoset

    I created an InfoSet including 4 characteristics and 3 DSOs, all of which are time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows up in BEx Analyzer at all. In that case I have to close BEx Analyzer and open it again; after that it shows real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!

    Hi
    An InfoSet itself doesn't hold any data; it is evaluated as a join over its underlying providers, so query performance depends on those providers.
    Also go through the tips below.
    To find the query runtime, see these notes:
    557870 'FAQ BW Query Performance'
    130696 - Performance trace in BW
    This info may be helpful.
    General tips
    Using aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03 > switch to expert mode > from left side menu > and there in system load history and distribution for a particular day > check query execution time.
    Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
    How to read ST03N datasets from DB
    Try table RSDDSTAT to get the statistics (a query sketch is at the end of this reply).
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important values: the aggregation ratio, and records selected from the DB vs. records transferred to the frontend.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you mean speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates, see the columns valuation and usage for the aggregates.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The valuation is shown as a string of plus or minus signs. The more plus signs, the better the evaluation of the aggregate: its compression ratio is good, it is accessed often, and it satisfies many queries. The more minus signs, the worse the evaluation: poor compression and little access.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    The usage column tells you how often the aggregate has actually been used by queries.
    Thus we can check the performance of the aggregates.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    Alternatively, implement the BW Statistics Business Content: you install it, feed it data, and analyze it through ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with statistics on; you will get a STATUID. Copy this and look it up in the table.
    This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, the aggregate is useless.
    6.
    Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM
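    To pull runtimes straight from the statistics table mentioned above, here is a minimal sketch. It assumes the classic BW 3.x RSDDSTAT layout; the column names are quoted from memory, so verify them in SE11 before running (in BI 7.0 the data moves to the RSDDSTAT_* tables):
    -- Per InfoCube: navigation steps, average DB+OLAP seconds, and the inputs
    -- for the aggregation ratio (records selected / records transferred);
    -- a ratio above ~10 is the classic hint that an aggregate would pay off.
    SELECT infocube,
           COUNT(*)                 AS navsteps,
           AVG(qtimedb + qtimeolap) AS avg_seconds,
           AVG(qdbsel)              AS avg_selected,
           AVG(qdbtrans)            AS avg_transferred
    FROM   rsddstat
    GROUP  BY infocube
    ORDER  BY 3 DESC;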

  • Query Performance Related Issue

    Hi all,
    We have a problem when executing a query in BI 7.0:
    When we execute the query the first time, it takes 2 minutes.
    When we execute the same query a second time (without logging off the Query Designer), it takes 5 seconds.
    Please give me some input: why does it take more time the first time?
    How can we reduce the query execution time for the first run?
    We have already tried all the query performance options.
    Thanks & regards,
    Kiran Manyam.

    Hi Kiran,
    As the others pointed, when you have executed the query for the 2nd time, it might have read from the cache.
    Do the following to investigate the performance issue:
    1) Go to RSRT and run your query in "Execute in debug mode"
    2) Select "Display Statistics Data", check "Do not use cache"
    3) After running the query, press the back button. The system will show the statistics summary.
    4) Analyze which activity has the highest runtime.
    5) If the Data Manager time is >= 30% of the query time, and the ratio of the number of records selected to the number of records transferred is > 10, build an aggregate to help the query runtime. This sacrifices data-load time but will improve query execution time.
    To know which aggregate will help, run the query in debug mode again and choose the option "Display aggregates found". If the query did not hit an aggregate, the names of the cubes will appear. Use the data in the suggestions column (marked with S:) as a basis for your aggregate.
    Hope this helps.

  • Query Performance and Cache

    Hi,
    I am trying to analyze Query Performance in ST03N.
    I run a Query with Property = Cache Inactive.
    I get Runtime as say 180 sec.
    When I run the same query again, the runtime decreases; now it is, say, 150 sec.
    Please explain the reason for the lower runtime when no cache is used.

    Two things could be occurring.  When you say cache is inactive, that probably refers to the OLAP global cache.  The user session still has a local cache, so if the user executes the same navigation in the same session, results are retrieved from the local cache.
    Also, as the others have mentioned, if the same DB query really is executed on the DB a second time, not only does the SQL statement parsing/optimization not need to occur again, but almost certainly some or all of the data still remains in the DB's buffer cache, so it only needs to be retrieved from memory rather than physically read from disk again.
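    On Oracle you can see the buffer-cache effect directly. A hedged sketch - the V$SQL view and its columns are standard, but the LIKE pattern is a placeholder for your own statement text:
    -- Compare physical vs. logical reads across executions of the same statement.
    -- Similar buffer_gets but far fewer disk_reads on later runs means the data
    -- was served from the buffer cache rather than from disk.
    SELECT executions, disk_reads, buffer_gets, elapsed_time / 1e6 AS elapsed_s
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT %your query text%';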

  • Improve query performance

    Hi,
    I am executing one query and it takes 40-45 minutes. Can anybody tell me where the issue is? I have an index on the SUBSCRIPTION table.
    The query is spending its time in a nested loop. Can anybody please help improve the query performance?
    SELECT COUNT(UNIQUE individual_id)
    FROM SUBSCRIPTION S, SOURCE D
    WHERE S.ORDER_DOCUMENT_KEY_CD = D.FULFILLMENT_KEY_CD
    AND prod_abbr = 'TOH'
    AND to_char(source_start_dt,'YYMM') >= '1010'
    AND mke_mag_source_type_cd = 'D';
    select count(*) from source; ----------3,425,131
    select count(*) from subscription;---------394,517,271
    Below is the explain plan:
    Plan
    SELECT STATEMENT CHOOSECost: 219 Bytes: 38 Cardinality: 1                                              
    13 SORT GROUP BY Bytes: 38 Cardinality: 1                                                   
    12 PX COORDINATOR                                              
         11 PX SEND QC (RANDOM) SYS.:TQ10001 Bytes: 38 Cardinality: 1                                         
         10 SORT GROUP BY Bytes: 38 Cardinality: 1                                    
         9 PX RECEIVE Bytes: 38 Cardinality: 1                               
              8 PX SEND HASH SYS.:TQ10000 Bytes: 38 Cardinality: 1                          
              7 SORT GROUP BY Bytes: 38 Cardinality: 1                     
              6 TABLE ACCESS BY LOCAL INDEX ROWID TABLE SUBSCRIPTION Cost: 21 Bytes: 3,976 Cardinality: 284                
                   5 NESTED LOOPS Cost: 219 Bytes: 604,276 Cardinality: 15,902           
              2 PX BLOCK ITERATOR      
                   1 TABLE ACCESS FULL TABLE SOURCE Cost: 72 Bytes: 1,344 Cardinality: 56
                   4 PARTITION HASH ALL Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16     
                   3 INDEX RANGE SCAN INDEX XAK1SUBSCRIPTION Cost: 2 Cardinality: 284 Partition #: 12 Partitions accessed #1 - #16
    Please suggest

    A function-based index eliminates the hidden conversion problem: when you wrap the date column in TO_CHAR/TO_NUMBER, a plain index on the column cannot be used. I don't know the indexes/partitioning on your tables - do you? Compare the three cases below:
    drop table test;
    create table test as select level id, sysdate + level/24/60/60 datum from dual connect by level < 10000;
    create index idx1 on test(datum);
    analyze table test compute statistics;
    explain plan for select count(*) from test where to_char(datum,'YYYYMMDD') > '20120516';   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 3467505462                                                    
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |    
    |   0 | SELECT STATEMENT   |      |     1 |     7 |     7  (15)| 00:00:01 |    
    |   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |    
    |*  2 |   TABLE ACCESS FULL| TEST |   500 |  3500 |     7  (15)| 00:00:01 |    
    Predicate Information (identified by operation id):                            
       2 - filter(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD')>'20120516')       
    explain plan for select count(*) from test where datum > trunc(sysdate);   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 2330213601                                                    
    | Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     | 
    |   0 | SELECT STATEMENT      |      |     1 |     7 |     7  (15)| 00:00:01 | 
    |   1 |  SORT AGGREGATE       |      |     1 |     7 |            |          | 
    |*  2 |   INDEX FAST FULL SCAN| IDX1 |  9999 | 69993 |     7  (15)| 00:00:01 | 
    Predicate Information (identified by operation id):                            
       2 - filter("DATUM">TRUNC(SYSDATE@!))                                        
    drop index idx1;
    create index idx1 on test(to_number(to_char(datum,'YYYYMMDD')));
    analyze table test compute statistics;
    explain plan for select count(*) from test where to_number(to_char(datum,'YYYYMMDD')) > 20120516;   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 227046122                                                     
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |     
    |   0 | SELECT STATEMENT  |      |     1 |     5 |     2   (0)| 00:00:01 |     
    |   1 |  SORT AGGREGATE   |      |     1 |     5 |            |          |     
    |*  2 |   INDEX RANGE SCAN| IDX1 |     1 |     5 |     2   (0)| 00:00:01 |     
    Predicate Information (identified by operation id):                            
       2 - access(TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION("DATUM"),'YYYYMMDD'))>       
                  20120516)                                                        
    explain plan for select count(*) from test where datum > trunc(sysdate);   
    SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT                                                              
    Plan hash value: 3467505462                                                    
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |    
    |   0 | SELECT STATEMENT   |      |     1 |     7 |     7  (15)| 00:00:01 |    
    |   1 |  SORT AGGREGATE    |      |     1 |     7 |            |          |    
    |*  2 |   TABLE ACCESS FULL| TEST |  9999 | 69993 |     7  (15)| 00:00:01 |    
    Predicate Information (identified by operation id):                            
       2 - filter("DATUM">TRUNC(SYSDATE@!))                                        
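    Applying the same principle to the original query: compare dates as dates instead of wrapping the column in TO_CHAR, so a plain index that includes source_start_dt stays usable. A hedged sketch - it assumes source_start_dt is a DATE column and that '1010' in 'YYMM' format meant October 2010 (the two predicates are only equivalent if all dates fall after the year 2000, since 'YYMM' strings from the 1990s would also compare greater than '1010'):
    SELECT COUNT(UNIQUE individual_id)
    FROM SUBSCRIPTION S, SOURCE D
    WHERE S.ORDER_DOCUMENT_KEY_CD = D.FULFILLMENT_KEY_CD
    AND prod_abbr = 'TOH'
    AND mke_mag_source_type_cd = 'D'
    -- sargable: no function around the column, so an index range scan is possible
    AND source_start_dt >= TO_DATE('2010-10-01', 'YYYY-MM-DD');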

  • How to improve Query Performance

    Hi Friends...
    I want to improve query performance. I need the following things:
    1. What is the process to find out the performance? Which transaction codes are there, and how do I use them?
    2. How can I tell whether a query is running well or badly, i.e. from a performance perspective?
    3. I want to see the values, i.e. how much time it is taking to run, and where the defect is.
    4. How do I improve the query performance? After I've done the needful things to improve performance, I want to see the query execution time again, i.e. whether it is running fast or not.
    E.g.:
    Eg 1. Need to create aggregates.
    Solution: where can I create aggregates? I'm in the production system now, so where do I need to create them - in development, quality or production?
    Do I need to make any changes in development, given that I'm in production?
    So please tell me the solutions to my questions.
    Thanks
    Ganga
    Message was edited by: Ganga N

    hi Ganga
    please refer to OSS note 557870: Frequently asked questions on query performance
    also refer to
    Prakash's weblog
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    These are the OSS note's FAQs on query performance:
    1. What kind of tools are available to monitor the overall Query Performance?
    1.     BW Statistics
    2.     BW Workload Analysis in ST03N (Use Export Mode!)
    3.     Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
      RSA1, choose Tools -> BW statistics for InfoCubes
      (Choose OLAP and WHM for your relevant Cubes)
    3. What kind of tools are available to analyze a specific query in detail?
    1.     Transaction RSRT
    2.     Transaction RSRTRACE
    4.  Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, %Frontend shows a high number for all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
    Check:
    1.     If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
    2.     If database parameter set up accords with SAP Notes and SAP Services   (EarlyWatch)
    3.     If Buffers, I/O, CPU, memory on the database server are exhausted?
    4.     If Cube compression is used regularly
    5.     If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1.     If the CPUs on the application server are exhausted
    2.     If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3.     If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,  Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN  connection and the amount of data which is transferred   is rather high.
    8. Where can I get specific runtime information for one query?
    1.     Again you can use ST03N -> BW System Load
    2.     Depending on the time frame you select, you get historical data or current data.
    3.     To get to a specific query you need to drill down using the InfoCube  name
    4.      Use Aggregation Query to get more runtime information about a   single query. Use tab All data to get to the details.   (DB, OLAP, and Frontend time, plus Select/ Transferred records,  plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N
       values for a specific query?
    (Use Details to get the runtime segments)
    1.     High Database Runtime
    2.     High OLAP Runtime
    3.     High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1.     Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would  be an indicator for query performance improvement using an aggregate)
    2.     Check if the database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
    3.     Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
    1.     Check if a high number of Cells transferred to the OLAP (use  "All data" to get value "No. of Cells")
    2.     Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before   Aggregation, Virtual Char. Key Figures, Attributes in Calculated   Key Figs, Time-dependent Currency Translation)  together with a high number of records transferred.
    3.     Check if a user exit Usage is involved in the OLAP runtime?
    4.     Check if large hierarchies are used and the entry hierarchy level is  as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    5.     Check if a proper index on the inclusion  table exist
    12. What can I do if a query has a high frontend runtime?
    1.     Check if a very high number of cells and formatting are transferred   to the Frontend (use "All data" to get value "No. of Cells") which   cause high network and frontend (processing) runtime.
    2.     Check if frontend PC are within the recommendation (RAM, CPU MHz)
    3.     Check if the bandwidth for WAN connection is sufficient
    REWARDING POINTS IS THE WAY OF SAYING THANKS IN SDN
    CHEERS
    RAVI

  • Hierarchical Query Performance

    Hi Folks,
    I need some quick help from you. I have a hierarchical query that is used in the front-end application.
    WITH EMP ([EMP_ID], [NAME], [MANAGER_NAME], [MANAGER_ID], [CHGS]) AS
    ( SELECT [EMP_ID], [NAME], [MANAGER_NAME], [MANAGER_ID], [CHGS]
      FROM EMPLOYEE
      WHERE ([MANAGER_ID] = 101 OR [EMP_ID] = 101)
        AND BILLDATE BETWEEN '10-APRIL-2013' AND '10-MAY-2013'
      UNION ALL
      SELECT a.[EMP_ID], a.[NAME], a.[MANAGER_NAME], a.[MANAGER_ID], a.[CHGS]
      FROM EMPLOYEE AS a
      INNER JOIN EMP b ON a.[MANAGER_ID] = b.[EMP_ID] )
    SELECT ROUND(SUM(DISTINCT [CHGS]) / 100, 0) AS charges FROM EMP;
    The EMPLOYEE table has 2,000 records, and the query takes 6 minutes.
    Can you kindly suggest how to improve the performance of this query? I appreciate your help.
    Thanks,
    Ven.

    Hi Ven Raja,
    As in the other post, I agree with Erland. When the tables are much larger, indexes can provide considerable performance improvements for certain queries. Usually, creating useful indexes is one of the most important ways to achieve better query performance.
    You can evaluate the selectivity of an index by running the sp_show_statistics stored procedures, avoid indexing small tables, and so on; you can review the following article.
    http://technet.microsoft.com/en-us/library/ms172984(v=sql.110).aspx
    In addition, you can use SQL Server Performance Tuning to analyze all your indexes to identify non-performing indexes and missing indexes that can improve performance and so on. For more information, see:
    http://blog.sqlauthority.com/sql-server-performance-tuning/
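    For this specific query, the recursive step repeatedly looks up EMPLOYEE rows by MANAGER_ID, one hierarchy level at a time, so that column is the natural index candidate. A hedged sketch using the column names from the query above (dbo and the index name are placeholders; if EMP_ID is already the clustered primary key, it need not be listed in INCLUDE):
    -- Lets the recursive join (a.MANAGER_ID = b.EMP_ID) seek instead of scan;
    -- INCLUDE covers the selected columns so no key lookups are needed.
    CREATE NONCLUSTERED INDEX IX_EMPLOYEE_MANAGER_ID
        ON dbo.EMPLOYEE ([MANAGER_ID])
        INCLUDE ([EMP_ID], [NAME], [MANAGER_NAME], [CHGS], BILLDATE);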
    Regards,
    Sofiya Li
    TechNet Community Support

  • SQL Performance Analyzer Information

    Hi,
    I would like to use SQL Performance Analyzer to test changes between 10g and 11g databases.
    I will use the Parameter Change option in the SQL Performance Analyzer workflows and test changing the parameter OPTIMIZER_FEATURES_ENABLE.
    Now, if I find some query that should be optimized, how can I save the new profiles from the test environment and apply them after I upgrade the production environment?
    tnx

    >
    I would like to use SQL Performance Analyzer to test changes between 10g and 11g databases.
    I will use the Parameter Change option in the SQL Performance Analyzer workflows and test changing the parameter OPTIMIZER_FEATURES_ENABLE.
    Now, if I find some query that should be optimized, how can I save the new profiles from the test environment and apply them after I upgrade the production environment?
    >
    To me, your question translates to: 'How can I export SQL Profiles from test and import them into production?'
    Answer: Use
    DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF
    DBMS_SQLTUNE.PACK_STGTAB_SQLPROF
    to create a staging table on test and store the SQL Profiles in it.
    Then Data Pump that table to production and run
    DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF
    there.
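    In code, a minimal sketch of that flow; the staging table name 'SPA_STAGE' is a placeholder:
    -- On the test system: create the staging table and pack all profiles into it.
    BEGIN
      DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'SPA_STAGE');
      DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(staging_table_name => 'SPA_STAGE');
    END;
    /
    -- Move SPA_STAGE to production with Data Pump (expdp/impdp), then unpack:
    BEGIN
      DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(replace => TRUE, staging_table_name => 'SPA_STAGE');
    END;
    /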
    Kind regards
    Uwe Hesse
    http://uhesse.wordpress.com
