Is the aggregate DB index required for query performance?

Hi,
In the cube's Manage tab, when I check the aggregate DB index it shows red. My doubt is: is the DB index required for query performance?
Should the Check Statistics status also be green?
Please let me know.
Regards,
Kiran

Hi,
Yes, it improves report performance.
Using the Check Indexes button, you can check whether indexes already exist and whether these existing indexes are of the correct type (bitmap indexes).
Yellow status display: There are indexes of the wrong type
Red status display: No indexes exist, or one or more indexes are faulty
You can also list missing indexes using transaction DB02, pushbutton Missing Indexes. If a lot of indexes are missing, it can be useful to run the ABAP reports SAP_UPDATE_DBDIFF and SAP_INFOCUBE_INDEXES_REPAIR.
See SAP Help
http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
Thanks
Reddy
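
Not part of the original reply, but as a rough sketch of what that index check corresponds to on the database side, assuming an Oracle database and a hypothetical custom cube ZSALES (custom InfoCube fact tables are typically named /BIC/F<cube>); the secondary indexes on the fact table's dimension key columns are expected to be of type BITMAP:

    -- List the indexes on the F fact table of a hypothetical cube ZSALES
    -- and check their type and status (wrong type -> yellow, missing/faulty -> red).
    SELECT index_name, index_type, status
      FROM dba_indexes
     WHERE table_name = '/BIC/FZSALES';

In practice, the Check Indexes button and transaction DB02 mentioned above are the simpler way to get the same information.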

Similar Messages

  • Help required for improving performance of the Query

    Hello SAP Techies,
    I have an MRP query which shows inventory projection by calendar year/month.
    There are 2 variables, Plant and Material, in the free characteristics, which are restricted by replacement path from a query result.
    The other query is the Control M query, which is based on a multiprovider. The multiprovider is created on 5 cubes.
    The query takes 15-20 minutes to get the result.
    Due to the replacement path by query result for the 2 variables, the Control M query is executed first. Business wanted to see in the MRP query all those materials which are allocated to the base plant, hence they designed the query to use replacement path by query result. So it gets all the materials and plants from the Control M query and then finds the inventory projection for the same selection in the MRP query.
    Is there any way I can improve the performance of the query?
    Query performance has been discussed innumerable times in the forums, and there is a lot of information on the blogs and the WIKI. Please search the forums before posting, and if the existing posts do not answer your question satisfactorily then raise a new post; otherwise almost all the answers you get will be rehashed versions of previous posts (and in most cases without attribution to the original author).
    Edited by: Arun Varadarajan on Apr 19, 2011 9:23 PM

    Hi,
    Please see if you can make these changes to the report. They will help in improving the performance of the query:
    1. Select the right read mode. Reading data during navigation minimizes the impact on the application server resources, because only data that the user requires will be retrieved.
    2. Leverage filters as much as possible. Using filters reduces the number of database reads and the size of the result set, thereby significantly improving query runtimes. Filters are especially valuable when associated with "big dimensions" where there is a large number of characteristics, such as customers and document numbers.
    3. Reduce RKFs in the query to as few as possible. Also, define calculated and restricted key figures at the InfoProvider level instead of locally within the query.
    Regards
    Garima

  • Step by step usage of RSRT for Query performance

    Hi,
    Can I get any step-by-step document on how to test query performance using RSRT?
    I am sorry if this has been posted already. I tried my best to search for it but could not find it.
    Thanks
    Ananth

    OLAP cache settings:
    RSRT -> choose a query -> Properties -> Cache Mode: Persistent Cache per Application Server, Persistence Mode: Flat File.
    RSRT
    http://www.sap-press.de/download/dateien/1049/sappress_bw_performance_optimization_guide_080.pdf

  • Programme required for query output

    Hi,
    We have created a query in the system which gives an output of non-printed invoices. The user selects invoices from this list and prints them manually.
    Now the user has come up with a requirement that he wants a program that selects the query output (non-printed invoices) and automatically adds the output type XXXX.
    Please let me know: does anyone know about such a program?
    Thanks in advance
    Srikky

    Hi,
    1. Sorry, we have some limitations in the query.
    2. Copy the logic into a Z program and create it newly.
    3. Add checkboxes for each invoice in the report.
    4. Keep Select All and Deselect All pushbuttons at the top.
    5. Write a spool request inside the program which converts the output into PDF.
    6. Execute it to get the printout.
    Or:
    An output is generated automatically when you save an invoice.
    Prerequisites: a spool program has to be attached to the driver program, and you execute the spool by selecting it.
    regards,
    balajia

  • Memory requirements for query

    Hello,
    Is there some way I could find out how much memory a query will demand to be successfully executed?
    Maybe by seeing how big each object referenced in the FROM part of the clause is?

    OK, thank you.
    By "big" I meant: would it help for this memory calculation if we use sum(bytes) from _segments for all the objects referenced in that query?
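
    Not from the original thread, but as a minimal sketch of that kind of size check, assuming Oracle's USER_SEGMENTS view and placeholder table names; note that segment size only gives the on-disk size of the referenced objects, not the work-area memory (PGA/TEMP) the query will actually need for sorts and hash joins:

        -- Rough on-disk size of the objects referenced in the FROM clause
        -- (placeholder table names; adjust the list to the actual query).
        SELECT segment_name,
               ROUND(SUM(bytes) / 1024 / 1024, 1) AS size_mb
          FROM user_segments
         WHERE segment_name IN ('TABLE_A', 'TABLE_B', 'TABLE_C')
         GROUP BY segment_name;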

  • Hardware requirements for portal performance

    I will be using BEA portal 8.1.5 to host a portal application. We are anticipating about 1 million hits per day. I am looking for some recommendations for hardware requirements to handle the load.
    - What kind of hardware (e.g. optron 220/440) and platform (Solaris or Linux) is best suited for WLP?
    And what minimum number of portal servers is recommended to keep every request under 1 second?
    Thanks
    -Shiv

    Shiv,
    WebLogic Portal is largely CPU bound, so my recommendation is to throw as much CPU as you can (measured by total clock-speed, GHz) at it.
    We have tested with Linux and Windows and the performance does not differ that much between the platforms. JRockit is the preferred JVM due to the many optimizations which will make things faster. Solaris should only be used if you have the latest processors (not the old sparc ones) in use.
    I urge you to review the Capacity Planning Guide as a reference to see how well the WebLogic Portal Framework performs. It can be found here: http://e-docs.bea.com/wlp/docs81/capacityplanning/index.html
    Matt
    WLP Performance

  • Explain plan for Query performance

    Hi Gurus,
    I need to improve the performance of a procedure. The procedure has the below query. I don't have an idea of how to improve the performance by looking at the explain plan. Can anyone please help me and explain where I need to change the code?
    The below are the code and Explain plan for the same.
    -----------Code----------------------------
    SELECT IV_STORECODE AS STORECODE,
                                   TO_CHAR(RD.ITEMCODE) AS ITEMCODE,
                                   C.ITEMCATEGORYNAME,
                                   ISUB.ITEMSUBCATEGORY1NAME
                              FROM RECEIPTS R
                             INNER JOIN RECEIPTDETAILS RD
                                ON R.RECEIPTID = RD.RECEIPTID
                             INNER JOIN ITEMCOMPANY IC
                                ON IC.ITEMCODE = RD.ITEMCODE
                             INNER JOIN ITEMCATEGORY C
                                ON IC.ITEMCATEGORY = C.ITEMCATEGORYID
                             LEFT OUTER JOIN ITEMSUBCATEGORY1 ISUB
                                ON ISUB.ITEMSUBCATEGORY1ID = IC.ITEMSUBCATEGORY1
                               AND ISUB.ITEMSUBCATEGORY1ID IN
                                   (SELECT ITEMSUBCATEGORY1ID
                                      FROM ITEMSUBCATEGORY1
                                     WHERE ITEMCATEGORYID = IV_ITEMCATEGORY)
                             INNER JOIN STORE SE
                                ON SE.STORECODE = R.STORECODE
                             WHERE R.STORECODE = IV_STORECODE
                               AND SE.HOSPITALID = IV_HOSPITALID
                               AND TRUNC(R.CREATEDDATE) BETWEEN V_FROMDATE AND
                                   V_TODATE
                               AND R.STATUSID NOT IN (99, 5)
                                  AND
                                   (IV_DRUGTYPE IS NULL OR
                                   IC.DRUGTYPECATEGORYID = IV_DRUGTYPE)
                            UNION
                            SELECT IV_STORECODE AS STORECODE,
                                   TO_CHAR(STD.ITEMCODE) AS ITEMCODE,
                                   C.ITEMCATEGORYNAME,
                                   ISUB.ITEMSUBCATEGORY1NAME
                              FROM STOCKTRANSACTION ST
                             INNER JOIN STOCKTRANSACTIONDETAILS STD
                                ON ST.STOCKTRANSACTIONID = STD.STOCKTRANSACTIONID
                             INNER JOIN ITEMCOMPANY IC
                                ON IC.ITEMCODE = STD.ITEMCODE
                             INNER JOIN ITEMCATEGORY C
                                ON IC.ITEMCATEGORY = C.ITEMCATEGORYID
                              LEFT OUTER JOIN ITEMSUBCATEGORY1 ISUB
                                ON ISUB.ITEMSUBCATEGORY1ID = IC.ITEMSUBCATEGORY1
                               AND ISUB.ITEMSUBCATEGORY1ID IN
                                   (SELECT ITEMSUBCATEGORY1ID
                                      FROM ITEMSUBCATEGORY1
                                     WHERE ITEMCATEGORYID = IV_ITEMCATEGORY)
                             INNER JOIN STORE SE
                                ON SE.STORECODE = ST.STORECODE
                             WHERE ST.STORECODE = IV_STORECODE
                               AND SE.HOSPITALID = IV_HOSPITALID
                               AND TRUNC(ST.CREATEDDATE) BETWEEN V_FROMDATE AND
                                   V_TODATE
                               AND ST.STATUS <> 99
                               AND STD.ITEMCODE NOT LIKE '%#%'
                               AND                   
                                   (IV_DRUGTYPE IS NULL OR
                                   IC.DRUGTYPECATEGORYID = IV_DRUGTYPE)
                            UNION
                            SELECT D.STORECODE,
                                   TO_CHAR(D.ITEMCODE) AS ITEMCODE,
                                   C.ITEMCATEGORYNAME,
                                   ISUB.ITEMSUBCATEGORY1NAME
                              FROM DAILYINVENTORY D
                             INNER JOIN ITEMCOMPANY IC
                                ON IC.ITEMCODE = D.ITEMCODE
                             INNER JOIN ITEMCATEGORY C
                                ON IC.ITEMCATEGORY = C.ITEMCATEGORYID
                              LEFT OUTER JOIN ITEMSUBCATEGORY1 ISUB
                                ON ISUB.ITEMSUBCATEGORY1ID = IC.ITEMSUBCATEGORY1
                               AND ISUB.ITEMSUBCATEGORY1ID IN
                                   (SELECT ITEMSUBCATEGORY1ID
                                      FROM ITEMSUBCATEGORY1
                                     WHERE ITEMCATEGORYID = IV_ITEMCATEGORY)
                             INNER JOIN STORE SE
                                ON SE.STORECODE = D.STORECODE
                             WHERE D.STORECODE = IV_STORECODE
                               AND SE.HOSPITALID = IV_HOSPITALID
                               AND TRUNC(D.UPDATEDDATE) <= V_TODATE
                               AND D.QTY > 0
                               AND D.ITEMCODE NOT LIKE '%#%'
                               AND C.ITEMCATEGORYID = IV_ITEMCATEGORY
                                  AND (IV_DRUGTYPE IS NULL OR
                                   IC.DRUGTYPECATEGORYID = IV_DRUGTYPE)
                               AND (IV_SUBITEMCATEGORY IS NULL OR
                                    ISUB.ITEMSUBCATEGORY1ID = IV_SUBITEMCATEGORY)
    Will post the explain plan...
    Thanks in advance..

    Ensure you also include all the other information people will need to help you, e.g. database version, table structures/relationships and cardinalities, row counts etc.
    See the two threads linked to in the FAQ: {message:id=9360003}

  • OLAP Cache for Query Performance

    Hi Experts,
    I have the below 2 questions before we implement OLAP cache for our queries:
    1) We have 15 important queries which do NOT have any variable/selection screen.
    (The question here is: will it work for those kinds of queries which don't have any selection screen/variant?) --> The client wants to prime the cache for a few queries which don't have a variable screen.
    In this case, say if the data is later filtered on any characteristic, will it take the data from the cache?
    2) I have a query which initially has a few characteristics in the drilldown when we first execute it, and users would be drilling down on many other characteristics after that. So if I want to fill the OLAP cache for this query, what is the best way so that each drilldown in the query gets data from the cache?
    Thank you,
    -Su

    Hi Raghavendra,
    Thanks for your response.
    For the first question, do you mean that even if we do not have any selection variable on the query, we can still fill the OLAP cache for all its values (i.e. no selection means "*")?
    If this is the case, then what do we need to define in General Precalculation (Variable Assignment) while creating a new setting for the query?
    Thanks,
    -Su

  • Bandwidth required for Peoplesoft

    Hi,
    We are doing sizing in terms of the bandwidth required for PeopleSoft v9.0. The server is going to be based in a central location. It is my understanding that Self Service and GP would be the most bandwidth-consuming modules. GP would be run from a central location, and all employees, who will be spread over different geographical locations, would have access to Self Service. Do you have any idea of what the minimum bandwidth required would be in this case? I understand that PeopleSoft is PIA, but there will be some minimum bandwidth requirement for good performance. I would appreciate any input in this regard; even rough estimates or past experiences will be really helpful. Thanks

    I don't have any solid tcp/ip numbers for you. Most of the network traffic is between the web, app, and database servers. If your users can browse the web (MSN, Google, etc) from their location without complaints, they will be able to use PeopleSoft. If you turn on gzip compression in your web profile, that will help a lot. Most of the PeopleSoft JavaScript, CSS, and images are stored as external files so they benefit from client side browser cache. Once a user loads a page, all that will travel back and forth is the HTML that comprises the page itself. Yes, the HTML is a bit bloated as it is based on the HTML design model from several years ago (lots of tables for formatting). A FieldChange, button click, or save will reload the entire page.
    Are your users going to be using 56k modems or something better? Really, latency is probably more important than bandwidth in this case.
    PeopleSoft doesn't maintain a persistent connection between the web server and browser, so I'm not sure that distance is really an issue.
    Just to give you some rough numbers, with JavaScripts, etc in my browser cache, the contents of the Timesheet page is 39k without gzip compression and 6k with gzip compression. So, with gzip compression turned on, you should get much better use of your bandwidth.

  • How to improve query performance built on a ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data, ~300 million records for 2 years.
    Thanx in advance,
    Guru.

    Hi Raj,
    Here are a few tips which will help you in improving your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do a review of the order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on MultiProviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable; it's quite a large hierarchy. The problem is the query takes ages to run, nearly ten minutes. It's built on a DSO, so I can't aggregate it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do a review of the order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • How to improve Query Performance

    Hi Friends...
    I want to improve query performance. I need the following things:
    1. What is the process to find out the performance? Any transaction codes, and how to use them?
    2. How can I know whether the query is running well or badly, i.e. from a performance perspective?
    3. I want to see the values, i.e. how much time it is taking to run, and where the defect is.
    4. How to improve the query performance? After I do the needful things to improve performance, I want to see the query execution time, i.e. whether it is running fast or not.
    E.g.
    Eg 1.   Need to create aggregates.
    Solution: Where can I create aggregates? Currently I'm in the production system, so where do I need to create them, i.e. in Development, Quality, or Production?
    Do I need to make any changes in Development? Because I'm in the production system.
    So please tell me solutions for my questions.
    Thanks
    Ganga
    Message was edited by: Ganga N

    Hi Ganga,
    Please refer to OSS note 557870: Frequently asked questions on query performance.
    also refer to
    Prakash's weblog
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    performance docs on query
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    This is the oss notes of FAQ on query performance
    1. What kind of tools are available to monitor the overall Query Performance?
    1.     BW Statistics
    2.     BW Workload Analysis in ST03N (Use Expert Mode!)
    3.     Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
      RSA1, choose Tools -> BW statistics for InfoCubes
      (Choose OLAP and WHM for your relevant Cubes)
    3. What kind of tools is available to analyze a specific query in    detail?
    1.     Transaction RSRT
    2.     Transaction RSRTRACE
    4.  Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes: Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values.
    5. What can I do if the database proportion is high for all queries?
    Check:
    1.     If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
    2.     If database parameter set up accords with SAP Notes and SAP Services   (EarlyWatch)
    3.     If Buffers, I/O, CPU, memory on the database server are exhausted?
    4.     If Cube compression is used regularly
    5.     If Database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1.     If the CPUs on the application server are exhausted
    2.     If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
    3.     If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,  Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN  connection and the amount of data which is transferred   is rather high.
    8. Where can I get specific runtime information for one query?
    1.     Again you can use ST03N -> BW System Load
    2.     Depending on the time frame you select, you get historical data or current data.
    3.     To get to a specific query you need to drill down using the InfoCube  name
    4.      Use Aggregation Query to get more runtime information about a   single query. Use tab All data to get to the details.   (DB, OLAP, and Frontend time, plus Select/ Transferred records,  plus number of cells and formats)
    9. What kind of query performance problems can I recognize using ST03N
       values for a specific query?
    (Use Details to get the runtime segments)
    1.     High Database Runtime
    2.     High OLAP Runtime
    3.     High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1.     Check if an aggregate is suitable (use All data to get values "selected records to transferred records", a high number here would  be an indicator for query performance improvement using an aggregate)
    2.     Check if database statistics are up to date for the Cube/Aggregate; use TX RSRV output (use the database check for statistics and indexes)
    3.     Check if the read mode of the query is unfavourable - Recommended (H)
    11. What can I do if a query has a high OLAP runtime?
    1.     Check if a high number of Cells transferred to the OLAP (use  "All data" to get value "No. of Cells")
    2.     Use RSRT technical Information to check if any extra OLAP-processing is necessary (Stock Query, Exception Aggregation, Calc. before   Aggregation, Virtual Char. Key Figures, Attributes in Calculated   Key Figs, Time-dependent Currency Translation)  together with a high number of records transferred.
    3.     Check if a user exit Usage is involved in the OLAP runtime?
    4.     Check if large hierarchies are used and the entry hierarchy level is  as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Value feature on the column successor and predecessor to see which entry level of the hierarchy is used.
    5.     Check if a proper index on the inclusion  table exist
    12. What can I do if a query has a high frontend runtime?
    1.     Check if a very high number of cells and formatting are transferred   to the Frontend (use "All data" to get value "No. of Cells") which   cause high network and frontend (processing) runtime.
    2.     Check if frontend PCs are within the recommendations (RAM, CPU MHz)
    3.     Check if the bandwidth for WAN connection is sufficient
    REWARDING POINTS IS THE WAY OF SAYING THANKS IN SDN
    CHEERS
    RAVI

  • Query Performance Issues

    Hi Gurus,
    I'm working on performance tuning at the moment and want some tips regarding query performance tuning, if anyone can help me in that regard.
    The thing is that I have got an idea about the system, and the issues are with DB space, ABAP dumps, problems with free space in tablespaces, no number range buffering, cubes using too many aggregates, large InfoCubes, large ODS objects, and many others.
    So my question is: can anyone tell me how to resolve the issues with the large master data tables and the large ODS? One more important issue is KPIs exceeding their reference values, so any idea how to deal with them?
    Waiting for the valuable responses.
    Thanks in advance
    Regards
    Amit

    Hi Amit
    For query performance issues you can go for:
    Aggregates: they will help you a lot to make your query faster, because the query doesn't hit your cube; it hits the aggregates, which have a much smaller number of records compared to your cube.
    Secondly, I would suggest you use CKFs in place of formulas, if there are any in the query.
    Another thing is to avoid, to the extent possible, the use of navigational attributes. If you want to use them, use them at a minimal level. The reason I am saying so is that during query execution, whenever there is a navigational attribute it adds an unnecessary join to your master data and thus decreases query performance.
    Be specific about rows and columns; if you are not sure of a key figure or a characteristic, then better put it in the free characteristics.
    Use filters if possible.
    If you follow these, I'm sure your query performance will increase.
    Assign points if applicable
    Thanks
    puneet

  • Query performance - bench marking

    Performance of a query at any given time is determined by many variables, and I would like to know how to detect performance issues against a benchmark. One thought was to store some data in a separate cube (and not load any more data into it), run a specific query daily (a few times), and have the runtime info updated in the BW statistics cube. Then, when users complain about performance issues on the system for different queries, we can run this specific query to see if there is really a true performance issue in the system.
    The objective is to establish bench marks for tracking query performance. Any thoughts or suggestions would be appreciated.
    Thanks,
    Sree

    Hi Sree,
    That's a good idea, but I think only the data-volume factor is considered here; other factors like query design, aggregates, frontend, network, etc. also play a role. Take a look at the following for query performance:
    Prakash's weblog on this topic..
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'
    some docs
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://websmp206.sap-ag.de/~sapidb/011000358700001394912002
    hope this helps.

  • Structures Vs RKFs and CKFs In Query performance

    Hi Gurus,
    I am creating a GL query which will return a couple of key figures and some calculations as well, with different GL accounts. I wanted to know which is going to be more beneficial for me: creating restricted key figures and calculated key figures, or just using a structure for all the selections and formula calculations?
    Which option will be better for query performance?
    Thanks in advance

    As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improve query performance and ensure that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
    RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.
