Re: BEx FI GL query performance

Hi Guys,
I have a query that runs on a GL InfoProvider and takes a long time (about 30 minutes) to open the report.
Please give me some suggestions to improve the performance.
Regards,
Anand

Hello,
You can check the following notes:
859456 - Increasing performance using the cache
195634 - Collective note: Performance when executing queries
Also, try to create aggregates and compress the cube.
The query should then work better.
Thank you.
Edward John

Similar Messages

  • Query performance for WEBI on top of BEx

    Hello Everyone,
    Is there a way to get the time taken by a WebI query execution on the BO side and on the BW side? Are these statistics stored somewhere?
    Also, what is the best approach for comparing performance between the original BEx query and the WebI on top of the BEx?
    Thanks for your help.
    Aashish

    Hi,
    I posted some links for the same question here:
    Poor query performance in WebI on top of BEx queries
    Maybe those can help you.
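    On the BW side, once BI statistics are switched on, the OLAP statistics tables record the BW share of each run, which you can set against the BO-side numbers. A minimal sketch, assuming a BI 7 system (RSDDSTAT_OLAP is the BI 7 statistics table; its exact columns vary by support package, so inspect the rows rather than relying on specific field names):

        -- Sample recent BI 7 OLAP statistics records
        -- (Oracle syntax for the row limit).
        SELECT *
          FROM RSDDSTAT_OLAP
         WHERE ROWNUM <= 100;

    The same records feed ST03N, so either route gives you the BW-side portion of the total runtime.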
    Regards
    -Seb.

  • Tuning query performance

    Dear experts,
    I have a question regarding the performance of a BW query.
    It takes 10 minutes to display about 23 thousand lines.
    The query reads its data from an ODS object.
    Based on the WHERE clause of the SELECT statement observed in the Oracle session while the query was running, I created an index for this ODS object.
    After rerunning the query, I found that Oracle used the index when reading this table (the estimated cost dropped from about 3000 to 2).
    However, it takes the same time as before.
    Is there any other reason or other factors that I should consider in tuning the performance of this query?
    Thanks in advance

    Hi David,
    Query performance when reporting on an ODS object is slower than on InfoCubes, InfoSets, MultiProviders, etc., because a DSO has no aggregates or the other performance techniques available to those providers.
    Basically, for a DSO/ODS you need to turn on the BEx reporting flag, which is itself an overhead for query execution and affects performance.
    To improve performance when reporting on an ODS, you can create secondary indexes from the BW workbench, as sketched below.
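    Purely as an illustration of what such a secondary index amounts to at the database level (a minimal sketch: the active-table name /BIC/AZSALES00 and both columns are hypothetical; on a real system you would define the index from the ODS maintenance screen, not directly in SQL):

        -- Hypothetical: the active table of an ODS "ZSALES" is /BIC/AZSALES00.
        -- Index the fields that appear in the query's WHERE clause.
        CREATE INDEX "/BIC/AZSALES00~Z01"
            ON "/BIC/AZSALES00" ("/BIC/ZCUSTNO", "/BIC/ZFISCPER");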
    Please check the below links.
    [Re: performance issues of ODS]
    [Which criteria to follow to pick InfoObj. as secondary index of ODS?]
    Hope this helps.
    Regards,
    Haritha.

  • How to improve query performance using infoset

    I created one InfoSet that includes 4 characteristics and 3 DSOs, all of which are time-dependent. When the query runs, the system shows very poor performance, and sometimes no data appears in the BEx Analyzer at all. In that case I have to close the BEx Analyzer first and then open it again; after that it shows the real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please let me know, thanks!

    Hi
    An InfoSet itself doesn't hold any data, so its performance depends on the InfoProviders it joins.
    Also go through the tips below.
    To find the query run-time, see the thread "Where to find the query run-time?" and these notes:
    Note 557870 - FAQ: BW Query Performance
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips
    Using aggregates and compression.
    Using fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs (restricted and calculated key figures) where possible.
    3. Avoid placing many characteristics in the rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03, switch to expert mode, and in the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
    Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
    How to read ST03N datasets from DB
    Try table RSDDSTATS to get the statistics.
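    For example, a rough aggregation over that table (a minimal sketch; QUERYID and RUNTIME stand in for the release-specific field names, so check the actual structure in SE11 first):

        -- Total recorded runtime per query from the BW statistics table.
        -- QUERYID/RUNTIME are placeholders for release-specific columns.
        SELECT queryid, SUM(runtime) AS total_runtime
          FROM rsddstats
         GROUP BY queryid
         ORDER BY total_runtime DESC;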
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This pre-fills the OLAP cache, so later report executions will retrieve their results faster from the cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the number of records transferred to the front end versus the number selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the cube's performance metric, measure the query runtime.
    3. To check the performance of the aggregates, see the valuation and usage columns.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The valuation column rates the aggregate design with plus and minus signs. The more plus signs, the better the evaluation: the aggregate compresses well, is accessed frequently, and satisfies a large number of queries (in effect, performance is good). The more minus signs, the worse the evaluation.
    "-----" means the aggregate is pure overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    The usage column shows how often the aggregate has actually been used by queries.
    Together, these two columns let you judge the performance of an aggregate.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Performance issue related to aggregates
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI admin cockpit to work.
    You can also implement the BW Statistics Business Content: install it, feed it data, and use the ready-made reports it provides for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to transaction DB20, which gives you performance-related information on partitions, databases, schemas, buffer pools, tablespaces, etc.
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF.
    Run the query in RSRT with statistics switched on; when you come back you will get a STATUID. Copy it and look it up in the table.
    This shows exactly which InfoObjects the query hits; if any one of those objects is missing from the aggregate, the aggregate is useless for that query.
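    For example (a minimal sketch; paste in the STATUID copied from the RSRT statistics output, and note that the exact column set differs by release):

        -- List which InfoObjects the statistics run touched for this STATUID.
        SELECT *
          FROM RSDDSTATAGGRDEF
         WHERE STATUID = '<statuid-from-rsrt>';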
    6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in this table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Display Query performance in WAD

    All
    Is there a way to display the query performance statistics in the report result for a web report created in the WAD (is there a standard template/web object)?
    E.g. display the number of records read and the time taken for the query result.
    I am sure I have seen this in SAP demos, but I don't know if it is a standard content offering.
    Thanks
    Gidon

    Thanks, very helpful.
    I'm using BI 7.0.
    However, when I do as you suggested, I still get the query name, not the query view name.
    The query view should be selected dynamically from the view that is displayed at that moment.
    Also, can you incorporate saved views from the Bex Portfolio and display their names?
    Edited by: Max Cohen on Sep 11, 2008 5:16 PM

  • How to test query performance

    Hi experts,
    I know that we can test query performance using the RSRT debug mode, but there are so many options in RSRT that I am confused.
    Can anybody teach me, or send me docs on testing query performance using RSRT?
    [email protected]

    Hi Nrupal,
    check
    http://help.sap.com/saphelp_nw70/helpdata/en/a0/2a183d30805c59e10000000a114084/frameset.htm
    A PDF on the BEx Monitor (RSRT) is available on SDN:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2d589190-0201-0010-f19a-c74465ce6e0f
    Also review the query performance OSS notes and docs:
    https://forums.sdn.sap.com/click.jspa?messageID=1182708
    hope this helps.

  • Query performance slow

    Hi Experts,
    Please clarify my doubts.
    1. How can we tell that a particular query is performing slowly among all of our queries?
    2. How can we define a cell in BEx?
    3. An InfoCube is an InfoProvider, but an InfoObject is not an InfoProvider. Why?
    Thanks in advance

    Hi,
    1. How can we tell that a particular query is performing slowly?
       When a query takes a long time to run, you can collect statistics to see where the time is being spent: select your cube and set the BI statistics checkbox, and the system will then record all the statistics data for your query, such as DB time (database time), front-end time (query), aggregation time, etc. Based on that, you choose performance measures such as aggregates, compression, indexes, etc.
    2. How can we define a cell in BEx?
       The cell editor is enabled when your BEx query uses two structures. You use it when you want to create different formulas row by row.
    3. An InfoCube is an InfoProvider, but an InfoObject is not. Why?
       An InfoObject can also be an InfoProvider: you can convert it into one using "Convert as data target".
    Thanks and Regards,
    Venkat.
    Edited by: venkatewara reddy on Jul 27, 2011 12:05 PM

  • Easiest way to monitor query performance?

    Hi,
    What is the easiest method to track query performance? I need information about data collection from DataSource, data processing speed and front-end performance.
    I would like to compare the results when:
    1. the query is executed directly from BEx/WAD and
    2. the query accessed from portal.
    So basically I'm looking for a tool-set to monitor reporting performance at query level, per user.
    -Miikka

    It really depends on how in-depth you want to go with your monitoring. You could issue the show interface command and view the details there. You could also view the top talkers on the ASDM dashboard. Another option is to use NetFlow.
    Outside of that, you would need to get either Cisco Prime or another 3rd-party monitoring software.
    Please remember to rate helpful posts and select a correct answer

  • Query performance check

    Is there a way to find out the best-performing queries and the worst-performing queries among the list of queries for a specific module?
    If so, what is the procedure?
    Any type of answer is most welcome, and I will assign points.

    Hi,
    Generate Report in RSRT  
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0e6a6e3-0601-0010-e6bd-ede96db89ec7
    Performance review
    /people/marc.bernard/blog/2005/02/08/follow-me-into-the-world-of-business-intelligence
    OSS notes to improve query performance:
    Note 1111470 - Poor performance during filtering with many single
    Note 1039781 - Input-ready query: Performance
    Note 1101143 - Collective note: BEx Analyzer performance
    Performance Tuning with the OLAP Cache
    http://www.sapadvisors.com/resources/Howto...PerformanceTuningwiththeOLAPCache$28pdf$29.pdf
    OLAP: Cache Monitor 
    http://help.sap.com/saphelp_nw2004s/helpdata/en/41/b987eb1443534ba78a793f4beed9d5/frameset.htm
    Hope this helps.
    Thanks,
    JituK

  • Query performance.

    Hi
    I have created a procedure that accepts two bind variables from a report. The user will select one variable, the other, both, or neither. To return the appropriate results, I created a view containing the entire result set; in the procedure, a number of IF statements determine what to place in the WHERE clause when selecting from the view, depending on which variables are populated.
    My concern is that the query that generates the view includes several joins, outputs around 150,000 records in total, and seems rather slow to run.
    Would you recommend another solution, such as placing the query in the procedure itself, repeated for every IF statement?
    Or should I work on the query's performance?
    What would be the most efficient solution for my problem?
    Any advice would be greatly appreciated.
    Thanks

    When your query takes too long: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
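
    A common alternative to repeating the query inside every IF branch is a single static statement in which each optional filter collapses when its bind variable is NULL. A minimal sketch, with a hypothetical view and column names:

        -- One static query instead of branching IF statements.
        -- Each predicate is satisfied trivially when its bind is NULL.
        SELECT *
          FROM result_set_v
         WHERE (:p_var1 IS NULL OR col1 = :p_var1)
           AND (:p_var2 IS NULL OR col2 = :p_var2);

    The optimizer then has to pick one plan for all four input combinations, so test it against your 150,000-row view before committing to it.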

  • How to improve query performance at the report level and designer level

    How do I improve query performance at the report level and at the designer (universe) level?
    Please give me a detailed view.

    First, it is all based on the design of the database, the universe, and the report.
    At the universe level, check your contexts carefully to get the optimal performance out of the universe, and also check your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on).
    And when you create a parameter, try to match it to the key fields in the database.
    good luck
    Amr

  • Report bursting: to increase query performance in Xcelsius

    Is there any way to increase query performance in Xcelsius by using report bursting?

    Fremlin,
    Report bursting is only for distributing your reports to your end users.
    You can improve performance only by following the [Best practices|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac] in xcelsius.
    -Anil

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes; it's urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes; it's urgent.
    Will reward full points.
    Regards,
    Guru

    BW back end
    Some tips:
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves extraction performance because it reduces the read times of those tables (see the sketch after this list).
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
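    To illustrate nos. 6 and 9, a secondary index over the extraction job's selection fields might look like this at the database level (a minimal sketch; ZSRCTAB and its fields are hypothetical, and in practice you would create the index in the ABAP Dictionary rather than directly in SQL):

        -- Hypothetical source table: index the non-key fields that the
        -- extraction job's WHERE clause filters on.
        CREATE INDEX zsrctab_z01
            ON zsrctab (erdat, werks);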
    Hope it Helps
    Chetan
    @CP..

  • Query Performance issue in Oracle Forms

    Hi All,
    I am using oracle 9i DB and forms 6i.
    In the query form, the query takes a long time to load the data into the form.
    There are two tables used here.
    One table (A) contains 5 crore records; another table (B) has 2 crore records.
    The fetched records range from 1 to 500.
    Table A had no index on its main columns; after I created an index on the main columns of table A, the query fetched the data quickly.
    But the DBA team doesn't want to create an index on table A because of a tablespace problem.
    They say that if we create the index on the main table (A), there will be performance overhead in production.
    Concurrent user capacity is 1500.
    Are there any alternative methods to handle this problem?
    Regards,
    RS

    1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
    http://en.wikipedia.org/wiki/Crore
    I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
    2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
    3) I don't understand the comment that creating the index on the main table (A) causes performance overhead in production. That seems to contradict your earlier statement that the query performs well once you add the index. Are you talking about some other performance overhead?
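    For reference, the index under discussion would be something like the following (a minimal sketch with hypothetical names; the ONLINE clause, which avoids blocking DML during the build, requires Enterprise Edition on 9i):

        -- Hypothetical composite index on table A's "main" (filter) columns.
        CREATE INDEX table_a_main_idx
            ON table_a (main_col1, main_col2)
            ONLINE;

    Building it ONLINE keeps the table available during creation; the ongoing cost is extra work on every INSERT/UPDATE of the indexed columns plus the index's space, which is what the DBA team has to weigh.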
    Justin

  • How to improve query performance built on an ODS

    Hi,
    I've built a report on an FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data: ~300 million records for 2 years.
    Thanx in advance,
    Guru.

    Hi Raj,
    Here are a few tips to help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local, to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Apply as many restrictions as you can before calculations; try to avoid calculations before restrictions.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
