Query performance against EDW and ODS in OBIEE?

Hi all,
Please help me understand query performance against an EDW and an ODS in OBIEE, including the quality of the queries that are generated, especially with a highly normalized data model.
It is urgent for me.
Thanks,
Kapil.

Similar Messages

  • How to improve the performance of a query built on an ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data: roughly 300 million records for 2 years.
    Thanks in advance,
    Guru.

    Hi Guru,
    Here are a few tips to help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions. (A loose SQL analogy of why global filters matter follows after this list.)
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local in order to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set the read mode of the query based on static or dynamic use. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the "Read data during navigation and when expanding the hierarchy" option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries; for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Apply as many restrictions as you can before calculations; try to avoid calculations before restrictions.
    16. Turn off warning messages on queries.
    17. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    18. Check to see where currency conversions are happening if they are used.
    19. Check aggregation and exception aggregation on calculated key figures. "Before aggregation" is generally slower and should not be used unless explicitly needed.
    20. Avoid Cell Editor use if at all possible.
    21. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    22. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
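    A loose SQL analogy for points 1, 5, and 7 above (table and column names are made up): a restriction in the global filter area can be pushed down into the WHERE clause of the generated database query, so the database returns only the rows the query needs, whereas a restriction buried in a structure may only be applied after a much larger result set has been read.

        -- restriction evaluated by the database (what a global filter enables):
        SELECT fiscper, SUM(amount)
          FROM sales_fact
         WHERE comp_code = '1000'
         GROUP BY fiscper;

        -- versus fetching everything and restricting afterwards (what a purely
        -- local restriction can degenerate into):
        SELECT fiscper, comp_code, SUM(amount)
          FROM sales_fact
         GROUP BY fiscper, comp_code;   -- all company codes read, filtered later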

  • How to improve query performance when reporting on an ODS object?

    Hi,
    Can anybody tell me how to improve query performance when reporting on an ODS object?
    Thanks in advance,
    Ravi Alakuntla.

    Hi Ravi,
    Check these links, which may address your requirement:
    Re: performance issues of ODS
    Which criteria to follow to pick InfoObj. as secondary index of ODS?
    PDF on BW performance tuning,
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Regards,
    Mani.

  • SQL query performance between TOAD and APEX

    Hi Guys,
    I would like to know if there is any performance difference between a simple query run in TOAD and in APEX (classic report).
    The reason being, I have a query based on a single table (containing 15,000 rows) which takes almost 30 seconds in APEX, whereas it takes just 2-3 seconds in TOAD.
    Thanks,
    Raj.

    Varad,
    Thanks for your suggestion.
    I tried changing the pagination, but it did not help much.
    Basically I have 5 reports on the same page.
    When the user first navigates to this page, Report-1 is generated first, with its data acting as links to the other reports.
    So when I click on any of the column links in Report-1, the page is refreshed, and this time it takes the total time for Report-1 and Report-2.
    Is there a possibility to circumvent the execution of the first query, or to cache the results of Report-1, so that when the page is refreshed it displays Report-1 from the cache and only executes the query for Report-2? (A sketch of one option follows below.)
    -Raj
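    A sketch of one possible approach (region, item, and table names are hypothetical; this is an illustration, not a definitive fix): make Reports 2-5 conditional on a page item that the Report-1 column links populate, so their queries do not run on the first page load. The region source for Report-2 could look like the following, combined with a region display condition on P1_SELECTED_ID being not null:

        -- Report-2 region source: returns rows only once a Report-1 link
        -- has populated :P1_SELECTED_ID
        SELECT col1, col2
          FROM report2_table
         WHERE parent_id = :P1_SELECTED_ID;

    With the condition in place, APEX skips rendering (and therefore skips executing) Report-2 until a link is clicked; combining this with region caching for Report-1 would address both halves of the question.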

  • Performance of my query based on a cube or an ODS?

    hi all,
    How do I determine whether my query performs better on a cube or on an ODS? I have a requirement for a one-time flat-file extraction, and the number of records is small. I need to work out whether my query will be faster based on a cube or on an ODS.
    Can anyone tell me how to measure the performance of my query on a cube versus an ODS, and how to find out which one will be faster? I need to explain the entire trade-off: loading the data directly into an ODS and reporting from there, versus loading it directly into a cube and reporting from the cube.
    Thanks,
    Haritha

    Hi,
    An ODS is a flat, two-dimensional store, so avoid reporting on an ODS.
    A cube is multidimensional; for analysis purposes, report on a cube.
    Records in an ODS are overwritten, whereas in a cube records are aggregated.
    You can also compress a cube, which will increase query performance, so data retrieval from a cube is faster. (A loose SQL analogy follows below.)
    Thanks
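    A loose relational analogy for the overwrite-versus-aggregate point above (table names are made up):

        -- ODS behaviour: a record arriving with the same key overwrites the old one
        MERGE INTO ods_active t
        USING new_load n ON (t.doc_number = n.doc_number)
        WHEN MATCHED     THEN UPDATE SET t.amount = n.amount
        WHEN NOT MATCHED THEN INSERT (doc_number, amount)
                              VALUES (n.doc_number, n.amount);

        -- cube behaviour: every load inserts rows, and queries aggregate them
        SELECT material, SUM(amount)
          FROM cube_fact
         GROUP BY material;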

  • Tuning query performance

    Dear experts,
    I have a question regarding as the performance of a BW query.
    It takes 10 minutes to display about 23 thousand lines.
    This query read the data from an ODS object.
    According to the "where" clause in the "select" statement monitored via Oracle session when the query was running, I created an index for this ODS object.
    After rerunning the query, I found that the index was being used by Oracle when reading this table (the estimated cost dropped from about 3000 to 2).
    However, the query takes the same time as before.
    Is there any other reason, or other factors I should consider, in tuning the performance of this query?
    Thanks in advance

    Hi David,
    Query performance when reporting on an ODS object is slower compared to InfoCubes, InfoSets, MultiProviders, etc., because aggregates and other performance techniques are not available for a DSO.
    Basically, for a DSO/ODS you need to turn on the BEx reporting flag, which is itself an overhead for query execution and affects performance.
    To improve performance when reporting on an ODS, you can create secondary indexes from the BW workbench (an illustrative sketch follows at the end of this reply).
    Please check the below links:
    Re: performance issues of ODS
    Which criteria to follow to pick InfoObj. as secondary index of ODS?
    Hope this helps.
    Regards,
    Haritha.
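    For illustration, a secondary index on an ODS is ultimately just a database index on the active table, covering the characteristics used in the report's selections (the ODS name ZSALES and its active table /BIC/AZSALES00 are made up; in practice you create the index from the ODS maintenance screen rather than directly in SQL):

        -- hypothetical secondary index on the active table of ODS ZSALES
        CREATE INDEX "/BIC/AZSALES00~Z01"
            ON "/BIC/AZSALES00" (costcenter, fiscper);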

  • System/Query Performance: What to look for in these tcodes

    Hi
    I have been researching on system/query performance in general in the BW environment.
    I have seen tcodes such as
    ST02 :Buffer/Table analysis
    ST03 :System workload
    ST03N: System workload (new workload monitor)
    ST04 : Database monitor
    ST05 : SQL trace
    ST06 : Operating system monitor
    ST66:
    ST21:
    ST22: ABAP dump analysis
    SE30: ABAP runtime analysis
    RSRT:Query performance
    RSRV: Analysis and repair of BW objects
    For example, Note 948066 provides descriptions of these tcodes, but what I am not getting are thresholds and their implications. E.g., ST02 gives a "tune summary" screen with several rows and columns (not sure what they are called) containing numerical values.
    Is there some information on these rows/columns, such as the typical range for each, the acceptable figures, and which numbers under which columns suggest which problems?
    Basically, some type of metric for each of the indicators provided by these performance tcodes.
    Something similar to an operating system, where CPU utilization consistently over 70% may suggest the need to upgrade the CPU, while over 90% suggests your system is about to crash, etc.
    I would appreciate some guidelines on the use of these tcodes and, from your personal experience, which indicators you pay attention to under each tcode and why.
    Thanks

    Hi Amanda,
    I forgot something: SAP provides the EarlyWatch report. If you have Solution Manager, you can generate it yourself. In the EarlyWatch report there are red, yellow, and green ratings for the parameters.
    http://help.sap.com/saphelp_sm40/helpdata/EN/a4/a0cd16e4bcb3418efdaf4a07f4cdf8/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f35bf3-14a3-2910-abb8-89a7a294cedb
    EarlyWatch focuses on the following aspects:
    - Server analysis
    - Database analysis
    - Configuration analysis
    - Application analysis
    - Workload analysis
    EarlyWatch Alert – a free part of your standard maintenance contract with SAP – is a preventive service designed to help you take rapid action before potential problems can lead to actual downtime. In addition to EarlyWatch Alert, you can also decide to have an EarlyWatch session for a more detailed analysis of your system.
    Ask your Basis team for a sample EarlyWatch report; the parameters in EarlyWatch should cover what you are looking for, with red, yellow, and green indicators.
    Understanding Your EarlyWatch Alert Reports
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4b88cb90-0201-0010-5bb1-a65272a329bf
    hope this helps.

  • 3.x query performance on upgraded BI 7.0 - worse than before upgrade?

    Dear Sirs,
    We upgraded a BI system this weekend from 3.x to 7.0. It was a purely technical upgrade, so no migration to 7.0 queries yet.
    What we have seen is that queries with a "loose"/large selection (e.g., all plants, all months) have worse performance than before the upgrade.
    One example: a query went from 26 seconds to 60 seconds to run.
    With a small selection (e.g., one specific plant and date), the run time is the same.
    Has anyone had a similar experience?
    Is BI 7.0 optimized for 7.0 queries, or are there any performance parameters I could look at?
    Best regards,
    Jørgen

    Check this thread:
    Performance problems on NW04S/BI 7.0 after the upgrade
    Additional:
    [Improving Query Performance by Effective and Efficient Maintenance of Aggregates|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/906de68a-0a28-2c10-398b-ef0e11ef2022]
    BI Performance Improvements in 7.0
    Hope this helps.
    Regards
    Andreas

  • Query Performance OBIEE 11.1.1.5-Essbase 9.3.1

    I have installed OBIEE 11.1.1.5 and Essbase 9.3.1 on a PC with 8 GB RAM and a Core i7. When I run a report or dashboard for the first time (no data stored in cache), the data display is very slow (it takes up to 15 minutes). If I run the same report or dashboard a second time, it displays instantaneously (as the result already exists in the query cache). Is there a way to get the data cached without anyone having to run the queries first? Are there some settings, or a parameter I am not considering? Can you give me some tips to improve query performance? Thanks.

    One good place to start troubleshooting what took so long: take the logical SQL and run it against the database to see how long it takes to retrieve the results.
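    If the goal is for the first run of the day to hit the cache, one common approach is to seed the BI Server cache after each data load by replaying the reports' logical SQL, for example with the nqcmd utility (a sketch; the DSN, user, subject area, and file names are assumptions):

        -- seed_queries.sql: logical SQL copied from the query log for the
        -- slow dashboard reports (illustrative subject area and columns)
        SELECT "Time"."Year", "Facts"."Revenue" FROM "Sales Analysis";

        -- then replay it against the BI Server (command line shown as a
        -- comment, since it is not SQL):
        --   nqcmd -d AnalyticsWeb -u weblogic -p <password> -s seed_queries.sql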

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the relevant tcodes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the tcodes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select the ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with the ALEREMOTE user. (A SQL sketch for spotting expensive statements follows after these tips.)
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
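    As a concrete illustration of tip 3, on an Oracle-based system the most expensive statements can also be listed straight from the shared SQL area (a sketch; the row limit is arbitrary, and the sql_id column assumes Oracle 10g or later):

        -- top 10 SQL statements by total elapsed time
        SELECT *
          FROM (SELECT sql_id,
                       executions,
                       ROUND(elapsed_time / 1000000) AS elapsed_seconds,
                       SUBSTR(sql_text, 1, 80)       AS sql_text
                  FROM v$sql
                 ORDER BY elapsed_time DESC)
         WHERE ROWNUM <= 10;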
    Hope it Helps
    Chetan
    @CP..

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes both setup as real time cubes (only plan cube is being planned against, not actual cube) and both cubes are compressed once a day.
    We are planning on implementing BIA accelerator and have questions related to query performance optimization:
    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2) Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3) What is the impact on BIA of having the actual cube as real-time (whether or not data is being loaded/planned in that cube during the day)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, standard (basic) cubes are relatively better.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • Coherence-Extend and Continuous Query performance

    Hi,
    I am trying to evaluate the performance impact of continous queries, when using coherence extend (TCP). The idea is that desktop clients will be running continuous queries against a cluster, and other processes will be updating the data in that cluster. The clients themselves take a purely read-only view of the data.
    In my tests, I find that the updater process takes about 250ms to update 5000 values in the cache (using a putAll operation). When I have a continuous query running against a remote cache, linked with coherence extend, the update time increases to about 1500ms. This is not CPU bound.
    Is this what people would expect?
    If so this raises questions to me about:
    1) slow subscribers - what if one of my clients is very badly behaved? Can I detect this and/or take action?
    2) conflation of updates - can Coherence do conflation?
    3) can I get control to send object deltas over the wire rather than entire objects?
    Is this a use case for which CoherenceExtend and continuous queries were designed?
    Robert

    Yes, it is certainly possible, although depending on your requirements it may be more or less additional coding. You have a few choices. For example, since you have a CQC on the cache, you could conceivably aggregate locally (on any event). In other words, since all the data are local, there is no need to do the parallel aggregation (unless it is CPU limited). Depending on the aggregation, you may only have to recalculate part of it.
    You can access the internal data structure (Map) within the CQC as follows:
    Map map = cqc.getInternalCache();
    // now we can do aggregation
    NamedCache cache = new WrapperNamedCache(map, "local-view"); // the wrapper also takes a cache name
    cache.aggregate(..); // pass any Filter plus an EntryAggregator
    More complex approaches would only recalculate portions based on the event, or (depending on the function) might use the event to adjust the aggregated results.
    Peace,
    Cameron Purdy | Oracle Coherence
    http://coherence.oracle.com/

  • Structures Vs RKFs and CKFs In Query performance

    Hi Gurus,
    I am creating a GL query which will return a couple of key figures and some calculations across different GL accounts, and I want to know which approach will be more beneficial: creating restricted key figures and calculated key figures, or just using a structure for all the selections and formula calculations?
    Which option will be better for query performance?
    Thanks in advance

    As compared to formulas that are evaluated during query execution, calculated key figures are pre-calculated and their definitions are stored in the metadata repository for reuse in queries. The incorporation of business metrics and key performance indicators as calculated key figures, such as gross profit and return on investment (which are frequently used, widely understood, and rarely changed), improve query performance and ensure that calculated key figures are reported consistently by different users. Note that this approach improves query runtime performance but slows InfoCube or ODS object update time. As a rule of thumb, if multiple and frequently used queries use the same formula to compute calculated fields, use calculated key figures instead of formulas.
    RKFs result in additional database processing and complexity in retrieving the query result and therefore should be avoided when possible.
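    A loose relational analogy for the trade-off described above (table and column names are invented): a formula is like an expression evaluated on every query execution, while a pre-calculated key figure is closer to a column populated once at load time and then merely aggregated.

        -- evaluated at query time on every execution (like a formula):
        SELECT gl_account, SUM(revenue - cost) AS gross_profit
          FROM gl_fact
         GROUP BY gl_account;

        -- pre-computed during the data load and stored (like a calculated
        -- key figure): the query only aggregates the stored column
        SELECT gl_account, SUM(gross_profit) AS gross_profit
          FROM gl_fact
         GROUP BY gl_account;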

  • Query Performance on ODS

    Hi all,
    I have an ODS with around 23,000 records. The ODS has around 20 fields in it (3 key fields). I built a query on the ODS. The user only inputs the cost center hierarchy. The report does not have any calculations; it is a direct view of the fields. But the report takes a long time to run if I select the 1st, 2nd, or 3rd level of the hierarchy as the starting point. If I select the 5th or 6th level of the hierarchy, the query output is fast. With just 23,000 records in the ODS, I thought query performance would be fast for any level of the hierarchy.
    I even created an index on cost center; even then, there was no improvement in performance. Is there any way to achieve faster query performance on the ODS?
    Thanks,
    Prabhu.

    Technical content cubes [if installed] give more generic statistics of query run time...
    The RSRT option mentioned by Sudheer is the more targeted approach.
    The same RSRT can be helpful for doing the following... [from help.sap.com]
    Definition
    The read mode determines how the OLAP processor gets data during navigation. You can set the mode in Customizing for an InfoProvider and in the Query Monitor for a query.
    Use
    The following types are supported:
    1. Query to be read when you navigate or expand hierarchies (H)
    The amount of data transferred from the database to the OLAP processor is the smallest in this mode. However, it has the highest number of read processes.
    In the following mode Query to read data during navigation, the data for the fully expanded hierarchy is requested for a hierarchy drilldown. In the Query to be read when you navigate or expand hierarchies mode, the data across the hierarchy is aggregated and transferred to the OLAP processor on the hierarchy level that is the lowest in the start list. When expanding a hierarchy node, the children of this node are then read.
    You can improve the performance of queries with large presentation hierarchies by creating aggregates on a middle hierarchy level that is greater or the same as the hierarchy start level.
    2. Query to read data during navigation (X)
    The OLAP processor only requests data that is needed for each navigational status of the query in the Business Explorer. The data that is needed is read for each step in the navigation.
    In contrast to the Query to be read when you navigate or expand hierarchies mode, presentation hierarchies are always imported completely on a leaf level here.
    The OLAP processor can read data from the main memory when the nodes are expanded.
    When accessing the database, the best aggregate table is used and, if possible, data is aggregated in the database.
    3. Query to read all data at once (A)
    There is only one read process in this mode. When you execute the query in the Business Explorer, all data in the main memory area of the OLAP processor that is needed for all possible navigational steps of this query is read. During navigation, all new navigational states are aggregated and calculated from the data from the main memory.
    The read mode Query to be read when you navigate or expand hierarchies significantly improves performance in almost all cases compared to the other two modes. The reason for this is that only the data the user wants to see is requested in this mode.
    Compared to the Query to be read when you navigate or expand hierarchies, the setting Query to read data during navigation only effects performance for queries with presentation hierarchies.
    Unlike the other two modes, the setting Query to Read All Data At Once also has an effect on performance for queries with free characteristics. The OLAP processor aggregates on the corresponding query view. For this reason, the aggregation concept, that is, working with pre-aggregated data, is least supported in the Query to read all data at once mode.
    We recommend you choose the mode Query to be read when you navigate or expand hierarchies.
    Only choose a different read mode in exceptional circumstances. The read mode Query to read all data at once may be of use in the following cases:
    - The InfoProvider does not support selection. The OLAP processor reads significantly more data than the query needs anyway.
    - A user exit is active in a query. This prevents data from already being aggregated in the database.

  • How to improve query performance of an ODS- with 320 million records

    Issue:
    The reports are giving time-outs during execution.
    Scenario:
    We have an ODS holding approximately 320 million records.
    The reports are based on
    the ODS and
    InfoSets based on this ODS.
    These reports are giving time-outs during execution.
    A few facts about this ODS:
    There are around 75 restricted and calculated key figures used in the query definition.
    We can't replace this ODS with a cube, as there is a requirement for an InfoSet on it.
    This is in a BW 3.5 environment.
    A few things we tried:
    Secondary indexes were created on the fields that appear in the selection screens of the reports. This has not worked.
    The restriction/calculation logic in the query definition could be moved to the backend. Will that make a difference?
    Question:
    Can you suggest the ways to improve the query performance of this ODS?
    Your immediate response is highly appreciated. Thanks in advance.

    Hey!
    I think Oliver's questions are good. 320 million records are too much for an ODS. If you can get rid of the InfoSet, that would be helpful; why exactly do you need it? If you don't need it, you could partition your ODS by a characteristic and report over a MultiProvider (a loose SQL analogy of the partitioning idea follows below).
    Is there a way to delete some data from the ODS?
    Maybe you will upgrade to 7.0 soon? There you can use InfoSets on InfoCubes.
    You could also try precalculation, as Sam says. This is possible with the Reporting Agent or Information Broadcasting; then you have the result in your cache. Make sure your cache is large enough.
    Do you just need one or a few special reports at a specific time? Maybe you can run an update into another ODS, writing just the result into it. For this you can use update rules, or maybe the Analysis Process Designer (transaction RSANWB) is the better way.
    Maybe it is also possible to increase the parameter for your dialog runtime, rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise look here: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61)
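    A loose SQL analogy for the partitioning idea (Oracle syntax; table and partition names are made up): splitting one huge table by a characteristic such as fiscal year means a query restricted to one year touches only that slice. In BW terms, this corresponds to several smaller ODS objects combined under a MultiProvider.

        CREATE TABLE sales_ods_part (
            doc_number VARCHAR2(10),
            fiscyear   NUMBER(4),
            amount     NUMBER
        )
        PARTITION BY RANGE (fiscyear) (
            PARTITION p2006 VALUES LESS THAN (2007),
            PARTITION p2007 VALUES LESS THAN (2008),
            PARTITION pmax  VALUES LESS THAN (MAXVALUE)
        );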
    Best regards,
    Peter
