Performance flag on cube

Hi,
I'm having some reporting performance issues and have tried to refresh the statistics via the performance flag in the Manage screen of the InfoCube. Even though the batch job "BI_STAT..." finished correctly, the traffic light remains yellow. I have tried this on several cubes and they all remain yellow. If I do this in our development system, the traffic light turns green as expected. Does anyone have an idea why this is happening?
KR Michael

There is a real lack of definitive information regarding best practices for maintaining DB statistics for BW.
I believe there was a change in recent SPs to what statistics are collected when you run the stats job from the Performance tab. It used to be that statistics were refreshed on the fact and dimension tables and on all of the related master data tables; I believe that has now been changed to just the fact and dimension tables. SAP was flagging the statistics as possibly obsolete simply based on the date they were last collected. Old statistics can be perfectly OK if there has not been any (or much) insert/delete activity on the table.
Using BRCONNECT with an appropriate change threshold is probably the best approach I've seen. This can start to get tricky with large ODSs. I also think the default sampling rates that SAP uses are too low: they sample as little as 1% on larger tables, which I believe is generally too low and frequently yields unsatisfactory DB optimizer choices.
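To make the change-threshold idea concrete, here is a small illustrative sketch (plain Python, not BRCONNECT's actual implementation) of deciding whether a table's statistics are stale based on the fraction of rows changed since the last collection; the threshold and table figures are made up.

```python
# Illustrative sketch only: a change-threshold rule similar in spirit to the
# one BRCONNECT applies before regathering statistics. Numbers are hypothetical.

def stats_are_stale(rows_at_last_analyze, inserts, updates, deletes,
                    change_threshold_pct=50):
    """Return True if the fraction of changed rows exceeds the threshold."""
    if rows_at_last_analyze == 0:
        return True  # never analyzed (or empty at last analyze): collect statistics
    changed = inserts + updates + deletes
    change_pct = 100.0 * changed / rows_at_last_analyze
    return change_pct > change_threshold_pct

# A table with heavy insert activity is flagged for new statistics...
print(stats_are_stale(1_000_000, inserts=700_000, updates=0, deletes=0))    # True
# ...while an old but static table keeps its "old" statistics.
print(stats_are_stale(1_000_000, inserts=5_000, updates=1_000, deletes=0))  # False
```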
Partitioned tables also tend to be a challenge: you need to make sure you collect partition-level statistics, but usually you need global statistics as well.
The challenge in a lot of shops is getting adequate DBA support. In most places the DBAs must focus on supporting R/3, support for BW tends to be secondary, and the DBAs never really work with the BW staff enough to learn the detail necessary to tune BW. That continues to improve as the DB vendors and SAP make more of the DB self-tuning.

Similar Messages

  • How to tune performance of a cube with multiple date dimension?

    Hi, 
    I have a cube with a measure. For a turn-time report I am taking the difference of two dates and then the average, max and min of that date difference. The graph is taking a long time to load. I am using Telerik report controls.
    Is there any way to tune the performance of a cube with multiple date dimensions? What are the key rules and best practices for a cube to perform well?
    Thanks, 
    Amit

    Hi amit2015,
    According to your description, you want to improve the performance of an SSAS cube with multiple date dimensions, right?
    In Analysis Services there are many tips for improving the performance of a cube. In this scenario, I suggest you keep only one date dimension and include only the columns which are required for your calculation. Please refer to "dimension design" in
    the link below:
    http://www.mssqltips.com/sqlservertip/2567/ssas--best-practices-and-performance-optimization--part-3-of-4/
    If you have any question, please feel free to ask.
    Simon Hou
    TechNet Community Support
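    One common way to avoid computing the date difference row by row at query time is to materialize the turn time once during loading and let the report aggregate the precomputed value; a small sketch of the idea (plain Python, hypothetical column names):

    ```python
    # Sketch: precompute the turn time (in days) at load time so the report only
    # has to aggregate a plain numeric column. Field names are hypothetical.
    from datetime import date

    rows = [
        {"order_id": 1, "received": date(2015, 1, 2), "closed": date(2015, 1, 9)},
        {"order_id": 2, "received": date(2015, 1, 5), "closed": date(2015, 1, 6)},
        {"order_id": 3, "received": date(2015, 1, 7), "closed": date(2015, 1, 21)},
    ]

    # ETL step: store the difference as a measure instead of deriving it per query.
    for r in rows:
        r["turn_time_days"] = (r["closed"] - r["received"]).days

    turn_times = [r["turn_time_days"] for r in rows]
    print("avg:", sum(turn_times) / len(turn_times))  # 7.33...
    print("max:", max(turn_times))                    # 14
    print("min:", min(turn_times))                    # 1
    ```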

  • Performance tuning in Cubes in Analytic Workspace Manager 10g

    Hi,
    Can anyone tell me or suggest how I should improve the performance of cube maintenance in Analytic Workspace Manager?

    Generate Statspack/AWR reports.
    How to make a tuning request:
    https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003

  • MDX query performance on ASO cube with dynamic members

    We have an ASO cube and we are using MDX queries to extract data from that cube. We are doing some performance testing on the MDX data extract.
    Recently we made around 15-20 account dimension members dynamic in the ASO cube, and now the query takes around an hour and a half to run on an empty cube. Earlier the query ran in 1 minute on the empty cube, when there were no dynamic members.
    I am not clear why it takes so much time to extract data via MDX from an empty cube when there is nothing to extract. The performance has also degraded when extracting data from the cube with data in it.
    Do dynamic members in the outline affect MDX performance? Is there a way to exclude dynamic members from the MDX extract?
    I appreciate any insights on this issue.

    I guess it depends on what the formulas of those members in the dynamic hierarchy are doing.
    As an extreme example, I could write a member formula that counts every unique member combination in the cube and assigns it to multiple members; regardless of whether there is any data in the database, that formula is going to be resolved when you query it, and it is going to take a lot of time. You are probably somewhere between that and a simple formula that requires no overhead, so without seeing the MDX it is hard to say what might be causing the issue.
    As far as excluding members, there are various functions in MDX to narrow down the set you are querying:
    Filter(), Contains(), Except(), Is(), Subset(), UDA(), etc.
    Keep in mind you did not make members dynamic, you made a hierarchy dynamic. That is not the same thing, and it does impact the way Essbase internally optimizes the database based on stored vs. dynamic hierarchies, so that alone can have an impact as well.
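    The general point - that a formula-bearing member is evaluated when queried rather than read from storage, so its cost is paid even on an "empty" cube - can be illustrated with a toy sketch (plain Python, only an analogy, not Essbase internals):

    ```python
    # Toy analogy: stored vs. dynamically calculated members. Not Essbase code.

    stored = {"Sales": 0.0, "Costs": 0.0}   # values materialized at load time

    def dynamic_margin_pct(data):
        # A formula member: evaluated every time it is queried, even if the
        # underlying data is empty, so the evaluation cost is always paid.
        sales, costs = data["Sales"], data["Costs"]
        return (sales - costs) / sales * 100 if sales else 0.0

    # Reading the stored member is a plain lookup; reading the dynamic member
    # runs the formula. With 15-20 such members, and large member sets for the
    # formulas to resolve, the query-time work can dominate the extract.
    print(stored["Sales"])              # cheap: read what was stored
    print(dynamic_margin_pct(stored))   # formula executed at query time
    ```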

  • Query performance on Inventory Cube

    Hi All,
    I have a query on an inventory cube with non-cumulative key figures. When I run the query with them it takes 60 to 70 minutes; when I run the same query with the non-cumulatives removed, it displays results in 25 seconds. Is there any way we can improve the performance of a query that is affected by non-cumulative key figures?
    I have checked the performance-related tools: RSRV on the cube and master data shows no errors; in RSRT > Execute + Debug most of the time is consumed in the data manager; and in ST03 the DB and data manager time, as well as the unassigned time, is high.
    I know that the query consumes time because of the non-cumulative key figures, as it needs to perform calculations on the fly, but it is taking a lot more than that. I appreciate your inputs on this query in advance.
      I will reward points.
    Regards
    Satish Reddy

    Hi Anil,
    It's nice to see you. We have compressed the cube with marker update and we are using only two InfoSources for the cube (BF and UM). As there are 150 queries on that cube, I don't want to build an aggregate especially for this query. I have also tried a DB statistics refresh, there is a process chain to delete and recreate indexes, and I analysed the cube and master data in RSRV etc.; it didn't really help. Would you please suggest a good solution for this? I appreciate it in advance.
    When I check the application log in the cube's Manage screen, it shows "Mass Upsert of Markers", so I assumed that the markers are updated.
    Regards
    Satish Arra.
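    For readers less familiar with non-cumulative key figures: the stock value for a period is not stored, it is derived at query time from the reference point (the marker, written during compression with marker update) and the movement deltas, which is why these queries do so much work on the fly. A small sketch of that derivation, with made-up numbers:

    ```python
    # Sketch: how a non-cumulative key figure (e.g. stock quantity) is derived at
    # query time from the marker (reference point) plus the movement deltas.
    # All figures are made up for illustration.

    marker_stock = 500   # reference point written at compression (current stock)
    # movements recorded per period, newest first: (period, receipts - issues)
    movements = [("2009-09", -30), ("2009-08", +120), ("2009-07", -50)]

    def stock_at(period):
        """Roll the marker back through all movements newer than `period`."""
        stock = marker_stock
        for p, delta in movements:
            if p > period:       # movement happened after the requested period
                stock -= delta   # undo it to reconstruct the historical stock
        return stock

    print(stock_at("2009-09"))   # 500 (current marker, nothing to undo)
    print(stock_at("2009-08"))   # 500 - (-30) = 530
    print(stock_at("2009-07"))   # 530 - 120 = 410
    ```

    The more periods a query covers, the more of this roll-back work has to be done per cell, which is why compression with marker update matters so much for these queries.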

  • Query performance: DSO or Cube?

    For a new report (two queries) we will receive a few hundred thousand service messages pushed into a PSA. These messages carry various timestamps for the different statuses of the items that come in. The client would like to report on a very short term (5-15 minutes, near real-time) on a subset of the data, selecting on date and time.
    The idea is as follows. From the PSA we load a subset of the fields to a DSO on a real-time basis, because the business needs operational reporting at very short notice. This DSO should only hold data for at most a few days, so its size won't grow beyond a million records. Different messages with the same unique item key can come in. On this DSO we build a simple query.
    The other DSO, which gets the data from the PSA with a regular non-real-time DTP (or maybe also real-time, since it should refresh every few hours), contains more fields and has several timestamps on which the client wants to apply selections. We want to build a more complex query on this DSO. This DSO will accumulate millions of records on a weekly basis.
    The question now is: should we run a query directly on the large DSO (with indexes on the selection fields), or build a cube on top? Many fields are timestamps and therefore not suitable as a dimension (maybe as line-item dimensions). However, there are some characteristics that do not have many different values (like 0/1 for true/false selections). Only data from the same PSA is used; no derivations from other data or use of master data is involved. The timestamp for the
    I'm wondering if a cube has advantages over a DSO here. I haven't found clear documentation explaining whether it's better to use a DSO or a cube. Any ideas?

    Zephania,
    The answer to your question depends on how the report is run. I didn't understand clearly, but it looks like you said the client will filter using multiple timestamp fields.
    If you can determine what the selection-screen filter fields are, and if you can make those fields mandatory, then you can just create indexes on those fields and report from the DSO.
    If you can't determine the search criteria and want to give full freedom to the users, but the users don't mind waiting while the database performs full table scans, then you may still report from the DSO. Basis may not like that, though.
    If you want to give full freedom to users, and the report should run quickly, then you have to build an InfoCube and report from that.
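    To see why secondary indexes on the DSO's selection fields make such a difference, here is a toy comparison (plain Python, hypothetical data) of a full scan versus an index lookup on a filter field:

    ```python
    # Toy illustration: filtering with and without an "index" on the selection
    # field. Data and field names are hypothetical.

    rows = [{"item_id": i, "status": i % 5, "ts": i} for i in range(100_000)]

    # Without an index: every query scans all rows (a full table scan, which is
    # what the database falls back to when the filter column is not indexed).
    full_scan = [r for r in rows if r["status"] == 3]

    # With an index: build the lookup structure once, then each query touches
    # only the matching rows (roughly what an index seek buys you).
    index = {}
    for r in rows:
        index.setdefault(r["status"], []).append(r)
    index_seek = index[3]

    print(len(full_scan) == len(index_seek))  # True: same rows, far less work per query
    ```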

  • Performance Optimization for Cubes

    Hi All,
    In our project we have a daily process chain which refreshes four reporting cubes and takes 8-10 hours to complete. We suggested archiving the historical data to a new cube to improve the performance of the daily load.
    In UAT, the performance of the daily load did not improve after we performed the archiving.
    Kindly suggest performance improvements for the cubes.
    Regards
    Suresh Kumar

    Hi,
    Before loading the cube, you need to delete the indexes and, once the load is complete, recreate them. For this, go to the Manage screen of the InfoCube -> Performance tab.
    Also create the DB statistics, again from the Manage screen of the InfoCube -> Performance tab. This will reduce the load time considerably.
    Also increase the maximum size of the data packet in the InfoPackage. For this, go to the InfoPackage -> Scheduler (menu bar) -> DataS. Default Data Transfer and increase the size to a reasonable amount (not very high). Also increase the number of data packets per Info-IDoc; this field is available just after the maximum data packet size.
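    The reason dropping and recreating the indexes around the load helps is that maintaining a sorted index row by row costs far more than building it once at the end. A small illustration of that effect (plain Python, not the BW implementation):

    ```python
    # Illustration: per-row index maintenance vs. rebuilding the index after a
    # bulk load. The timings are only meant to show the relative difference.
    import bisect
    import random
    import time

    new_keys = [random.randrange(10_000_000) for _ in range(50_000)]

    # "Index kept during the load": every insert keeps the structure sorted.
    t0 = time.perf_counter()
    index_during = []
    for k in new_keys:
        bisect.insort(index_during, k)
    per_row = time.perf_counter() - t0

    # "Drop index, load, rebuild": append everything, sort once at the end.
    t0 = time.perf_counter()
    index_after = sorted(new_keys)
    rebuild = time.perf_counter() - t0

    print(f"maintain per row: {per_row:.3f}s   rebuild once: {rebuild:.3f}s")
    print(index_during == index_after)  # True: same index, very different cost
    ```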
    Hope It Helps,
    Regards,
    Amit

  • Query performance on virtual cube

    Hello,
    We have a requirement wherein we are replacing an InfoCube with a virtual one to reduce BWA space. As part of this activity I am carrying out performance testing on the existing queries. For most queries I see an increase of around 20%, which is acceptable. But for one query we are running into an issue where the query times out after consuming a lot of memory. On debugging I could see that this happens inside RSDRC_INFOPROV_READ -> run_query_handler, where it ends up doing a sequential read on the SID table of 0BILL_NUM. Strangely, the bill number is neither part of the rows/free characteristics nor used anywhere else (filters, restrictions). Any pointers to what might be going wrong will be highly appreciated. Thanks.
    Regards,
    Dhrubo

    Hi,
    In this case, try to find where most of the time is spent by using ST03.
    If more time is consumed in the front end, rearrange the query by using fewer characteristics in the rows and more in the free characteristics and filters. Using variables will also help.
    Use the Reporting Agent to schedule the query in the background to fill the cache, then rerun the query.
    Reg,
    Vishwa

  • Performance issue on a virtual cube

    Hi BW gurus,
    I am working on a Consolidation virtual cube and the query performance through that cube is very bad. I know that we cannot build aggregates or partitions on a virtual cube. What should my approach be then?
    Your suggestions will be appreciated with lots of points

    Hi Nick,
    If you cannot move away from the VirtualCube option, then I think you should try to improve the performance of the virtual cube itself. This is mainly ABAP work. You can use SE30 to analyse which parts of the code are taking too much time. Follow these steps:
    1) Create a breakpoint in the function module of your VirtualCube.
    2) Go to LISTCUBE and initiate an extraction from your VirtualCube.
    3) In a separate session, run SE30 to start an analysis of the extraction process for your virtual cube.
    You can use the report from SE30 as a starting point for your performance optimization work.
    Note: transaction ST05 can help you determine which database calls are taking a long time.
    Hope this helps.

  • Query Performance Issues on a cube sized 64GB.

    Hi,
    We have a non-time-based cube whose size is 64 GB, so effectively I can't use a time dimension for partitioning. The transaction table has ~850 million records. We have 20+ dimensions, two of which have 50 million records.
    I have distributed the fact table records equally among 60 partitions. Each partition is around 900 MB.
    Processing the cube is not an issue, as it completes in 3.5 hours. The issue is the query performance of the cube.
    When an MDX query is submitted, in the majority of cases the storage engine unfortunately has to scan all the partitions (as our cube is not time-dependent and we cannot find a suitable dimension on which to partition the measure group).
    I'm aware of the cache warming and usage-based aggregation (UBO) techniques.
    However, the cube is available to users for ad-hoc queries, so the benefits of cache warming and UBO may cease to contribute to the performance gain: there is a high probability that each user will look at the data from a different perspective (especially with 20+ dimensions) as the days progress.
    Also, we have 15+ average calculations (calculated measures) in the cube. So the storage engine sends all the granular data that the formula engine requests (possibly millions of rows), and the average calculation is then performed on it.
    A look at the profiler suggested that a considerable amount of time is spent by the storage engine gathering the records (from the 60 partitions).
    FYI - our server has 32 GB of RAM and 8 cores, and it is dedicated exclusively to Analysis Services.
    I would appreciate comments from anyone who has worked on a large cube that is not time-dependent, and the steps they took to improve ad-hoc query performance for their users.
    Thanks
    CoolP
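    On the average calculations specifically: an average can be rebuilt exactly from per-partition sums and counts, so modelling SUM and COUNT measures (and defining the average as sum/count in a calculation) lets the storage engine answer from aggregated data instead of shipping granular rows to the formula engine. A small check of that identity, with made-up numbers:

    ```python
    # Sketch: an average is fully recoverable from additive aggregates
    # (sum, count), so it never needs the granular rows at query time.
    import random

    partitions = [[random.uniform(0, 100) for _ in range(10_000)] for _ in range(60)]

    # Granular approach: gather every row from all 60 partitions, then average.
    all_rows = [v for part in partitions for v in part]
    avg_granular = sum(all_rows) / len(all_rows)

    # Aggregate approach: each partition contributes only its (sum, count).
    partial = [(sum(part), len(part)) for part in partitions]
    avg_from_aggregates = sum(s for s, _ in partial) / sum(c for _, c in partial)

    print(abs(avg_granular - avg_from_aggregates) < 1e-9)  # True
    ```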

    Hello CoolP,
    Here is a good article on how to tune query performance in SSAS; please see:
    Analysis Services Query Performance Top 10 Best Practices:
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    Hopefully you can find some helpful clues there for tuning your SSAS server's query performance. Moreover, there are two ways to improve the query response time for an increasing number of end users:
    Adding more power to the existing server (scale up)
    Distributing the load among several small servers (scale out)
    For detailed information, please see:
    http://technet.microsoft.com/en-us/library/cc966449.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • Performance of cube

    Hi experts,
    We have created the data flow up to the cubes.
    How can I maintain the performance of my objects, for example by creating aggregates?
    Can anyone please explain how to create aggregates for the cubes?
    Regards,
    Nishuv.

    Hi Nishuv,
    Well, there are various ways to enhance the performance of a cube, but first of all you should make sure your design is sound. Then decide whether you want to improve the loading performance or the query performance.
    Aggregates, BIA and caching are a few ways of improving query performance. Please search the forum; there are any number of threads floating around on these topics.
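    For background, an aggregate is essentially a pre-summarized copy of the cube at a coarser level of detail, so queries that only need that level read far fewer records. A toy sketch of the idea (plain Python, hypothetical characteristics and key figures):

    ```python
    # Toy sketch of what a BW aggregate does: pre-summarize the fact data by a
    # subset of characteristics so that matching queries read the small rolled-up
    # table instead of the full fact table. Names are hypothetical.
    from collections import defaultdict

    fact_rows = [
        {"material": "M1", "plant": "P1", "calmonth": "201001", "qty": 10},
        {"material": "M2", "plant": "P1", "calmonth": "201001", "qty": 5},
        {"material": "M1", "plant": "P2", "calmonth": "201002", "qty": 7},
        {"material": "M2", "plant": "P2", "calmonth": "201002", "qty": 3},
    ]

    # "Aggregate" rolled up to plant/calmonth only (material dropped).
    aggregate = defaultdict(int)
    for r in fact_rows:
        aggregate[(r["plant"], r["calmonth"])] += r["qty"]

    # A query by plant and month is now answered from the small rollup instead
    # of the whole fact table; it only helps queries that don't need 'material'.
    print(aggregate[("P1", "201001")])  # 15
    print(aggregate[("P2", "201002")])  # 10
    ```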
    For your help here are a few :
    http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
    Aggregates:
    http://help.sap.com/saphelp_nw70/helpdata/en/44/70f4bb1ffb591ae10000000a1553f7/frameset.htm
    Filling Aggregates:
    http://help.sap.com/saphelp_nw70/helpdata/en/4f/187d3bce09c874e10000000a11402f/frameset.htm
    Hope this helps.
    Regards
    Dennis
    §§ Assign some Points if found Helpful §§

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune the data load on an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each and the last two have around 18 million records each.
    On average, it takes 130 seconds to load 4 million records.
    The data file has 157 data columns representing the period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have any impact. Any suggestion on how to improve the data load performance for the ASO cube?
    Thanks,
    Lian

    Yes TimG it sure looks identical - except for the last BSO reference.
    Well nevermind as long as those that count remember where the words come from.
    To the Original Poster and to 960127 (come on create a profile already will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO dense dimension: if you load part of it in one record, then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent x records that fit in the ASO cache are still retained in the cache, so if the record is still there it will not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the record would still be at hand.
    BUT WAIT, BEFORE YOU GO RAISING YOUR ASO CACHE: all operating systems use memory-mapped IO, so even if a page is not in the ASO cache it will likely still be in "Standby" memory (the dark blue memory as seen in Resource Monitor). This holds until the system runs out of "Free" memory (light blue in Resource Monitor).
    So in conclusion, if your system still has free memory, there is no need (in a data load) to increase your ASO cache. And if you are out of free memory, then all increasing the ASO cache during a data load will do is slow down the other applications running on your system - so don't do it.
    Finally, if you have enough memory that the entire data file fits in Standby + Free memory, then don't bother to sort it first. But if you do not have enough, then sort it.
    Of course you have 20 data files, so I hope you do not have compression members spread out amongst those files!
    Also, you did not say whether you are using parallel load threads. If you need to have 20 files, read up on parallel load buffers and parallel load scripts; that will make it faster.
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. These will help even if you do go parallel, and really help if you don't but still keep 20 separate files.
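    A rough illustration of why sort order matters when a compression dimension is involved: if rows belonging to the same compressed "block" arrive scattered through the file, each partially built block has to be fetched and merged again; if the input is sorted on that key, each block is assembled once. The sketch below counts such "reopenings" with a one-slot working buffer; it is an analogy in plain Python, not the actual ASO load algorithm.

    ```python
    # Analogy only (not the ASO load algorithm): count how often a partially
    # built "block" must be revisited when rows for the same key are scattered
    # versus sorted. A one-slot working buffer stands in for the load buffer.

    def count_reopenings(rows):
        reopenings, current_key, seen = 0, None, set()
        for key, _value in rows:
            if key != current_key:
                if key in seen:       # this block was started earlier and now
                    reopenings += 1   # has to be fetched and merged again
                seen.add(key)
                current_key = key
        return reopenings

    scattered = [("A", 1), ("B", 1), ("A", 2), ("C", 1), ("B", 2), ("A", 3)]
    sorted_in = sorted(scattered)     # all rows for a key are now contiguous

    print(count_reopenings(scattered))  # 3: A is revisited twice, B once
    print(count_reopenings(sorted_in))  # 0: each block is built exactly once
    ```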

  • Which field ,table gets the entry when we refresh cube statistics

    Hey,
    I just want to know which field or table gets an entry when we refresh statistics using the Manage tab of the InfoCube. I looked at the fields of the table RSDCUBE; there are two includes, listed below with their fields, and I'm not quite able to figure out which field gets the entry when I click the Refresh Statistics button on the Performance tab.
    1) RSDCUBEDBFLAGS (InfoCube: DB Performance Flags): which has the following fields namely
    CLEAR_INDEX     Boolean
    DELTA_INDEX     Boolean
    REBUILD_STAT     Boolean
    DELTA_STAT     Boolean
    PERCENTAGE     internal use
    NULLCOMP     Zero elimination in the summarization module
    COMP_DISJ     Flag: only disjunct records
    REFUPDATE     No update of the non-cumulative marker
    2)RSDCUBEBWSTAT (InfoCube: BW Statistics)
    BWSTATISTICS     Boolean
    BWSTATWHM     Boolean
    However, when I click on Refresh Statistics, a job runs under my user ID which triggers the program RSDSTAT1 with the cube name as the variant.
    It's kind of urgent... help!
    Thanks,
    vaish

  • I want to know Cube,Ods,Multiproviders,infopackage groups,process chains

    Hi all,
    1. How do we write technical specifications for cubes, ODS objects, MultiProviders, InfoPackage groups and process chains?
    2. How do we decide the size of a cube?
    3. What is the default size and what is the maximum size, and what about in the case of an ODS?
    4. Performance-wise, which is better: a MultiProvider or a cube?
    5. Why is a cube better (performance-wise) than an ODS?
    6. What is the maximum number of characteristics and key figures that can be inserted in an ODS?
    7. Does the SID concept exist for an ODS? If yes, when are the SIDs generated?
    Thanks
    cheta.

    Hi
    1. From the functional specification requirements, find the DataSources by using the Business Content check (offline) or the Metadata Repository (online); you need to have TREX installed in your system. Do the gap analysis, note down the objects which are not in Business Content, and create Z objects (DataSources, ODS objects or InfoObjects, cubes). Draw the bubble diagram, derive the ER diagram from it, and finally do the logical information model (cube model) and decide what you need in the data flow.
    2. There is a Quick Sizer on the SAP Service Marketplace which can help you decide the sizing of a BI project. Take care with the trade-offs between line-item dimensions and normal dimensions, which have a large impact on your sizing.
    3. There is no such default sizing; it depends on your reporting requirements, i.e. whether you need detailed reporting or snapshot reporting.
    4. There is no performance trade-off between a MultiProvider and a cube as such. From a performance point of view you have to consider other factors such as aggregates, indexing, partitioning, etc.
    6. The maximum number of key fields is 16 and the maximum number of data fields is 749.
    7. Yes, SIDs are created for an ODS when you check the flag for BEx Reporting.
    Thanks
    Chandru

  • Performing Compression

    Hello Gurus,
    What is the easy way of performing compression?
    THNX

    Hi Baris,
    Compressing an InfoCube saves space on disk, but the request IDs are removed during the compression process. So, before you go for compression, it is very important to make sure that the data in the cube is correct, because once you compress the data you can no longer delete incorrect requests.
    Steps to perform compression on a cube:
    First, check the content of your uncompressed fact table:
         1. Switch to the Contents tab page.
         2. Choose Display to display the fact table.
         3. In the Data Browser, check that the data is correct.
    After determining that the data is correct, compress the cube as follows:
        1. Select the required cube and choose the Manage button from the context menu.
        2. On the Collapse tab strip, in the Request ID field, enter the request ID of your most recent request.
        3. Choose the Release button and then the Selection button.
        4. In the start time window, choose Immediate and then save the job. The compression process now begins.
        5. Go to the Requests tab page; in the compression status column, a green flag indicates that compression was successful.
    Choose Refresh if you cannot see the green flag immediately.
    Regards,
    Rajkandula.
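    To make concrete what compression does: rows that differ only in the request ID are collapsed into request 0 in the compressed (E) fact table and their key figures are summed, which is why individual requests can no longer be deleted afterwards. A small sketch with made-up rows:

    ```python
    # Sketch: compression collapses the request ID and sums the key figures, so
    # the per-request detail (and the ability to delete one request) is gone.
    from collections import defaultdict

    f_table = [  # (request_id, material, calday, quantity) - made-up rows
        (1001, "M1", "20090901", 10),
        (1002, "M1", "20090901", 4),
        (1002, "M2", "20090901", 7),
    ]

    e_table = defaultdict(int)   # compressed fact table: request ID collapsed to 0
    for _req, material, calday, qty in f_table:
        e_table[(0, material, calday)] += qty

    print(dict(e_table))
    # {(0, 'M1', '20090901'): 14, (0, 'M2', '20090901'): 7}
    ```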
