ASO cube: increase aggregations or improve retrieval performance

We've been using an Essbase cube to create reports in OBIEE.
When we use a level-0 member filter, it takes quite a long time to get the results.
Any ideas to improve the performance?
Is there any way I can increase the number of aggregations that occur at a time? Thank you.

What doesn't make sense to me is that you don't need aggregations on level-zero members, as that is where the data is stored. I'm guessing you mean level-zero members of one dimension and higher-level members of other dimensions. Are those other dimensions dynamic or stored? Do you have a lot of calculations going on that are being retrieved? Have you materialized aggregations on the cube? (A MaxL sketch of doing so follows below.)
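
A minimal MaxL sketch of materializing aggregations, assuming a hypothetical application and database named Sample.Basic (substitute your own names); the 1.3 size cap is only an illustrative value:

/* Select and materialize recommended aggregate views, capping the cube */
/* at roughly 1.3 times its input-level (level-zero) size.              */
execute aggregate process on database Sample.Basic
    stopping when total_size exceeds 1.3;

Once views are materialized, upper-level queries can read precomputed values instead of aggregating input cells on the fly.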

Similar Messages

  • Improving retrieval performance of an Essbase server in a UNIX environment

    Hi,
    Our production environment is a UNIX system. Can anyone suggest settings that impact retrieval performance, and how to apply those settings in a UNIX environment?

    Naveen,
    1. For retrieval performance, increase the retrieval buffer size.
    The default is 10 KB on 32-bit platforms and 20 KB on 64-bit; make it 100 KB.
    2. If the data block size is large and you are retrieving cells across several blocks,
    set VLBREPORT TRUE in the essbase.cfg configuration file.
    NOTE: this speeds up the retrieval process, but it is only applicable to outlines that do not include dynamic calcs.
    3. If the format of your report is not of much importance, group dense dimensions in columns and sparse dimensions in rows; this will be faster.
    4. An application/database does have a limit on its memory consumption.
    So, RAM is the key to speed.
    The best part is that, as you have a UNIX operating system, addressable memory in your case is 3.9 GB per application (which is very good), unlike 2 GB in the case of Windows.
    (A sketch of points 1 and 2 follows below.)
    Sandeep Reddy Enti
    HCC
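
    A hedged sketch of points 1 and 2 above, assuming a hypothetical application and database named Sample.Basic; the retrieval buffer size is given in kilobytes, and VLBREPORT lives in essbase.cfg (not MaxL) and takes effect after an Essbase restart:

    /* 1. Raise the retrieval buffer from the 10/20 KB default to 100 KB. */
    alter database Sample.Basic set retrieve_buffer_size 100;

    /* 2. essbase.cfg (plain config text, one line):                      */
    /*        VLBREPORT TRUE                                              */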

  • Adding commentary in an ASO cube and aggregating it to the top level

    Hi Gurus,
    I have one peculiar problem. We are adding commentary in a BSO Planning cube, and now I have a couple of problems related to it.
    a) This commentary, which is entered at the lower levels, needs to be pushed into the ASO (reporting) cube.
    b) At the top level, these commentaries need to be aggregated, or rather concatenated.
    e.g.
    ProfitCentre has two children, P1 and P2, and the user enters commentary in BSO for P1 and P2 as "Market Risk Deviation" and "Standard Output".
    Then in the HSPgetval Smart View report, the content of the report will look like:
    Profit Centre          Market Risk Deviation + Standard Output
    P1                     Market Risk Deviation
    P2                     Standard Output
    Any thoughts/suggestions/input on ways to achieve this?
    Thanks
    Anubhav

    Apart from what Glenn suggested: this is not available out of the box; you are looking at a Java API + SQL based solution here.
    Here are my thoughts:
    Use a SELECT query to get the text values and IDs from the HSP_CELL_TEXT (or HSP_TEXT_CELL) table.
    Create a Java API program that imports a text list into the ASO cube; the ID is going to be what you get from the table.
    Load the data to ASO from Planning.
    Now for the aggregation/concatenation part, you'll have to add the concatenated values to the Smart List as well. This can be done by looking at the HSP_CELL_TEXT (or HSP_TEXT_CELL) table; there is an ID associated with each text, so get the associated ID.
    For example, if "Market Risk Deviation" is 1 and "Standard Output" is 2, then you should add "Market Risk Deviation + Standard Output" as 3; however, you'll have to make sure that there is no entry from Planning for 3.
    It is complicated.
    Regards
    Celvin Kattookaran

  • ASO Cube Performance Issue

    We have been working on two ASO cubes, and performance was great. No modifications have been made since, but we are now experiencing performance issues. What could be the possible cause, and how could I resolve this? Thank you.

    'Performance issue' isn't very descriptive - performance of what? Query? Data load? Aggregation? Restructure?
    As a start, has the volume of data (input data cells) been increasing significantly?

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune a data load on an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    Eighteen of the data files have 4 million records each, and the last two have around 18 million records each.
    On average, loading 4 million records takes 130 seconds.
    The data files have 157 data columns representing the Period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have
    any impact. Any suggestions on how to improve the data load performance for an ASO cube?
    Thanks,
    Lian

    Yes TimG, it sure looks identical - except for the last BSO reference.
    Well, never mind, as long as those who count remember where the words came from.
    To the original poster and to 960127 (come on, create a profile already, will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO dense dimension: if you load part of it in one record, then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent records that can fit in the ASO cache are retained there, so if a record is still in the cache it does not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the records would still be at hand.
    BUT WAIT BEFORE YOU GO RAISING YOUR ASO CACHE. All operating systems use memory-mapped I/O, so even if a page is not in the ASO cache it will likely still be in "Standby" memory (the dark blue memory as seen in Resource Monitor); this continues until the system runs out of "Free" memory (the light blue in Resource Monitor).
    So, in conclusion: if your system still has Free memory, there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory, then all that increasing the ASO cache during a data load will do is slow down the other applications running on your system - so don't do it.
    Finally, if you have enough memory that the entire data file fits in Standby + Free memory, then don't bother to sort it first. But if you do not have enough, then sort it.
    Of course, you have 20 data files, so I hope that you do not have compression members spread out amongst these files!
    Also, you did not say whether you were using parallel load threads. If you need to have 20 files, read up on using parallel load buffers and parallel load scripts; that will make it faster (a MaxL sketch follows below).
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. Heck, these will help even if you do go parallel, and they really help if you don't but still keep 20 separate files.
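
    A hedged MaxL sketch of the parallel load-buffer approach, assuming a hypothetical ASO database named AsoSamp.Sample and made-up file paths; for true parallelism, run each import-to-buffer statement from a separate MaxL session:

    /* Initialize two load buffers, each allowed half the load resources. */
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 resource_usage 0.5;
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 2 resource_usage 0.5;

    /* Fill the buffers (run these from separate sessions to overlap).    */
    import database AsoSamp.Sample data
        from data_file '/data/part01.txt' to load_buffer with buffer_id 1 on error abort;
    import database AsoSamp.Sample data
        from data_file '/data/part02.txt' to load_buffer with buffer_id 2 on error abort;

    /* Commit both buffers to the cube in a single write.                 */
    import database AsoSamp.Sample data from load_buffer with buffer_id 1, 2;

    DLTHREADSPREPARE and DLTHREADSWRITE, mentioned above, are essbase.cfg settings rather than MaxL statements.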

  • Essbase ASO Cube query performance from OBI EE

    Hi all
    I have serious performance problems when I query an ASO cube from OBI EE. The problem appears when I implement a filter on some dimensions of the model in the Business Model and Mapping layer. The filter is at level 0 of the dimension, and the values are obtained from a session variable in OBI EE. The objective of this is to apply filters depending on the user. For the session variable I have a table in a relational database with the relation between each user and their "access" values; my dimensions (not all of them) have the users' "access" values as level 0 (as duplicate members).
    The session variable in OBI EE is filled with the row-wise option, so it has all the "access" values that correspond to the user (the :USER system variable).
    When I query only one of these filtered dimensions, the response is very fast. When I query one of these filtered dimensions and a metric, the response is fast (10 seconds). But when I query two of these filtered dimensions and a metric, the response takes 25 minutes. I checked the Essbase application log and found this:
    [Mon Nov 15 19:56:01 2010]Local/TestSec5/TestSec5/admin/Info(1013091)
    Received Command [MdxReport] from user [admin]
    [Mon Nov 15 20:28:28 2010]Local/TestSec5/TestSec5/admin/Info(1260039)
    MaxL DML Execution Elapsed Time : [1947.18] seconds
    When I look at the MDX query generated by OBI, I see that the aggregation is done on the fly over the filtered members of the crossjoin of the two dimensions:
    With
    set [CATALOGO_INSTITUCIONAL2] as '[CATALOGO_INSTITUCIONAL].Generations(2).members'
    set [CATALOGO_PRESUPUESTARIO2] as '[CATALOGO_PRESUPUESTARIO].Generations(2).members'
    member [METRICAS_PRESUPUESTARIAS].[MS1] as
      'AGGREGATE(
         filter(
           crossjoin(
             Descendants([CATALOGO_INSTITUCIONAL].currentmember, [CATALOGO_INSTITUCIONAL].Generations(7)),
             Descendants([CATALOGO_PRESUPUESTARIO].currentmember, [CATALOGO_PRESUPUESTARIO].Generations(7))),
           (([CATALOGO_INSTITUCIONAL].CurrentMember.MEMBER_ALIAS = "01.01" OR [CATALOGO_INSTITUCIONAL].CurrentMember.MEMBER_Name = "01.01"))
           AND (([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "G" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "G")
             OR ([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "I0101" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "I0101")
             OR ([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "S01" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "S01"))),
         METRICAS_PRESUPUESTARIAS.[Compromiso])', SOLVE_ORDER = 100
    select
    { [METRICAS_PRESUPUESTARIAS].[MS1]
    } on columns,
    NON EMPTY {crossjoin ({[CATALOGO_INSTITUCIONAL2]},{[CATALOGO_PRESUPUESTARIO2]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
    from [TestSec5.TestSec5]
    Can somebody tell me whether it is possible to change the way OBI builds the query, or whether it is possible to use previously materialized Essbase aggregations? (A query-tracking sketch follows at the end of this thread.)

    hi Amol,
    1. On what basis did you estimate your cube at around 400 GB to 600 GB?
    2. If ASO is an option, its huge advantage lies in space; it does not take as much space as BSO.
    3. I have seen cubes whose size was around 300-400 GB in BSO; when the same cube was made into ASO, it consumed 40-45 GB.
    Hope this helps
    Sandeep Reddy Enti
    HCC
    http://hyperionconsutlancy.com/
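
    For the materialized-aggregations part of the question above, a hedged MaxL sketch using query tracking; the TestSec5.TestSec5 name is taken from the log excerpt, and whether the Filter-heavy MDX that OBIEE generates can actually use the resulting views is a separate question:

    /* Record which slices queries actually touch.                        */
    alter database TestSec5.TestSec5 enable query_tracking;

    /* ...run the slow OBIEE reports here so the workload is captured...  */

    /* Materialize aggregate views selected from the tracked workload.    */
    execute aggregate process on database TestSec5.TestSec5 based on query_data;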

  • ASO cube aggregation behavior

    Hi All,
    We have an ASO cube with millions of records and 10 dimensions (1 Accounts dimension). A few fact values change after aggregation, whereas without aggregation we get the correct values.
    We are loading data from the source system (Teradata), where values are rounded off to 7 decimal places. But when we look at the data at the report level (WA), it displays non-zero values beyond 7 decimal places for a few facts.
    For us this makes a huge difference, as we calculate a var% value on the report. The denominator of this var% is 0 for certain line items, but due to aggregation it shows some value (0.0000000198547587458), which results in very large var% values.
    Is this the tool's behavior, or is there some problem with the aggregation?
    Any input will be really appreciable.
    Thanks,


  • Performance of Financial Reports against an ASO cube vs. a BSO cube

    Hi All,
    I am working on Financial Reporting and Essbase. I want to understand, and find relevant documentation on, the performance issues or differences when a Financial Reporting report hits a BSO cube versus an ASO cube.
    1. If there is a performance difference between ASO and BSO for Financial Reports, where can I find the document or details on it?
    2. If there is a performance difference between ASO and BSO for Financial Reports, what is the reason for it?
    3. How can we improve ASO performance for the reports?
    Any insights would be highly appreciated.
    Thanks
    Ankur Jain.

    Thanks Sean V,
    It's quite amazing to me as well, and that is why I am trying to drill into any FR documentation that might cover something like this. As of now, since I don't have access to the cube, I have nothing to add, nor the insight into the cube on which I could base more of an explanation of the cube and outline design.
    But as soon as I get access, I will bring this back and discuss it on the forum.
    Thanks for confirming what I had been explaining on my client's side as well. Though I still need some basis to explain it and prove it to them exactly, and perhaps a prototype as well.
    Thanks,
    Ankur Jain.

  • ASO Cube with attributes very slow in retrieval

    Hi,
    I have an ASO cube with 5 base dimensions and 8-9 attributes on the Entity dimension. I have only 5-6 measures, which compute averages and counts over a 40-day period. However, the data is loaded at 15-minute increments.
    Entity
    Date - (date-time, lowest level being date)
    Time - (15-minute periods for the full 24-hour day; has an attribute associated with it)
    LocationType
    Measures
    The sample formula is
    IIF(Islevel([Locations].CurrentMember,0), Avg(CrossJoin({[Measure].[Sale]},{[DateDim].CurrentMember.lag(40):[DateDim].CurrentMember})),Missing)
    Is there a way I can have this calculated as part of a script? Do you suggest I create a BSO cube to do these calculations and pass on the result?
    In OBIEE, the report is to display the following based on the date input:
    Entity Gen7, Entity Gen6, ..., Entity Gen2, Attr1, Attr2, Attr3, Attr4, Attr5, Attr6, Attr7, Measures

    Two things I would look at.
    1st - I don't know how much performance you would get out of this, but I'm not clear why you are using a crossjoin in your MDX; it seems unnecessary and may cause more overhead. The following should work (completing the IIF with the same Missing else-branch as your original formula); you could also try IsLeaf instead of IsLevel and see if that is any faster:
    IIF(Isleaf([Locations].CurrentMember),
        Avg({[DateDim].CurrentMember.lag(40):[DateDim].CurrentMember},
            [Measure].[Sale], INCLUDEEMPTY),
        Missing)
    2nd - your problem mostly revolves around the fact that you are running a 40-member sum/average for every member you are querying. It also sounds like the average is at the Day level, which is not level 0, so for all forty days ASO also has to calculate the result of each of those days. Remember that aggregations only get you so far; you should really think of everything in ASO as dynamic, and that is why what you have set up is not going to work that well - it is too calc-intensive.
    I don't know how practical this is, but to get this to work fast you would probably need to break the 15-minute increments out into another dimension below the day level, so the day level becomes a stored level-zero member. The 15-minute increment dimension should also be stored. If at all possible, you would want an alternate stored hierarchy containing the 40 days you want to base the average on. Enable alternate hierarchies in your aggregations, then change your MDX calc to be based on the parent of the 40-day hierarchy divided by 40 (see the sketch after this reply). That would be fast.
    I suppose you could opt not to break out the 15-minute increments and just have the shared hierarchy made up of the 15-minute increments that fall below the 40 days. That would still give you a good stored subtotal that, with some query hints, you could get optimized.
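
    A hedged MDX sketch of that alternate-hierarchy idea, where Rolling40 is a hypothetical stored parent in the Date dimension holding the 40 days as shared members; the formula becomes one stored-rollup lookup divided by a constant instead of a 40-member runtime Avg():

    IIF(Isleaf([Locations].CurrentMember),
        ([Measure].[Sale], [DateDim].[Rolling40]) / 40,
        Missing)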

  • MDX query performance on ASO cube with dynamic members

    We have an ASO cube, and we are using MDX queries to extract data from it. We are doing some performance testing on the MDX data extract.
    Recently we made around 15-20 Account dimension members dynamic in the ASO cube, and now the query takes around an hour and a half to run on an empty cube. Earlier, when there were no dynamic members in the cube, the query ran in 1 minute on the empty cube.
    I am not clear why it takes so much time to extract data via MDX from an empty cube when there is nothing to extract. Performance has also degraded when extracting data from the cube with data in it.
    Do dynamic members in the outline affect MDX performance? Is there a way to exclude dynamic members from the MDX extract?
    I appreciate any insights on this issue.

    I guess it depends on what the formulas of those members in the dynamic hierarchy are doing.
    As an extreme example, I can write a member formula that counts every unique member combination in the cube and assigns it to multiple members; regardless of whether I have any data in the database or not, that function is going to resolve itself when you query it, and it is going to take a lot of time. You are probably somewhere between that and a simple function that doesn't require any overhead. So without seeing the MDX, it is hard to say what might be causing the issue.
    As far as excluding members goes, there are various functions in MDX to narrow down the set you are querying:
    Filter(), Contains(), Except(), Is(), Subset(), UDA(), etc. (see the sketch below).
    Keep in mind you did not make members dynamic; you made a hierarchy dynamic. That is not the same thing, and it does impact the way Essbase internally optimizes the database based on stored vs. dynamic hierarchies. So that alone can have an impact as well.
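
    A hedged MDX sketch of the Except() idea (runnable from the MaxL shell); the database name and the two formula members are hypothetical stand-ins for your own:

    SELECT
      Except(
        [Account].Levels(0).Members,
        {[Account].[DynMember1], [Account].[DynMember2]}  /* the dynamic formula members to skip */
      ) ON COLUMNS
    FROM [Sample.Basic];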

  • Does increasing RAM improve game performance?

    Does increasing RAM improve the game performance of my MBP 13" 2012?

    Use Activity Monitor to read system memory and determine how much RAM is being used.
    Adding RAM only makes it possible to run more programs concurrently. It doesn't speed up the computer or make games run faster. What it can do is prevent the system from having to use disk-based VM when it runs out of RAM because you are trying to run too many applications concurrently, or are using applications that are extremely RAM-dependent. It will improve the performance of applications that run mostly in RAM, and the loading of programs.

  • Retrieval performance becomes poor with dynamic calc members with formulas

    We are facing a retrieval performance issue on our partition cube.
    It was fine before we applied member formulas to 4 of the measures and made them dynamic calc.
    The retrieval time has increased from 1 second to 5 seconds.
    Here is the main formula on one member; all of these members are dynamic calc (with member formulas):
    IF (@ISCHILD ("YTD"))
        IF (@ISMBR ("JAN_YTD") AND @ISMBR ("Normalised"))
            "Run Rate" =
                (@AVG(SKIPNONE, @LIST(@CURRMBR("Year")->"JAN_MTD",
                    @RANGE(@SHIFT(@CURRMBR("Year"), -1, @LEVMBRS("Year", 0)), @LIST("NOV_MTD", "DEC_MTD")))) *
                 @COUNT(SKIPNONE, @RSIBLINGS(@CURRMBR("Period")))) + "04";
        ELSE
        IF (@ISMBR ("FEB_YTD") AND @ISMBR ("Normalised"))
            "Run Rate" =
                (@AVG(SKIPNONE, @RANGE(@SHIFT(@CURRMBR("Year"), -1, @LEVMBRS("Year", 0)), "DEC_MTD"),
                    @RANGE(@CURRMBR("Year"), @LIST("JAN_MTD", "FEB_MTD"))) *
                 @COUNT(SKIPNONE, @RSIBLINGS(@CURRMBR("Period")))) + "04";
        ELSE
            "Run Rate" =
                (@AVGRANGE(SKIPNONE, "Normalised Amount", @CURRMBRRANGE("Period", LEV, 0, -14, -12)) *
                 @COUNT(SKIPNONE, @RSIBLINGS(@CURRMBR("Period"))))
                + "Normalised"->"04";
        ENDIF;
        ENDIF;
    ELSE 0;
    ENDIF
    Period is dense
    Year is dense
    Measures (Normalised) is dense
    all remaining dimensions are sparse
    block size 112 KB
    index cache set to 10 MB
    retrieval buffer 70 KB
    dynamic calc cache max set to 200 MB
    Please note that this is a partition cube, retrieving data from 2 ASO and 1 BSO underlying cubes.

    I received the following from Hyperion. I had the customer add the following line to their essbase.cfg file, and it increased the performance of their Analyzer retrievals from 30 seconds to 0.4 seconds:
    CalcReuseDynCalcBlocks FALSE
    This is an undocumented setting (it will be documented in Essbase v6.2.3). Here is a brief explanation of this setting from development: it turns off a method of reusing dynamically calculated values during retrievals. The method is on by default and can speed up retrievals that involve a large number of dynamically calculated blocks which are each required to compute several other blocks. This may happen when there is a big hierarchy of sparse dynamic calc members. However, a large dynamic calculator cache size or a large value of CALCLOCKBLOCK may adversely affect retrieval performance when this method is used. In such cases, the method should be turned off by setting CalcReuseDynCalcBlocks to FALSE in the essbase.cfg file. Only retrievals are affected by this setting.

  • Cube with aggregations slower than a cube without aggregations

    Hi,
    I have an SSAS 2012 database that I am trying to tune for better performance.
    I implemented several best practices, like defining attribute relationships between attributes, and also created aggregations (with BIDS Helper) to try to tune query speed.
    But every time I compare my cube with the old one without aggregations, the times are almost the same.
    I ran SQL Profiler and saw that the aggregations are used, but at the end a Query Subcube event is always issued, and it increases the duration of the query.
    If my query can get everything from the aggregation, is it usual for a Query Subcube to still be done at the end?
    In the old cube, no aggregations are used and all information comes from the partition via Query Subcube.
    Thanks,
    Manuel Gomes

    Hi Gomes,
    If I understand correctly, you are encountering a performance issue when using SQL Server Analysis Services. Since we do not know the structure of your SSAS database, it is hard to give you the root cause of the performance issue. I'd suggest you enable SQL Server Profiler to monitor the queries fired by the process; once you find queries that take a very long time to run, consider creating smaller cube partitions or optimizing the query by adding indexes or partitions to improve the query performance.
    Here are some links about performance tuning, please see:
    http://www.mssqltips.com/sqlservertip/2565/ssas--best-practices-and-performance-optimization--part-1-of-4/
    http://technet.microsoft.com/en-us/library/cc966527.aspx
    Hope this helps.
    Regards,
    Charlie Liao
    TechNet Community Support

  • How to improve query performance using infoset

    I created one InfoSet including 4 characteristics and 3 DSOs, which are all time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows in the BEx Analyzer at all. In that case I have to close the BEx Analyzer first and then open it again; after that it shows real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!

    Hi
    As an InfoSet itself doesn't hold any data, it improves performance.
    Also go through the tips below.
    To find the query run-time, see:
    Note 557870 - 'FAQ BW Query Performance'
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in rows.
    Use transaction codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
    How to read ST03N datasets from DB
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the results faster from the OLAP cache.
    Also try
    1. Use the different views in ST03 to see the two important parameters: the aggregation ratio and the ratio of records transferred to the front end to records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and its aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
    It shows the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
    3. To check the performance of the aggregates, see the valuation and usage columns for the aggregates.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The valuation is shown as a string of plus or minus signs. The more plus signs, the better the evaluation of the aggregate and the more queries it satisfies: "+++++" means the aggregate is potentially very useful (good compression ratio, frequent access, good performance). The more minus signs, the worse the evaluation: "-----" means the aggregate is just overhead and can potentially be deleted.
    In the usage column, you can see how often the aggregate has been used by queries.
    Thus you can check the performance of each aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether the query hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI admin cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed it data, and use the ready-made reports it provides for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to transaction DB20, which gives you all the performance-related information, like:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces, etc.
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the RSDDSTATAGGRDEF* tables:
    Run the query in RSRT with "Execute + Statistics", copy the STATUID you get, and check it in the table.
    This shows you exactly which InfoObjects the query hits; if any one of the aggregate's objects is missing, it is a useless aggregate.
    6. Check table RSDDAGGRDIR in SE11; you can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Re: How to improve the performance of roll-up of aggregates for PCA InfoCubes

    Hi BW gurus,
    I have an unresolved issue, and our team is still working on it.
    I have already posted several questions on this, but I am still not clear on how to reduce the time of the roll-up of aggregates process.
    I have requested an OSS note and am searching myself, but still could not find one.
    Finally, I executed one of the cubes in RSRV with the database check
    "Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the errors and executed it once again, but I still found warnings. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated     
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BIC/D1001072~010 has possibly degenerated
    ORACLE: Index /BIC/D1001132~010 has possibly degenerated
    ORACLE: Index /BIC/D1001212~010 has possibly degenerated
    ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
    ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
    ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
    I don't know how to move further on this. Can anyone tell me how to tackle this problem to increase the performance of the roll-up of aggregates (PCA InfoCubes)?
    I create indexes and statistics regularly to improve performance; it works for a couple of days, and then the performance of the roll-up of aggregates gradually comes down again.
    Thanks and Regards,
    Venkat

    hi,
    Check in a SQL client the SQL created by BI against the query that you use directly from your physical layer.
    The difference between these two should be 2-3 seconds; otherwise you have problems. (These seconds are for the scripts needed by BI.)
    If you use "like" in your SQL, then forget indexes.
    For more information about indexes, check Google or ask your DBA.
    Last, I mentioned that the materialized view is not perfect, but it helps a lot, so why not try to split it into smaller ones?
    e.g.
    logical dimensions
    year-half-day
    company-department
    fact
    quantity
    Instead of making one, make 3:
    year - department - quantity
    half - department - quantity
    day - department - quantity
    and add them as data sources, assigning them the appropriate logical level in the business layer in the Administration Tool.
    Do you use partitioning functionality?
    I hope I helped.
    http://greekoraclebi.blogspot.com/
