Query Tracking

Hi All,
Is there any PeopleSoft-delivered tool we can use to track the queries run by users and the time taken for each query to complete? We need to track the changes made to the queries and their previous run times, so we are looking for a tool (if delivered by PeopleSoft) to track the run time.
Any help would be appreciated.
Thanks,
Chris

Go to PeopleTools > Utilities > Query Administration.
Search for the queries you want to monitor.
Select them and click the 'Logging On' button.
After this, PeopleSoft will track the query statistics.
You can return to the same page to see Avg Time, Avg Rows, # of Runs, Last Run Date, Last Run Time, etc.

Similar Messages

  • ASO - query tracking

    Hi all,
    We have an ASO cube and we are using EPM 11.1.2.1. I need some advice about query tracking.
    1. After enabling the "Query tracking" option, which tools can be used to track queries besides the add-in and MDX? Do queries run through Smart View or OBIEE get tracked?
    2. How can we automate the process of materializing the views for the tracked queries?
    Kindly advise.
    Thanks

    In ASO, query tracking is not something you can see in any tool. It is stored internally and lasts until you either stop the database or restructure it. It will track retrieves from Smart View, the Add-in, Financial Reports, Web Analysis, OBIEE and any other reporting tool you use against the cube.
    As for automation, you can automate the materialization using MaxL scripts. Look up execute aggregate process, build and selection in the Tech Reference for info.
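    A minimal MaxL sketch of that automation, using the selection and build statements mentioned above (the application, database and view file names are placeholders):
    /* turn on query tracking so user retrievals are recorded */
    alter database MyApp.MyDb enable query_tracking;
    /* ...let users or report scripts run their typical queries... */
    /* select views based on the tracked queries and save them to a view file */
    execute aggregate selection on database MyApp.MyDb based on query_data dump to view_file 'trackedviews';
    /* materialize the selected views */
    execute aggregate build on database MyApp.MyDb using view_file 'trackedviews';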

  • How to track querying time in ASO

    Hi
    I am working on an ASO cube that has a lot of member formulas in the outline.
    The formulas are really big and could be modified with some tweaks.
    Now I want to check whether my query time increases or decreases when I change the formulas. Is there any way, or a log file, to track how much time a query for a set of members from my spreadsheet takes?
    I want to compare how many seconds/minutes a query event took.

    Dude,
    I believe the best practice here is going to be to Enable Query Tracking in the Essbase Database (Right-Click database > Query Tracking > Enable).
    At this point we typically write several report scripts that simulate queries against HFR reports, etc. that the business would be pulling on a daily basis (or other frequency). You can even schedule the report scripts so that they execute at some intense frequency to check how the server handles concurrent requests, etc.
    The query tracking output should then provide you with most of the information you seek.
    Be sure to turn off query tracking for that database after you've completed your testing/exercise, as it adds additional processing to the database, which will slow down a production server.
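    As a rough MaxL sketch of how that can be scripted (application, database, report script and output file names are placeholders; the elapsed time of each report run can then be checked in the application log):
    /* turn query tracking on before the test runs */
    alter database MyApp.MyDb enable query_tracking;
    /* run a report script that simulates a typical user retrieval */
    export database MyApp.MyDb using server report_file 'qrytest' to data_file 'qrytest.out';
    /* ...repeat for the other simulated queries... */
    /* turn query tracking back off once the exercise is finished */
    alter database MyApp.MyDb disable query_tracking;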
    Some References:
    http://download.oracle.com/docs/cd/E17236_01/epm.1112/eas_help/frameset.htm?qrytrack.html
    http://download.oracle.com/docs/cd/E17236_01/epm.1112/eas_help/frameset.htm?dbwzagg_3.html
    If this was helpful or the correct answer please award points.
    Cheers,
    Christian
    http://www.artofbi.com

  • Track of Inventory opening balance

    Hi All,
    Can you please guide me a bit regarding the query below:
    Track of inventory opening balance
    How do we track or get the opening balances (of the accounts concerned) entered for the items?
    Thanks a lot in advance...

    Hi,
    Under Inventory -> Inventory Reports -> 'Inventory Audit Report' you can find the opening balance for each item.

  • ASO Calc Script (AKA Saved Query)

    We are leveraging query tracking and have attempted to save the tracked queries as a script we should be able to run monthly... "Should be able to" is the problem!
    Our outline dimensions are static, but the structures are very fluid, and this seems to keep our scripts from working. Does anyone have an idea how we can leverage these scripts, or something like them? I have attempted the following process with limited success:
    1. enable query tracking
    2. materialize suggested views
    3. run report scripts to mirror regular retrieval issues
    4. materialize the views based on the tracking
    Lengthy process, but it beats nothing.
    Any other ideas??

    I think you might be confusing BSO calc scripts with ASO custom calcs, there is a whole section in the documentation on ASO calcs - http://docs.oracle.com/cd/E40248_01/epm.1112/essbase_db/aso_custcalc_alloc.html
    There are also examples on the internet if you spend some time researching.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Questions on OBIEE features.

    Hi,
    I am new to Oracle BI.
    I have a few questions regarding OBIEE features. Kindly answer to the best of your knowledge; thanks in advance.
    1.     What database types/ sources can the BI pull from?
    2.     Is the license portable from one server to another?
    3.     Are pages, graphs, reports, etc. 508 compliant?
    4.     Is the license portable from one server to another?
    5.     Are pages, graphs, reports, etc. 508 compliant?
    6.     How large a dataset can the BI handle?
    7.     How easily does the BI tool integrate new data to it's database?
    8.     What precautions do the BI use to back up data?
    9.     Level of Effort/ Steps involved to install, upgrade, or maintain BI
    10.     Can the BI be used to create/moderate forums (wordpress or other)
    11.     Can user comments be routed to different mailboxes based on predefined conditions (for example, url or user actions on the page)?
    12.     Does dashboard support embedding flash?
    13.     Are cross-tab capabilities supported in business view, dashboard, and ad hoc queries?
    14.     What type of charts can be created with the BI?
    15.     What is the level of effort/ steps involved to create new tools and widgets?
    16.     Is the BI compatible with Linux/Unix?
    17.     Are end user analytics recorded? (Web trends records, query tracking, page usage, etc.)
    18.     How does the BI handle version control?

    877064 wrote:
    Hi,
    I am new to Oracle BI.
    I have a few questions regarding OBIEE features. Kindly answer to the best of your knowledge; thanks in advance.
    1.     What database types/sources can the BI pull from? OBIEE supports many popular databases out there, including flat files and XML as sources.
    2.     Is the license portable from one server to another? No. As far as I know, you have to buy the license based on the number of servers you are installing on.
    3.     Are pages, graphs, reports, etc. 508 compliant? OBIEE is a 508-compliant application.
    4.     Is the license portable from one server to another? Refer to the answer for Question #2.
    5.     Are pages, graphs, reports, etc. 508 compliant? Refer to the answer for Question #3.
    6.     How large a dataset can the BI handle? It depends. You can do performance tuning on both the BI Server side and the database to boost performance. Caching plays a major role here.
    7.     How easily does the BI tool integrate new data into its database? Not sure what you are asking here.
    8.     What precautions does the BI use to back up data? OBIEE does not store any data. It only stores the metadata of the tables you are using.
    9.     Level of effort/steps involved to install, upgrade, or maintain BI? There is pretty good documentation out there and many blog posts that should help ease you through the installation process.
    10.     Can the BI be used to create/moderate forums (WordPress or other)? No. OBIEE is not a CMS.
    11.     Can user comments be routed to different mailboxes based on predefined conditions (for example, URL or user actions on the page)? Yes.
    12.     Does the dashboard support embedding Flash? Yes.
    13.     Are cross-tab capabilities supported in business view, dashboard, and ad hoc queries? Yes.
    14.     What types of charts can be created with the BI? Many different kinds. Some of them are:
    Bar
    Line
    Area
    Pie
    Line-Bar
    Time Series Line
    Pareto
    Scatter
    Bubble
    Radar
    15.     What is the level of effort/steps involved to create new tools and widgets? Not sure what you are asking here.
    16.     Is the BI compatible with Linux/Unix? Yep.
    17.     Are end-user analytics recorded? (Web trends records, query tracking, page usage, etc.) Yes, via usage tracking.
    18.     How does the BI handle version control? There are a good number of useful tools for version control in OBIEE.
    Refer: http://www.rittmanmead.com/2009/01/simple-version-control-for-obiee-using-subversion-visualsvn-server-and-tortoisesvn/
    -Amith

  • ExceptionInInitializerError on EAS 7.1.3 Install

    Hi, I'm just installing EAS 7.1.3 on a Win 2000 SP4 server, selecting both Server and Console with the default MySQL and Apache Tomcat. It goes through the installation wizard all right until the very end, when I get "Fatal Application Error", "This Application has Unexpectedly Quit", "Invocation of This Java Application has caused an ExceptionInInitializerError. This application will now exit (LAX)". Clicking on details gives me a whole heap of Java-related errors.
    The console works fine when connecting to another EAS Server, but I cannot start the Server on that local box, and cannot start the service (via the start_service.bat file in the /EAS/Server/bin directory). I cannot un-install either; the wizard goes through a few next/next/next, but the description area of the wizard box remains empty until the end, and the software remains on the local disk. Any ideas??

    One aspect of designing aggregations in EAS (AAS in System 9) is that you can enable query tracking, which causes the server to watch how queries actually use ASO. Then you can use EAS/AAS to add aggregations based on your actual usage, which may improve performance on queries of interest. Once you've saved the aggregation plan produced by that, I believe you can just script the building of aggregations using that plan through MaxL with the following command:
    execute aggregate build on database <your-db> using view_file <aggregation-view-file>;
    HTH

  • Routing logs to individual log file in multi rules_file MaxL

    Hi Gurus,
    I have been away from this forum for a long time. I have a situation here, and I am trying to find the best approach for operational benefit.
    We have an ASO cube (Historical) that keeps 24 months of snapshot data and is refreshed monthly on a rolling 24-month basis. The cube size is around 18.5 GB, and the input-level data size is around 13 GB. For the monthly refresh the current process rebuilds the cube from scratch, deleting the oldest snapshot before it adds last month's snapshot. The entire process takes 13 hours of processing time because the server doesn't have enough CPUs to support parallel operations.
    Since we recently moved to 11.1.2.3 and have ample CPUs (8) and RAM (16 GB), I'd like to take advantage of parallelism and go for an incremental load. Prior to that, since the outline build is EPMA-driven, I'd like to rebuild only the dimensions (with all data, which essentially restructures the DB) after the metadata refresh, so that I can keep my history intact, and only then proceed to load the last month's data after clearing out the oldest snapshot.
    My MaxL script looks like below:
    /* Set up logs */
    set timestamp on;
    spool on to $(mxlLog).log;
    /* Connect to Essbase */
    login $key $essUser $key $essPwd on $essServer;
    alter application "$essApp" load database "$essDB";
    /* Disable User Access to DB */
    alter application "$essApp" disable connects;
    /* Unlock all objects */
    alter database "$essApp"."$essDB" unlock all objects;
    /* Clear all data for previous month*/
    alter database "$essApp"."$essDB" clear data in region 'CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})' physical;
    /* Load SQL Data */
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using multiple rules_file 'LOADDATA','LOADJNLS','LOADFX','LOAD_J1','LOAD_J2','LOAD_J3','LOADDELQ' to load_buffer_block starting with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
    /* Selects and build an aggregation that permits the database to grow by no more than 300% */
    execute aggregate process on database "$essApp"."$essDB" stopping when total_size exceeds 4 enable alternate_rollups;
    /* build query tracking views */
    execute aggregate build on database "$essApp"."$essDB" using view_file 'gw';
    /* Enable Query Tracking */
    alter database "$essApp"."$essDB" enable query_tracking;
    /* Enable User Access to DB */
    alter application "$essApp" enable connects;
    logout;
    exit;
    I am able to achieve some performance improvement, but it is not satisfactory. So I have a couple of questions below.
    1. Can the highlighted statements above be tuned further? My major problem is in clearing only one month's snapshot, where I need to clear one scenario and the designated first month.
    2. With the multiple rules_file statement, how do I write the log of each load rule to a separate log file instead of one? My previous process wrote an error log for each load rule to a separate file and consolidated them at the end of the batch run into a single file for the whole batch execution. (See the sketch at the end of this post for one possible way to split the load.)
    Appreciate any help in this regard.
    Thanks,
    DD
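    A minimal sketch of one way to get a separate error file per rules file is to split the single multiple-rules_file import into per-rule buffer loads that are committed together (buffer ids are illustrative, and only two of the rules files are shown):
    /* initialize one load buffer per rules file */
    alter database "$essApp"."$essDB" initialize load_buffer with buffer_id 1;
    alter database "$essApp"."$essDB" initialize load_buffer with buffer_id 2;
    /* one import per rules file, each writing rejects to its own error file */
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using server rules_file 'LOADDATA' to load_buffer with buffer_id 1 on error write to "$(mxlLog)_LOADDATA.err";
    import database "$essApp"."$essDB" data connect as $key $edsUser identified by $key $edsPwd using server rules_file 'LOADJNLS' to load_buffer with buffer_id 2 on error write to "$(mxlLog)_LOADJNLS.err";
    /* commit all buffers to the database in a single operation */
    import database "$essApp"."$essDB" data from load_buffer_array with buffer_id 1, 2;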

    Thanks Celvin. I'd rather route the MaxL logs to one log file and consolidate them into the batch logs instead of using multiple log files.
    Regarding the partial clear:
    My worry is that I first tried the partial clear with 'logical', and that too took a considerable amount of time; the difference between the logical and physical clear is only 15-20 minutes. FYI, I have 31 dimensions in this cube, and the Scenario->ACTUAL and Period->&CLEAR_PERIOD (SubVar) hierarchies used in the MDX clear script are of dynamic hierarchy type.
    Is there a way I can rewrite the clear-data MDX so that it clears faster than this:
    <<CrossJoin({([ACTUAL])},{[&CLEAR_PERIOD]})>>
    Does this clear MDX have any effect on the dynamic/stored hierarchy nature of the dimensions? If not, what would be the optimized way to write this MDX?
    Thanks,
    DD

  • Essbase add views to existing aggregation

    Hello All,
    I need help adding new views to an existing .csc file.
    Here is the process:
    1)     After cube data is loaded query tracking is enabled
    2)     Users run their retrieves
    3)     Aggregation is saved and new views are added to the existing views in the file
    The MaxL I have for step 3 is the following:
    execute aggregate selection on database application.database based on query_data dump to view_file myviewfile;
    The following error displays with this script:
    Viewfile [] already exists
    Unexpected Essbase error 12141180
    What do I need to do to add the views to the existing file?
    Thank you

    Thanks John - I wasn't sure of that myself and ran a quick test last night, your skepticism prompted me to re-run it.
    The first time around I was sure it produced a file that contained all existing views plus the new ones (which isn't strictly the same as appending to the file, but produces the same results in this situation). However, when I retest in the cold light of day, it doesn't contain all the existing views - only some of them, presumably because they just happen to have been reselected by the new selection process.
    There is some syntax in the Tech Ref which purports to ensure that existing views wind up in the new script, which may help the o/p:
    execute aggregate selection on database app.db using views 0, 1, 2, 3, 4 with outline_id 12345 force display based on query_data force_dump to view_file filename;
    The choice of 'display' or 'suppress' keywords determines whether the new file contains the existing views or not. There doesn't seem to be a shorthand for listing out the view ids, which is a bit of a pain. The view ids can be obtained from the original view file (they are the values on every other line from the third line onward) or as output from:
    query database app.db list existing_views;
    Never actually used this except just now to test it. It'd be a good idea to back up the existing view file regardless of approach. If I used the above syntax, I think I'd be inclined to use 'suppress' to produce a second script rather than modifying the original.

  • In OBIEE, retrieving base members along with their attributes is very slow

    If I use attributes in the prompts and not in the actual report columns, the report retrieval is very fast. But as soon as I add the base member, Attribute1, Measures1, Measure2, etc., the retrieval is slow. All the measures are dynamically calculated. Adding the attribute to the prompt works fine. How do I design an aggregation with the attributes? I have enabled query tracking. Any suggestions?

    I really recommend reading the DBAG section I linked in your other recent question: Re: ASO Cube with attributes very slow in retrieval
    For various classes of attribute queries, aggregations are not / can not be used - unfortunately this is just how ASO works. Personally, I have had to make what would logically be an attribute into a real, stored dimension because of this limitation.

  • ASO update of attribute associations causes Data to be "converted"?

    Has anyone seen the issue in 11.1.2.3(.506) where associating attributes with a dimension causes Essbase to reorganize the data (this is referred to as a "convert")?
    e.g.
    import database app.db dimensions connect as user identified by password using server rules_file 'DimFuel' on error write to '/essapp/subject/sos/logs/sosp04_dims.build_dim_sosp04_sosp04_dimopen.20150429011053.err':
      OK/INFO - 1270086 - Restructuring converted [5.72103e+08] input cells and removed [0] input cells.
      OK/INFO - 1270087 - Restructuring converted [32] and removed [0] aggregate views.
      OK/INFO - 1007067 - Total Restructure Elapsed Time : [1944.88] seconds.
    Previously this process ran in 3 seconds in 11.1.2.2(.100).

    I agree with everything that Tim has said. Let me elaborate a bit more, which might be helpful:
    The fact that agg views are not based on query tracking makes no difference in the analysis.  Query tracking only affects WHICH views are selected.  Once views are selected by whatever means they are handled in the same way.  Is there a reason you think otherwise?
    Let's divide the question into two parts: 
        1.  What is a restructure, and why is one needed?
        2.  Why must the agg views be converted?
    But first realize there are two possibilities concerning WHICH agg views actually exist:
      a. All views might be on the primary hierarchy only (you told Essbase not to consider alternate hierarchies when creating aggregates) - let's call that the Agg_Primary option
      b. Views might be based on both primary and alternate (which includes attribute) hierarchies - let's call that the Agg_Any option
    All of this is discussed in my chapter "How ASO Works and How to Design for Performance" in the book "Developing Essbase Applications" edited by Cameron Lackpour. You will also find a discussion of the bitmap in the section of the DBAG entitled "An aggregate storage database outline cannot exceed 64 bits per dimension" (Essbase Administrator's Guide Release 11.1.2.1, Chapter 62, page 934), and in a presentation I made at Kscope 2012, which you can find on ODTUG.com.
    1.  Why a restructure? As Tim says, the outline has changed, and any time the number of levels per hierarchy or the width of the hierarchy changes, the coding system used for the data changes. OK, what do I mean by this? The binary system by which each piece of data in your cube is described is called the "bitmap". In actuality, this bitmap only reflects the data's position in the primary hierarchy for each dimension. The primary hierarchy for a specific dimension is not necessarily the first hierarchy seen in your outline. It is the "widest" hierarchy - the one requiring the greatest number of bits to represent all Level 0 (L0) members found in that hierarchy and each L0's full ancestry within that hierarchy.
    If you read the references above you will see that the number of bits used to determine the "widest" hierarchy is a function of the number of levels and the size of the largest "family" in a hierarchy.  A hierarchy of 5 levels where the largest family has 17 members will require 3 bits more than a hierarchy of 5 levels with 4 members in the family (17 requires 5 bits and 4 requires 2 bits).  So you can see that any time you add more members you could be causing the size of the largest family to exceed a power of 2 boundary.
    Additionally, if the primary hierarchy is NOT all-inclusive - i.e., it does not contain all L0 members for that dimension - then you have to add a sufficient number of bits to enumerate the hierarchies.
    So, in summary, changes to the width or the height (number of levels) will require a restructure, forcing the identifying bits on every piece of data to be updated.
    In the case the OP mentions, where the ONLY changes are to add attribute associations, you normally would NOT expect to see a change in the bitmap due to the number of bits required. This is because attribute dimensions can never have new unique L0 members (you cannot associate to something that does not exist). If you go through the math (and realize that the bitmap for an attribute dimension does NOT have to consider the size of the L0 families of the primary hierarchy - or whichever level the association is based on), you will find that there is no possible attribute dimension that can require more bits than the primary hierarchy - UNLESS you have been sloppy and you have a primary dimension with x L0 members and a very large secondary hierarchy with x-n UNIQUE L0 members (i.e., L0 members not appearing in the primary hierarchy).
    So the answer is to inspect the statistics tab before and after your addition of member associations.
    Note (and this does NOT pertain to the OP's case): a similar situation exists if all secondary hierarchies contain ONLY members that already exist within the primary hierarchy. However, remember that the FIRST hierarchy in your outline is not necessarily the primary hierarchy. Most people, however, consider the first hierarchy the de-facto primary. If one is in a hurry and adds a member to that first hierarchy but does not add it to ALL of the alternate hierarchies (one of which is the true primary), then bits will have to be added to each piece of data to enumerate the hierarchies - thus triggering a restructure.
    Finally, I am relying on the OP's description of the conditions, where it was stated that ONLY associations were added and that no upper-level attribute members were created. If upper-level attribute members are added, it is possible that the number of levels for the attribute dimension changes. In this case a mini restructure would be required - one that would not change the bitmap on every data item, but would instead change the mapping table that relates alternate hierarchies to the primary hierarchy. Note that the existence of this table and its exact structure is not acknowledged by Oracle - I (Dan Pressman) have postulated it as one possible implementation of the observed functionality.
    2.  Aggregate View Conversion: Each data item is tagged not only with a bitmap indicating its position within each primary hierarchy but also with a "View ID". This is the number that is seen in the ASO version of the .CSC file created whenever the view definition is saved. The input data is always identified by a View ID of 0. The View ID of other views is a function of the number of levels (and the bits therefore required) of ALL hierarchies of ALL dimensions. Therefore any restructure will require that the View IDs of all aggregate views be translated (or converted) from the old scheme to the new scheme. Note that this is purely a translation and no aggregation is required.
    Please excuse me if I post this now and add some more later on this second question - actually let me know if anyone has actually read this far and is interested.
    Please note that in the above, whenever I have said "each data item" I am referring to the situation where no dimension has been tagged as the "compression" dimension. If a compression dimension has been used, then replace the phrase "each data item" with the phrase "each data bundle". I will leave it to the reader to find the section of the DBAG that describes compression and data bundles.

  • Retrieval time unchanged even after extra dimensions removed from ASO app

    We have an ASO application on which HFR reports are run. The application has 18 dimensions. The HFR reports were taking a long time to generate, so we created a duplicate of that application and removed 5 dimensions that were not being used. Even after running the reports from the duplicate application, the generation time has not decreased.
    Please advise.

    Do you know how the report generation time relates to the Essbase query time (check the Essbase application log)? It might be that you're doing a lot of work on the FR side.
    If the problem really is on the ASO side, did the number of input data cells and / or key length change significantly when you removed the dimensions? If not, removing dimensions (which I am assuming were not referenced in the report in the first place) might not get you very much in terms of query performance. Have you created any aggregate views on the ASO cube?
    If you have a particular 'problem' report it's possible to use query tracking to create some aggregations directly to support that one query, particularly if the report always retrieves at the same level in each dimension.

  • Studio MaxL Help

    Hi Experts,
    Currently I am loading the ASO cube using an Essbase Studio MaxL script.
    Could you please help me with what I have to add to the script below "to enable query tracking and to use an already saved aggregation list"?
    "deploy data from model 'XXXX' in cube schema '\XXX' login 'XXX' identified by 'XXX' on host 'XXX' to application 'XXX' database 'XXX' overwrite values use streaming build using connection 'XXXX' keep 200 errors on error ignore dataload write to default;"
    Please kindly help me resolve this.

    After your deploy command add
    Alter database xxxxx.yyyyy enable query_tracking;
    execute aggregate build on database xxxx.yyyy using view_file xxxxxx;

  • Design Aggregation for ASO cubes

    Hi All,
    We have 12 ASO cubes on each server, and each cube has a minimum of 5-6 months of data. The normal aggregation data preview is taking a lot of time, so we are planning to do a design aggregation. We have a small doubt about this aggregation:
    1) Does it have to be done every time we load data into the cubes, or, once we have done it, is it applied automatically?
    2) Suppose this month we load 2 months of data into the cubes and then perform the design aggregation. Next month we load another 2 months of data. Do we again need to do the design aggregation for the full 4 months, or is there any method for partial or incremental design aggregation? I.e., since we have already done the design aggregation, I want to cover only the next two months, not the full 4 months.
    Kindly let me know if any automation process is available for this.
    Thanks,
    Vikram

    Hi Vikram,
    Which version are you using?
    1) Do you reset the cube (clear the data) whenever you reload it?
    If yes, you can't expect your earlier aggregations to still be there. However, if you've saved your agg. selections and the outline is more or less the same, you can materialize the aggs by using the saved script.
    If your load is an incremental one, the changes that take place in your outline matter a lot, as they may invalidate your previous agg. selections.
    2) As more level-0 data starts flowing in, you have to periodically monitor the trade-off between agg. time/space and performance requirements. The only thing I know of for incremental aggregation in ASO is enabling query tracking and doing the necessary aggs.
    Visit these links to learn how to automate it:
    [execute aggregate selection|http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_techref/maxl/ddl/statements/execaggsel.htm]
    [execute aggregate process|http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_techref/maxl/ddl/statements/execagg.htm]
    Let us know in case you have any questions.
    - Natesh

  • ASO Aggregations

    Hi, I'm having a difficult time finding the right balance between aggregation, cube size and retrieval time. I have an ASO cube that has 3 years of data in it, totalling 160 MB. It has a total of 8 dimensions, 6 of which contain multiple rollups, so the data is obviously being aggregated at many different points. The problem is that if I set the aggregation size higher, this helps make the retrieval times faster, but at the expense of making the cube much larger. For example, an aggregation of 400 MB will take 14.79 seconds, whereas an aggregation of 1500 MB will take 7.66 seconds. My goal is to get retrieval times down to 1-3 seconds, where the user would notice almost no delay at all.
    Does anyone have any suggestions on how to find a good balance between faster retrievals and the different aggregation sizes? FYI, I've also tried changing whether members are tagged as Dynamic or Multiple Hierarchies Enabled, but I didn't find any noticeable difference. Is there anything else I should consider to make the retrievals faster?
    Thanks in advance.

    Given that disk space is relatively inexpensive, this is often something you shouldn't worry about as much as query performance, especially since even a highly aggregated ASO cube is only a fraction of the footprint of a BSO cube. You should maximize aggregation as much as you can to get the best retrieval time possible and not worry about the disk space too much. Unfortunately, in current releases of Essbase you cannot customize aggregations, but you can use query tracking to improve them by focusing on the areas most often queried. So I would suggest enabling that, letting your users get in there, and then aggregating again based on the results of the tracking. Of course, remember that by its very nature an ASO cube is very dynamic and some queries are just going to take a little longer.
    Something else to consider is how complex the query is and whether the amount of time it takes to retrieve is appropriate. Are you pulling back thousands and thousands of members? If you are, then you have to expect a certain amount of time just to bring over the metadata. Try turning on "navigate without data": if your queries still take a long time to come back even though you are not pulling data, it just means you have a large result set and that's just the way it is. Also check whether your result set returns members with member formulas; MDX formulas can take a little while to run depending on how complex they are and how well they are written.
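    As a rough MaxL sketch of that approach (application and database names and the size ratio are placeholders to tune for your own size-versus-speed trade-off):
    /* enable query tracking and let users run their usual retrievals */
    alter database MyApp.MyDb enable query_tracking;
    /* later: build an aggregation driven by the tracked queries, capping growth at roughly 1.5 times the input-level size */
    execute aggregate process on database MyApp.MyDb stopping when total_size exceeds 1.5 based on query_data;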
