Scripting Aggregate Storage Views

I cannot seem to find anywhere that I can use MaxL to script a view materialisation specifically. I can obviously tell it to aggregate up to a certain size, after having used the wizard to create the optimum views and cribbing the size of the views created. However, is there syntax that allows me to specify views explicitly, e.g. {0,1,0,2,2,3,3,2,3} to specify the levels of aggregation for a nine-dimensional ASO model? I would like to automate this as part of an overnight process.
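There is, as far as I know, no MaxL clause that accepts a raw level tuple such as {0,1,0,2,2,3,3,2,3}. What can be scripted is replaying a saved view selection: design the optimum views once in the EAS aggregation wizard, save the selection as a view file on the server, and then rebuild exactly those views each night. A minimal sketch, assuming a 9.3-or-later server and placeholder names aso_app.sales for the database and nightly for the saved view file:

    /* Rebuild the views captured in the saved view-selection file 'nightly'
       (created from the EAS aggregation design wizard). */
    execute aggregate build on database aso_app.sales using view_file 'nightly';

    /* Fallback when no view file exists: let Essbase choose the views,
       stopping when the aggregate reaches 1.3 times the input-level size. */
    execute aggregate process on database aso_app.sales
        stopping when total_size exceeds 1.3;

Either statement can then be dropped into the overnight MaxL batch after the data load.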


Similar Messages

  • Aggregate Storage Backup level 0 data

    When exporting level 0 data from aggregate storage through a batch job you can use a MaxL script with "export database [dbs-name] using server report_file [file_name] to data_file [file_name]". But how do I build a report script that exports all level 0 data so that I can read it back with a load rule?
    Can anyone give me an example of such a report script? That would be very helpful.
    If there is a better way to approach this matter, please let me know.
    Thanks
    /Fredrik

    An example from the Sample:Basic database:

    // This Report Script was generated by the Essbase Query Designer
    <SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
    <COLUMN("Year")
    <ROW("Measures","Product","Market","Scenario")
    // Page Members
    // Column Members
    // Selection rules and output options for dimension: Year
    {OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND (<IDESC("Year")))
    // Row Members
    // Selection rules and output options for dimension: Measures
    {OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
    // Selection rules and output options for dimension: Product
    {OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND (<IDESC("Product")))
    // Selection rules and output options for dimension: Market
    {OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND (<IDESC("Market")))
    // Selection rules and output options for dimension: Scenario
    {OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
    !
    // End of Report

    Note that no attempt was made here to eliminate shared member values.
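    To run this from a batch, the report script above can be saved on the server and invoked with the export statement quoted in the question. A sketch, assuming the script is saved on the server as lev0exp and the target database is sample.basic (both placeholder names):

        /* Export all level-0 data via the report script, into a tab-delimited
           file that a load rule can read back. */
        export database sample.basic using server report_file 'lev0exp' to data_file 'lev0exp.txt';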

  • Derived Cells in Aggregate storage

    The aggregate storage loads obviously ignore the derived cells. Is there a way to get these ignored records diverted to a log or error file to view and correct the data at the source system?
    Has anybody tried any methods for this? Any help would be much appreciated.
    -Jnt

    Did you ever get this resolved? We are running into the same problem. We have an ASO db which requires YTD calcs and TB Last. We've tried two separate options (CASE and IF statements) on the YTD, Year and Qtr members (i.e. MarYTD). Both worked, and now we are concerned about performance. Any suggestions?

  • SSPROCROWLIMIT and Aggregate Storage

    I have been experimenting with detail-level data in an Aggregate Storage style cube. I will have 2 million members in one of my dimensions; for testing I have 514,683. If I try to use the spreadsheet add-in to retrieve from my cube, I get the error "Maximum number of rows processed [250000] exceeded [514683]". This indicates that my SSPROCROWLIMIT is too low. Unfortunately, the upper limit for SSPROCROWLIMIT is below my needs. What good is this new storage model if I can't retrieve data from the cube! Any plans to remove the limit?
    Craig Wahlmeier

    We are using ASO for a very large (20 dims) database. The data compression and performance have been very impressive. The ASO cubes are much easier to build, but have far fewer options: no calc scripts, and formulas are limited in that they can be used only on small dimensions and only on one dimension. The other big difference is that you need to reload and recalculate your data every time you change metadata. The great thing for me about 7.1 is that it gives you options, particularly when dealing with very large, sparse, non-finance cubes. If your client is talking about making calcs faster, ASO is only going to work if it is an aggregation calc.
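    A side note on the original question: SSPROCROWLIMIT is a server-wide essbase.cfg setting, not per-session, and a change only takes effect after the Essbase server restarts. A sketch of the relevant line, assuming the documented ceiling of 500,000 rows (which is indeed below the 514,683 rows needed above):

        SSPROCROWLIMIT 500000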

  • Aggregate storage data export failed - Ver 9.3.1

    Hi everyone,
    We have two production servers: Server1 (App/DB/Shared Services server) and Server2 (Analytics). I am trying to automate a couple of our cubes using Windows batch scripting and MaxL. I can export the data within EAS successfully, but when I use the following command in a MaxL editor, it gives the following error.
    Here's the MaxL I used, which I am pretty sure is correct.
    Failed to open file [S:\Hyperion\AdminServices\deployments\Tomcat\5.0.28\temp\eas62248.tmp]: a system file error occurred. Please see application log for details
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270083)
    A system error occurred with error number [3]: [The system cannot find the path specified.]
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270042)
    Aggregate storage data export failed
    Does anyone have any clue why I am getting this error?
    Thanks in advance!
    Regards
    FG

    This error was due to incorrect SSL settings for our Shared Services.

  • Load and Unload Alias Table - Aggregate Storage

    Hi everyone,
    Here I am again with another question about aggregate storage...
    There is no "load" or "unload" alias table listed as a parameter for "alter database" in the syntax guidelines for aggregate storage (see http://dev.hyperion.com/techdocs/eas/eas_712/easdocs/techref/maxl/ddl/aso/altdb_as.htm).
    Is this not a valid parameter for aggregate storage? If not, how do you load and unload alias tables if you're running a batch script in MaxL and you need the alias table update to be automated?
    Thanks in advance for your help.

    Hi anaguiu2,
    I have the same problem now. Did you find a solution for loading and unloading alias tables in aggregate storage? Could you share the solution you used if you have one?
    Thanks, Manon

  • Out of memory Error while querying SQL Script based Calculation View

    Hi All,
    I wanted to test the performance of Graphical and SQL Script based Calculation views.
    Created Graphical (CA_GRPH) and SQL Script (CA_SQL) Calculation views.
    The Analytic View (AN_GRPH) behind both Calculation Views is the same, and is graphical (90 Attributes and 5 Measures).
    In the Analytic View data foundation I have a fact table with 1.5 billion records and 9 dimension tables holding collectively 500 million records (7 Attribute Views); there are 9 referential joins with cardinality N:1 and 1 referential join with cardinality N:N.
    I wanted to keep (CA_GRPH) and (CA_SQL) as base Calculation views and leverage those to create various calculation views (will be creating different calc views for respective business segments).
    In order to test this I have created below calc views on top of base calc views.
    Graphical Based: Created (CA_GRAPH_XYZ) by having CA_GRPH in projection with 30 Calculated Columns. – This retrieves data in 13 secs
    SQL Script Based: Created (CA_GRPH_ABC) by having CA_SQL in projection view with 30 calculated columns – This errors out after 1.50 mins.
    Could not execute 'SELECT "COLUMN_A","COLUMN_B","COLUMN_C","COLUMN_D", SUM("COLUMN_REVENUE") AS ...' in 1:50.480 minutes.
    SAP DBTech JDBC: [2048]: column store error:  [2048] column store error: search table error: [1000002] Error executing physical plan: exception 1000002:
    ltt/impl/memory.cpp:63
    Out of memory ; $size$=1507711; $name$=ihm; $type$=pool; $inuse_count$=170104; $allocated_size$=219215007925
    exception 1000002:
    Any suggestion / help in fixing this issue will be greatly appreciated.
    Regards,
    Av

    Hi Raj,
    Thanks for your time. Please find an edited snapshot of the Analytic View (AN_GRPH) below.
    Calculation view (CA_SQL):
            /********* Begin Procedure Script ************/
    BEGIN
           var_out =
         SELECT
                "COLUMN_1"
                "COLUMN_2",
                "COLUMN_84",
                "COLUMN_85;",
                SUM("REVN") AS "REVN",
                SUM("MGN") AS "MGN",
                SUM("ORD_QTY") AS "ORD_QTY",
                SUM("SYS_QTY1") AS "SYS_QTY1",
                SUM("SYS_QTY") AS "SYS_QTY"
    FROM
          "_SYS_BIC"."XYZ/AN_GRPH"
    GROUP BY
                "COLUMN_1"
                "COLUMN_2",
                "COLUMN_84",
                "COLUMN_85";
    END
    /********* End Procedure Script ************/
    Later I built one more Calculation View (CA_GRPH_ABC) using (CA_SQL) in the projection. I have 30 calculated measures in this final calc view, and it is this final calc view that is throwing the above-mentioned error.
    I am not sure whether a SQL Script based calc view can be used inside a graphical calc view?
    Regards,
    AV

  • Call a Graphical Calc view with input Parameters from a Script Based Calc View

    Hi All.
    I am trying to call a graphical calculation view with input parameters from a script-based calculation view as below, but I am getting a syntax error:
    SESSION_SAMPLE = SELECT SESSION_CREATE_DATE,SHA256,CA_MEASURE
                                 FROM "_SYS_BIC"."WILDFIRE/CV_SESSION_SAMPLE"
                                 WITH PARAMETERS  ('PLACEHOLDER' = ('$$IP_START_DATE$$',:START_DATE),
                                     'PLACEHOLDER' = ('$$IP_END_DATE$$',:END_DATE));
    START_DATE and END_DATE are input parameters of the script-based calculation view.
    Can anyone please help me with the correct syntax for accomplishing this?
    Thanks,
    Goutham

    Hi Goutham,
    One more option I would like you to try is the version below; here I have changed how the parameters are passed, nothing else.
    SESSION_SAMPLE = SELECT SESSION_CREATE_DATE, SHA256, CA_MEASURE
                                 FROM "_SYS_BIC"."WILDFIRE/CV_SESSION_SAMPLE"
                                   ('PLACEHOLDER' = ('$$IP_END_DATE$$', :END_DATE),
                                    'PLACEHOLDER' = ('$$IP_START_DATE$$', :START_DATE));
    Regards,
    Vinoth

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    Just got version 9.3.1 installed. I can finally load to an aggregate storage database using the Excel Essbase send function; however, it is very slow, especially when loading many lines of data. Block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue, and therefore not much can be done?

    As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, the update process is throttled on the server side so that only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users couldn't even do a read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • Clear Partial Data in an Essbase Aggregate storage database

    Can anyone let me know how to clear partial data from an Aggregate Storage database in Essbase v11.1.1.3? We are trying to clear some data in our database and don't want to clear out all the data. I am aware that Version 11 Essbase allows a partial clear if we write it using MDX commands.
    Can you please help by giving us some examples of the same?
    Thanks!

    John, I clearly get the difference between the two. What I am asking is: in the EAS tool itself for v11.1.1.3 we have an option, on right-clicking the DB, of "Clear", and in turn sub-options like "All Data", "All aggregations" and "Partial Data".
    I want to know more about this option. How will it know which partial data is to be removed, or will it ask us to write some MaxL query for the same?
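    For what it is worth, the "Partial Data" option in EAS prompts for the region to clear, and the same operation can be scripted: in 11.x MaxL, alter database accepts an MDX set expression defining the region. A sketch, assuming a hypothetical MyApp.MyDb and a Scenario member named Budget:

        /* Logical clear: writes offsetting values so the region nets to zero (fast). */
        alter database MyApp.MyDb clear data in region '{[Budget]}';

        /* Physical clear: actually removes the cells (slower, but a smaller database). */
        alter database MyApp.MyDb clear data in region '{[Budget]}' physical;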

  • YTD Performance in Aggregate Storage

    Has anyone had any problems with performance of YTD calculations in Aggregate storage? Any solutions?

    Did you ever get this resolved? We are running into the same problem. We have an ASO db which requires YTD calcs and TB Last. We've tried two separate options (CASE and IF statements) on the YTD, Year and Qtr members (i.e. MarYTD). Both worked, and now we are concerned about performance. Any suggestions?

  • Dataload in Aggregate storage outline

    Hi All,
    My existing code, which works when loading data into a block storage outline, is not working for an aggregate storage outline. When I call the "SendString" API simultaneously about 3-4 times, I get the error "Not supported for agg. storage outline". Are there any API changes for loading data into an agg. storage outline? I didn't find anything related to such changes in the documentation.
    Regards,
    Samrat

    I know that EsbUpdate and EsbImport both work with ASO.
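    If scripting is acceptable, the load can also be moved out of the API into MaxL, which supports ASO data loads. A sketch, with hypothetical application, data file and rules file names:

        /* Load a server-side data file through a server-side load rule,
           writing any rejected records to an error file. */
        import database MyApp.MyDb data
            from server data_file 'lev0data.txt'
            using server rules_file 'ldall'
            on error write to 'dataload.err';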

  • Change Aggregate Storage Cache

    Does anyone know how to change the aggregate storage cache setting in MaxL? I can no longer see it in EAS and I don't think I can change it in MaxL. Any clue?
    Thanks for your help.

    Try something like
    alter application ASOSamp set cache_size 64MB;
    I thought you could right-click the ASO app in EAS and edit properties > Pending cache size limit.
    Cheers
    John
    http://john-goodwin.blogspot.com/
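    One detail worth adding, and the reason the EAS field is labelled "Pending cache size limit": the new size takes effect when the application restarts, not immediately. Continuing John's ASOSamp example, the restart can be scripted as well:

        alter application ASOSamp set cache_size 64MB;
        /* The new size is pending until the application restarts. */
        alter system unload application ASOSamp;
        alter system load application ASOSamp;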

  • Incremental Load in Aggregate Storage

    Hi,
    From what I understand, Aggregate Storage (ASO) clears all data if a new member gets added to the outline.
    This is unlike Block Storage (BSO), where we can restructure the cube if a new member is added to the outline.
    We need to load data daily into an ASO cube and the cube contains 5 yrs of data. We may get a new member in the customer dimension daily. Is there a way we can retain (restructure) existing data when updating the customer dimension and then add the new data? Otherwise, we will have to rebuild the cube daily and therefore reload 5 yrs of data (about 600 million recs) on a daily basis.
    Is there a better way of doing this in ASO?
    Any help would be appreciated.
    Thanks
    --- suren_v

    Good information Steve. Is the System 9 Essbase DB Admin Guide available online? I could not find it here: http://dev.hyperion.com/resource_library/technical_documentation
    (I recently attended the v7 class in Dallas and it was excellent!)
    Originally posted by scran4d: "Suren: In the version 7 releases of Essbase ASO, there is not a way to hold on to the data if a member is added to the outline; data must be reloaded each time. This is changed in Hyperion's latest System 9 release, however."

  • Aggregate storage cache warning during buffer commit

    Summary
    Having followed the documentation to set the ASO storage cache size I still get a warning during buffer load commit that says it should be increased.
    Storage Cache Setting
    The documentation says:
    A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
    My database has 127,643,648 KB of base data, which is 60.8x bigger than 2 GB. The square root of this factor is 7.8, so my optimal cache size should be (7.8 * 32 MB) = 250 MB. My cache size is in fact 256 MB, because I have to set it before the data load based on estimates.
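    For reference, that pre-load setting can be applied with the same statement discussed in the cache thread above; a sketch using this thread's application name:

        /* Must run before the load; the value takes effect when the app starts. */
        alter application "4572_a" set cache_size 256MB;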
    Data Load
    The initial data load is done in 3 MaxL sessions into 3 buffers. The final import output then looks like this:
    MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
    OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
    MAXL>
    The Question
    Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
    Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

    My understanding is that the storage cache setting calculation you quoted is based on the cache requirements for retrieval. That recommendation has remained unchanged since ASO was first introduced in v7 (?) and was certainly written before the advent of parallel loading.
    I think that the ASO cache is used during the combination of the buffers. As a result, depending on how ASO works internally, you would get this warning unless:
    1. the cache was equal to the final load size of the database;
    2. OR, if the cache is only used when data exists for the same "sparse" combination of dimensions in more than one buffer, the required size would be a function of the number of cross-buffer combinations required;
    3. OR the cache is needed only when compression dimension member groups cross buffers.
    By "sparse" dimension I mean the non-compressed dimensions.
    Therefore you might try some experiments. To test each case above:
    1. Forget it; you will get this message unless you have a cache large enough for the final data set size on disk.
    2. Sort your data so that no dimensional combination exists in more than one buffer - i.e. sort by all non-compression dimensions, then by the compression dimension.
    3. Often your compression dimension is time-based (EVEN THOUGH THIS IS VERY SUB-OPTIMAL). If so, you could sort the data by the compression dimension only and break the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2, and the next 16 in buffer 3.
    Also, if your machine is IO-bound (as most are during a load of this size) and your CPU is not, try using OS-level compression on your input files - it could speed things up greatly.
    Finally, regarding my comments on a time-based compression dimension: you should consider building a stored dimension for this along the lines of what I have proposed in some posts on network54 (search for DanP on network54.com/forum/58296 - I would give you a link but it is down now).
    Or, better yet, see the forthcoming book (of which Robb is a co-author), Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals: http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
    I really hope you will try the suggestions above and post your results.
