SSPROCROWLIMIT and Aggregate Storage

I have been experimenting with detail-level data in an Aggregate Storage (ASO) cube. I will have 2 million members in one of my dimensions; for testing I have 514,683. If I try to use the spreadsheet add-in to retrieve from my cube, I get the error "Maximum number of rows processed [250000] exceeded [514683]". This indicates that my SSPROCROWLIMIT is too low. Unfortunately, the upper limit for SSPROCROWLIMIT is below my needs. What good is this new storage model if I can't retrieve data from the cube! Any plans to remove the limit?
Craig Wahlmeier
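For context: SSPROCROWLIMIT is an essbase.cfg setting that caps how many rows the spreadsheet extractor will process. A minimal sketch of the entry, assuming the 7.x-era configuration format; the value shown is illustrative, and the documented maximum in that era was evidently below the 514,683 rows needed here:

    SSPROCROWLIMIT 500000

Essbase generally has to be restarted for essbase.cfg changes to take effect.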

We are using ASO for a very large (20-dimension) database. The data compression and performance have been very impressive. The ASO cubes are much easier to build, but have far fewer options: no calc scripts, and formulas are limited in that they can be used only on small dimensions and only on one dimension. The other big difference is that you need to reload and calc your data every time you change metadata. The great thing for me about 7.1 is that it gives you options, particularly when dealing with very large, sparse, non-finance cubes. If your client is talking about making calcs faster, ASO is only going to work if it is an aggregation calc.

Similar Messages

  • Load and Unload Alias Table - Aggregate Storage

Hi everyone,
    Here I am again with another question about aggregate storage...
    There is no "load" or "unload" alias table listed as a parameter for "alter database" in the syntax guidelines for aggregate storage (see http://dev.hyperion.com/techdocs/eas/eas_712/easdocs/techref/maxl/ddl/aso/altdb_as.htm).
    Is this not a valid parameter for aggregate storage? If not, how do you load and unload alias tables if you're running a batch script in MaxL and you need the alias table update to be automated?
    Thanks in advance for your help.

Hi anaguiu2,
    I have the same problem now. Did you ever find a solution for loading and unloading alias tables in aggregate storage? Could you share the solution you used, if you have one?
    Thanks, Manon
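    For reference, the block storage syntax the original post is comparing against looks roughly like this; a sketch only (the table and file names are hypothetical), and whether a given ASO release accepts these statements is exactly the open question in this thread:

        alter database Sample.Basic load alias_table 'French' from data_file 'frnames.alt';
        alter database Sample.Basic unload alias_table 'French';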

  • Aggregate Storage Backup level 0 data

When exporting level 0 data from aggregate storage through a batch job, you can use a MaxL script with "export database [dbs-name] using server report_file [file_name] to data_file [file_name]". But how do I build a report script that exports all level 0 data so that I can read it back with a load rule?
    Can anyone give me an example of such a report script? That would be very helpful.
    If there is a better way to approach this matter, please let me know.
    Thanks
    /Fredrik

An example from the Sample:Basic database:

    // This Report Script was generated by the Essbase Query Designer
    <SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
    <COLUMN("Year")
    <ROW("Measures","Product","Market","Scenario")
    // Page Members
    // Column Members
    // Selection rules and output options for dimension: Year
    {OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND ( <IDESC("Year")))
    // Row Members
    // Selection rules and output options for dimension: Measures
    {OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
    // Selection rules and output options for dimension: Product
    {OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND ( <IDESC("Product")))
    // Selection rules and output options for dimension: Market
    {OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND ( <IDESC("Market")))
    // Selection rules and output options for dimension: Scenario
    {OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
    !
    // End of Report

    Note that no attempt was made here to eliminate shared member values.
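    As an alternative to a report script, later releases also expose a direct level-0 export in MaxL; a sketch, assuming your release supports this statement for aggregate storage databases:

        export database Sample.Basic level0 data to data_file 'lev0.txt';

    The output can then be reloaded, with or without a load rule depending on the export format.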

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
Just got version 9.3.1 installed. I can finally load to an aggregate storage database using Excel Essbase send; however, it is very slow, especially when loading many lines of data. Block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue, and therefore not much can be done?

As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, they are throttling the update process on the server side so only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users couldn't even do a read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • Clear Partial Data in an Essbase Aggregate storage database

Can anyone let me know how to clear partial data from an Aggregate Storage database in Essbase v11.1.1.3? We are trying to clear some data in our database and don't want to clear out all the data. I am aware that Version 11 Essbase allows a partial clear if we write it using MDX commands.
    Can you please help us by giving some examples of the same?
    Thanks!

John, I clearly get the difference between the two. What I am asking is: in the EAS tool itself for v11.1.1.3, we have the option, by right-clicking on the DB, of choosing "Clear", and in turn sub-options like "All Data", "All aggregations" and "Partial Data".
    I want to know more about this option. How will it know which partial data is to be removed, or will it ask us to write some MaxL query for the same?
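    For the MaxL side of this, the partial clear takes an MDX region specification; a sketch with hypothetical application, database, and member names (physical removes the cells, while logical offsets them with compensating negative values):

        alter database MyApp.MyDb clear data in region '{([Jan], [Actual])}' physical;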

  • YTD Performance in Aggregate Storage

    Has anyone had any problems with performance of YTD calculations in Aggregate storage? Any solutions?

Did you ever get this resolved? We are running into the same problem. We have an ASO db which requires YTD calcs and TB Last. We've tried two separate approaches (CASE and IF statements) on the YTD, Year and Qtr members (i.e. MarYTD). Both worked, but we are now concerned about performance. Any suggestions?
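    For what it's worth, a simple alternative to CASE/IF for a member like MarYTD is a plain range sum in the member's MDX formula; a sketch, assuming Jan through Mar are level-0 members of the time dimension (performance will still depend on solve order and how much of the cube the formula touches):

        /* hypothetical MDX formula on a MarYTD member */
        Sum({[Jan]:[Mar]})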

  • Derived Cells in Aggregate storage

The aggregate storage loads obviously ignore the derived cells. Is there a way to get these ignored records diverted to a log or error file, so we can view and correct the data at the source system?
    Has anybody tried any methods for this? Any help would be much appreciated.
    -Jnt


  • Dataload in Aggregate storage outline

Hi All,
    My existing code, which works when loading data into a block storage outline, is not working for an aggregate storage outline. When I call the "SendString" API simultaneously about 3-4 times, I get the error "Not supported for agg. storage outline". Are there any API changes for loading data into an agg. storage outline? I didn't find anything related to such changes in the documentation.
    Regards, Samrat

I know that EsbUpdate and EsbImport both work with ASO.

  • Change Aggregate Storage Cache

Does anyone know how to change the aggregate storage cache setting in MaxL? I can no longer see it in EAS and I don't think I can change it in MaxL. Any clue?
    Thanks for your help.

    Try something like
    alter application ASOSamp set cache_size 64MB;
I thought you could also right-click the ASO app in EAS and edit Properties > Pending cache size limit.
    Cheers
    John
    http://john-goodwin.blogspot.com/
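    If it helps, I believe you can also read the value back in MaxL with something like the following (the ASOSamp name is from John's example); note that the alter statement sets a pending value, which matches the "Pending cache size limit" wording in EAS:

        query application ASOSamp get cache_size;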

  • Incremental Load in Aggregate Storage

Hi,
    From what I understand, Aggregate Storage (ASO) clears all data if a new member gets added to the outline. This is unlike Block Storage (BSO), where we can restructure the cube if a new member is added to the outline.
    We need to load data daily into an ASO cube, and the cube contains 5 yrs of data. We may get a new member in the customer dimension daily. Is there a way we can retain (restructure) existing data when updating the customer dimension and then add the new data? Otherwise, we will have to rebuild the cube daily and therefore reload 5 yrs of data (about 600 million recs) on a daily basis.
    Is there a better way of doing this in ASO?
    Any help would be appreciated.
    Thanks
    --- suren_v

Good information Steve. Is the System 9 Essbase DB Admin Guide available online? I could not find it here: http://dev.hyperion.com/resource_library/technical_documentation
    (I recently attended the v7 class in Dallas and it was excellent!)
    Originally posted by scran4d:
    "Suren: In the version 7 releases of Essbase ASO, there is not a way to hold on to the data if a member is added to the outline; data must be reloaded each time. This is changed in Hyperion's latest System 9 release, however."

  • Aggregate storage data export failed - Ver 9.3.1

    Hi everyone,
We have two production servers: Server1 (App/DB/Shared Services server) and Server2 (Analytics). I am trying to automate a couple of our cubes using Windows batch scripting and MaxL. I can export the data within EAS successfully, but when I use the corresponding command in a MaxL editor, it gives the following error.
    Here's the MaxL I used, which I am pretty sure is correct.
    Failed to open file [S:\Hyperion\AdminServices\deployments\Tomcat\5.0.28\temp\eas62248.tmp]: a system file error occurred. Please see application log for details
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270083)
    A system error occurred with error number [3]: [The system cannot find the path specified.]
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270042)
    Aggregate storage data export failed
Does anyone have any clue why I am getting this error?
    Thanks in advance!
    Regards
    FG

    This error was due to incorrect SSL settings for our shared services.

  • Aggregate storage cache warning during buffer commit

    h5. Summary
Having followed the documentation to set the ASO storage cache size, I still get a warning during the buffer load commit saying that it should be increased.
    h5. Storage Cache Setting
    The documentation says:
    A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
My database has 127,643,648 KB of base data, which is 60.8x bigger than 2 GB. The square root of this factor is 7.8, so my optimal cache size should be (7.8 * 32 MB) = 250 MB. My cache size is in fact 256 MB, because I have to set it before the data load based on estimates.
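    (For reference, the setting itself would have been applied before the load with a statement along these lines; a sketch using the application name from this thread, with the new size pending until the application restarts:

        alter application "4572_a" set cache_size 256MB;
    )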
    h5. Data Load
The initial data load is done in 3 MaxL sessions into 3 buffers. The final import output then looks like this:
    MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
    OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
    MAXL>
    h5. The Question
    Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
    h5. Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

My understanding is that the storage cache setting calculation you quoted is based on the cache requirements for retrieval. This recommendation has remained unchanged since ASO was first introduced in v7 (?) and certainly predates the advent of parallel loading.
I think that the ASO cache is used during the combination of the buffers. As a result, depending on how ASO works internally, you would get this warning unless your buffer was:
    1. equal to the final load size of the database
    2. OR, if the cache is only used when data exists for the same "Sparse" combination of dimensions in more than one buffer, the required size would be a function of the number of cross-buffer combinations required
    3. OR if the cache is needed only when compression dimension member groups cross buffers
    By "Sparse" dimension I mean the non-compressed dimensions.
    Therefore you might try some experiments. To test each case above:
    1. Forget it; you will get this message unless you have a cache large enough for the final data set size on disk.
    2. Sort your data so that no dimensional combination exists in more than one buffer, i.e. sort by all non-compression dimensions, then by the compression dimension.
    3. Often your compression dimension is time-based (EVEN THOUGH THIS IS VERY SUB-OPTIMAL). If so, you could sort the data by the compression dimension only and split the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2, and the rest in buffer 3.
Also, if your machine is IO-bound (as most are during a load of this size) and your CPU is not, try using OS-level compression on your input files - it could speed things up greatly.
    Finally, regarding my comments on a time-based compression dimension: you should consider building a stored dimension for this, along the lines of what I have proposed in some posts on network54 (search for DanP on network54.com/forum/58296 - I would give you a link, but it is down right now).
    Or, better yet, see the forthcoming book (of which Robb is a co-author): Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals, http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
    I really hope you will try the suggestions above and post your results.
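    For anyone trying experiments 2 or 3 above, the three-buffer pattern looks roughly like this; a sketch with hypothetical file names, where each initialize/import pair runs in its own MaxL session:

        alter database "4572_a"."agg" initialize load_buffer with buffer_id 1 resource_usage 0.3;
        import database "4572_a"."agg" data from data_file 'part1.txt' to load_buffer with buffer_id 1;
        /* repeat for buffer_id 2 and 3 in parallel sessions, then commit all three: */
        import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;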

  • Workflow without 'save as' and huge storage consumption.

I use Aperture & Photoshop daily. Aperture no longer has 'Save As'; Photoshop hasn't, as yet, followed suit, I am told. I am very concerned about workflow changes incurring more complexity, more steps, and huge storage waste.
    Current workflow: I open one of the apps, then an image, make a bunch of edits, and 'Save As' my final desired result. I now have 2 files: the UNTOUCHED original and the edited second image.
    I have made several calls to Apple. They are not sure how the missing 'Save As' works into this workflow, and do not know how many more steps will be needed or how much more complex it will be. They did say, for the first time, that now you have to check out each program to see how its 'Save As' concept will work... I guess, and try to remember them all. WOW!
    Also, with the 'Versions' concept and processing raw images, instead of 2 files residing on disk at the end of the multiple edits, there will be a version for each edit step, meaning there could be 20, 50, even more huge files... not wanted, not needed, but consuming huge amounts of storage.
    It was finally decided there had to be a way to purge all those unnecessary files to win back storage space. More work, and how?
    Their final assessment: you may not be able to proceed with Lion... with the future Mac's ongoing OS. I guess all photographers are at a loss then. Sad.
    Is this hopefully just a bad dream??

    The new Autosave/Versions paradigm is ideal for the situation you describe as “Current work flow”. It’s not a bit more complicated, it’s safer, and — if you can get past the fact that it is different from what you are used to — it is more logical. What you are doing is creating a new, modified copy of the original. So, open the original and Duplicate (File > Duplicate). That creates a new copy that you can edit, but it is treated like a new file, so when you close it, you get a Save As dialog. Or you can save it manually at any time just like any new file. No extra steps — just Duplicate at the beginning instead of Save As at the end.
    I said it is safer. Consider the following: In your current work flow: by accident or absent mindedness, you mistakenly Save rather than Save As. Now your original is clobbered. Of course, if you are smart, you have a backup. Otherwise it is gone. In the new workflow you might forget to duplicate at the beginning. But it’s not too late! Go ahead and duplicate later. When you duplicate an edited file you get a dialog asking what to do with the original. The choices are to leave it in its edited state or to revert to its original state. Even if you close the file without duplicating it, you can still re-open it and revert using Versions.
    As Pondini pointed out, you misunderstand how versions work. Versions are not files. It is more like a super sophisticated Undo. As you edit a file, only the latest version is saved in the file. Enough information to reconstruct intermediate versions is saved in a hidden versions database. So in your scenario, you still have only 2 files plus some information in the database. In general the versions information should be efficiently stored.
    This is how Aperture has always worked. As Terence Devlin pointed out, no version of Aperture has ever had a Save or Save As command. I wouldn’t be surprised if Aperture served as an inspiration or testing ground for Autosave/Versions. I am a serious amateur photographer, and I am not the least bit sad. I think this is a great advancement.

• Can I use two Time Capsules? One as an extension of my laptop (for music and video storage) and the other to back up everything from the laptop and the first Time Capsule


Not via Time Machine. It cannot back up from a network location.
    The 3rd-party apps CarbonCopyCloner and ChronoSync may be workable alternatives.
    EDIT: And, if you're going to do that, you could back up from the Time Capsule to a USB drive connected to the TC's USB port. A second TC is not required.

• Any difference between distinct and aggregate function in SQL query cost?

    Hi,
I have executed many SQL statement patterns, such as:
    a) using a single table
    b) using two tables, with simple joins or outer joins
    but I have not noticed any difference between the statements in cost or in execution plan.
    Anyway, my colleague insists that using an aggregate function is less costly compared to DISTINCT (something I have not confirmed; that's why I believe they are exactly the same).
    For the first pattern above, we could for example compare:
    select distinct deptno
    from emp;
    select count(*), deptno
    from emp
    group by deptno;
    select distinct owner, object_type
    from all_objects;
    select count(*), owner, object_type
    from all_objects
    group by owner, object_type;
    Have you ever found any difference between the two?
    Note: I use Oracle DB 10g Release 2.
    Thank you,
    Sim

DISTINCT and aggregate functions are for different uses and may give the same result, but if you are using an aggregate function to get distinct records, it will be more expensive. For example:
    select distinct deptno from scott.dept;
    Statistics
    0 recursive calls
    0 db block gets
    2 consistent gets
    0 physical reads
    0 redo size
    584 bytes sent via SQL*Net to client
    488 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    4 rows processed
    select deptno from scott.emp group by deptno;
    Statistics
    307 recursive calls
    0 db block gets
    60 consistent gets
    6 physical reads
    0 redo size
    576 bytes sent via SQL*Net to client
    488 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    6 sorts (memory)
    0 sorts (disk)
    3 rows processed
    Nimish Garg
    Software Developer
(Oracle & ASP.NET)
    Indiamart Intermesh Limited, Noida
    To Get Free Oracle & ASP.NET Code Snippets
    Follow: http://nimishgarg.blogspot.com
