Allocations on Aggregate Storage Databases

How do I use the EssPerformAllocationAso function and the ESS_PERF_ALLOC_T API structure to run an allocation against an ASO database? I need help getting started.

First, I'll assume you are on 11.1.2, because that is the only version in which this is valid. I'm not sure why you are going the API route. The best way to do this is either through Calc Manager or the MaxL execute allocation statement. Even if you do want to use the API, looking at the parameters and the example for the MaxL statement will give you a good indication of what needs to be passed.
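For illustration, here is roughly the shape of the MaxL statement; its clauses (POV, amount, target, range, allocation method) map onto the fields you would populate in ESS_PERF_ALLOC_T. The member names below are invented, so check the Tech Ref example for your release:

    /* POV fixes the slice, amount is the value to allocate, range is where it lands */
    execute allocation process on database ASOsamp.Sample with
        pov "Crossjoin({[Jan]}, {[Curr Year]})"
        amount "([Payroll Expense])"
        target ""
        range "{[Total Expenses].Children}"
        spread;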

Similar Messages

  • Clear Partial Data in an Essbase Aggregate storage database

    Can anyone let me know how to clear partial data from an aggregate storage database in Essbase v11.1.1.3? We are trying to clear some data in our database and don't want to clear out all of it. I am aware that version 11 of Essbase allows a partial clear if we write the region using MDX.
    Can you please help by giving some examples of the same?
    Thanks!

    John, I clearly get the difference between the two. What I am asking is: in EAS itself for v11.1.1.3 we have the option, by right-clicking on the database, of choosing "Clear", with the sub-options "All Data", "All Aggregations" and "Partial Data".
    I want to know more about this option. How will it know which partial data to remove, or will it ask us to write some MaxL/MDX for the region?
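    For illustration, the "Partial Data" option corresponds to the MaxL partial clear, where you supply the region yourself as an MDX set expression; a sketch against the shipped ASOsamp demo (by default the clear is logical, writing compensating cells so the region nets to zero; the physical keyword removes the input cells instead):

        alter database ASOsamp.Sample clear data in region 'Crossjoin({[Jan]}, {[Curr Year]})';

        /* slower, but physically removes the cells and reclaims space */
        alter database ASOsamp.Sample clear data in region 'Crossjoin({[Jan]}, {[Curr Year]})' physical;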

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    just got version 9.3.1 installed. I can finally load to an aggregate storage database using the Excel Essbase send; however, it is very slow, especially when loading many lines of data. Block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue and therefore not much can be done?

    As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, they throttle the update process on the server side so that only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users couldn't even do a read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • Aggregate Storage Backup level 0 data

    When exporting level 0 data from aggregate storage through a batch job you can use a MaxL script with "export database [dbs-name] using server report_file [file-name] to data_file [file-name]". But how do I build a report script that exports all level 0 data so that I can read it back with a load rule?
    Can anyone give me an example of such a report script? That would be very helpful.
    If there is a better way to approach this matter, please let me know.
    Thanks
    /Fredrik

    An example from the Sample:Basic database:

        // This Report Script was generated by the Essbase Query Designer
        <SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
        <COLUMN("Year")
        <ROW("Measures","Product","Market","Scenario")
        // Page Members
        // Column Members
        // Selection rules and output options for dimension: Year
        {OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND (<IDESC("Year")))
        // Row Members
        // Selection rules and output options for dimension: Measures
        {OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
        // Selection rules and output options for dimension: Product
        {OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND (<IDESC("Product")))
        // Selection rules and output options for dimension: Market
        {OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND (<IDESC("Market")))
        // Selection rules and output options for dimension: Scenario
        {OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
        !
        // End of Report

    Note that no attempt was made here to eliminate shared member values.
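    Assuming the script above is saved on the server as, say, lev0.rep, the batch invocation quoted in the question would then look something like this (names invented):

        export database Sample.Basic using server report_file 'lev0' to data_file 'lev0.txt';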

  • SSPROCROWLIMIT and Aggregate Storage

    I have been experimenting with detail-level data in an aggregate storage style cube. I will have 2 million members in one of my dimensions; for testing I have 514,683. If I try to use the spreadsheet add-in to retrieve from my cube, I get the error "Maximum number of rows processed [250000] exceeded [514683]". This indicates that my SSPROCROWLIMIT is too low. Unfortunately, the upper limit for SSPROCROWLIMIT is below my needs. What good is this new storage model if I can't retrieve data from the cube? Any plans to remove the limit?
    Craig Wahlmeier
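    For reference, that limit is a server-wide setting in essbase.cfg (restart required); if I remember right, the documented maximum in this era was 500,000 rows, still below the 514,683 needed here:

        SSPROCROWLIMIT 500000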

    We are using ASO for a very large (20 dimensions) database. The data compression and performance have been very impressive. ASO cubes are much easier to build, but have far fewer options: no calc scripts, and formulas are limited in that they can only be used on small dimensions and only on one dimension. The other big difference is that you need to reload and calc your data every time you change metadata. The great thing for me about 7.1 is that it gives you options, particularly when dealing with very large, sparse, non-finance cubes. If your client is talking about making calcs faster, ASO is only going to work if it is an aggregation calc.

  • Aggregate storage cache warning during buffer commit

    h5. Summary
    Having followed the documentation to set the ASO storage cache size I still get a warning during buffer load commit that says it should be increased.
    h5. Storage Cache Setting
    The documentation says:
    A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
    My database has 127,643,648 KB of base data, which is 60.8x bigger than 2 GB. The square root of this is 7.8, so my optimal cache size should be 7.8 * 32 MB = 250 MB. My cache size is in fact 256 MB, because I have to set it before the data load based on estimates.
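    For illustration, the pending setting is applied per application in MaxL before running the import, something like this (using the application name from the log below):

        alter application '4572_a' set cache_size 256MB;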
    h5. Data Load
    The initial data load is done in 3 MaxL sessions into 3 buffers. The final import output then looks like this:
    MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
    OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
    MAXL>
    h5. The Question
    Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
    h5. Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

    My understanding is that the storage cache setting calculation you quoted is based on the cache requirements for retrieval. This recommendation has remained unchanged since ASO was first introduced in v7 (?) and certainly predates the advent of parallel loading.
    I think that the ASO cache is used during the combination of the buffers. As a result, depending on how ASO works internally, you would get this warning unless your cache was:
    1. equal to the final load size of the database,
    2. OR, if the cache is only used when data exists for the same "sparse" combination of dimensions in more than one buffer, the required size would be a function of the number of cross-buffer combinations required,
    3. OR if the cache is needed only when compression dimension member groups cross buffers.
    By "sparse" dimension I mean the non-compressed dimensions.
    Therefore you might try some experiments. To test each case above:
    1. Forget it; you will get this message unless you have a cache large enough for the final data set size on disk.
    2. Sort your data so that no dimensional combination exists in more than one buffer, i.e. sort by all non-compression dimensions and then by the compression dimension.
    3. Often your compression dimension is time-based (even though this is very sub-optimal). If so, you could sort the data by the compression dimension only and split the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2 and the rest in buffer 3.
    Also, if your machine is IO-bound (as most are during a load of this size) and your CPU is not, try using OS-level compression on your input files; it could speed things up greatly.
    Finally, regarding my comments on a time-based compression dimension: you should consider building a stored dimension for this along the lines of what I have proposed in some posts on Network54 (search for DanP on network54.com/forum/58296; I would give you a link but the site is down now).
    Or, better yet, see the forthcoming book (of which Robb is a co-author), Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals: http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
    I really hope you will try the suggestions above and post your results.

  • Load and Unload Alias Table - Aggregate Storage

    Hi everyone,
    Here I am again with another question about aggregate storage...
    There is no "load" or "unload" alias table listed as a parameter for "alter database" in the syntax guidelines for aggregate storage (see http://dev.hyperion.com/techdocs/eas/eas_712/easdocs/techref/maxl/ddl/aso/altdb_as.htm).
    Is this not a valid parameter for aggregate storage? If not, how do you load and unload alias tables if you're running a batch script in MaxL and you need the alias table update to be automated?
    Thanks in advance for your help.

    Hi anaguiu2,
    I have the same problem now. Did you find a solution for loading and unloading an alias table on aggregate storage? Could you share the solution you used, if you have one?
    Thanks, Manon
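    For comparison, the block storage Tech Ref lists the following grammar (load takes an alias-table import file, unload takes the table name); whether your ASO release accepts it is exactly the open question here, so treat this as something to test rather than a confirmed answer (names invented):

        alter database Sample.Basic load alias_table 'altnames.alt';
        alter database Sample.Basic unload alias_table 'Alternate Names';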

  • YTD Performance in Aggregate Storage

    Has anyone had any problems with performance of YTD calculations in Aggregate storage? Any solutions?

    Did you ever get this resolved? We are running into the same problem. We have an ASO database which requires YTD calcs and TB Last. We've tried two separate options (CASE and IF statements) on the YTD, Year and Qtr members (e.g. MarYTD). Both worked, and we are now concerned about performance. Any suggestions?

  • Derived Cells in Aggregate storage

    The aggregate storage loads obviously ignore the derived cells. Is there a way to get these ignored records diverted to a log or error file, to view and correct the data at the source system? Has anybody tried any methods for this? Any help would be much appreciated.
    -Jnt


  • Dataload in Aggregate storage outline

    Hi all. My existing code, which works when loading data into a block storage outline, is not working for an aggregate storage outline. When I pass the "SendString" API simultaneously about 3-4 times, I get the error "Not supported for agg. storage outline". Are there any API changes for loading data into an aggregate storage outline? I didn't find anything related to such changes in the documentation.
    Regards, Samrat

    I know that EsbUpdate and EsbImport both work with ASO.
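    If the API route keeps fighting you, the same load is also scriptable in MaxL, which does support aggregate storage; a sketch, with invented file and rules-file names:

        import database ASOsamp.Sample data from server data_file 'input.txt' using server rules_file 'ldall' on error write to 'load.err';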

  • Change Aggregate Storage Cache

    Does anyone know how to change the aggregate storage cache setting in MaxL? I can no longer see it in EAS and I don't think I can change it in MaxL. Any clue?
    Thanks for your help.

    Try something like
    alter application ASOSamp set cache_size 64MB;
    In EAS, I thought you could right-click the ASO app and edit properties > Pending cache size limit.
    Cheers
    John
    http://john-goodwin.blogspot.com/
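    If memory serves, the current value can also be read back in MaxL; a sketch worth checking against the Tech Ref for your release:

        query application ASOSamp get cache_size;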

  • Incremental Load in Aggregate Storage

    Hi,
    From what I understand, aggregate storage (ASO) clears all data if a new member gets added to the outline. This is unlike block storage (BSO), where we can restructure the cube if a new member is added to the outline.
    We need to load data daily into an ASO cube, and the cube contains 5 years of data. We may get a new member in the customer dimension daily. Is there a way we can retain (restructure) existing data when updating the customer dimension and then add the new data? Otherwise, we will have to rebuild the cube daily and therefore reload 5 years of data (about 600 million records) on a daily basis.
    Is there a better way of doing this in ASO?
    Any help would be appreciated.
    Thanks
    --- suren_v

    Good information, Steve. Is the System 9 Essbase DB Admin Guide available online? I could not find it here: http://dev.hyperion.com/resource_library/technical_documentation
    (I recently attended the v7 class in Dallas and it was excellent!)
    Quoting scran4d: "Suren: In the version 7 releases of Essbase ASO, there is not a way to hold on to the data if a member is added to the outline; data must be reloaded each time. This is changed in Hyperion's latest System 9 release, however."
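    For what it's worth, once you are on a release where outline changes no longer wipe the data (per the quote above), the daily customer update can be scripted as an incremental MaxL dimension build; a sketch with invented file and rules-file names:

        import database ASOsamp.Sample dimensions from server data_file 'cust.txt' using server rules_file 'custdim' on error write to 'dim.err';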

  • Aggregate storage data export failed - Ver 9.3.1

    Hi everyone,
    We have two production servers: Server1 (App/DB/Shared Services server) and Server2 (Analytics). I am trying to automate a couple of our cubes using Windows batch scripting and MaxL. I can export the data within EAS successfully, but when I use the same command in a MaxL editor it gives the following error.
    Here's the MaxL I used, which I am pretty sure is correct.
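    The statement itself isn't reproduced in the post; for illustration, a native ASO export in this release looks something like the following (path invented):

        export database MyAPP.Finance data to data_file 'D:\exports\finance.txt';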
    Failed to open file [S:\Hyperion\AdminServices\deployments\Tomcat\5.0.28\temp\eas62248.tmp]: a system file error occurred. Please see application log for details
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270083)
    A system error occurred with error number [3]: [The system cannot find the path specified.]
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270042)
    Aggregate storage data export failed
    Does anyone have any clue as to why I am getting this error?
    Thanks in advance!
    Regards
    FG

    This error was due to incorrect SSL settings for our shared services.

  • Used and allocated Shadow Copy Storage space is showing values in GB .

    Hi All ,
    This is a production issue and I need urgent help with this case.
    In my environment I have Exchange Server 2013 in a DAG extended across three sites. There are two mailbox servers in the production site and one mailbox server in each of the remaining two sites.
    We have mounted all the mailbox databases on one production mailbox server. Those databases sit on a LUN mapped to the production mailbox servers from SAN storage.
    Issue:
    A few of the mailbox databases are occupying much more space on the LUN than their original size.
    For instance, I have a mailbox database called "test" on a LUN of 190 GB. The mailbox database occupies only 100 GB of that space, but diskmgmt and Explorer both show only 17 GB free.
    Finally, we ran the command below and found that the used and allocated shadow copy storage occupies around 34 GB.
    Attached snap for your reference:
    Question: How do we reclaim the space occupied by the shadow copy storage on the LUN? And, if possible, can someone tell me why this issue is happening?
    Thanks & Regards, S.Nithyanandham

    Hi,
    According to my research, this is caused by the Volume Shadow Copy Service, which provides the backup infrastructure for Microsoft operating systems as well as a mechanism for creating consistent point-in-time copies of data, known as shadow copies.
    We can use VssAdmin to create, delete, and list information about shadow copies. More details about the Volume Shadow Copy Service are here, for your reference:
    https://technet.microsoft.com/en-us/library/ee923636(v=ws.10).aspx
    Besides, this issue may be more related to Windows Server itself. I recommend you contact the Windows Server team to get more professional suggestions; please refer to:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/home?category=windowsserver
    Thanks
    Allen Wang
    TechNet Community Support

  • How to move a virtual disk's physical allocation within a Storage Pool

    I have a pool of 3x500 GB where one of the physical drives is having intermittent issues. Currently, there is only one parity virtual disk of 300 GB, fixed, across 3 columns. I want to replace the bad drive with a good one. The old way (pre-2012) was: replace the disk, repair the RAID 5, resync and done. These basic steps are not working.
    So far I have added a 4th 500 GB drive to the pool. After searching and failing to find a way to move the data non-destructively, I decided to just pull the data cable on the disk I wanted to replace. After a refresh/rescan, the disconnected drive shows "lost communication" and the virtual disk (after trying to repair) shows "unknown" (but the volume on that disk is accessible in Explorer). When I try to remove the physical disk in Server Manager, I get "The selected physical disk cannot be removed". Reading the error message, I see that the replacement disk cannot contain any part of a virtual disk. The replacement disk that I just added appears to have some space allocated (possibly because I have tried this same procedure a couple of times already?). When I look at the parity disk properties/health, it shows all four physical disks under "physical disks in use".
    I have deleted and recreated a lot of storage pools lately while trying to understand how they work, but I would like to avoid that this time. The data on the virtual disk in question is highly deduplicated and it took quite a while to get it that way. Since I can't find a way to copy/mirror the disk while keeping it fully deduplicated, I would need 3x the space to copy it all off, or a lot of time to load up and deduplicate a new virtual disk.
    I have several questions:
    1. How can a 3-column parity disk use parts of four physical disks? And can that be fixed without recreating the virtual disk?
    2. When creating a virtual disk (for example a 3-column disk in a pool that has four or more physical drives), is there a way to specify which physical disks to use?
    3. I understand that after a physical disk failure, the recovery process will move a virtual disk's allocation to a replacement disk, but can a virtual disk's allocation be moved manually among physical disks within the same storage pool using a PS script?
    4. Can a deduplicated virtual disk be moved/mirrored/backed up without expanding the data?
    Any help is appreciated.

    I'm still fighting with storage pools myself and need to do more tests, and I have a lot of questions of my own, but here is what I have understood so far.
    You may define the physical disks used for a virtual disk via PowerShell.
    For a list of all the commands, see:
    http://technet.microsoft.com/en-us/library/hh848705(v=wps.620).aspx
    The specific command for assigning physical disks to an already-existing virtual disk:
    Example 4: Manually assigning physical disks to a virtual disk
    This example gets two physical disks that have already been added to the storage pool and designated as ManualSelect disks, PhysicalDisk3 and PhysicalDisk4, and assigns them to the virtual disk UserData.

        PS C:\> Add-PhysicalDisk -VirtualDiskFriendlyName UserData -PhysicalDisks (Get-PhysicalDisk -FriendlyName PhysicalDisk3, PhysicalDisk4)
    http://technet.microsoft.com/en-us/library/hh848702(v=wps.620).aspx
    If you haven't seen it yet, you may also want to check out http://blogs.technet.com/b/yungchou/archive/2011/12/06/free-ebooks.aspx
