Aggregate storage fails

Hi there,
I was trying to execute MDX with a specific month and it runs perfectly:
POV "CrossJoin({([month],[Year])},
but when I try to make it dynamic by adding a substitution variable, it doesn't work:
CrossJoin ({Uda([Month],[&variable])},
or
CrossJoin ({Uda([Month],&variable)},
or
CrossJoin ({Uda([Month],"&variable")},
Also, is there any function like *NOT <UDA("month", &variable)* like we have in report scripts, so I can ignore that variable?
I know it's a small mistake but I can't figure it out.
Thanks again

This is a custom calculation, right? What is the value of the subvar you are setting? What error do you get?
I'm not actually 100% certain that you can use a subvar in an ASO custom calc, based on the list here: http://docs.oracle.com/cd/E26232_01/doc.11122/esb_dbag/dotcreat.html#dotcreat1053369
But in any case, that section also contains a bunch of guidelines about setting subvars for use in MDX that may be relevant, probably worth ensuring that you've complied with all of them.
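If subvars do work in your custom calc, one gotcha from those guidelines is that the variable's value must resolve to syntactically valid MDX after substitution. A sketch in MaxL, using hypothetical names (`Sample.Basic`, `CurrMonth`): since `Uda()` takes a string for its second argument, the quotes have to live inside the variable's value.

```
/* Hypothetical setup -- app, db, and variable names are examples.  */
/* The value carries its own double quotes so that &CurrMonth       */
/* substitutes into Uda([Month], &CurrMonth) as a valid MDX string. */
alter database Sample.Basic add variable 'CurrMonth' '"Jan"';
```

With that value, `CrossJoin({Uda([Month], &CurrMonth)}, ...)` expands to `Uda([Month], "Jan")`. This is a sketch based on the general subvar-in-MDX guidelines, not a confirmed fix for ASO custom calcs.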

Similar Messages

  • Aggregate storage data export failed - Ver 9.3.1

    Hi everyone,
    We have two production servers: Server1 (App/DB/Shared Services server) and Server2 (Analytics). I am trying to automate a couple of our cubes using Windows batch scripting and MaxL. I can export the data within EAS successfully, but when I use the following command in a MaxL editor, it gives the following error.
    Here's the MaxL I used, which I am pretty sure that it is correct.
    Failed to open file [S:\Hyperion\AdminServices\deployments\Tomcat\5.0.28\temp\eas62248.tmp]: a system file error occurred. Please see application log for details
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270083)
    A system error occurred with error number [3]: [The system cannot find the path specified.]
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270042)
    Aggregate storage data export failed
    Does anyone have a clue why I am getting this error?
    Thanks in advance!
    Regards
    FG

    This error was due to incorrect SSL settings for our shared services.

  • Aggregate Storage Backup level 0 data

    When exporting level 0 data from aggregate storage through a batch job you can use a MaxL script with "export database [dbs-name] using server report_file [file_name] to data_file [file_name]". But how do I build a report script that exports all level 0 data so that I can read it back with a load rule?

    Can anyone give me an example of such a report script? That would be very helpful.

    If there is a better way to approach this matter, please let me know.

    Thanks
    /Fredrik

    An example from the Sample:Basic database:

    // This Report Script was generated by the Essbase Query Designer
    <SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
    <COLUMN ("Year")
    <ROW ("Measures","Product","Market","Scenario")
    // Page Members
    // Column Members
    // Selection rules and output options for dimension: Year
    {OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND (<IDESC("Year")))
    // Row Members
    // Selection rules and output options for dimension: Measures
    {OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
    // Selection rules and output options for dimension: Product
    {OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND (<IDESC("Product")))
    // Selection rules and output options for dimension: Market
    {OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND (<IDESC("Market")))
    // Selection rules and output options for dimension: Scenario
    {OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
    !
    // End of Report

    Note that no attempt was made here to eliminate shared member values.

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    Just got version 9.3.1 installed. I can finally load to an aggregate storage database using Excel Essbase send. However, it is very slow, especially when loading many lines of data; block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue and therefore not much can be done?

    As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, they throttle the update process on the server side so only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users couldn't even do a read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • Clear Partial Data in an Essbase Aggregate storage database

    Can anyone let me know how to clear partial data from an aggregate storage database in Essbase v11.1.1.3? We are trying to clear some data in our database and don't want to clear out all the data. I am aware that Version 11 Essbase allows a partial clear if we write it using MDX commands.
    Can you please help by giving us some examples of the same?
    Thanks!

    John, I clearly get the difference between the two. What I am asking is: in the EAS tool itself for v11.1.1.3, right-clicking on the DB gives the option "Clear", with sub-options "All Data", "All aggregations" and "Partial Data".
    I want to know more about this option. How will it know which partial data to remove, or will it ask us to write some MaxL query for the same?
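    For the scripted route, MaxL's partial clear takes an MDX region. A sketch with hypothetical app/db names and region:

```
/* Hypothetical names -- clears January data from the ASO cube.     */
/* Without 'physical', Essbase does a logical clear (offsetting     */
/* values in a new slice); 'physical' removes the cells themselves. */
alter database ASOSamp.Sample clear data in region '{[Jan]}' physical;
```

    The EAS "Partial Data" option prompts for the same kind of MDX set expression; it does not guess which data to remove.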

  • YTD Performance in Aggregate Storage

    Has anyone had any problems with performance of YTD calculations in Aggregate storage? Any solutions?

    Did you ever get this resolved? We are running into the same problem. We have an ASO db which requires YTD calcs and TB Last. We've tried two separate options (CASE and IF statements) on the YTD, Year and Qtr members (e.g. MarYTD). Both worked, but now we're concerned about performance. Any suggestions?

  • Derived Cells in Aggregate storage

    The aggregate storage loads obviously ignore the derived cells. Is there a way to get these ignored records diverted to a log or error file, to view and correct the data at the source system? Has anybody tried any methods for this? Any help would be much appreciated.
    -Jnt


  • Dataload in Aggregate storage outline

    Hi All,
    My existing code, which works while loading data into a block storage outline, is not working for an aggregate storage outline. When I call the "SendString" API about 3-4 times simultaneously, I get the error "Not supported for agg. storage outline". Are there any API changes for loading data into an agg. storage outline? I didn't find anything related to such changes in the documentation.
    Regards, Samrat

    I know that EsbUpdate and EsbImport both work with ASO.

  • Change Aggregate Storage Cache

    Does anyone know how to change the aggregate storage cache setting in MaxL? I can no longer see it in EAS and I don't think I can change it in MaxL. Any clue?
    Thanks for your help.

    Try something like:
    alter application ASOSamp set cache_size 64MB;
    Alternatively, I thought you could right-click the ASO app in EAS and edit Properties > Pending cache size limit.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • The shadow copies of volume E: were aborted because the shadow copy storage failed to grow

    Hello there, 
    I am facing an issue with DPM 2010 backup.
    Every time, my tape backup fails with the error "The shadow copies of volume E: were aborted because the shadow copy storage failed to grow".
    My protected server is a Win 2008 R2 file server which is backed up by the DPM 2010 server.
    The total space of E: is 700 GB, of which 750 MB is free.
    So I am quite confused how much space should be free on the protected server drive for a successful backup.
    It would be great if someone could help me regarding this matter. 
    Thanks in advance.
    Regards, Ishuv

    Hello, 
    Though it is written that the minimum space required is 300 MB, I still have 700 MB free out of 759 GB on my protected server drive (E:).
    The error detail is ;
    Source: Volsnap
    Event ID: 35
    Error General: "The Shadow copies of volume E: were aborted because the shadow copy storage failed to grow."
    The backup server is DPM 2010 attached with IBM tape library which have LTO 5 cartridge.
    Protected server is Win2k8R2 file server. 'E:' drive total space is 759 GB in which 700 MB is free.
    So, as per my understanding of the error above, VSS is unable to grow the shadow copy storage because of the limited free space on the E: drive; there is only 700 MB free. Now my question is: how much free space should there be on the protected server drive for the backup to complete successfully on the DPM 2010 server, which stores the backups in the IBM LTO5 tape library?
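    One general VSS check worth making (not DPM-specific advice): the per-volume shadow copy storage limit that Volsnap event 35 points at can be inspected and raised with vssadmin. Assuming the diff area for E: lives on E: itself:

```
rem Inspect the current shadow copy storage association for E:
vssadmin list shadowstorage /For=E:

rem Raise the limit, or move the diff area to a roomier volume via /On=
vssadmin resize shadowstorage /For=E: /On=E: /MaxSize=2GB
```

    With only ~700 MB free, a raised limit still cannot grow, so freeing space on the volume (or pointing /On= at another drive with room) is usually the actual fix.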
    Is there someone who had faced this issue. Please help!!
    Regards,
    Ishuv

  • Incremental Load in Aggregate Storage

    Hi,

    From what I understand, Aggregate Storage (ASO) clears all data if a new member gets added to the outline. This is unlike Block Storage (BSO), where we can restructure the cube if a new member is added to the outline.

    We need to load data daily into an ASO cube and the cube contains 5 yrs of data. We may get a new member in the customer dimension daily. Is there a way we can retain (restructure) existing data when updating the customer dimension and then add the new data? Otherwise, we will have to rebuild the cube daily and therefore reload 5 yrs of data (about 600 million recs) on a daily basis.

    Is there a better way of doing this in ASO?

    Any help would be appreciated.

    Thanks
    --- suren_v

    Good information Steve. Is the System 9 Essbase DB Admin Guide available online? I could not find it here: http://dev.hyperion.com/resource_library/technical_documentation
    (I recently attended the v7 class in Dallas and it was excellent!)
    Originally posted by scran4d:
    "Suren: In the version 7 releases of Essbase ASO, there is not a way to hold on to the data if a member is added to the outline; data must be reloaded each time. This is changed in Hyperion's latest System 9 release, however."

  • SSPROCROWLIMIT and Aggregate Storage

    I have been experimenting with detail-level data in an Aggregate Storage style cube. I will have 2 million members in one of my dimensions; for testing I have 514,683. If I try to use the spreadsheet add-in to retrieve from my cube, I get the error "Maximum number of rows processed [250000] exceeded [514683]". This indicates that my SSPROCROWLIMIT is too low. Unfortunately, the upper limit for SSPROCROWLIMIT is below my needs. What good is this new storage model if I can't retrieve data from the cube! Any plans to remove the limit?
    Craig Wahlmeier

    We are using ASO for a very large (20 dims) database. The data compression and performance have been very impressive. The ASO cubes are much easier to build, but have far fewer options: no calc scripts, and formulas are limited in that they can only be used on small dimensions, and only on one dimension. The other big difference is that you need to reload and calc your data every time you change metadata. The great thing for me about 7.1 is that it gives you options, particularly when dealing with very large sparse non-finance cubes. If your client is talking about making calcs faster, ASO is only going to work if it is an aggregation calc.

  • Aggregate storage cache warning during buffer commit

    h5. Summary
    Having followed the documentation to set the ASO storage cache size, I still get a warning during the buffer load commit saying it should be increased.
    h5. Storage Cache Setting
    The documentation says:
    A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
    My database has 127,643,648 KB of base data, which is 60.8x bigger than 2 GB. The square root of this factor is 7.8, so my optimal cache size should be (7.8 * 32 MB) = 250 MB. My cache size is in fact 256 MB because I have to set it before the data load, based on estimates.
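    The rule of thumb above is easy to script; a minimal sketch (the function name is made up, the 32 MB / 2 GB constants come from the documentation's example):

```python
import math

def aso_cache_mb(input_kb, base_mb=32.0, base_gb=2.0):
    """DBAG rule of thumb: scale the 32 MB baseline by the square
    root of the ratio of input-level data size to 2 GB (never
    shrinking below the baseline)."""
    factor = input_kb / (base_gb * 1024 * 1024)  # ratio to 2 GB, input in KB
    return base_mb * math.sqrt(max(factor, 1.0))

print(round(aso_cache_mb(127_643_648)))  # this database's input size in KB
```

    For the 127,643,648 KB above this lands on 250 MB, matching the hand calculation.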
    h5. Data Load
    The initial data load is done in 3 maxl sessions into 3 buffers. The final import output then looks like this:
    MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
    OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
    MAXL>
    h5. The Question
    Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
    h5. Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

    My understanding is that the storage cache setting calculation you quoted is based on the cache requirements for retrieval. This recommendation has remained unchanged since ASO was first introduced in v7 (?) and certainly predates the advent of parallel loading.
    I think that the ASO cache is used during the combination of the buffers. As a result, depending on how ASO works internally, you would get this warning unless your cache was:
    1. equal to the final load size of the database,
    2. OR, if the cache is only used when data exists for the same "sparse" combination of dimensions in more than one buffer, the required size would be a function of the number of cross-buffer combinations required,
    3. OR if the cache is needed only when compression dimension member groups cross buffers.
    By "sparse" dimensions I mean the non-compressed dimensions.
    Therefore you might try some experiments. To test each case above:
    1. Forget it; you will get this message unless you have a cache large enough for the final data set size on disk.
    2. Sort your data so that no dimensional combination exists in more than one buffer, i.e. sort by all non-compression dimensions, then by the compression dimension.
    3. Often your compression dimension is time-based (even though this is very sub-optimal). If so, you could sort the data by the compression dimension only and split the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2, and the next in buffer 3.
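    Experiment 2 above amounts to a simple sort; a sketch in Python, where the record layout and member names are hypothetical:

```python
# Hypothetical load records: (Product, Market, Period, value), where
# Period is the compression dimension and the rest are "sparse".
records = [
    ("Cola", "West", "Jan", 90.0),
    ("Cola", "East", "Feb", 110.0),
    ("Cola", "East", "Jan", 100.0),
]

# Order compression members by outline position, not alphabetically.
PERIOD_ORDER = {"Jan": 0, "Feb": 1, "Mar": 2}

# All non-compression dimensions first, compression dimension last, so
# each sparse combination lands contiguously in one buffer's range.
records.sort(key=lambda r: (r[0], r[1], PERIOD_ORDER[r[2]]))

for rec in records:
    print(rec)
```

    This prints East/Jan, East/Feb, then West/Jan: every sparse combination is contiguous, with the compression dimension varying innermost, so the files can then be cut at sparse-combination boundaries when splitting across buffers.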
    Also, if your machine is IO-bound (as most are during a load of this size) and your CPU is not, try using OS-level compression on your input files; it could speed things up greatly.
    Finally, regarding my comments on the time-based compression dimension: you should consider building a stored dimension for this, along the lines of what I have proposed in some posts on network54 (search for DanP on network54.com/forum/58296; I would give you a link but it is down now).
    Or, better yet, see the forthcoming book (of which Robb is a co-author), Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals: http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
    I really hope you will try the suggestions above and post your results.

  • Load and Unload Alias Table - Aggregate Storage

    Hi everyone,
    Here I am again with another question about aggregate storage...
    There is no "load" or "unload" alias table listed as a parameter for "alter database" in the syntax guidelines for aggregate storage (see http://dev.hyperion.com/techdocs/eas/eas_712/easdocs/techref/maxl/ddl/aso/altdb_as.htm).
    Is this not a valid parameter for aggregate storage? If not, how do you load and unload alias tables if you're running a batch script in MaxL and you need the alias table update to be automated?
    Thanks in advance for your help.

    Hi anaguiu2,
    I have the same problem now. Did you ever find a solution for loading and unloading alias tables in aggregate storage? If so, could you share it?
    Thanks, Manon

  • OSD Request User State Storage fails after SCCM 2012 upgrade to R2

    After the SCCM upgrade from SP1 to R2, OSD fails at Request User State Storage with this error:
    Task sequence: XXXXX has failed with the error code (0x00004005). For more information, contact your system administrator or helpdesk operator.
    In smsts.log:
    =================================
    <![LOG[Certificate is a self signed certificate. It will not be checked for revocation or expiration.]LOG]!><time="09:27:37.133-120" date="03-14-2014" component="OSDSMPClient" context="" type="0" thread="5984"
    file="smpclientutil.cpp:420">
    <![LOG[Successfuly retrieved public key and verified signature.]LOG]!><time="09:27:37.133-120" date="03-14-2014" component="OSDSMPClient" context="" type="1" thread="5984" file="smpclient.cpp:2604">
    <![LOG[Sending share info request message: <RequestStateStore><ClientID>GUID:BF0F737B-1834-4625-984C-5C9DE2391CFC</ClientID><Reserved2>308202E4308201CCA00302010202103A4DDA5B85D41CAE4D9D220E5006A4FF300D06092A864886F70D01010B050030203110300E0603550403130742313439353230310C300A06035504031303534D533020170D3134303331313038353935385A180F32313134303231363038353935385A30203110300E0603550403130742313439353230310C300A06035504031303534D5330820122300D06092A864886F70D01010105000382010F003082010A0282010100C739D1BFA21BF715C0BCB55FC3B86983AAB9A0AF8BB33977E85CED2225EDAB0635DA31B1864420AD0BECBB947A148C669FA3C3B4EF17F76E2A5F6939FB4390ACA8EA53FE213C487E7DDBBD653963F0DC5F1ED08C20205657902E2EC70881654B7CFFA5C46006F94BCFE6F3C694E4A12A4B70A7F449473F00770660B0721305E2D17856FBE4ECDDEA0EB844E4628CBBCB6CA6859DF935FEF8F2F026D50FD39FFA99833F380F0298EA2A8CD6E023F00B3A008D64BBCCF694A303E5C4A2B9920C5DEDC72D123C4D2B02C429FFC29F80BB792AA05C367490AD5C3FB368D84C4373A7DD0A77A80B1D955E6E34F694B04BB1BF87CD67D697A4BB1FE291150FC8699EB50203010001A318301630140603551D25040D300B06092B0601040182376502300D06092A864886F70D01010B0500038201010031207E781BCF10D96166416C305AADFA53D85343168E5E8A6B41B2B12B53886D1AC75BDFA045598F088AF465853E1261A5D6F0F4DEB2443C0C5F25B4F2746DB52F1B46D5E4D4D7B139CA8281ACA03C37890285BEB721ED186F3D70098EA9D7E61265210B73C18245FE25C16D72E545C9F4544A788632C7A61F6106BAB6A357530241FBB4245F7D5244805BC571F1B024C0825D04C85021E8AC7F94B901F30039C28BAC6DC307F2AC04B9B14A3949697EED015E64611D3427E41537B800D1222CC1FDAF333DE7F59B1A242BBCB65F4D85722A62147CA391684EB6A53D547C78050C341DA0FC99A09842A184842DEE3C5138F2C5AF515DF8D531F850E7A3839E90</Reserved2></RequestStateStore>]LOG]!><time="09:27:37.134-120"
    date="03-14-2014" component="OSDSMPClient" context="" type="1" thread="5984" file="smpclient.cpp:2285">
    <![LOG[Requesting SMP Root share config information from http://server.int.domain.com:0]LOG]!><time="09:27:37.290-120" date="03-14-2014" component="OSDSMPClient" context="" type="1" thread="5984"
    file="smpclient.cpp:2348">
    <![LOG[Received 3963 byte response.]LOG]!><time="09:27:37.333-120" date="03-14-2014" component="OSDSMPClient" context="" type="0" thread="5984" file="smpclient.cpp:2363">
    <![LOG[Adding \\server.int.domain.com\SMPSTORED_39C7CA32$ to list ]LOG]!><time="09:27:37.367-120" date="03-14-2014" component="OSDSMPClient" context="" type="1" thread="5984" file="smpclient.cpp:2403">
    <![LOG[Failed to connect to "\\server.int.domain.com\SMPSTORED_39C7CA32$" (1203).]LOG]!><time="09:27:37.440-120" date="03-14-2014" component="OSDSMPClient" context="" type="2" thread="5984"
    file="tsconnection.cpp:340">
    <![LOG[Failed to connect to "\\server.int.domain.com\SMPSTORED_39C7CA32$" (1203).]LOG]!><time="09:27:37.468-120" date="03-14-2014" component="OSDSMPClient" context="" type="2" thread="5984"
    file="tsconnection.cpp:340">
    <![LOG[Cannot connect to http://server.int.domain.com SMP root share]LOG]!><time="09:27:37.468-120" date="03-14-2014" component="OSDSMPClient" context="" type="3" thread="5984" file="smpclient.cpp:1754">
    <![LOG[ClientRequestToSMP::DoRequest failed. error = (0x80004005).]LOG]!><time="09:27:37.468-120" date="03-14-2014" component="OSDSMPClient" context="" type="3" thread="5984" file="smpclient.cpp:1882">
    <![LOG[Request to SMP 'http://server.int.domain.com' failed with error (Code 0x80004005). Trying next SMP.]LOG]!><time="09:27:37.468-120" date="03-14-2014" component="OSDSMPClient" context="" type="2"
    thread="5984" file="smpclient.cpp:1590">
    <![LOG[Sleeping for 60 seconds before next attempt to locate an SMP.]LOG]!><time="09:27:37.468-120" date="03-14-2014" component="OSDSMPClient" context="" type="1" thread="5984" file="smpclient.cpp:1565">
    =======================
    IIS log:
    =======================
    172.20.0.xx CCM_POST /SMSSMP/.sms_smp op=KeyInfo 80 - 172.20.6.yy SMS+CCM+5.0+TS 200 0 0 2281 2
    172.20.0.xx CCM_POST /SMSSMP/.sms_smp op=RootShareInfo 80 - 172.20.6.yy SMS+CCM+5.0+TS 200 0 0 4088 29
    172.20.0.xx OPTIONS /SMPSTORED_39C7CA32$ - 80 - 172.20.6.yy Microsoft-WebDAV-MiniRedir/6.1.7601 200 0 0 206 19
    172.20.0.xx PROPFIND /SMPSTORED_39C7CA32$ - 80 - 172.20.6.yy Microsoft-WebDAV-MiniRedir/6.1.7601 405 0 0 1496 1
    172.20.0.xx PROPFIND /SMPSTORED_39C7CA32$ - 80 - 172.20.6.yy Microsoft-WebDAV-MiniRedir/6.1.7601 405 0 0 1496 1
    ======================
    "Allow clients to connect anonymously" is enabled on the DP.
    On the SMP, file share permissions are enabled for Everyone.
    Any suggestions where to look?

    Hi,
    Please try to access \\server.int.domain.com\SMPSTORED_39C7CA32$ using the Network Access Account from the workstation you are working on.
    Best Regards,
    Joyce Li
