Calc Performance 5.0.2 vs 6.5

We are upgrading from 5.0.2p11 to 6.5. Before the conversion I ran a CALC ALL on the database, which took around 3 hr 15 min. Keeping the same settings and data, on version 6.5 the run time is 38 hrs. The only exception is that I have added CALCPARALLEL 3 in the essbase.cfg file. Can anyone tell me what I should look into?
Settings I have looked at:
Index Cache is 120 MB
Data Cache is 200 MB
Buffered I/O
Calc Cache is 200K
I am running this on an NT server with 3 GB RAM and 4 processors.
Thank you,
Mrunal

If you add the line 'SET MSG SUMMARY;' or 'SET MSG INFO;' before the 'CALC ALL;' and run the calc using ESSCMD, you will see a lot of useful information, such as whether parallel calc is working, how many dimensions it is using, and how many empty tasks there were. This information is also written to the application log file. Parallel calc won't work if the outline contains complex formulae, and it isn't beneficial if there are a large number of empty tasks.
I've found it better to take CALCPARALLEL out of the essbase.cfg file and, when I need it, add 'SET CALCPARALLEL 3;' to the calc scripts. However, that shouldn't explain your massive increase in calc times. If you have 3 GB of RAM, try increasing your data cache (try 1 GB). Add 'SET CACHE ALL;' to the calc script. Check your block density and block size under Database > Information. There could be any number of things that affect your calc time.
I have also found that when upgrading it helps to re-create the index: export the input data, clear the data, unload the application, delete the (app).ind file (delete the (app).esm and (app).tct files as well), reload the application, load the data and calculate. This has helped us improve database stability.
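For reference, a minimal sketch of that kind of diagnostic script (the thread count and cache option here are placeholders to adjust for your own outline):
SET MSG SUMMARY;      /* or SET MSG INFO for more detail */
SET CACHE ALL;
SET CALCPARALLEL 3;   /* set here rather than in essbase.cfg */
CALC ALL;
The SET MSG SUMMARY output in the application log then shows how many task dimensions were used and how many of the tasks were empty.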

Similar Messages

  • Calc performance on new machine

We are migrating from 5.02 patch 13a to 6.5 over the next month. As part of this migration we have moved to a new Unix server with faster processors. The move changes the OS from HP-UX 11.0 to 11i. We are seeing calc times INCREASE dramatically, with interesting memory usage (memory leak?). Calcs which previously ran in 1 hour have taken 11 hours to complete. The FIRST time the calc was run (on the new machine), it processed in roughly an equal time; then it gets progressively worse. Restarting the server seems to have no impact. We have "mirrored" the servers, as they are the same class of HP machines. The only change is a faster chip and the 11i OS. Any thoughts? Thanks in advance. Glen Moser, Consultant

  • CREATEBLOCKONEQ: calc performance issue.

    Hello Everyone,
    We've been using one of the calc scripts, but it takes a heck of a lot of time to finish. It runs for almost a day. I can see that CREATEBLOCKONEQ is set to true for this calc. I understand that this setting works on sparse dimensions; however, ProjCountz (Accounts) and BegBalance (Period) are members of dense dimensions in our outline. One flaw that I see is that ProjCountz data sits in every scenario, and we only want it in one scenario, so we will try to narrow the calc down to a single scenario. Other than that, do you see any major flaw in the calc?
    It's delaying a lot of things. Any help is appreciated. Thanks in advance.
    /* Set the calculator cache. */
    SET CACHE HIGH;
    /* Turn off Intelligent Calculation. */
    SET UPDATECALC OFF;
    /* Make sure missing values DO aggregate*/
    SET AGGMISSG ON;
    /*Utilizing Parallel Calculation*/
    SET CALCPARALLEL 6;
    /*Utilizing Parallel Calculation Task Dimensions*/
    SET CALCTASKDIMS 1;
    /*STOPS EMPTY MEMBER SET*/
    SET EMPTYMEMBERSETS ON;
    SET CREATEBLOCKONEQ ON;
    SET LOCKBLOCK HIGH;
    FIX("Proj_Countz")
    clearblock all;
    ENDFIX;
    Fix(@Relative(Project,0), "BegBalance", "FY11")
    "Proj_Countz"
    "Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
    ENDFIX;
    Fix("Proj_Countz")
    AGG("Project");
    ENDFIX;

    You are valuing a dense member (Proj_Countz) by dividing a dense member combination ("Man-Months"->YearTotal / "Man-Months"->YearTotal). There can be no block creation going on, as everything is in the block. CREATEBLOCKONEQ isn't coming into play and isn't needed.
    The code is making three passes through the database.
    Pass #1 -- It touches every block in the db. This is going to be expensive.
    FIX("Proj_Countz")
    clearblock all;
    ENDFIX;
    Pass #2
    Fix(@Relative(Project,0), "BegBalance", "FY11")
    "Proj_Countz"
    "Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
    ENDFIX;
    Pass #3 -- It's calcing more than FY11. Why?
    Fix("Proj_Countz")
    AGG("Project");
    ENDFIX;
    Why not try this:
    FIX("FY11", "BegBalance", @LEVMBRS(whateverotherdimensionsyouhave))
    Fix(@Relative(Project,0))
    "Proj_Countz"
    "Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
    ENDFIX
    AGG(Project,whateverotherdimensionsyouhave) ;
    ENDFIX
    The clear of Proj_Countz is pointless unless Man-Months gets deleted. Actually, even if it does, Essbase should do a #Missing/#Missing and zap the value: the block will exist if Proj_Countz is valued, the cells (MM and YT) will be there, and the division will clear the PC value.
    I would also look at the parallelism of your calculation -- I don't think you're getting any with one taskdim.
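    For example (the value 2 here is just an illustration; check the SET MSG SUMMARY output to see how many non-empty tasks you actually get):
    SET CALCPARALLEL 6;
    SET CALCTASKDIMS 2;   /* build parallel tasks from the last two sparse dimensions instead of one */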
    Regards,
    Cameron Lackpour

  • Essbase calc performance

    Dears,
    I have one calculation script that is nothing but DATACOPY statements (about 26,000 lines), and the script takes almost 3-4 hours to run for one month of a year.
    How can we improve the performance, either with DATACOPY or some other function? Suggestions from the experts would be very useful.
    sample code.

    Calculation script run times can usually be reduced. Can you provide us with more info about the actual DATACOPY script that takes 3-4 hours? (A sample excerpt of the script would be nice.)
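    As a very rough sketch of the usual fix (the member names below are invented), a single DATACOPY inside a FIX can often replace thousands of member-by-member statements when they all copy the same slice:
    /* hypothetical: copy one month of one version in a single pass */
    FIX ("FY12", "Jan", "Working", @RELATIVE("Entity", 0))
    DATACOPY "Actual" TO "Forecast";
    ENDFIX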
    Thanks,
    Sreekumar Hariharan

  • Essbase inconsistent calc performance

  • AGG and CALC DIM Essbase script recently started to grow our pag files

    We have an Essbase calc script that does nothing but AGG and CALC DIM; it ran fine for months and did not grow our Workforce cube. Starting in late Jan it started to grow its .pag files. The Workforce cube used to be 7 GB in Dec 2010, and it has grown to 10 GB today. I tested running it: it grew our .pag files by 170 MB the 2nd time and then by 70 MB the 3rd time I ran it. Has anyone seen this?

    Thanks a million Cameron.
    1) I do dense restructures every night - apparently that does not remove all fragmentation.
    last questions:
    2) I exported level zero, cleared all data, then imported the level zero data. That should clear up all fragmentation, shouldn't it?
    3) After importing level zero data, I ran a simple CALC DIM calc script on the Accounts dim only, on this Workforce BSO cube that is only 400 MB. It took over 30 mins. On my second and third runs of the same calc script, it took 9 mins. My BSO cube grew a few MB. Can I assume that the blocks were built by the first run and that all subsequent runs will stay around 9 mins since the blocks have now been built?
    Here is the calc script
    SET CACHE HIGH;
    SET UPDATECALC OFF;
    SET CLEARUPDATESTATUS OFF;
    SET LOCKBLOCK HIGH;
    SET AGGMISSG ON;
    SET CALCPARALLEL 3;
    FIX (febscenario,Working)
    FIX(@RELATIVE(TTC,0),@RELATIVE(TCI,0),@LEVMBRS("Project",0),@RELATIVE("Total Employees",0))
    FIX(FY11, FY12 "Jan":"Dec")
    FIX("HSP_InputValue","Local","USD")
    CALC DIM ("Account");
    CALC TWOPASS;
    ENDFIX
    ENDFIX /* &YearNext */
    ENDFIX
    ENDFIX
    4) When I calc only FY11, it takes 3 seconds on the first through 4th runs of the above calc. However, when I calc FY12, it takes over 30 mins on the first calc and 9 mins subsequently. Why is that? Should I use SET CALCONMISSINGBLK in my calc script?
    5) I am running the calc as the Essbase admin user. The level zero text file I loaded is only 460 MB. After the calc, the BSO cube's .pag files are only 420 MB. We are thinking of calc'ing older scenarios for historical purposes but are not sure if that will degrade calc performance. My experience has been that increasing the size of the BSO cube by calc'ing will degrade future calc times. Is that your experience?
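    A placement note on the SET CALCONMISSINGBLK question in point 4: it is a calc script SET command, so it would simply sit with the other SET lines at the top of the script (shown below for placement only, not as a recommendation either way):
    SET CACHE HIGH;
    SET UPDATECALC OFF;
    SET CALCONMISSINGBLK ON;   /* placement only; test both ON and OFF against your own data */
    SET CALCPARALLEL 3;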

  • Any substitute for Text in #Missing under Excel?

    The Excel Spreadsheet Add-in returns missing cells as text, even when specifying 0 as the default to be displayed. This is a pain when formulas exploit Essbase returns. Is there a way to have this replaced by a real 0 value or an empty cell, without actually filling the database with zeros or having a macro clean up the Excel sheet after each retrieve?

    This is a pain when formulas exploit Essbase returns. Is there a way to have this replaced by a real 0 value or an empty cell without actually filling the database with zeros, or having a macro clean up the Excel sheet after each retrieve? <<
    You can't replace them with 'real' zeros without code. Two issues to consider:
    1) If you are doing a writeback application (budgeting/forecasting) and you put 'real' zeros in the cells, the zeros will be sent back to Essbase and will most likely cause the creation of blocks, increasing db size and affecting calc performance. Some of the Essbase products, including ours, have a feature to 'automagically' convert zeros to #Missing before sending to Essbase.
    2) There isn't an event that fires in Excel when a retrieve occurs, so you can't automatically have a macro run. Too bad, as the Excel VBA code is trivial (something like ActiveSheet.UsedRange.Replace 0, 0, xlWhole). Again, most of the Essbase products (again including ours) do this replacement of 'real' zeros automatically.
    If the only problem you are having is that Excel formulas return the #VALUE! error when text is included, you can fix your Excel formulas. You get #VALUE! when you use Excel formulas in this format: =A1+A2. If you use a function instead, you may be able to bypass getting #VALUE!: =SUM(A1:A2) or =SUM(A1)+SUM(A2).
    Tim Tow
    Applied OLAP, Inc

  • # of dimensions

    Hi all,
    I have a cube (1) with 12 dimensions and the block size is 361648. I created another cube (2) and merged 2 sparse dimensions, so I have 11 dimensions in the cube, but the interesting part is that my block size is still the same: 361648. Can someone tell me why the block size didn't decrease?
    Thanks,
    Haroon

    I certainly wouldn't make Measures sparse in this situation, as you would have an extremely small block size. Also, ALL interdependent measure calculations would be across blocks, which is generally best kept to a minimum for optimum calc performance.
    Another possibility for reducing block size is to make as many of your Measures and/or Time dimension members Dynamic Calc (NOT Dynamic Calc and Store) as you can. This takes those members out of the block.
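    As a quick check on the original question (assuming the 361648 figure is in bytes, which is how Essbase reports block size): block size is determined only by the stored members of the dense dimensions, roughly
    stored dense cells x 8 bytes  ->  361648 / 8 = 45206 stored dense cells per block
    so merging two sparse dimensions changes how many blocks exist, not how big each block is, which is why the number did not move.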

  • Why hourglass format

    Hi experts
    Why does Hyperion recommend the hourglass format as the best way to design an outline?
    dimension with the most dense members
    dimension with fewer dense members
    dimension with fewer sparse members
    dimension with the most sparse members
    Attribute dimensions last: why??? I am curious to know about this.
    Is retrieval performance going to be good? If so, how?
    Is calc performance going to be good? If so, how?
    Is the number of block retrievals while writing the data lower? If so, how?
    Please throw your technical stuff here... Thank you

    If you look at the DBAG section on calculation sequence, you will see that the way it processes is to go through the dimensions in a particular order. Having the hourglass helps that order to be most efficient. In some cases it is not, and the hourglass-on-a-stick layout works better. It all depends. For instance, I have a cube where, if I use hourglass on a stick, I only get one processor; if I use the regular hourglass, I get 4 (which is what I have).
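    To make that concrete, here is a hypothetical hourglass ordering (dimension names invented): dense dimensions go largest to smallest, then sparse smallest to largest, with attribute dimensions last. In the 'hourglass on a stick' variant, the non-aggregating sparse dimensions (Scenario, Version, Year and the like) drop to the bottom, just above the attributes:
    Accounts    (dense, most members)
    Periods     (dense, fewer members)
    Scenario    (sparse, fewest members)
    Product     (sparse)
    Market      (sparse, most members)
    ProductAttr (attribute dimension, always last)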

  • Essbase on SAN Drive

    We have installed Essbase on a SAN drive. It was working fine for the past 2 years, but recently we have noticed some abnormal behaviour in calc performance. The same calc takes 2 hrs in the morning, but if we run it at around 6 pm it takes 4-5 hrs (there is no backup process running at 6 pm).
    Any suggestion on what needs to be checked to find the issue?

    Hi,
    As SAN is a common storage setup for all the applications/systems of an enterprise, it might have maintenance issues which could impact our Essbase environment.
    Personally, though, I would prefer to install an application like Essbase on the local drive of the server and distribute its storage across the SAN drives (as the storage requirement for Essbase might be huge, possibly in TBs, which is only practical through SAN).
    It is important for an application's performance that the SAN is configured optimally. There are a few parameters at the SAN end, such as:
    1. RAID configuration (e.g., RAID 1+0 is more expensive but faster).
    2. How the LUNs and LUSEs have been assigned.
    3. Count of spindles, etc.
    This is what I can add; hope it's useful.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • Cache settings impact

    Hi experts,
    I'd like to know if the cache settings on a cube have an impact on retrieval performance as well, or do they impact only calc performance?
    I tried searching the forum but could not find an answer.

    Hi Dave,
    Good question,
    1) Calculation performance depends on your cache settings.
    http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/frameset.htm?maxl_export.html
    2) Retrieval performance is based on member storage properties.
    Cache settings impact your calculation performance as well as retrieval performance.
    Thanks.

  • Exponential data load times after incremental data loads

    We're loading large amounts of data (500+ MB) weekly into our cube. Every new week added takes longer than the previous one: it ranges from half an hour for the first week to 4 hours for the 10th week. Does anybody have the same experience, or know an explanation for this? We are using Essbase 6.1.3a. Thanks.

    There are many possible explanations, but I believe the most likely one would be the sparse/dense configuration of your database.
    If you are updating data on a time basis (every day/week/month) and your time dimension(s) are dense, then subsequent loads must cycle through a lot of existing blocks, plus create new ones to load the new values.
    This reading and writing process can also cause fragmentation of the data files, which can also affect load/calc performance.
    If you plan to continue loading data incrementally in this fashion, you may want to revisit your database design. Try changing the time dimension to sparse and testing the loads. It will rule out the issue I outlined above. You will, however, have to perform an end-to-end process to ensure the calc times and results are not adversely affected.
    Regards,
    Jade
    ---------------------------------
    Jade Cole
    Senior Business Intelligence Consultant
    Clarity [email protected]

  • Retrieval performance becomes poor with dynamic calc members with formulas

    We are facing a retrieval performance issue on our partition cube.
    It was fine before we applied member formulas to 4 of the measures and made them dynamic calc.
    The retrieval time has increased from 1 sec to 5 sec.
    Here is the main formula on one of the members; all these members are dynamic calc (with member formulas):
    IF (@ISCHILD ("YTD"))
    IF (@ISMBR("JAN_YTD") AND @ISMBR ("Normalised"))
    "Run Rate" =
    (@AVG(SKIPNONE, @LIST (@CURRMBR ("Year")->"JAN_MTD",
    @RANGE (@SHIFT(@CURRMBR ("Year"),-1, @LEVMBRS ("Year", 0)), @LIST("NOV_MTD","DEC_MTD")))) *
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period")))) + "04";
    ELSE
    IF (@ISMBR("FEB_YTD") AND @ISMBR ("Normalised"))
    "Run Rate" =
    (@AVG (SKIPNONE, @RANGE (@SHIFT(@CURRMBR ("Year"),-1, @LEVMBRS ("Year", 0)),"DEC_MTD"),
    @RANGE (@CURRMBR ("Year"), @LIST ("JAN_MTD", "FEB_MTD"))) *
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period")))) + "04";
    ELSE
    "Run Rate"
    =(@AVGRANGE(SKIPNONE,"Normalised Amount",@CURRMBRRANGE("Period",LEV,0,-14,-12))*
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period"))))
    + "Normalised"->"04";
    ENDIF;
    ENDIF;
    ELSE 0;
    ENDIF
    Period is dense
    Year is dense
    Measures (Normalised) is dense
    the remaining dimensions are all sparse
    block size 112 KB
    index cache set to 10 MB
    retrieval buffer 70 KB
    dynamic calculator cache max set to 200 MB
    Please note that this is a partition cube, retrieving data from 2 ASO and 1 BSO underlying cubes.

    I received the following from Hyperion. I had the customer add the following line to their essbase.cfg file, and it increased their Analyzer retrieval performance from 30 seconds to 0.4 seconds:
    CalcReuseDynCalcBlocks FALSE
    This is an undocumented setting (it will be documented in Essbase v6.2.3). Here is a brief explanation of the setting from development: it is used to turn off a method of reusing dynamically calculated values during retrievals. The method is turned on by default and can speed up retrievals when they involve a large number of dynamically calculated blocks that are each required to compute several other blocks. This may happen when there is a big hierarchy of sparse dynamic calc members. However, a large dynamic calculator cache size or a large value of CALCLOCKBLOCK may adversely affect retrieval performance when this method is used. In such cases, the method should be turned off by setting CalcReuseDynCalcBlocks to FALSE in the essbase.cfg file. Only retrievals are affected by this setting.
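    For placement: as with any essbase.cfg setting, the line goes on its own row in the essbase.cfg file on the Essbase server, and the server generally has to be restarted before the change takes effect:
    CalcReuseDynCalcBlocks FALSE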

  • Essbase performance issue when calc scripts are run on FDM cube on same server

    We have a large Essbase application with high daily usage, and it is being impacted when we run calc scripts on an FDM forecast cube that sits on the same server. The large application is on EIS 11.1.2, and the FDM cubes are being migrated to the same server and also upgraded from EIS 7.1 on Unix to EIS 11.1.2 on NT. Every time the calc scripts are run on the FDM cube, the performance of the large Essbase application is degraded and it shuts down after some time.

    Sudhir,
    Do you work at a help desk or are you a consultant? You ask such a varied range of questions that I think the former. If you do work at a help desk, don't you have a next level of support that could help you? If you are a consultant, I suggest getting together with another consultant who actually knows more. You might also want to close some of your questions; you have 24 open, and perhaps give points to those who helped you.

  • Outline Order, Calc Script Performance, Substitution Variables

    Hi All,
    I am currently looking into the performance side.
    This is mainly about calculation script performance.
    There are a lot of questions in my mind, and as is said, you can get the results only by testing.
    1. Outline order should be from least sparse to most sparse
    (other reason: to accommodate as many sparse members as possible into the calculator cache). Correct me if I am wrong.
    2. Is the index entry created based on the outline order? For example, if I have the outline order as Scenarios, Products, Markets, will my index entry be like Scenario -> Products -> Markets?
    3. Does this order have to match the order of members in the FIX statement of a calculation script?
    4. I have 3 sparse dimensions: P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are mandatory in my calculation script. These three are the first 3 parameters of the FIX statement, and since I am fixing on a specific member, will placing these three members as the first 3 sparse dimensions in the outline improve performance?
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically I think maybe it will... but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and this P, M, V is used in every FIX statement. Since my entire calculation will be on only one P, one M, one V, can I put them in one FIX at the beginning and exclude them from the remaining FIX statements?
    5. I have a lot of cross-dimensional operations in my calc scripts for the Accounts dimension (500+ members).
    Is there a way to reduce these?
    6. My cube statistics:
    Cube size: 80 GB+
    Block size: 18 KB (approx.)
    Block density: 0.03. This is what I am most worried about. This really hurts.
    This is one of the reasons why my calculation time is > 7 hours, and sometimes it is horrible when there is a huge amount of data (it takes around 20+ hours to calculate).
    I look forward to your suggestions.
    It would be really appreciated if it is OK to share your contact number so that I can get in touch with you. That would be a great help.

    I have provided some answers below:
    There are a lot of questions in my mind, and as is said, you can get the results only by testing.
    ----------------------------
    You are absolutely right here, but it helps to understand the underlying principles and best practices, as you seem to.
    1. Outline order should be from least sparse to most sparse
    (other reason: to accommodate as many sparse members as possible into the calculator cache). Correct me if I am wrong.
    ----------------------------
    This is one reason, but another is to manage disk I/O during calculations. Especially when performing the initial calculation of a cube, the order of sparse dimensions from smallest to largest will measurably affect your calc times. There is another consideration here though. The smallest-to-largest (or least-to-most) sparse dimension argument assumes single threading of the calculations. You can gain improvements in calc time by multi-threading. Essbase will be able to make more effective use of multi-threading if the non-aggregating sparse dimensions are at the end of the outline.
    2. Is the index entry created based on the outline order? For example, if I have the outline order as Scenarios, Products, Markets, will my index entry be like Scenario -> Products -> Markets?
    ----------------------------
    Index entry or block numbering is indeed based on outline order. However, you do not have to put the members in a cross-dimensional expression in the same order.
    3. Does this order have to match the order of members in the FIX statement of a calculation script?
    ----------------------------
    No, it does not.
    4. I have 3 sparse dimensions: P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are mandatory in my calculation script. These three are the first 3 parameters of the FIX statement, and since I am fixing on a specific member, will placing these three members as the first 3 sparse dimensions in the outline improve performance?
    --------------------------
    This will not necessarily improve performance in and of itself.
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically I think maybe it will... but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and this P, M, V is used in every FIX statement. Since my entire calculation will be on only one P, one M, one V, can I put them in one FIX at the beginning and exclude them from the remaining FIX statements?
    --------------------------
    You would be well advised to do this, and it would almost certainly improve performance. WARNING: there may be a reason for the multiple FIX statements. Each FIX statement is one pass over all of the blocks of the cube. If the calculation requires certain operations to happen before others, you may have to live with the multiple FIX statements. A common example of this would be calculating totals in one pass and then allocating those totals in another pass. The allocation often cannot properly happen in one pass.
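    A rough sketch of that consolidation (the substitution variable and member names are invented; keep separate inner FIX blocks only where calculation order genuinely requires separate passes):
    /* fix the single P, M and V members once, then nest the remaining logic */
    FIX (&CurrP, &CurrM, &CurrV)
      FIX ("Jan")
        CALC DIM ("Accounts");
      ENDFIX
      FIX ("Feb":"Dec")
        "Allocated" = "Total"->"Jan" * "Driver";
      ENDFIX
    ENDFIX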
    5. I have a lot of cross-dimensional operations in my calc scripts for the Accounts dimension (500+ members).
    Is there a way to reduce these?
    -------------------------
    Without knowing more about the application, there is no way of knowing. Knowledge is power. You may want to look into taking the Calculate Databases class. It is a two-day class that could help you gain a better understanding of the underlying calculation principles of Essbase.
    6. My cube statistics:
    Cube size: 80 GB+
    Block size: 18 KB (approx.)
    Block density: 0.03. This is what I am most worried about. This really hurts.
    This is one of the reasons why my calculation time is > 7 hours, and sometimes it is horrible when there is a huge amount of data (it takes around 20+ hours to calculate).
    ------------------------
    Your cube size is large and block density is quite low, but there are too many other factors to consider to simply say that you should make changes based solely on these parameters. Too often we get focused on block density and ignore other factors. (To use an analogy from current events, this would be like making a decision on which car to buy based solely on gas mileage. You could do that, but then how do you fit all four kids into the sub-compact you just bought?)
    Hope this helps.
    Brian
