Calc Script performance

Hello,

A customer has a cube that is taking a lot longer to calculate after each new load. The cube has 7 dimensions, monthly data from Jan 2005 on, and about 20 GB of data. It takes around 14 hours to calculate, but if you load the same data into an identical cube with no data, it calculates in less than 2 hours.

The calc scripts include a FIX on a dense dimension, as shown below:

Fix (&CurrentYear, &CurrentMonth, Actual, Local) <--- sparse dims
   Fix (@IDescendants("REVENUE"), "Qtd VP Interna") <--- dense dim members (Accounts)
      Calc Dim (Presidencia, Product); <--- sparse dims
   EndFix
EndFix

The question is: since FIXing on a dense dimension causes all data blocks to be touched, is the inner FIX causing a scan of all data blocks in the database, even though the outer FIX refers to sparse dims only?

Also, during the calc process, the Windows performance monitor shows very little CPU activity and only occasionally a disk read...

And since Calc Dim is not allowed within an IF command, is there another way to obtain that consolidation?

Thanks in advance!
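On the IF question, a commonly suggested alternative -- sketched here reusing the member names from the script above, so whether it gives the desired consolidation depends on the outline -- is to express the condition as a FIX rather than an IF, since FIX selects blocks and cells before the calculation instead of testing inside it:

/* not allowed: CALC DIM inside IF...ENDIF
   IF (@ISMBR(@IDescendants("REVENUE")))
      CALC DIM ("Presidencia", "Product");
   ENDIF
*/
/* equivalent scoping with FIX */
FIX (@IDescendants("REVENUE"), "Qtd VP Interna")
   CALC DIM ("Presidencia", "Product");
ENDFIX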

Hello Gary!

I agree that calculating a new month's data in an empty cube should be faster than calculating the same data in a cube that already has 16 months of data, but I think it's taking much longer than expected. I expected it to be 50% slower, not 700%!

I even recreated the production cube from scratch, loading and calculating one month at a time, on 2 different servers. The results are always the same: each new calc time is a lot longer than the previous one.

And when I use the Windows performance monitor to compare the server's behavior between the calcs of the empty cube and the production one, you can see that the server is either accessing the hard disk or calculating 100% of the time for the empty cube, but the graphs for the production cube indicate very low disk access and CPU activity. It seems to be waiting for something...

I have already made many configuration changes, such as resizing the index, data, and data file caches (I'm using direct I/O), the number of lock blocks, and the compression mode, among others, but the performance gains obtained for the calc on the empty cube are not reflected in the production cube, maybe because it's (apparently) doing nothing most of the time...

Is there a trace I can use to check what Essbase is doing during the calc? I have used MSG Detail but this didn't help.

Thank you for your help!

Similar Messages

  • Will block size affect calc script performance?

    Hi Experts,
    I have a cube called RCI_LA:RCI_LA, and I have created calc scripts that are working fine. But those calc scripts are taking much more time than expected (normally they should not take more than 15 min, but some of them are taking nearly 1 hr or more).
    In database properties I found that the block size is 155896 B, i.e. 152 KB, but this size should be 8 to 100 KB, and the block density is 0.72%.
    If the block size exceeds 100 KB, will it impact the performance of calc scripts?
    I think the answer to the above question is "yes". In that case, what do I need to do to improve calc script performance?
    Could you please share your experience here to help me out of this problem?
    Thanks in advance.
    Ram

    I believe Sandeep was trying to say "Dynamic" rather than "Intelligent".
    The ideal block size is a factor in all calcs, but the contributing reasons are many (The main three are CPU caching, Data I/O overhead, Index I/O overhead).
    Generally speaking, the ideal block size is achieved when you can minimize the combination of Data I/O overhead and Index I/O overhead. For this reason a block size that is too large will incur too much Data I/O, while a block size that is too small will incur too much Index I/O. If your index file is small, increasing your block size may help; the commonly accepted block size is between 8 KB and 64 KB, but this is just a guideline.
    In other words, if you test it with something right in the middle and your index file is tiny, you might want to test it with a smaller block size. If your index file is very large (i.e. 400 MB or more), you may want to increase the block size and retest.
    Ways to increase/decrease it are also many. Obviously, changing the dense/sparse settings is the main way, but there are some considerations that make this a touchy process. Other ways are to use dynamic calc in the dense dimensions. I say start at the top of your smallest dense dimension and keep the number of DIMENSIONS that you use D-C on limited. Using D-C members in a dense dimension does NOT increase the index file, so it could be considered a "free" reduction in block size -- the penalty is paid on the retrieve side (there is no free ride).

  • Outline Order, Calc Script Performance, Substitution Variables

    Hi All,
    I am currently looking into the performance side.
    This is mainly about the calculation script performance.
    There are a lot of questions in my mind, and as it is said, you can get the results only by testing.
    1. Outline order should be from least sparse to most sparse
    (other reason: to accommodate as many sparse members as possible in the calculator cache) correct me if I am wrong
    2. Is the index entry created based on the outline order? For example, if I have the outline order Scenarios, Products, Markets, will my index entry be like Scenario -> Products -> Markets?
    3. Does this order have to match the order of members in the FIX statement of the calculation script?
    4. I have 3 sparse dimensions. P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are mandatory in my calculation script. Now when I look at the FIX statement, these three are the first 3 parameters of the FIX statement, and since I am fixing on a specific member, will placing these three members' dimensions as the first 3 sparse dimensions in the outline improve performance?
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically, maybe it will... but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and P, M, V are used in every FIX statement. Since my entire calculation will be on only one P, one M, and one V, can I put everything in one FIX at the beginning and exclude it from the remaining FIX statements?
    5. I have a lot of cross-dimensional operations in my calc scripts for the Accounts dimension (500+ members).
    Is there a way to reduce these?
    6. My cube statistics:
    Cube size : 80 GB +
    Block Size : 18 KB (Approx)
    Block density: 0.03. This is what I am more worried about. This really hurts me.
    This is one of the reasons why my calculation time is > 7 hours, and sometimes it is horrible when there is a huge amount of data (it takes around 20+ hours to calculate).
    I am looking forward to your suggestions.
    It would be really appreciated if you could share your contact number so that I can get in touch with you. That would be of great help.

    I have provided some answers below:
    There are a lot of questions in my mind, and as it is said, you can get the results only by testing.
    ----------------------------You are absolutely right here but it helps to understand the underlying principles and best practices as you seem to understand.
    1. Outline order should be from least sparse to most sparse
    (other reason: to accommodate as many sparse members as possible in the calculator cache) correct me if I am wrong
    ----------------------------This is one reason, but another is to manage disk I/O during calculations. Especially when performing the initial calculation of a cube, the order of sparse dimensions from smallest to largest will measurably affect your calc times. There is another consideration here, though. The smallest-to-largest (or least to most) sparse dimension argument assumes single-threading of the calculations. You can gain improvements in calc time by multi-threading. Essbase will be able to make more effective use of multi-threading if the non-aggregating sparse dimensions are at the end of the outline.
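    As an illustration of the multi-threading point, a minimal sketch (the thread count and task-dimension count are illustrative only and need to be tuned per server and outline):

    SET CALCPARALLEL 4;   /* number of calculation threads Essbase may use */
    SET CALCTASKDIMS 2;   /* how many of the last sparse dimensions are used to split the calc into parallel tasks */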
    2. Is the index entry created based on the outline order? For example, if I have the outline order Scenarios, Products, Markets, will my index entry be like Scenario -> Products -> Markets?
    ----------------------------Index entry or block numbering is indeed based on outline order. However, you do not have to put the members in a cross-dimensional expression in the same order.
    3. Does this order have to match the order of members in the FIX statement of the calculation script?
    ----------------------------No it does not.
    4. I have 3 sparse dimensions. P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are mandatory in my calculation script. Now when I look at the FIX statement, these three are the first 3 parameters of the FIX statement, and since I am fixing on a specific member, will placing these three members' dimensions as the first 3 sparse dimensions in the outline improve performance?
    --------------------------This will not necessarily improve performance in and of itself.
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically, maybe it will... but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and P, M, V are used in every FIX statement. Since my entire calculation will be on only one P, one M, and one V, can I put everything in one FIX at the beginning and exclude it from the remaining FIX statements?
    --------------------------You would be well advised to do this and it would almost certainly improve performance. WARNING: There may be a reason for the multiple fix statements. Each fix statement is one pass on all of the blocks of the cube. If the calculation requires certain operations to happen before others, you may have to live with the multiple fix statements. A common example of this would be calculating totals in one pass and then allocating those totals in another pass. The allocation often cannot properly happen in one pass.
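    A minimal sketch of the consolidation described above (the substitution variable names and pass-specific member lists are placeholders, not taken from the actual script): the three common substitution variables move into a single outer FIX, and each pass keeps only what differs:

    FIX (&CurrP, &CurrM, &CurrV)      /* common members, fixed once */
       FIX ("Pass1Members")           /* placeholder: members specific to the first pass */
          CALC DIM ("Accounts");
       ENDFIX
       FIX ("Pass2Members")           /* placeholder: members specific to the second pass */
          CALC DIM ("Accounts");
       ENDFIX
    ENDFIX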
    5. I have a lot of cross-dimensional operations in my calc scripts for the Accounts dimension (500+ members).
    Is there a way to reduce these?
    -------------------------Without knowing more about the application, there is no way of knowing. Knowledge is power. You may want to look into taking the Calculate Databases class. It is a two day class that could help you gain a better understanding of the underlying calculation principles of Essbase.
    6. My cube statistics:
    Cube size : 80 GB +
    Block Size : 18 KB (Approx)
    Block density: 0.03. This is what I am more worried about. This really hurts me.
    This is one of the reasons why my calculation time is > 7 hours, and sometimes it is horrible when there is a huge amount of data (it takes around 20+ hours to calculate).
    ------------------------Your cube size is large and block density is quite low but there are too many other factors to consider to simply say that you should make changes based solely on these parameters. Too often we get focused on block density and ignore other factors. (To use an analogy from current events, this would be like making a decision on which car to buy solely based on gas mileage. You could do that but then how do you fit all four kids into the sub-compact you just bought?)
    Hope this helps.
    Brian

  • Essbase calc script performance issues

    Hi,
    I have Essbase 9.3 running on a Sun Solaris server with 4 CPUs and 16 GB of RAM. The calc script "calc all" takes ~3 hrs to complete.
    This is the calc script.
    /ESS_LOCALE English_UnitedStates.US-ASCII@Binary
    SET UPDATECALC OFF;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 2;
    CALC ALL;
    We don't have to calc all dimensions, but even with specific dimensions we get the same timing. Below is the script:
    SET UPDATECALC OFF;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 2;
    FIX ("Y2009", "Actual");
    CALC DIM("Data Source","Intercompany","LegalEntity","Site","Department","Entity");
    ENDFIX
    The ess00001.ind is 700 MB and the ess00001.pag is 2.1 GB.
    In Admin services, this is what I see for caches
    1) Index cache size is 1 GB for this DB
    2) Index cache current value is 1gb
    3) Datafile cache setting is 1.5 GB
    4) Datafile cache current value is 0 (?? not sure why??)
    5) Data cache setting 4.1 GB
    6) Index page setting 8 kb
    please help ...
    Thanks
    Moe

    Moe,
    I'm guessing you inherited this thing, else you would know why the cache settings are what they are, but here are some thoughts:
    Caches:
    3) Datafile cache setting is 1.5 GB
    4) Datafile cache current value is 0 (?? not sure why??)
    You're running the database in buffered I/O, so the data file cache is ignored.
    1) Index cache size is 1 GB for this DB
    2) Index cache current value is 1 GB
    You have consumed all of the cache -- I'm a little confused, as you state your .ind file to be 700 megabytes -- generally the index cache consumption doesn't go beyond the .ind file size. When you look at your hit ratio statistics in EAS, does it show a 1 against the index cache? If yes, then you don't need to look any further, as that's as good as it's going to get.
    5) Data cache setting 4.1 GB
    Unless you're using MEMSCALINGFACTOR, I don't think Essbase is actually addressing all of the memory you've assigned. What are you showing as actually used? In any case, having a data cache almost twice as big as the .pag files is a waste, as it's way too large.
    Easy, off the cuff suggestions without knowing more about your db:
    1) Try AGG instead of CALC DIM for sparse dimensions (see the sketch after this reply).
    2) Try turning off (yes, turning off, you'd be surprised) parallel calc, and benchmark it. It will probably be slower, but it's nice to know.
    3) Dimension order? Modified hourglass?
    4) Tried defragmenting the database and benchmarking the performance?
    5) What is your block size? Big? Small?
    6) I think you are not calculating your Accounts/Measures dimension in your calc? If you are, and it's dense, could you make those Accounts dynamic calc -- dropping a dimension from the calc can be huge.
    I'm sure there will be other suggestions -- these are the easiest.
    Regards,
    Cameron Lackpour
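    Following up on suggestion 1, a minimal sketch of swapping CALC DIM for AGG, reusing the FIX from the script above. AGG applies only to sparse dimensions and does not execute member formulas, so confirm none of these dimensions carry formulas before relying on it:

    SET UPDATECALC OFF;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 2;
    FIX ("Y2009", "Actual")
       /* AGG consolidates sparse dimensions without executing member formulas */
       AGG ("Data Source","Intercompany","LegalEntity","Site","Department","Entity");
    ENDFIX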

  • Calc script & performance issues

    Hi All, we have a calc script which used to take only 10 minutes every day. But today it is taking a long time: 4 hrs and still running. If I cancel that calc operation, what is the impact on the database? Earlier all users were happy with the speed, but suddenly everyone got pissed off with it; it is taking a long time to retrieve data. Quickly, what are the parameters I need to check? Thanks in advance.

    If you are using committed access, then you can safely cancel the calculation. All data will be reverted back to what it was before the calculation. However, if you are using uncommitted access, it is recommended not to cancel any running operation.
    If you want to eliminate fragmentation, just export your level 0 data and import it again, and then do a calc all. Doing so will remove any fragmentation. It is recommended to do this once in a while, like every 2 months, to get rid of fragmentation.

  • Essbase performance issue when calc scripts are run on FDM cube on same server

    We have a large Essbase application which has high usage on a daily basis, which is being impacted when we run Calc scripts on an FDM forecast cube which is on the same server. The large application is on EIS 11.1.2 and the FDM cubes are being migrated to the same server and also being upgraded from EIS 7.1 on Unix to EIS 11.1.2 on NT. Every time the Calc scripts are run on the FDM cube, the performance of the Essbase application is degraded and it shuts down after some time.

    Sudhir,
    Do you work at a help desk, or are you a consultant? You ask such a varied range of questions that I think the former. If you do work at a help desk, don't you have a next-level support team that could help you? If you are a consultant, I suggest getting together with another consultant who actually knows more. You might also want to close some of your questions; you have 24 open, and perhaps give points to those who helped you.

  • OBIEE 11g write back to Essbase and run calc script feature

    Hi,
    I have a requirement to write back into an Essbase cube and run a calc script from an OBIEE dashboard.
    From what I have found searching on Google, we must deploy additional JavaScript into WebLogic, but that applies to versions before OBIEE 11.1.1.6.
    I have 2 questions:
    - Does OBIEE 11.1.1.6 already support native write-back to Essbase and running calc scripts?
    - Does anyone have an example of the custom JavaScript for writing back and running calc scripts?
    And another question: for a requirement like this, is it better to install the Essbase add-in for Microsoft Excel, do the what-if analysis there, and then just display the report on the OBIEE dashboard? (based on user-friendliness and the complexity of maintenance)
    Thanks in advance.

    Hi,
    I am trying to achieve the same thing you have mentioned, but I think it is not easy to achieve in OBIEE 11.1.1.6, though we do have a workaround to perform a write-back to an Essbase cube using the JAPI, as mentioned below.
    Also, we can call Hyperion reports from OBIEE using Action Links and pass parameters to them, but I don't know if that can run a calculation script.
    Below link could be useful for you for write back workaround.
    http://oraclebizint.wordpress.com/2009/05/25/oracle-bi-ee-10-1-3-4-1-writebacks-to-essbase-using-japi-and-custom-html-part-1/
    Let me know if you find out anything else related to this.
    Thanks,

  • Calc scripts are running slow (all of a sudden)

    All of a sudden, for the past few days, we are noticing that all our calc scripts have been running very slowly.
    The same scripts used to run much faster earlier.
    Has anybody seen this kind of scenario?
    We did a RAM upgrade on the EAS server and have restarted all services.
    Other than that, nothing has changed in our system.
    Thanks.

    It can be quite common for calcs to slow down over time, but there are some things to do to mitigate this.
    1. Are you using Intelligent Calc? All things being equal (a very broad statement in essbase, since things are never equal) if there is more activity by users, it could affect how many blocks are marked dirty. This is probably not your issue, because a properly written calc wouldn't slow down much for this reason. I had to mention it though because I have seen an installation where their calc was 'Calc All' and they used intelligent calc to create the scope of the calc. (bad, very bad)
    2. Do you perform DB restructures? (either explicitly by restructuring, or by exporting level 0, clearing, importing level 0, then aggregating) If this is not done on a regular basis (regular depends on the usage of the cube) then you could be experiencing fragmentation, which increases the size of the database, increasing run times.
    3. Have you just added another fiscal year to the database? More data means bigger database.
    RAM upgrade on the EAS server shouldn't affect calc times (unless essbase services are also running on the EAS server, then there might be something to it).
    Most of these (and other) issues can be mitigated by applying proper scope to your calcs (FIX statements) -- see the sketch after this reply.
    What environment are you running in? Windows or Unix?
    New application?
    What kind of time increases are we talking about here?
    Robert
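    A minimal sketch of the scoping advice above (the substitution variables and dimension names are placeholders, not taken from the actual application): turn off intelligent calc so the scope comes from the FIX rather than from dirty-block status, and fix on the current period so history is not recalculated on every run:

    SET UPDATECALC OFF;            /* do not let intelligent calc decide the scope */
    FIX (&CurrYear, &CurrMonth)    /* placeholder substitution variables for the current period */
       CALC DIM ("Entity", "Product");   /* placeholder sparse dimensions */
    ENDFIX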

  • Calc script takes longer than expected to execute

    The current Planning system has several calc scripts which are used to run the budget. This system is 3.3. I am currently in the process of migrating to Planning 11.1.2. The same outline, data and calc scripts are used in the new system. However, one script, which takes only 8 hours to run in the old system, now takes 5+ DAYS to run. I did a data extract in the new system and the data seems to be correctly calculated.
    My problem is: what could be the cause of this lengthy calculation time?
    Note: This is the first time I am running the calculation scripts in the new system.
    Thanks

    Did you size your Essbase plan type caches appropriately -- the index and data caches specifically? (This is the most common culprit.)
    Do all dimensions have the same dense/sparse configuration?
    ^^^I'll bet anything that Matt got it with the dense/sparse configuration. The caches are worth looking at as well, but that big of a performance difference seems unlikely. Taking a dense dimension and making it sparse or vice versa will do crazy things to a database's performance.
    Regards,
    Cameron Lackpour

  • Calc scripts running for a very long time

    Hi All,
    Recently I migrated the objects from Production to the Test region. We have 5 applications, and each of the applications has a set of calc scripts.
    In the Test region, they are running for a really long time, whereas in Production they run in less time.
    In the TEST region, each calc script is taking 10 times longer than in Production.
    No dimension was added and no script was updated. There is no difference in objects between TEST and PROD.
    Please suggest why there is this difference.
    Thanks
    Mahesh

    The obvious first question would be if the hardware is different. You would expect prod to be a more powerful server and therefore perform better. I'm seeing a lot of virtualized test servers (who knows, really, what power the box has) and real prod servers. That can make a huge difference in performance.
    It makes benchmarking tough -- yes, you can see how long something will take relative to another process, but there isn't any way to know how it will perform in production until you sneak it over there and benchmark it. It can be a real PITA for Planning.
    And yes, the theory is that dev and prod are similar so that the above isn't an issue, but that seems to be a more theoretical than actual kind of thing.
    Regards,
    Cameron Lackpour

  • FDM Cannot Access Calc Script

    We use FDM to load data to Essbase and then run a script after export to aggregate the data using Validation Entities in FDM. We wrote a new Agg script, and so I changed the Validation Entity from CalcALL to CALCCMA, and now I get the error below.
    Error: Essbase API Procedure: [EsbCalcFile] Threw code: 1030214 - 1030214 - [Tue Aug 03 13:25:25 2010]XXXXX/PLANNING/IncStmt/admin/Error(1030214)
    User [admin] cannot access calc script: CALCCMA
    The script is in the same folder as the old one. What is also strange is that when I try to change the script back to CalcALL in the FDM Validation Entity, I get an error as well. I checked all the integration settings and they are correct. I checked every script in FDM and searched the adapter XML, and found no references to the CalcALL script. I can't figure out where else the script would be referenced in FDM, or whether I need to change something on the Essbase side.

    I just read the ReadMe and here is the relevant part:
    Essbase Security Requirements
    Some FDM tasks require the user to have certain security privileges for Essbase. The following table outlines the tasks and Essbase privileges required.
    Task: Perform a consolidation (assigning a Validation Entity to the FDM location).
    Required privilege: Calc privilege (ESB_PRIV_CALC) on the active database.
    I'm assuming I will need a DBA to check this. The funny thing is that this was working perfectly fine before, and then I changed the script in the Validation Entity and got the error. When I changed it back to the original script that worked, I receive the same error. I was not involved in the implementation, so I don't know how this was set up.

  • What is the procedure/code in VBA for passing the calc scripts dynamically based on the selection.

    Hello Gurus,
    I want to know what is the procedure/code in VBA for passing the calc scripts dynamically based on the selection.
    For example:
    X=EssVCalculate("Sheetname","Calc_Script name",True)
    In the above code, instead of the "Calc_Script name" I want a script that is called dynamically, with the values calculated accordingly.
    Thanks in advance
    Saurabh

    Hi Todd,
    This is the situation:
    I have a calc script in Essbase which I can call to perform the calculations on the current sheet that is retrieved. I call it as follows:
    x = EssVCalculate("Sheet2", "CalcBC", False)
    CalcBC is my calc script which is present in Essbase
    So instead of passing the above script, I want to pass the conditions dynamically in the VBA code. I don't want to mention the script name directly in the EssVCalculate option.
    For example:
    I have three drop-down menus from which I would select three different (zero level) members. It would then retrieve the data for those particular values in the Excel sheet, and when I click on the Calculate button it should calculate the script dynamically.
    I don't know how calc scripts can be executed dynamically in the VBA code itself.
    Thanks in Advance
    Saurabh

  • Is it possible to have a many to many calc script equation?

    Hi All,
    I'm thinking there has got to be an easy way to do this - but I've tried a bunch of different ways, and I've only been getting error messages.
    What I want to do is perform an allocation based on head count for a few dozen accounts. The allocation method will be the same for each account, and I wanted to write this in a single line rather than have dozens of lines, one for each account.
    For example, the following works correctly for me (takes total indirect salaries loaded to "Region Items" and allocates based on headcount loaded to each child of "Operations"):
    FIX("Budget", @CHILDREN("Operations"))
    "Indirect Employee Salaries"
    = "Region Items"->"Indirect Employee Salaries" * "Office Staff - Employees" / "Operations"->"Office Staff - Employees";
    ENDFIX
    Because this allocation will be repeated for each account, I would like to have something similar to this:
    FIX("Budget",@CHILDREN("Operations"))
    @CHILDREN("Indirect")
    = "Region Items"->@CHILDREN("Indirect") * "Office Staff - Employees" / "Operations"->"Office Staff - Employees";
    ENDFIX
    However this change to the command gives me "Calc Script Command is Incomplete" warnings.

    You can do this with a "switch" on the fix etc.
    FIX(@CHILDREN("Indirect") ,@CHILDREN("Operations"))
    "Budget"= "Region Items"->@CURRMBR(Accounts) * "Office Staff - Employees" / "Operations"->"Office Staff - Employees";
    ENDFIX
    You will need to check the performance of this, especially if Budget is sparse, although it would remove create-block issues (see the sketch after this reply).
    You might also need to enclose the "CurrMbr" section in a SUMRANGE to validate, but it is a starter for you.
    Hope this helps
    Andy King
    www.analitica.co.uk
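    On the create-block point: if "Budget" is sparse, one hedged option (a sketch only; CREATEBLOCKONEQ can slow large calculations, so benchmark it) is to let Essbase create blocks when the assignment targets a sparse member combination:

    SET CREATEBLOCKONEQ ON;    /* create a block when a sparse target is assigned a value */
    FIX (@CHILDREN("Indirect"), @CHILDREN("Operations"))
       "Budget" = "Region Items"->@CURRMBR(Accounts) * "Office Staff - Employees" / "Operations"->"Office Staff - Employees";
    ENDFIX
    SET CREATEBLOCKONEQ OFF;   /* switch back off to avoid the performance cost elsewhere */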

  • CDF - calc script

    The calc script for the Export CDF shows @Jexport being executed like rollup functions, in that there is no requirement to have 'member = @Jcdf'. When I try this with a sample CDF, I get the error 'not a CALC command'. How do I get Essbase to recognize that this doesn't need anything on the left side?
    Harold

    Another question about CDFs and scripts: I want to export data to an RDBMS. The sample works fine, but is there a way to keep the relational database connection open (to increase performance), so as not to load the connection class and open a connection for each Essbase member?
    X. VDS

  • HPCM: Calc Script Deployment Error: java.lang.indexoutofboundsexception: In

    I am trying to deploy the allocation calc scripts in HPCM and ran into an IndexOutOfBoundsException. Does anyone know how I can resolve this?
    I have successfully deployed the calculation database. This is version 11.1.1.2.
    Cheers,
    Below is the relevant section of the hpm.log file.
    2009-04-07 21:02:06,645 [Thread-16] ERROR com.hyperion.profitability.business.integration.ces.jobs.ProcessCalcscriptsJob: Error processing calc scripts
    com.hyperion.profitability.common.ProfitabilityRuntimeException: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
    at com.hyperion.profitability.data.dao.AllocationDAOImpl.loadAllocations(AllocationDAOImpl.java:129)
    at com.hyperion.profitability.business.mdb.deployment.calcscriptgeneration.CalcScriptGenerationHelper.getInterCellLevelAllocations(CalcScriptGenerationHelper.java:145)
    at com.hyperion.profitability.business.mdb.deployment.calcscriptgeneration.CalculationScriptGenerator.generateCalcScripts(CalculationScriptGenerator.java:397)
    at com.hyperion.profitability.business.service.GenerateCalcScript.generateCalcScript(GenerateCalcScript.java:49)
    at com.hyperion.profitability.business.service.ServiceFacade.calcScriptGenerate(ServiceFacade.java:724)
    at com.hyperion.profitability.business.integration.ces.jobs.ProcessCalcscriptsJob.start(ProcessCalcscriptsJob.java:47)
    at com.hyperion.profitability.business.integration.ces.TaskHandler$AgentThread.run(TaskHandler.java:128)
    Caused by: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
    at java.util.ArrayList.RangeCheck(Unknown Source)
    at java.util.ArrayList.get(Unknown Source)
    at com.hyperion.profitability.data.dao.AllocationDAOImpl.extractAllocationDriver(AllocationDAOImpl.java:403)
    at com.hyperion.profitability.data.dao.AllocationDAOImpl.extractAllocationDriver(AllocationDAOImpl.java:352)
    at com.hyperion.profitability.data.dao.AllocationDAOImpl.loadAllocations(AllocationDAOImpl.java:91)
    ... 6 more

    I am working on my first Profitability application creation. I have performed the following steps so far:
    1. Created the Dimension Library for the Profitability application. (I haven't put any details in the AllocationType dimension.)
    2. Validated and deployed the Profitability application.
    3. Created staging tables (HPM_STG_STAGE, HPM_STG_ASSIGNMENT...) in the database. These are blank staging tables.
    My questions are:
    1. How does the data load happen in the Profitability application?
    2. After creating stages, do they get populated automatically? How are you going to populate them?
    3. Are you able to open the application in Essbase? I can see it through Shared Services but am unable to open it in Essbase.
    Let me know if you have done things differently than this.
