Difference in these calc scripts

Hi,
Is there any difference between these two calc scripts:
Fix (Yr2011)
Fix(Feb)
Fix (@children("ACT"))
Calc Dim (Accounts, Product, Entity, Channel);
EndFix
EndFix
EndFix
and
Fix (Yr2011->Feb->@childen("Act"))
Calc Dim (Accounts, Product, Entity, Channel);
EndFix

No, not really. Functionally they are equivalent; nesting FIX statements on different dimensions scopes the same intersection as listing those members in a single FIX. The only catch is that the syntax of your second one is incorrect: the cross-dimensional operator (->) is not valid inside a FIX, so it would be FIX(Yr2011, Feb, @CHILDREN("ACT")).
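Written out, the corrected single-FIX form of the second script would look like this (a sketch reusing the member names from the question):
FIX (Yr2011, Feb, @CHILDREN("ACT"))
Calc Dim (Accounts, Product, Entity, Channel);
ENDFIX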

Similar Messages

  • Execute Essbase Calc Scripts from FDM

    Hi,
    Can any of you let me know how to execute Essbase calc scripts from FDM? These calc scripts are on the Essbase server. Any help would be greatly appreciated.
    Thanks

    See the thread below:
    Re: FDM - Script
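    One approach that is commonly used (this is not taken from the linked thread, so treat the details as an assumption) is to have an FDM event script shell out to the MaxL command-line client and run the calc script by name. A minimal MaxL sketch, with server, credentials, application, database and calc script names all illustrative:
    /* run_calc.msh - illustrative names only */
    login 'admin' 'password' on 'essbaseserver';
    execute calculation 'PlanApp'.'Plan1'.'CalcAll';
    logout;
    The .msh file would then be launched from the FDM side (for example via essmsh run_calc.msh); how to wire that into FDM is what the linked thread covers.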

  • Multi-dimensional allocation reference in calc script is locking up EAS

    I am trying to draft a calculation script that allocates 1 total number at a point of view that includes "Entity" "Product" and "Customer". This number is entered in at one entity, one customer, and one product. I have loaded allocation weights by entity, customer, and product to be used as the basis for allocating this number down to entity, customer, and product.
    We have tried to use the @MDALLOCATE in a script as follows:
    SET AGGMISSG ON;
    SET UPDATECALC OFF;
    SET CACHE HIGH;
    SET CALCPARALLEL 3;
    FIX("BU Version_1","Actual",&CurrFY,&CurMth)
    "PL_AC_4502" = @MDALLOCATE ("PL_DP_1"->"PL_AC_4502", 3, @LEVMBRS ("Prod_Line",0),@LEVMBRS("Product",0),@LEVMBRS("Customer",0),"Abs_4502",,share);
    ENDFIX
    This locked up the EAS server every time we tried it.
    We decided to add an account to hold a calculated percentage for the allocation and tried the following calc script:
    SET CACHE HIGH;
    SET UPDATECALC OFF;
    SET AGGMISSG ON;
    SET CALCPARALLEL 3;
    FIX ("FY10","Jan","BU Version_1","Actual",
    @RELATIVE ("Entity",0),@RELATIVE ("Product",0), @RELATIVE ("Customer",0))
    "Pct_Abs_4502"="Abs_4502",@LEVMBRS("Prod_Line",0),@LEVMBRS("TotCustomer",0),@LEVMBRS("TotProduct",0) / "Abs_4502"->"Prod_Line"->"TotCustomer"->"TotProduct";
    ENDFIX
    This also locked up the EAS server. Each time, it did not even finish the syntax check.
    We have about 70 products and about a dozen customers that actually carry data. Can anyone suggest a modification that will allow either of these calc scripts to run without locking up the server?
    Much appreciated in advance,
    Rob

    I just thought I would try to add a little more context to Glenn's response. A calculation on Essbase will step through every single member of every dimension by default. When you mention member "Abs_4502" in a calc script, you are not referencing ONLY "Abs_4502"; you are also implying exactly one member from every other dimension in your database (every data value needs a member from every dimension). FIXes are there to limit which members of certain dimensions are subject to your calculation.
    The math you are doing to create a percentage should only reference one value in the numerator and one value in the denominator. Your denominator is one value, but the numerator is undefined because it references a range of members across three dimensions. I believe I understand what you are attempting here, but it is not necessary. Your FIX already explicitly states that the calculation will work across all the lev0 members of Entity, Product, and Customer, so you don't need that in the math again. To restate Glenn's attempt...
    /* Assumes Prod_Line is "Entity", TotCustomer is "Customer" , and TotProduct is "Product" member */
    FIX ("FY10","Jan","BU Version_1","Actual",
    @RELATIVE ("Entity",0),@RELATIVE ("Product",0), @RELATIVE ("Customer",0))
    "Pct_Abs_4502" = "Abs_4502" / "Abs_4502"->"Prod_Line"->"TotCustomer"->"TotProduct";
    ENDFIX
    The very first calculation of this calc script works on the first lev0 relative of "Entity", the first lev0 relative of "Product", and the first lev0 relative of "Customer", and divides the value at that intersection by the value at the "Prod_Line", "TotCustomer", and "TotProduct" intersection, which should be a percentage (albeit probably very small; I'd warn against any rounding attempts). The FIX then steps one by one through each of the lev0 members and does the same math.
    You could extend this and include all months in the FIX too; it will step through each month and calculate the percentages within it.
    Hopefully that helps a little.
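    For completeness, once the percentage account is populated, the allocation itself would be a second pass of the same shape. This is only a sketch built on the member names in the thread; it assumes the amount to be spread sits at "PL_DP_1"->"PL_AC_4502" and that the allocated result is written to "PL_AC_4502" at the lev0 intersections:
    FIX ("FY10","Jan","BU Version_1","Actual",
    @RELATIVE ("Entity",0),@RELATIVE ("Product",0), @RELATIVE ("Customer",0))
    /* apply the stored percentage to the total being allocated */
    "PL_AC_4502" = "Pct_Abs_4502" * "PL_DP_1"->"PL_AC_4502";
    ENDFIX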

  • Calc Script runs slow one day and fast the next?

    We are on Essbase 11.1.2.1. When we run calc scripts to load our actuals and run some allocation scripts, the running time for the exact same script varies widely from day to day: three hours one day, 45 minutes the next. Our servers are dedicated to Essbase. We are on Windows 2008 R2. Has anyone else run into this, or have any idea how we could figure out the reason for it? Thanks.

    Thanks. In that case, I would try to grab the cube statistics - especially the number of upper-level and input blocks, the compression ratio and fragmentation (in EAS you can only see one of the two fragmentation statistics, 'Average Clustering Ratio', not 'Average Fragmentation Quotient') - before each run, just to see if they're varying wildly. Also, if your calc scripts log summary information (is there a SET MSG command in them? If not, you could try SET MSG SUMMARY), then Essbase will write high-level statistics on how much work was done to the application log.
    You can see which cubes are loaded (and consuming memory) in the EAS treeview - if they have a check (tick) mark they are loaded.  But you can also see current activity by right-clicking on the server in the EAS treeview and selecting Edit | Sessions.   That will show whether there are other operations (loads, calcs, reports, restructures etc) occurring at the same time.
    If none of the above give any clues and this is a virtual box (or you have SAN storage) you really will need someone who understands your infrastructure to help.
    I'm assuming these are BSO cubes, by the way!
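    As a concrete example of the logging suggestion above, the SET commands simply go at the top of the calc script (a sketch; the rest of the script is whatever you already run):
    SET MSG SUMMARY;
    SET NOTICE HIGH;
    /* existing FIX / calculation logic follows */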

  • CONTEST - Auto Balancing Calc Script in ASO

    I am trying to figure out a way to automate some auto-balancing we would like to do in our Essbase version 11.1.2 ASO cube. We currently feed subledger detail into the system and also feed the total ledger values in a separate scenario. Due to timing, the pieces do not always equal the whole. I am trying to find a way to auto-balance these items with a calc script in ASO. Here is an example.
    Scenario
    ~*Ledger Total* (Scenario where we load ledger data to for the _E members)
    ~Total Actual
    +Actual_E (_E source accounts)
    +Actual_A (subledger detail)
    Actual_Adj (would like to move the difference of Ledger Total minus Actual_E and Actual_A here)
    Accounts:
    123456
    +123456_E (Ledger topsides and where we would like the difference placed for Actual_Adj)
    +456839_A
    +839020_A
    Cost Centers:
    BYM345
    +BYM345_E (Ledger topsides and where we would like the difference placed for Actual_Adj)
    +HJEUFS_A
    +JHM345_A
    Legal Entity
    78GHT
    +78GHT_E (Ledger topsides and where we would like the difference placed for Actual_Adj)
    +HLD599_A
    +783GHU
    Also have dimensions for version, analytic, product, supplement, period, year
    Data Example for intersection:
    Ledger Total
    Account: 123456(loaded to 123456_E) Dollar Amount $50,000
    CC: BYM345(loaded to BYM345_E) Dollar Amount $50,000
    LE: 78GHT(loaded to 78GHT_E) Dollar Amount $50,000
    Account/Scenario 123456_E/Actual_E $8,000
    456839_A/Actual_A $9,000
    839020_A/Actual_A $30,000
    CALC ADJ: 123456_E/Actual_Adj $3,000 (auto-calculated by taking the $50,000 from Ledger Total less the 123456 account rollup for Actual_E and Actual_A, or 50,000 - 8,000 - 39,000 = 3,000)
    Total Actual for Account rollup 123456 now equals Ledger Total Account rollup of 50,000
    The same logic would apply for Cost Center and Legal Entity. This is a tough one, so not sure it can be achieved.

    I think you might be confusing BSO calc scripts with ASO custom calcs; there is a whole section in the documentation on ASO calcs - http://docs.oracle.com/cd/E40248_01/epm.1112/essbase_db/aso_custcalc_alloc.html
    There are also examples on the internet if you search and spend some time researching.
    Cheers
    John
    http://john-goodwin.blogspot.com/
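    To give a flavour of what that documentation describes, ASO allocations are issued through MaxL rather than a BSO-style calc script. The sketch below is illustrative only: the application/database names are placeholders, the MDX sets are simplified to a single account and period, and the exact clauses should be checked against the execute allocation section of the Tech Ref.
    /* illustrative MaxL - push the ledger vs. subledger difference into the adjustment slice */
    execute allocation process on database FinApp.Ledger with
     pov "Crossjoin({[Jan]},{[FY12]})"
     amount "([Ledger Total],[123456]) - ([Total Actual],[123456])"
     target "([Actual_Adj])"
     range "{[123456_E]}"
     spread;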

  • Calc scripts are running slow (all of a sudden)

    All of a sudden, for the past few days, we are noticing that all our calc scripts have been running very slow.
    The same scripts used to run much faster earlier.
    Has anybody seen this kind of scenario?
    We did a RAM upgrade on the EAS server, and have restarted all services.
    Other than that, nothing has changed in our system.
    Thanks.

    It can be quite common for calcs to slow down over time, but there are some things to do to mitigate this.
    1. Are you using Intelligent Calc? All things being equal (a very broad statement in Essbase, since things are never equal), if there is more activity by users, it could affect how many blocks are marked dirty. This is probably not your issue, because a properly written calc wouldn't slow down much for this reason. I had to mention it, though, because I have seen an installation where the calc was 'Calc All' and they used Intelligent Calc to create the scope of the calc. (Bad, very bad.)
    2. Do you perform DB restructures? (Either explicitly by restructuring, or by exporting level 0, clearing, reimporting level 0 and then aggregating - see the MaxL sketch at the end of this reply.) If this is not done on a regular basis (how regular depends on the usage of the cube), you could be experiencing fragmentation, which increases the size of the database and with it the run times.
    3. Have you just added another fiscal year to the database? More data means bigger database.
    RAM upgrade on the EAS server shouldn't affect calc times (unless essbase services are also running on the EAS server, then there might be something to it).
    Most of these (and other) issues can be mitigated by applying proper scope to your calcs (Fix statements).
    What environment are you running in? Windows or Unix?
    New application?
    What kind of time increases are we talking about here?
    Robert
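    The export / clear / reload cycle mentioned in point 2 can be scripted in MaxL along these lines (a sketch; application, database and file names are placeholders, and a default calc is used for the re-aggregation):
    /* defrag.msh - placeholder names */
    login 'admin' 'password' on 'essbaseserver';
    export database Sample.Basic level0 data to data_file 'lev0.txt';
    alter database Sample.Basic reset data;
    import database Sample.Basic data from data_file 'lev0.txt' on error write to 'lev0_err.log';
    execute calculation default on database Sample.Basic;
    logout;
    Alternatively, alter database Sample.Basic force restructure; performs an explicit restructure, which also removes fragmentation.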

  • Peculiar problem with Essbase (Calc Script) - substitution variable / UDAs

    This is odd but I have a script like:
    VAR iloop=1,break=0;
    FIX(<required POV>)
    Loop (20,break)
    VAR Country_total1,Country_total2,Country_total3;
    FIX (@UDA(Entity,@ALIAS(@CONCATENATE("&Country",iloop)))) // &Country1, &Country2 - are substitution variables with UDAs stored as strings
    Statements;
    /* statements for calculating total values for that country, stored against variables */
    Country_total1 = Country_total1 + <Calculation>;
    ENDFIX
    /* Second part : Now again the calculations stored in the variables are to be stored against specific entities */
    FIX (@UDA(Entity,@ALIAS(@CONCATENATE("&Country",iloop))))
    FIX(@ISUDA(Entity,<Check1>))
    ..... Assign to relevant account
    ENDFIX
    ENDFIX
    ENDLOOP
    ENDFIX
    Now the problem is that the first fix statement works just fine, but the FIX statement in the 'second part' throws an error
    Error: 1200354 Error parsing formula for [FIX STATEMENT] (line 66): expected type [STRING] found [EXTVAR] ([iloop]) in function [@CONCATENATE]
    If I hard code the 'second part' FIX statement to the substitution variable directly - it works just fine.
    How can the first statement work and not the second one? They are exactly the same.

    Glenn, thanks - I hadn't thought of that :).
    But it still does not entirely solve my problem (please see my previous post depicting a requirement similar to ours).
    - We have lots of countries (50-60+, might be much more) and each country can have multiple entities (3-4 on average, can go up to 7-8)
    - so a good guess would be around 200 entities
    - So say I have to do it for 2 countries only (two entity types). Then I need 4 variables - 2 for each country ('Country 1 ET1 total', 'Country 1 ET2 total')
    When the list is 20 countries - the variables become 40 :(.
    - Still leaving aside the 40 variables for a bit -
    There are subsequent steps of calculations which need to be done based on these totals (which are exactly the same for all countries) - just that we need the correct totals to begin with, and the rest is already stored in the DB
    So since I have a different variable for each country - I cannot write one single calculation block to use the variables sequentially one by one (can I?)
    I might have to write a separate calculation block for each of these countries (20 separate blocks).
    That's what I was trying to avoid and simplify with the substitution variables (which is not working):
    - Create substitution variables - which would store the aliases of the required countries (2/10/20, as many as required)
    - Loop through these substitution variables - using them one by one
    - So I just need one single block of calculation, with all the variables in the calc script being reused after each country calculation is done
    - and the user need not go into the script, as the only thing that will change is the countries. And they can change them easily through the substitution variables.

  • Automate the process of changing the subvariables in Essbase calc script?

    Hi Experts,
    I have two calc scripts called HourlyAgPre & HrlyAgPost, and in these calcs I am using subvariables. Likewise, I have 4 regions and there are a lot of calc scripts. During month-end close I have to go and change the subvariables manually. Is there any way to change the subvariables in one go using a MaxL script?
    Eg:
    In BD9 my subvariable needs to point to Preclose.
    In BD-3 my subvariables need to point to Postclose.
    Please suggest something if you have come across this.
    Thanks in advance.

    Hi,
    Here is an example of setting variables through MaxL and passing parameters into the MaxL script: Re: Update variables
    Cheers
    John
    http://john-goodwin.blogspot.com/
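    As an illustration of the approach in that link, substitution variables can be flipped in bulk from a single MaxL script, with the target value passed in as a positional parameter. The server, application, database and variable names below are placeholders, not taken from the post:
    /* set_subvars.msh - run as: essmsh set_subvars.msh Preclose (or Postclose) */
    login 'admin' 'password' on 'essbaseserver';
    alter database 'Region1App'.'Plan1' set variable 'CloseStage' $1;
    alter database 'Region2App'.'Plan1' set variable 'CloseStage' $1;
    logout;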

  • Essbase 9.3 calc scripts. Problem with dates. How to convert (string <=> number)?

    Hello,
    I have a problem with Essbase (Planning?) scripts on version 9.3. It looks simple, but I cannot find any (clean) solution:
    In my Essbase database, I have a member called "Reference_Date" in my Indicators dimension. It is a date data type, that is to say it holds a number in YYYYMMDD format. For example: 20091029 for October 29th, 2009.
    In calc scripts I often need to compare the month included in that "Reference_Date" with the current member of my Time dimension (I have 12 month members named in the format M02 for February, for example). The final aim is to calculate the number of complete years elapsed since that "Reference_Date".
    But these two elements are not of the same "type" (one is a numeric value and the other is a member of the Time dimension), so I am forced to convert one of the two in order to compare them.
    For example, I can extract the month value from the "Reference_Date" and put an "M" in front of it to get a Time member equivalent, or I can convert the member name M10 to the number 10.
    In both cases I have the same type problem: I don't know how to convert a string into a number, nor a number into a string.
    (For example, @CONCATENATE doesn't work with numbers.) And that is my only remaining problem.
    I didn't find any Essbase function which does this (number <=> string conversion).
    Does anyone have an idea?
    Thanks for your help
    Best regards

    I don't know any way for you to compare your data against your metadata. Not directly. To me it makes little enough sense to try that I'm not surprised the developers didn't provide for it.
    I've converted member names to strings, manipulated the strings (calc script functions are not good at this), and turned them back into member names, but that's really the only use I've had for string manipulation. I don't think an equivalency operator even exists for string data. And I see no way to begin thinking of a member name, once converted to a string, as a number.
    It makes even less sense to me to try thinking of a data value as a string. Even text values in Sys 11 are stored as numbers. Not encoded characters, but just a number to look up somewhere.
    I think you can do what you want though, with something like this...
    IF (@ISMBR("FY08"))
    vYr = 2008;
    ELSEIF (@ISMBR("FY09"))
    vYr = 2009;
    ENDIF;
    IF (@ISMBR("M01"))
    vMth = 1;
    ELSEIF (@ISMBR("M02"))
    vMth = 2;
    ENDIF;
    "Years_Since_Reference" = ((vYr * 100) + Mth) - ("Reference_Date" / 12);
    Obviously, the math will need some work, because that doesn't actually work, but the logic above essentially turns your metadata into numbers, which is what you are after.
    Good luck,
    -- Joe
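    Building on Joe's idea, the numeric YYYYMMDD value itself can be decomposed with plain arithmetic, so no string conversion is needed at all. The sketch below is only illustrative and rests on the thread's assumptions: "Reference_Date" stored as YYYYMMDD, year/month members as in Joe's post, and a "Years_Since_Reference" member to hold the result.
    VAR vYr, vMth, vRefYr, vRefMth;
    FIX (<required POV>)
    "Years_Since_Reference" (
    /* derive the current year and month from the members being calculated */
    IF (@ISMBR("FY08")) vYr = 2008; ELSEIF (@ISMBR("FY09")) vYr = 2009; ENDIF;
    IF (@ISMBR("M01")) vMth = 1; ELSEIF (@ISMBR("M02")) vMth = 2; ENDIF;
    /* split the YYYYMMDD number into year and month by arithmetic */
    vRefYr = @INT("Reference_Date" / 10000);
    vRefMth = @INT(("Reference_Date" - (vRefYr * 10000)) / 100);
    /* complete years elapsed between the reference month and the current month */
    "Years_Since_Reference" = @INT((((vYr * 12) + vMth) - ((vRefYr * 12) + vRefMth)) / 12);
    )
    ENDFIX
    As in Joe's example, the @ISMBR tests would need to cover all of your year and month members.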

  • Calc script takes longer than expected to execute

    The current Planning system has several calc scripts which are used to run the budget. This system is 3.3. I am currently in the process of migrating to Planning 11.1.2. The same outline, data and calc scripts are used in the new system. However, one script, which takes only 8 hours to run in the old system, now takes 5+ DAYS to run. I did a data extract in the new system and the data seems to be correctly calculated.
    My problem is: what could be causing this lengthy calculation time?
    Note: This is the first time I am running the calculation scripts in the new system.
    Thanks

    Did you size your Essbase plan type caches appropriately - the index and data caches specifically? (This is the most common culprit.)
    Do all dimensions have the same dense/sparse configuration?
    ^^^ I'll bet anything that Matt got it with the dense/sparse configuration. The caches are worth looking at as well, but that big of a performance difference seems unlikely. Taking a dense dimension and making it sparse, or vice versa, will do crazy things to a database's performance.
    Regards,
    Cameron Lackpour
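    If the caches do turn out to be undersized, they can be set per database through MaxL; a sketch with placeholder application/database names and sizes that would need tuning to your own index and page file sizes:
    /* placeholder names and sizes */
    alter database 'Plan1App'.'Plan1' set index_cache_size 256mb;
    alter database 'Plan1App'.'Plan1' set data_cache_size 512mb;
    The new sizes typically take effect the next time the database is started.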

  • Calc scripts running a very long time

    Hi All,
    Recently, I migrated the objects from Production to the Test region. We have 5 applications and each application has a set of calc scripts.
    In the Test region they run for a really long time, whereas in Production they finish much faster.
    In the TEST region each calc script is taking 10 times longer than in Production.
    No dimension was added and no script was updated. There is no difference in objects between TEST and PROD.
    Please suggest why there is this difference.
    Thanks
    Mahesh

    The obvious first question would be if the hardware is different. You would expect prod to be a more powerful server and therefore perform better. I'm seeing a lot of virtualized test servers (who knows, really, what power the box has) and real prod servers. That can make a huge difference in performance.
    It makes benchmarking tough -- yes, you can see how long something will take relative to another process, but there isn't any way to know how it will perform in production until you sneak it over there and benchmark it. It can be a real PITA for Planning.
    And yes, the theory is that dev and prod are similar so that the above isn't an issue, but that seems to be a more theoretical than actual kind of thing.
    Regards,
    Cameron Lackpour

  • Outline Order, Calc Script Performance, Substitution Variables

    Hi All,
    I am currently looking into the performance side.
    This is mainly about the calculation script performance.
    There are a lot of questions in my mind, and as it is said, you can get the results only by testing.
    1. Outline order should be from least sparse to most sparse
    (other reason: to accommodate as many sparse members as possible in the calculator cache) - correct me if I am wrong
    2. Is the index entry created based on the outline order? For example, if I have the outline order Scenarios, Products, Markets, will my index entry be like Scenario -> Products -> Markets?
    3. Does this order have to match the order of members in the FIX statement of the calculation script?
    4. I have 3 sparse dimensions. P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are the mandatory things in my calculation script. Now when I look at the FIX statement, these three are the first 3 parameters of the FIX statement, and since I am fixing on a specific member, will placing these three members as the first 3 sparse dimensions in the outline improve performance?
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically, I think maybe it will... but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and P, M, V are used in every FIX statement. Since my entire calculation will be on only one P, one M, one V, can I put everything in one FIX at the beginning and exclude them from the remaining FIX statements?
    5. I have a lot of cross-dimensional operations in my calc scripts for the Accounts dimension (500+ members).
    Is there a way to reduce these?
    6. My cube statistics..
    Cube size : 80 GB +
    Block Size : 18 KB (Approx)
    Block density: 0.03. This is what I am most worried about. This really hurts me.
    This is one of the reasons why my calculation time is > 7 hours, and sometimes it is horrible when there is a huge amount of data (it takes around 20+ hours) for the calculation.
    I would be looking forward to your suggestions.
    It would be really appreciated if it is OK to share your contact number so that I can get in touch with you. That would be a great help.

    I have provided some answers below:
    There are a lot of questions in my mind, and as it is said, you can get the results only by testing.
    ----------------------------You are absolutely right here but it helps to understand the underlying principles and best practices as you seem to understand.
    1. Outline order should be from least sparse to most sparse
    (other reason: to accommodate as many sparse members as possible in the calculator cache) - correct me if I am wrong
    ----------------------------This is one reason, but another is to manage disk I/O during calculations. Especially when performing the initial calculation of a cube, the order of sparse dimensions from smallest to largest will measurably affect your calc times. There is another consideration here, though. The smallest-to-largest (or least to most) sparse dimension argument assumes single threading of the calculations. You can gain improvements in calc time by multi-threading. Essbase will be able to make more effective use of multi-threading if the non-aggregating sparse dimensions are at the end of the outline.
    2. Is the index entry created based on the outline order? For example, if I have the outline order Scenarios, Products, Markets, will my index entry be like Scenario -> Products -> Markets?
    ----------------------------Index entry or block numbering is indeed based on outline order. However, you do not have to put the members in a cross-dimensional expression in the same order.
    3. Does this order have to match the order of members in the FIX statement of the calculation script?
    ----------------------------No it does not.
    4. I have 3 sparse dimensions. P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are the mandatory things in my calculation script. Now when I look at the FIX statement, these three are the first 3 parameters of the FIX statement, and since I am fixing on a specific member, will placing these three members as the first 3 sparse dimensions in the outline improve performance?
    --------------------------This will not necessarily improve performance in and of itself.
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically, I think maybe it will... but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and P, M, V are used in every FIX statement. Since my entire calculation will be on only one P, one M, one V, can I put everything in one FIX at the beginning and exclude them from the remaining FIX statements?
    --------------------------You would be well advised to do this and it would almost certainly improve performance. WARNING: There may be a reason for the multiple fix statements. Each fix statement is one pass on all of the blocks of the cube. If the calculation requires certain operations to happen before others, you may have to live with the multiple fix statements. A common example of this would be calculating totals in one pass and then allocating those totals in another pass. The allocation often cannot properly happen in one pass.
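    As a sketch of that consolidation (the dimension names come from the question, the substitution variables and inner scopes are illustrative), the repeated P, M, V scope moves to a single outer FIX and the inner FIXes keep only what differs:
    /* one outer FIX on the three mandatory sparse members */
    FIX (&CurrP, &CurrM, &CurrV)
    FIX (@RELATIVE("Accounts", 0))
    /* ...first calculation block... */
    ENDFIX
    FIX (@LEVMBRS("Scenario", 0))
    /* ...second calculation block... */
    ENDFIX
    ENDFIX
    The warning above still applies: if later blocks depend on results produced by earlier ones, they may still need to stay in separate passes.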
    5. I have a lot of cross-dimensional operations in my calc scripts for the Accounts dimension (500+ members).
    Is there a way to reduce these?
    -------------------------Without knowing more about the application, there is no way of knowing. Knowledge is power. You may want to look into taking the Calculate Databases class. It is a two day class that could help you gain a better understanding of the underlying calculation principles of Essbase.
    6. My cube statistics..
    Cube size : 80 GB +
    Block Size : 18 KB (Approx)
    Block density: 0.03. This is what I am most worried about. This really hurts me.
    This is one of the reasons why my calculation time is > 7 hours, and sometimes it is horrible when there is a huge amount of data (it takes around 20+ hours) for the calculation.
    ------------------------Your cube size is large and block density is quite low but there are too many other factors to consider to simply say that you should make changes based solely on these parameters. Too often we get focused on block density and ignore other factors. (To use an analogy from current events, this would be like making a decision on which car to buy solely based on gas mileage. You could do that but then how do you fit all four kids into the sub-compact you just bought?)
    Hope this helps.
    Brian

  • Hyperion business rules and calc scripts

    Hi... can anyone differentiate HBR and calc scripts? What advantage does HBR have over calc scripts? Replies will be highly appreciated.

    Hi
    There are many differences; you can easily get the answer by reading through the documentation.
    The major difference is the runtime prompt in HBR, which differentiates it from a calc script.
    However, I recently learned that you can put runtime prompts in calc scripts using VBA macros.
    Good luck.

  • Server crashes after running Calc Scripts Verison II

    I am running 11.1.2.1 on my laptop. After running a calc script and shutting down Essbase, I am attempting to manually start up my Essbase system. Now Foundation Services and Essbase Administration Services are taking forever to start, and once I am able to start Essbase, EAS / Administration Server = stopped.
    Are these the right logs?
    C:\Oracle\Middleware\user_projects\epmsystem1\diagnostics\logs\services\HyS9FoundationServices-sysout
    C:\Oracle\Middleware\user_projects\epmsystem1\diagnostics\logs\services\HyS9eas-sysout
    Please Advise

    HYS9FoundationServices-syserr.log
    May 31, 2012 5:36:49 PM oracle.security.jps.internal.credstore.ssp.CsfWalletManager openWallet
    WARNING: Opening of wallet based credential store failed. Reason java.io.IOException: PKI-02002: Unable to open the wallet. Check password.
    oracle.security.jps.service.credstore.CredStoreException: JPS-01050: Opening of wallet based credential store failed. Reason java.io.IOException: PKI-02002: Unable to open the wallet. Check password.
    Please advise

  • HPCM: Calc Script Deployment Error: java.lang.indexoutofboundsexception: In

    I am trying to deploy the allocation calc scripts in HPCM and ran into an IndexOutOfBoundsException. Does anyone know how I can resolve this?
    I have successfully deployed the calculation database. This is version 11.1.1.2.
    Cheers,
    Below is the relevant section of the hpm.log file.
    2009-04-07 21:02:06,645 [Thread-16] ERROR com.hyperion.profitability.business.integration.ces.jobs.ProcessCalcscriptsJob: Error processing calc scripts
    com.hyperion.profitability.common.ProfitabilityRuntimeException: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
    at com.hyperion.profitability.data.dao.AllocationDAOImpl.loadAllocations(AllocationDAOImpl.java:129)
    at com.hyperion.profitability.business.mdb.deployment.calcscriptgeneration.CalcScriptGenerationHelper.getInterCellLevelAllocations(CalcScriptGenerationHelper.java:145)
    at com.hyperion.profitability.business.mdb.deployment.calcscriptgeneration.CalculationScriptGenerator.generateCalcScripts(CalculationScriptGenerator.java:397)
    at com.hyperion.profitability.business.service.GenerateCalcScript.generateCalcScript(GenerateCalcScript.java:49)
    at com.hyperion.profitability.business.service.ServiceFacade.calcScriptGenerate(ServiceFacade.java:724)
    at com.hyperion.profitability.business.integration.ces.jobs.ProcessCalcscriptsJob.start(ProcessCalcscriptsJob.java:47)
    at com.hyperion.profitability.business.integration.ces.TaskHandler$AgentThread.run(TaskHandler.java:128)
    Caused by: java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
    at java.util.ArrayList.RangeCheck(Unknown Source)
    at java.util.ArrayList.get(Unknown Source)
    at com.hyperion.profitability.data.dao.AllocationDAOImpl.extractAllocationDriver(AllocationDAOImpl.java:403)
    at com.hyperion.profitability.data.dao.AllocationDAOImpl.extractAllocationDriver(AllocationDAOImpl.java:352)
    at com.hyperion.profitability.data.dao.AllocationDAOImpl.loadAllocations(AllocationDAOImpl.java:91)
    ... 6 more

    I am working on my first Profitability application creation. I have performed the following steps so far:
    1. Creating Dimension Library for the Profitability Application. (I haven't put any details in the AllocationType Dimension)
    2. Validate and Deploy the Profitability Application.
    3. Created Staging Table (HPM_STG_STAGE, HPM_STG_ASSIGNMENT...) in Database. These are blank staging tables.
    My questions are:
    1. How does the data load happen in the Profitability application?
    2. After creating stages, do they get populated automatically, or how are you going to populate them?
    3. Are you able to open the application in Essbase? I can see it through Shared Services but am unable to open it in Essbase.
    Let me know if you have done things differently than this.
