Calc script & performance issues

Hi All, we have a calc script which used to take only 10 minutes every day, but today it has been running for 4 hours and is still going. If I cancel that calc operation, what is the impact on the database? Earlier all users were happy with the speed, but suddenly everyone is frustrated because retrievals are taking a long time. What parameters should I check first? Thanks in advance.

If you are using committed access then you can safely cancel the calculation; all data will be reverted back to what it was before the calculation. However, if you are using uncommitted access it is recommended not to cancel any running operation.
If you want to eliminate fragmentation, just export your level 0 data and import it again, and then do a CALC ALL. Doing so will remove any fragmentation. It is recommended to do this once in a while, say every 2 months, to get rid of fragmentation.
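As a rough sketch of that defragmentation cycle (the export/clear/reload steps happen outside the calc script, e.g. in EAS or MaxL; only the final aggregation below is an actual calc script, and SET UPDATECALC OFF is just a common companion setting, not something the reply above specified):
/* 1. Export level 0 data (EAS: right-click the database > Export, or the MaxL 'export database' statement). */
/* 2. Clear all data from the database, then reload the level 0 export file. */
/* 3. Re-aggregate the database: */
SET UPDATECALC OFF;
CALC ALL;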

Similar Messages

  • Essbase calc script performance issues

    Hi,
    I have Essbase 9.3 running on a Sun Solaris server with 4 CPUs and 16 GB of RAM. The calc script "calc all" takes ~3 hrs to complete.
    This is the calc script.
    /ESS_LOCALE English_UnitedStates.US-ASCII@Binary
    SET UPDATECALC OFF;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 2;
    CALC ALL;
    We don't have to calc all dimensions, but even when we fix on specific dimensions we get the same timing. Below is that script:
    SET UPDATECALC OFF;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 2;
    FIX ("Y2009", "Actual");
    CALC DIM("Data Source","Intercompany","LegalEntity","Site","Department","Entity");
    ENDFIX
    The ess00001.ind is 700 Mb and ess00001.pag is 2.1 GB.
    In Admin services, this is what I see for caches
    1) Index cache size is 1 GB for this DB
    2) Index cache current value is 1gb
    3) Datafile cache setting is 1.5 GB
    4) Datafile cache current value is 0 (?? not sure why??)
    5) Data cache setting 4.1 GB
    6) Index page setting 8 kb
    please help ...
    Thanks
    Moe

    Moe,
    I'm guessing you inherited this thing, else you would know why the cache settings are what they are, but here are some thoughts:
    Caches:
    3) Datafile cache setting is 1.5 GB
    4) Datafile cache current value is 0 (?? not sure why??)
    You're running the database in Buffered I/O, so the data file cache is ignored.
    1) Index cache size is 1 GB for this DB
    2) Index cache current value is 1gb
    You have consumed all of the cache -- I'm a little confused, as you state your .ind file to be 700 megabytes -- generally the index cache consumption doesn't go beyond the .ind file size. When you look at your hit ratio statistics in EAS, does it show a 1 against the index cache? If yes, then you don't need to look any further as that's as good as it's going to get.
    5) Data cache setting 4.1 GB
    Unless you're using MEMSCALINGFACTOR, I don't think Essbase is actually addressing all of the memory you've assigned. What are you showing as actually used? In any case, having a data cache almost twice as big as the .pag files is a waste as it's way too large.
    Easy, off the cuff suggestions without knowing more about your db:
    1) Try AGG instead of CALC DIM for sparse dimensions (see the sketch just after this list).
    2) Try turning off (yes, turning off, you'd be surprised) parallel calc, and benchmark it. It will probably be slower, but it's nice to know.
    3) Dimension order? Modified hourglass?
    4) Tried defragmenting the database and benchmarking the performance?
    5) What is your block size? Big? Small?
    6) I think you are not calculating your Accounts/Measures dimension in your calc? If you are, and it's dense, could you make those Accounts dynamic calc -- dropping a dimension from the calc can be huge.
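    To make suggestions 1 and 2 concrete, here is a minimal sketch built from the FIX in the original post (whether AGG applies depends on the outline -- AGG only works on sparse dimensions that carry no member formulas, so treat this as a starting point to benchmark, not a drop-in fix):
    SET UPDATECALC OFF;
    SET CALCPARALLEL 1; /* serial calc for the benchmark; rerun with 4 and compare */
    FIX ("Y2009", "Actual")
        AGG ("Data Source","Intercompany","LegalEntity","Site","Department","Entity");
    ENDFIX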
    I'm sure there will be other suggestions -- these are the easiest.
    Regards,
    Cameron Lackpour

  • Will block size effect the calc script performance?

    Hi Experts,
    I have a cube called RCI_LA:RCI_LA, and I have created calc scripts that are working fine. But those calc scripts are taking much more time than expected (normally they should not take more than 15 minutes, but some calc scripts are taking nearly 1 hour or more).
    In database properties I found that the block size is 155896 B, i.e. about 152 KB, but this size should be 8 to 100 KB, and block density is 0.72%.
    If the block size exceeds 100 KB, will it impact the performance of calc scripts?
    I think the answer to the above question is "yes". In that case, what do I need to do to improve calc script performance?
    Could you please share your experience with me to help me get past this problem?
    Thanks in advance.
    Ram

    I believe Sandeep was trying to say "Dynamic" rather than "Intelligent".
    The ideal block size is a factor in all calcs, but the contributing reasons are many (The main three are CPU caching, Data I/O overhead, Index I/O overhead).
    Generally speaking, the ideal block size is achieved when you can minimize the combination of data I/O overhead and index I/O overhead. For this reason a block size that is too large will incur too much data I/O, while a block size that is too small will incur too much index I/O. If your index file is small, increasing your block size may help. The commonly accepted block size is between 8 KB and 64 KB, but this is just a guideline.
    In other words, if you test it with something right in the middle and your index file is tiny, you might want to test it with a smaller block size. If your index file is very large (i.e. 400 MB or more), you may want to increase the block size and retest.
    Ways to increase/decrease it are also many. Obviously, changing the dense/sparse settings is the main way, but there are some considerations that make this a touchy process. Other ways are to use dynamic calc in the dense dimensions. I say start at the top of your smallest dense dimension and keep the number of DIMENSIONS that you use D-C on limited. Using D-C members in a dense dimension does NOT increase the index file, so it could be considered a "free" reduction in block size -- the penalty is paid on the retrieve side (there is no free ride).
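    As a back-of-the-envelope check (the member counts below are hypothetical, not taken from Ram's outline): stored block size is the product of the stored member counts of the dense dimensions times 8 bytes per cell. With 12 stored periods and 1,600 stored accounts as the only dense dimensions, that is 12 x 1,600 x 8 = 153,600 bytes (about 150 KB); tagging 800 of those accounts Dynamic Calc would cut it to 12 x 800 x 8 = 76,800 bytes (about 75 KB) without growing the index file.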

  • Calc script prompt issue in workspace

    Hello Gurus,
    This issue is related to running Calc scripts from EPM Workspace 11.1.2.2.300.
    Browser : IE8/IE9
    We are facing an issue where users are able to run a calc from Workspace successfully,
    but they are not getting a prompt saying CALC XYZ ran successfully.
    The browser just keeps showing that CALC XYZ is processing. If we check the logs, they show the calc has already completed its execution.
    Can this happen if we have a slow network? Or is it a bug, since the calc completes in the expected time in the background (per the logs) while the front-end process keeps going?
    Any help will be highly appreciated.
    Thanks,
    hyperionEPM

    Hi Rahul,
    Thanks for your quick reply..
    Yes we are on planning 11.1.2.2.300
    Following is what happens --
    1. We open Workspace and log in to a Planning application.
    2. We navigate to Business Rules from Tools.
    3. We then run the calc script from within Workspace for that application.
    The calc should run for about 3 minutes (it runs for approximately that long when we check the logs), however Workspace keeps showing that it is still in progress even after 10+ minutes, giving us the feeling that the calc is still running, which is not the case, as the log file shows the time when it actually completed.
    We then have to manually close the UI window that still shows the calc as running, since it has in fact already completed.
    Any ideas on what could be causing this?
    Thanks,
    hyperionEPM

  • Outline Order, Calc Script Performance, Substitution Variables

    Hi All,
    I am currently looking into the performance side.
    This is mainly about calculation script performance.
    There are a lot of questions in my mind, and as it is said, you can only get the results by testing.
    1. Outline order should be from least sparse to most sparse
    (other reason: to accommodate as many sparse members as possible into the calculator cache) -- correct me if I am wrong
    2. Is the index entry created based on the outline order? For example, if my outline order is Scenarios, Products, Markets, will my index entry be like Scenarios -> Products -> Markets?
    3. Does this order have to match the order of members in the FIX statement of the calculation script?
    4. I have 3 sparse dimensions. P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are mandatory in my calculation script. Now, these three are the first 3 parameters of the FIX statement, and since I am fixing on a specific member of each, will placing these three as the first 3 sparse dimensions in the outline improve performance?
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically I think maybe it will... but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and this P, M, V combination is used in every FIX statement. Since my entire calculation will be on only one P, one M, one V, can I put everything in one FIX at the beginning and exclude it from the remaining FIX statements?
    5. I have a lot of cross-dimensional operations in my calc scripts for the Accounts dimension (500+ members).
    Is there a way to reduce these?
    6. My cube statistics..
    Cube size : 80 GB +
    Block Size : 18 KB (Approx)
    Block density : 0.03 . This is what I am more worried about. This really hurts me.
    This is one of the reasons why my calculation time is > 7 hours, and sometimes it is horrible when there is a huge amount of data (it takes around 20+ hours).
    I look forward to your suggestions.
    It would be really appreciated if it is OK to share your contact number so that I can get in touch with you. That would be a great help.

    I have provided some answers below:
    There are a lot of questions in my mind, and as it is said, you can only get the results by testing.
    ----------------------------You are absolutely right here but it helps to understand the underlying principles and best practices as you seem to understand.
    1. Outline order should be from least sparse to most sparse
    (other reason: to accommodate as many sparse members as possible into the calculator cache) -- correct me if I am wrong
    ----------------------------This is one reason but another is to manage disk I/O during calculations. Especially when performing the initial calculation of a cube, the order of sparse dimensions from smallest to largest will measurably affect your calc times. There is another consideration here though. The smallest to largest (or least to most) sparse dimension argument assumes single threading of the calculations. You can gain improvements in calc time by multi-threading. Essbase will be able to make more effective use of multi-threading if the non-aggregating sparse dimensions are at the end of the outline.
    2. Is the index entry created based on the outline order? For example, if my outline order is Scenarios, Products, Markets, will my index entry be like Scenarios -> Products -> Markets?
    ----------------------------Index entry or block numbering is indeed based on outline order. However, you do not have to put the members in a cross-dimensional expression in the same order.
    3. Does this order have to match the order of members in the FIX statement of the calculation script?
    ----------------------------No it does not.
    4. I have 3 sparse dimensions. P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are mandatory in my calculation script. Now, these three are the first 3 parameters of the FIX statement, and since I am fixing on a specific member of each, will placing these three as the first 3 sparse dimensions in the outline improve performance?
    --------------------------This will not necessarily improve performance in and of itself.
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically I think maybe it will... but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and this P, M, V combination is used in every FIX statement. Since my entire calculation will be on only one P, one M, one V, can I put everything in one FIX at the beginning and exclude it from the remaining FIX statements?
    --------------------------You would be well advised to do this and it would almost certainly improve performance. WARNING: There may be a reason for the multiple fix statements. Each fix statement is one pass on all of the blocks of the cube. If the calculation requires certain operations to happen before others, you may have to live with the multiple fix statements. A common example of this would be calculating totals in one pass and then allocating those totals in another pass. The allocation often cannot properly happen in one pass.
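    As a sketch of what that consolidation might look like (the substitution variable and member names are invented for illustration; as the warning above explains, this only works when the calculations do not genuinely need separate passes):
    SET UPDATECALC OFF;
    FIX (&CurrP, &CurrM, &CurrV)
        /* all calculations that share the P, M, V scope go inside the single FIX */
        AGG ("Markets");
        "Allocated Expense" = "Total Expense" * "Allocation %";
    ENDFIX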
    5. I have a lot of cross-dimensional operations in my calc scripts for the Accounts dimension (500+ members).
    Is there a way to reduce these?
    -------------------------Without knowing more about the application, there is no way of knowing. Knowledge is power. You may want to look into taking the Calculate Databases class. It is a two day class that could help you gain a better understanding of the underlying calculation principles of Essbase.
    6. My cube statistics..
    Cube size : 80 GB +
    Block Size : 18 KB (Approx)
    Block density : 0.03 . This is what I am more worried about. This really hurts me.
    This is one of the reasons why my calculation time is > 7 hours, and sometimes it is horrible when there is a huge amount of data (it takes around 20+ hours).
    ------------------------Your cube size is large and block density is quite low but there are too many other factors to consider to simply say that you should make changes based solely on these parameters. Too often we get focused on block density and ignore other factors. (To use an analogy from current events, this would be like making a decision on which car to buy solely based on gas mileage. You could do that but then how do you fit all four kids into the sub-compact you just bought?)
    Hope this helps.
    Brian

  • Calc Script performance

    Hello,
    A customer has a cube that is taking a lot longer to calculate after each new load. The cube has 7 dimensions, monthly data from Jan 2005 on, 20 GB of data. It's taking around 14 hours to calculate, but if you load the data into an identical cube with no data, it is calculated in less than 2 hours.
    The calc scripts include a FIX on a dense dimension, as shown below:
    Fix (&CurrentYear, &CurrentMonth, Actual, Local) <--- sparse dims
    Fix (@IDescendants("REVENUE"), "Qtd VP Interna") <--- dense dim members (Accounts)
    Calc Dim (Presidencia, Product); <--- sparse dims
    EndFix
    EndFix
    The question is: since FIXing on a dense dimension causes all data blocks to be touched, is the inner FIX causing a scan of all data blocks of the database, even if the outer FIX refers to sparse dims only?
    And during the calc process, the Windows performance monitor shows very little CPU activity and only occasionally a disk read...
    And since Calc Dim is not allowed within an IF command, is there another way to obtain that consolidation?
    Thanks in advance!

    Hello Gary!
    I agree that calculating a new month's data in an empty cube should be faster than calculating the same data in a cube that already has 16 months of data, but I think it's taking much longer than expected. I expected it to be 50% slower, but not 700%!
    I even recreated the production cube from scratch, loading and calculating one month at a time, on 2 different servers. The results are always the same: each new calc time is a lot longer than the previous one.
    And when I use Windows' performance monitor to compare the server's behavior between the calcs of the empty cube and the production one, you can see that the server is either accessing the hard disk or calculating 100% of the time for the empty cube, but the graphs for the production cube indicate very low disk access and CPU activity. It seems to be waiting for something...
    I have already made many configuration changes, such as resizing the index, data and data-file caches (I'm using direct I/O), the number of lock blocks, and the compression mode, among others, but the performance gains obtained for the calc in the empty cube are not reflected in the production cube, maybe because it's (apparently) doing nothing most of the time...
    Is there a trace I can use to check what Essbase is doing during the calc? I have used MSG Detail but this didn't help.
    Thank you for your help!

  • ALI Scripting - Performance Issues

    We are using ALI scripting to raise events so that other portlets on the page can listen to the events, and we also pass some data to other portlets using the "PTPortlet.setSessionPref" scripting API. In my local testing the scripting takes just 0.3 seconds, however when we deployed this code to the common portal, which has a few other portlets, the scripting takes 5 seconds. Does anyone know if there are any known best practices around ALI scripting to avoid performance issues, especially for transferring events between portlets?
    Thanks
    Sampath

    Hi, I would like to provide additional information on the performance issue we are facing with ALI scripting. In the code highlighted below, we are using “setSessionPref” methods to set values in session variables using ALI scripting APIs and raising events for IPC.
    var prefName = 'selectionString';
    var prefValue = xmlFile;
    var gpConfigValue = "<%=gpPromptsConfigID%>";
    PTPortlet.setSessionPref(prefName,prefValue);
    PTPortlet.setSessionPref('gpFormObj',frmObj.name);
    PTPortlet.setSessionPref('gpPromptsConfigID', gpConfigValue);
    myportlet$$PORTLET_ID$$.raiseEvent('onSelectionSubmitFormObj',false);
    myportlet$$PORTLET_ID$$.raiseEvent('onSelectionSubmit', false);
    This code does not take a lot of time (less than a second) to execute on my local machine; however, it takes a lot of time (around 5 seconds) in the integrated development environment.
    After a lot of debugging, I observed that if I change "PTPortalContext.GET_SESSION_PREFS_URL" to point to http://localhost:7001 instead of the integrated development environment, the processing time is considerably reduced. The code below is inserted automatically by ALUI in every portal page. Does anyone know the significance of this automatically inserted code and how it can impact IPC?
    // Define PTPortalContext for CSAPI
    PTPortalContext = new Object();
    PTPortalContext.GET_SESSION_PREFS_URL = 'http://dcdev.pg.com/portal/server.pt?space=SessionPrefs&control=SessionPrefs&action=getprefs';
    PTPortalContext.SET_SESSION_PREFS_URL = 'http://dcdev.pg.com/portal/server.pt?space=SessionPrefs&control=SessionPrefs&action=setprefs';
    Thanks
    Sampath

  • Report Script Performance Issues

    Essbase Nation,
    We have a report script that extracts a full 12 months worth of history in 7 minutes. The script that is used to extract the period dimension is as follows:
    <Link (<Descendants("Dec YTD") And <Lev("Period",0))
    The line above is then changed to pull just one month of data, and now the report script runs for 8 hours.
    Please advise as to why the difference in performance.
    Thank you.

    ID 581459.1:
    Goal
    How to optimize Hyperion Essbase Report Scripts?
    Solution
    To optimize your Report follow the suggested guidelines below:
    1. Decrease the amount of Dynamic Calcs in your outline. If you have to, make it dynamic calc and store.
    2. Use the <Sparse command at the beginning of the report script.
    3. Use the <Column command for the dense dimensions instead of using the Page command. The order of the dense dimensions in the Column command should
    be the same as the order of the dense dimension in the outline. (Ex. <Column (D1, D2)).
    4. Use the <Row command for the sparse dimensions. The order of the sparse dimensions in the Row command should be in the opposite order of the sparse
    dimension in the outline. (Ex. <Row (S3, S2, S1)). This is commonly called sparse bottom up method.
    5. If the user does not want to use the <Column command for the dense dimensions, then the dense dimensions should be placed at the end of the <Row command.
    (Ex. <Row (S3, S2, S1, D1, D2)).
    6. Do not use the Page command, use the Column command instead.
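    Putting guidelines 2 through 4 together, a skeleton report script might look like the following (D1/D2 stand for dense dimensions in outline order and S1-S3 for sparse dimensions in reverse outline order; these are placeholders, not real dimension names, and the member selection line is only an example):
    <Sparse
    <Column (D1, D2)
    <Row (S3, S2, S1)
    <Lev (S1, 0)
    !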

  • Report Script- Performance Issue

    Hi,
    I ran this report script and it is taking around 2 hours to complete. Is there any possibility to tune this script better? Please advise me on where else we can tune this.
    Thanks,
    UB.

    ID 581459.1:
    Goal
    How to optimize Hyperion Essbase Report Scripts?
    Solution
    To optimize your Report follow the suggested guidelines below:
    1. Decrease the amount of Dynamic Calcs in your outline. If you have to, make it dynamic calc and store.
    2. Use the <Sparse command at the beginning of the report script.
    3. Use the <Column command for the dense dimensions instead of using the Page command. The order of the dense dimensions in the Column command should be the same as the order of the dense dimensions in the outline. (Ex. <Column (D1, D2)).
    4. Use the <Row command for the sparse dimensions. The order of the sparse dimensions in the Row command should be in the opposite order of the sparse dimensions in the outline. (Ex. <Row (S3, S2, S1)). This is commonly called the sparse bottom-up method.
    5. If the user does not want to use the <Column command for the dense dimensions, then the dense dimensions should be placed at the end of the <Row command. (Ex. <Row (S3, S2, S1, D1, D2)).
    6. Do not use the Page command; use the Column command instead.

  • Essbase performance issue when calc scripts are run on FDM cube on same server

    We have a large Essbase application with high daily usage, and it is impacted when we run calc scripts on an FDM forecast cube that sits on the same server. The large application is on EIS 11.1.2, and the FDM cubes are being migrated to the same server and upgraded from EIS 7.1 on Unix to EIS 11.1.2 on NT. Every time the calc scripts are run on the FDM cube, the performance of the Essbase application degrades and it shuts down after some time.

    Sudhir,
    Do you work at a help desk or are you a consultant? You ask such a varied range of questions that I think the former. If you do work at a help desk, don't you have a next level of support that could help you? If you are a consultant, I suggest getting together with another consultant who actually knows more. You might also want to close some of your questions; you have 24 open, and perhaps give points to those that helped you.

  • Calc scripts are running slow(all of a sudden)

    All of a sudden, for the past few days, we are noticing that all our calc scripts have been running very slow.
    The same scripts used to run much faster earlier.
    Has anybody seen this kind of scenario?
    We did a RAM upgrade on the EAS server and have restarted all services.
    Other than that, nothing has changed in our system.
    Thanks.

    It can be quite common for calcs to slow down over time, but there are some things to do to mitigate this.
    1. Are you using Intelligent Calc? All things being equal (a very broad statement in essbase, since things are never equal) if there is more activity by users, it could affect how many blocks are marked dirty. This is probably not your issue, because a properly written calc wouldn't slow down much for this reason. I had to mention it though because I have seen an installation where their calc was 'Calc All' and they used intelligent calc to create the scope of the calc. (bad, very bad)
    2. Do you perform DB restructures? (either explicitly by restructuring, or by exporting level 0, clearing, and importing level 0, then agg) If this is not done on a regular basis (regular depends on the usage of the cube) then you could be experiencing fragmentation, which increases the size of the database, increasing run times.
    3. Have you just added another fiscal year to the database? More data means bigger database.
    RAM upgrade on the EAS server shouldn't affect calc times (unless essbase services are also running on the EAS server, then there might be something to it).
    Most of these (and other) issues can be mitigated by applying proper scope to your calcs (Fix statements).
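    For example, a scoped calc along these lines (the variable and dimension names are placeholders, not from the poster's outline) both avoids leaning on Intelligent Calc and limits the work to the slice that actually changed:
    SET UPDATECALC OFF;
    FIX (&CurrYear, &CurrMonth, "Actual")
        AGG ("Product", "Market");
    ENDFIX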
    What environment are you running in? Windows or Unix?
    New application?
    What kind of time increases are we talking about here?
    Robert

  • Calc script takes longer than expected to execute

    The current Planning system has several calc scripts which are used to run the budget. This system is 3.3. I am currently in the process of migrating to Planning 11.1.2. The same outline, data and calc scripts are used in the new system. However, one script, which takes only 8 hours to run in the old system, now takes 5+ DAYS to run. I did a data extract in the new system and the data seems to be correctly calculated.
    My question is: what could be the cause of this lengthy calculation time?
    Note: This is the first time I am running the calculation scripts in the new system.
    Thanks

    Did you size your essbase plan type caches appropriately - the index and data caches specifically? (This is the most common culprit.)
    Do all dimensions have the same dense/sparse configuration?
    ^^^I'll bet anything that Matt got it with the dense/sparse configuration. The caches are worth looking at as well, but that big of a performance difference seems unlikely. Taking a dense dimension and making it sparse, or vice versa, will do crazy things to a database's performance.
    Regards,
    Cameron Lackpour

  • Calc scripts running very Long time

    Hi All,
    Recently I migrated the objects from Production to the Test region. We have 5 applications and each application has a set of calc scripts.
    In the Test region they are running for a really long time, whereas in Production they run in much less time.
    In the TEST region each calc script is taking 10 times longer than it does in Production.
    No dimensions were added and no scripts were updated; there is no difference in objects between TEST and PROD.
    Please suggest why there is this difference.
    Thanks
    Mahesh

    The obvious first question would be if the hardware is different. You would expect prod to be a more powerful server and therefore perform better. I'm seeing a lot of virtualized test servers (who knows, really, what power the box has) and real prod servers. That can make a huge difference in performance.
    It makes benchmarking tough -- yes, you can see how long something will take relative to another process, but there isn't any way to know how it will perform in production until you sneak it over there and benchmark it. It can be a real PITA for Planning.
    And yes, the theory is that dev and prod are similar so that the above isn't an issue, but that seems to be a more theoretical than actual kind of thing.
    Regards,
    Cameron Lackpour

  • Is it possible to have a many to many calc script equation?

    Hi All,
    I'm thinking there has got to be an easy way to do this - but I've tried a bunch of different ways, and I've only been getting error messages.
    What I want to do is perform an allocation based on head count for a few dozen accounts. The allocation method will be the same for each account, and I wanted to write this in a single line rather than have dozens of lines, one for each account.
    For example, the following works correctly for me (it takes total indirect salaries loaded to "Region Items" and allocates based on headcount loaded to each child of "Operations"):
    FIX("Budget", @CHILDREN("Operations"))
    "Indirect Employee Salaries"
    = "Region Items"->"Indirect Employee Salaries" * "Office Staff - Employees" / "Operations"->"Office Staff - Employees";
    ENDFIX
    Because this allocation will be repeated for each account, I would like to have something similar to this:
    FIX("Budget",@CHILDREN("Operations"))
    @CHILDREN("Indirect")
    = "Region Items"->@CHILDREN("Indirect") * "Office Staff - Employees" / "Operations"->"Office Staff - Employees";
    ENDFIX
    However this change to the command gives me "Calc Script Command is Incomplete" warnings.

    You can do this with a "switch" on the fix etc.
    FIX(@CHILDREN("Indirect") ,@CHILDREN("Operations"))
    "Budget"= "Region Items"->@CURRMBR(Accounts) * "Office Staff - Employees" / "Operations"->"Office Staff - Employees";
    ENDFIX
    You will need to check the performance of this, especially if Budget is sparse - although it would remove create block issues.
    You might also need to enclose the "CurrMbr" section in a SUMRANGE to validate, but it is a starter for you.
    Hope this helps
    Andy King
    www.analitica.co.uk

  • How do you stop multiple users executing the same calc script at the same time?

    We have an issue where users upload a spreadsheet and then run a calc script. At one time we can have multiple executions of the script running.
    This slows down the system and we have to go cancel all the executions and run it again.
    Can we stop this and put them in a queue, so only one execution of the calc happens at a time? Or stop multiple executions from being submitted at all?

    You could use EXCLUSIVECALC to stop more than one calc running at the same time, although this will apply to all applications on your server.  And it doesn't just stop the same calc being launched twice.  Easier than the alternatives I can think of though, if it works for you.
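    For reference, EXCLUSIVECALC is set in essbase.cfg (a minimal sketch; the Essbase server has to be restarted for it to take effect):
    EXCLUSIVECALC TRUE
    With this set, a calculation request that arrives while another calculation is already running is rejected with an error rather than queued, so users still have to resubmit later -- it prevents the pile-up but does not build a queue.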
