Essbase calc performance

Dears,
I have a calculation script that consists entirely of DATACOPY statements (about 26,000 lines), and it takes almost 3-4 hours to run for a single month of a year.
How can we improve performance, either with DATACOPY or some other function? Any suggestions from the experts would be useful.
sample code.

Calc script performance can certainly be improved. Can you provide us with more info about the actual DATACOPY script that takes 3-4 hours? (A sample extract of the script would be nice.)
Thanks,
Sreekumar Hariharan
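
As a generic illustration (the actual script was not posted, so every member and dimension name below is hypothetical), thousands of per-combination DATACOPY statements can often be collapsed into a few DATACOPY statements wrapped in a FIX, letting Essbase copy a whole slice in one pass:

```
/* Sketch only -- "Budget", "Forecast", "Entity" and "Period" are assumed names */
SET UPDATECALC OFF;
SET CALCPARALLEL 4;
FIX (@RELATIVE("Entity", 0), @RELATIVE("Period", 0))
    /* one statement copies the whole fixed slice */
    DATACOPY "Budget" TO "Forecast";
ENDFIX
```

Each DATACOPY statement makes its own pass through the database, so reducing 26,000 statements to a handful of FIX-scoped copies usually cuts run time dramatically.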

Similar Messages

  • Enhanced Calc Script more flexible than native Essbase Calc Script?

What makes an Enhanced Calc Script more flexible than a native Essbase Calc Script?
Is it Run on Save, the @CALCMODE function, run-time prompts, the ability to run over the web, substitution variables, or custom-defined functions?
I'd appreciate a reply as soon as possible!
Thanks in advance!

    Some posts on the subject
    Business Rule
    Business rule
    Business rule
    Cheers
    John
    http://john-goodiwn.blogspot.com/

  • Execute Essbase Calc Scripts from FDM

    Hi,
Can any of you let me know how to execute Essbase calc scripts from FDM? These calc scripts are on the Essbase server. Any help would be greatly appreciated.
    Thanks

    See the thread below:
    Re: FDM - Script

  • Essbase calc script performance issues

    Hi,
I have Essbase 9.3 running on a Sun Solaris server with 4 CPUs and 16 GB of RAM. The calc script "CALC ALL" takes ~3 hrs to complete.
    This is the calc script.
    /ESS_LOCALE English_UnitedStates.US-ASCII@Binary
    SET UPDATECALC OFF;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 2;
    CALC ALL;
We don't need to calc all dimensions, but even with specific dimensions we get the same timing. Below is the script:
    SET UPDATECALC OFF;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 2;
    FIX ("Y2009", "Actual");
    CALC DIM("Data Source","Intercompany","LegalEntity","Site","Department","Entity");
    ENDFIX
The ess00001.ind is 700 MB and ess00001.pag is 2.1 GB.
In Admin Services, this is what I see for caches:
1) Index cache size is 1 GB for this DB
2) Index cache current value is 1 GB
3) Data file cache setting is 1.5 GB
4) Data file cache current value is 0 (?? not sure why ??)
5) Data cache setting is 4.1 GB
6) Index page setting is 8 KB
    please help ...
    Thanks
    Moe

    Moe,
    I'm guessing you inherited this thing, else you would know why the cache settings are what they are, but here are some thoughts:
    Caches:
    3) Datafile cache setting is 1.5 GB
4) Datafile cache current value is 0 (?? not sure why??)
You're running the database in buffered I/O, so the data file cache is ignored.
    1) Index cache size is 1 GB for this DB
2) Index cache current value is 1gb
You have consumed all of the cache -- I'm a little confused, as you state your .ind file to be 700 megabytes -- generally the index cache consumption doesn't go beyond the .ind file size. When you look at your hit ratio statistics in EAS, does it show a 1 against the index cache? If yes, then you don't need to look any further, as that's as good as it's going to get.
5) Data cache setting 4.1 GB
Unless you're using MEMSCALINGFACTOR, I don't think Essbase is actually addressing all of the memory you've assigned. What are you showing as actually used? In any case, having a data cache almost twice as big as the .pag files is a waste, as it's way too large.
    Easy, off the cuff suggestions without knowing more about your db:
    1) Try AGG instead of CALC DIM for sparse dimensions.
    2) Try turning off (yes, turning off, you'd be surprised) parallel calc, and benchmark it. It will probably be slower, but it's nice to know.
    3) Dimension order? Modified hourglass?
    4) Tried defragmenting the database and benchmarking the performance?
    5) What is your block size? Big? Small?
    6) I think you are not calculating your Accounts/Measures dimension in your calc? If you are, and it's dense, could you make those Accounts dynamic calc -- dropping a dimension from the calc can be huge.
    I'm sure there will be other suggestions -- these are the easiest.
    Regards,
    Cameron Lackpour
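
To make suggestion 1 concrete, here is a sketch using the dimension names from the poster's own FIX (whether each of those dimensions is actually sparse is an assumption to verify in the outline):

```
SET UPDATECALC OFF;
SET CALCPARALLEL 4;
SET CALCTASKDIMS 2;
FIX ("Y2009", "Actual")
    /* AGG works only on sparse dimensions, but is usually
       faster than CALC DIM for pure aggregation */
    AGG ("Data Source", "Intercompany", "LegalEntity", "Site", "Department", "Entity");
ENDFIX
```

Any dense dimension in that list would have to stay in a CALC DIM instead.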


  • Tuning Essbase Calc Scripts

    Hi,
I am looking for some guidance on tuning a calculation script. The business rule performs forecasting for two years based on historical data (up to 5 years). The rule performs well when it is executed against a single department, single activity, and single project. However, when I try to run the admin rule, which performs the same function but for all departments, all activities, and all projects, it never completes.
    Your feedback is really appreciated.
    Sincerely,
    JJ
    Note in this example:
Activity and Project are dimensions.
    Total# of Departments = 7000 members
    Activity = 100 members
    Project = 100 stored members
    Accounts = 350 stored members

    Hello JJ,
Tuning an Essbase database can take a consultant a day or more -- not something that can be handled properly in a forum.
What we can do here is give you some guidelines.
Calculate only what is necessary. Data that has already been calculated and is not changed does not need to be recalculated. Data that will be modified in a later step should not be aggregated and then aggregated again.
When you calculate, much depends on the block size and how fast Essbase can get the blocks into memory and back to disk. So optimize the blocks (dynamic calc on the parent members, label only where no data storage should take place).
    Then optimize the calculation. Work with parallel calculation. And test, test, and test.
    Regards,
    Philip Hulsebosch
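
A sketch of the "calculate only what is necessary" guideline (scenario and year member names here are assumptions, not taken from JJ's outline): restrict the rule to the slice being forecast and aggregate only the sparse dimensions afterwards.

```
SET UPDATECALC OFF;
SET CALCPARALLEL 4;
/* Touch only the forecast scenario and the two forecast years */
FIX ("Forecast", "FY14", "FY15")
    AGG ("Department", "Activity", "Project");
ENDFIX
```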

  • App-V - "Excel & ESSBase Plugin" perform poorly

    Hi,
    using App-V 5.0 sp3 client for RDS 2012 R2.
    *Excel 2007 & ESSBase Add-in sequenced into a connection Group
    *Standalone Mode/ Normal mode so no streaming server 
    *The App-V client UI (or PoSh commandlet) is showing package is fully loaded in local cache
    *when I run Excel v-App, Excel Plugin "ESSBase" is registered fine
      I can open large .xls files with no issue
The performance issue escalated by a user occurs when loading an .xlsm calc sheet and starting a data retrieval from the ESSBase database server:
- the operation takes around 2 min 50 s
- in a "native setup" where Excel & ESSBase are installed locally, the same operation takes 30 seconds
TCPView.exe shows that starting the action opens tons of new network connections to the ESSBase database server.
Procmon.exe filtered on Excel.exe shows that, compared to a native install, the App-V setup produces many more result="Path Not Found" or "Name Not Found" entries.
TaskMgr shows that on the App-V setup, Excel.exe enters the "Not Responding" state many times during data retrieval.
=> On the native setup, Excel.exe doesn't enter the "Not Responding" state.
The tests were made on two identical RDS Session Host 2012 R2 VMs (4 GB RAM, AMD quad core, 1 Gb Ethernet), with the Excel test as the only load.
=> I would be surprised if this is a resource bottleneck - Excel.exe uses at most 90 MB.
I'll do benchmark tests using dummy network traffic and the measure-command PowerShell cmdlet from inside the App-V bubble, and compare the results to those from outside the bubble.
While I'm sure this shouldn't be an App-V limitation in itself, to be honest, not knowing what to optimize is frustrating.
    Any help would be welcome,
    Thanks.
    MCTS Windows Server Virtualization, Configuration

Fixed!
Now the App-V app (Excel & ESSBase plugin) loads data from the database server with the same performance as in the native installation environment, or is even a bit faster...
The culprit was the "COM Mode" setting in the App-V package's dynamic configuration XML file, which was set to Isolated mode ("On - w/ GUID Spoofing"), meaning, according to "App-V: On COM Isolation and Interaction", that COM was being virtualized.
I deleted the App-V package, set that bit of the XML as below, and tested again - the result is now satisfying!
<COM Mode="Off">
<IntegratedCOMAttributes InProcessEnabled="false" OutOfProcessEnabled="true"/>
</COM>
I watched the TechEd North America session below:
Project Virtual Reality Check: Microsoft App-V 5.0 Performance, Tuning, and Optimization (App-V PTO)
It briefly says that to troubleshoot execution performance issues you should:
- check for unnecessary extension points, which could slow execution
- double-check the COM Mode setting, as it could also horribly slow the App-V app user experience
See also the section "Writing directly to Excel", where the Scripting Guys explain two methods and how COM is known to be horribly slow...
There are sample PowerShell code snippets in "PowerShell and Excel: Fast, Safe, and Reliable" for injecting cells and generating an Excel file...
This was not easy to find, as the majority of the documentation around 'performance tweaks' for App-V leads to streaming or package-publishing optimization, which was not my case - I was more interested in optimizing a process started during the package's execution, when it was already fully cached locally.
    Thanks.
    MCTS Windows Server Virtualization, Configuration

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
Having successfully built my outline by member loading through Essbase Studio, I have tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
In the Studio properties file, I typed in oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am also not sure the streaming mode is going to provide a much faster alternative to the non-streaming mode. What I'd like to know is which Essbase settings I can change (either in Essbase or Studio server) in order to speed up my data load. I am loading into a block storage database with 3 dense, 8 sparse, and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24 B. Assuming that in my real application the number of blocks created will be at least 1,000 times more than this, I need to make some changes in settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size, or multiple threads help to increase the performance? Or what would you suggest I do?
    Thank you very much.

Hello user13695196,
(sorry, I no longer remember my system number here)
Before any optimisation attempts in the Essbase (also Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
I would recommend:
1. creating in your DB source schema a view from your SQL statement (the one behind your data load rule);
2. querying against this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows, and measuring the time it takes to complete. Also count the returned number of rows, for your information and for future comparison of results.
If your query runs longer than you think is acceptable, then:
a) check DB statistics,
b) check and/or consider creating indexes,
c) if you are unsure, kindly ask your DBA for help. Usually they can help you very fast.
(Don't be shy - a DBA is a human being like you and me :-) )
Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
One hint in addition:
We often had problems when using views for data load (not only performance but also other strange behavior). That's the reason why I prefer to build directly on (persistent) tables.
Just keep in mind: if nothing helps, create a table from your view and then query your data from this table for your Essbase data load. Normally, however, this should be your last option.
Best Regards
(also to you, Torben :-) )
Andre
    Edited by: andreml on Mar 17, 2012 4:31 AM

  • Call ODI 11g scenario from Essbase calc script/business rule using ODI SDK

I am looking for any hints on how to use the ODI 11g SDK. I want to call a java application (CDF) that runs an ODI scenario using RUNJAVA in Essbase, which I have done successfully in the 10g environment.
The java application has odi-core.jar included in the project and registers OK with Essbase, and I have replicated code from the Oracle sample code site. When I run the application in a calc script I get the following error:
EssbaseCluster-1.EFTS.EFTS.odi     Execute calculation script     June 17, 2011 10:20:40 AM NZST     Failed
Error: 1200456 Problem running [indigo.essbase.odi.RunODIScenario]: [java.lang.NoClassDefFoundError: org/springframework/util/StringUtils]
When I comment out the code that creates the OdiInstance, the java app executes fine - i.e. it writes something to the Essbase log.
The research I have done so far indicates that a classpath is incorrect. If that is the case, where do I start looking to correct the classpath? Is it the ODI classpath or the Essbase classpath?
Any tips would be appreciated.
    Thanks.

You need to import more jars to execute this. The following are the jars:
1) bsf.jar
2) bsh-2.0b2.jar
3) commons-collections-3.2.jar
4) eclipselink.jar
5) odi-core.jar
6) ojdl.jar
7) oracle.ucp_11.1.0.jar
8) persistence.jar
9) spring-beans.jar
10) spring-core.jar
11) spring-dao.jar
12) spring-jdbc.jar
Once you have these in the classpath, your scenario will execute.
    Hope this helps.

  • CREATEBLOCKONEQ: calc performance issue.

    Hello Everyone,
We've been using one of the calcs, but it takes a lot of time to finish; it runs for almost a day. I can see that CREATEBLOCKONEQ is set to true for this calc. I understand that this setting works on sparse dimensions; however, ProjCountz (Accounts) and BegBalance (Period) are members of dense dimensions in our outline. One flaw that I see is that ProjCountz data sits in every scenario, while we only want it in one, so we will try to narrow the calc down to a single scenario. Other than that, do you see any major flaw in the calc?
It's delaying a lot of things. Any help is appreciated. Thanks in advance.
    /* Set the calculator cache. */
    SET CACHE HIGH;
    /* Turn off Intelligent Calculation. */
    SET UPDATECALC OFF;
    /* Make sure missing values DO aggregate*/
    SET AGGMISSG ON;
    /*Utilizing Parallel Calculation*/
    SET CALCPARALLEL 6;
    /*Utilizing Parallel Calculation Task Dimensions*/
    SET CALCTASKDIMS 1;
    /*STOPS EMPTY MEMBER SET*/
    SET EMPTYMEMBERSETS ON;
    SET CREATEBLOCKONEQ ON;
    SET LOCKBLOCK HIGH;
    FIX("Proj_Countz")
    clearblock all;
    ENDFIX;
    Fix(@Relative(Project,0), "BegBalance", "FY11")
    "Proj_Countz"
    "Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
    ENDFIX;
    Fix("Proj_Countz")
    AGG("Project");
    ENDFIX;

You are valuing a dense member (Proj_Countz) by dividing a dense member combination (Man-Months->YearTotal / Man-Months->YearTotal). There can be no block creation going on, as everything is in the block. CREATEBLOCKSONEQ isn't coming into play and isn't needed.
The code is making three passes through the database.
Pass #1 -- It touches every block in the db. This is going to be expensive.
FIX("Proj_Countz")
clearblock all;
ENDFIX;
Pass #2:
Fix(@Relative(Project,0), "BegBalance", "FY11")
"Proj_Countz"
"Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
ENDFIX;
Pass #3 -- It's calcing more than FY11. Why?
Fix("Proj_Countz")
AGG("Project");
ENDFIX;
Why not try this:
FIX("FY11", "BegBalance", @LEVMBRS(whateverotherdimensionsyouhave))
Fix(@Relative(Project,0))
"Proj_Countz"
"Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
ENDFIX
AGG(Project,whateverotherdimensionsyouhave) ;
ENDFIX
The clear of Proj_Countz is pointless unless Man-Months gets deleted. Actually, even if it does, Essbase should do a #Missing/#Missing and zap the value. The block will exist if Proj_Countz is valued, the cells (MM and YT) will be there, and that will clear the PC value.
I would also look at the parallelism of your calculation -- I don't think you're getting any with one task dimension.
    Regards,
    Cameron Lackpour

  • Retrieve imported and validated Entities for further ESSBASE calc Script

    Hi folks,
    once the FDM processing is finished:
    The Event Script AftConsolidate is executed.
It retrieves all unique Entity entries (trialbalance command), Period (POV), Scenario (POV), etc., and builds a dynamic ESSBASE calc script command which is then executed to ensure that even the leaf members are correctly transferred to ESSBASE and the nodes are refreshed/aggregated as well.
    This works perfectly ;-)
MY ISSUE:
I want to clone this logic into a custom web script which can then be executed ad hoc via the web frontend / Task Flow.
I tried to copy the AftConsolidate script into this custom web script. Unfortunately, I get an error saying DATA ACCESS ERROR.
My assumption is that the trialbalance command does not work with custom web scripts.
Is that right? Are there any workarounds for retrieving the entity dimension from a custom web script and storing the unique entity entries in an array?
    regards
    Hau

You don't need a custom script. FDM has functionality to call the consolidate action only; check the Activities menu.

  • Essbase Cube Performance

    Hi,
We have migrated our Essbase applications and server to a new server, but the cube build is taking almost double the time compared to the old server, even though the new server configuration is very powerful (processor - 3 GHz, RAM - 8 GB, disk space - 380 GB).
All the settings - application, database, configuration - are the same on both servers. Calculation is also taking double the time.
Kindly tell me what the reasons for this could be.
    Atul,

    Hi Atul,
when you add a new member to a dense dimension, the cube undergoes a dense (full) restructure.
Information from the DBAG:
To perform a full restructure, Analytic Services does the following:
1. Creates temporary files that are copies of the .ind, .pag, .otl, .esm, and .tct files. Each temporary file substitutes either N or U for the last character of the file extension, so the temporary file names are dbname.inn, essxxxxx.inn, essxxxxx.pan, dbname.otn, dbname.esn, and dbname.tcu.
2. Reads the blocks from the database files copied in step 1, restructures the blocks in memory, and then stores them in the new temporary files. This step takes the most time.
3. Removes the database files copied in step 1, including the .ind, .pag, .otl, .esm, and .tct files.
Hope this helps

  • Essbase MDX Performance

    Hello all,
I am developing reports with OBIEE 10.1.3.4 sitting on top of Essbase, and I have been running into issues lately. The MDX generated by OBIEE is causing performance problems, especially in the reports where we use the Rank function. I am aware of workarounds (Christian's blogs) that get there by filtering directly in the MDX in the case of the Rank function. But I believe there is some patch available which improves MDX performance as a whole, so I just wanted to know if anyone out there has had success with any of those patches related to MDX performance. Also let me know if it does some sort of tuning to the MDX query.
    Thanks
    Prash
    Edited by: user10283076 on Apr 20, 2009 3:20 PM
    Edited by: user10283076 on Apr 20, 2009 3:58 PM

    Hi Prash,
nope, nothing yet. 10.1.3.4.1 is just around the corner, but from what I hear, it's not the "quantum leap" needed in the OBIEE/Essbase integration. That's definitely only going to come with OBIEE 11.
Influencing the MDX generated by OBIEE isn't an easy task, unless you go and replace every column with an EVALUATE function. And that's beside the point of OBIEE.
One alternative you may want to consider: use BIP to create your reports by writing pure MDX, and then display the BIP reports on the dashboards.
    Cheers,
    Christi@n

  • Calc Performance 5.0.2 vs 6.5

We are upgrading from 5.0.2p11 to 6.5. Before the conversion I ran 'CALC ALL' on the database, which took around 3 hr 15 min. Keeping the same settings and data, on version 6.5 the run time is 38 hrs. The only exception is that I have added CALCPARALLEL 3 in the Essbase.cfg file. Can anyone tell me what I should look into?
Settings I have looked at:
Index cache is 120 MB
Data cache is 200 MB
Buffered I/O
Calc cache is 200 KB
I am running this on an NT server with 3 GB RAM and 4 processors.
Thank you,
Mrunal

If you add the line 'SET MSG SUMMARY;' or 'SET MSG INFO;' before the 'CALC ALL;' and run the calc using ESSCMD, you will see a lot of useful information, such as whether parallel calc is working, on how many dimensions, and how many empty tasks there were. This information is also written to the application log file. Parallel calc won't work if the outline contains complex formulae, and it isn't beneficial if there are a large number of empty tasks. I've found it better to take it out of the essbase.cfg file; then, if I need it, I add 'SET CALCPARALLEL 3;' to the calc scripts. However, that shouldn't explain your massive increase in calc times.
If you have 3 GB of RAM, then try increasing your data cache (try 1 GB). Add 'SET CACHE ALL;' to the calc script. Check your block density and block sizes under database/information. There could be any number of things that affect your calc time.
I have also found that when upgrading, re-creating the index helps: export input data, clear the data, unload the application, delete the (app).ind file, delete the (app).esm and (app).tct files as well, load the application, load the data, and calculate. This has helped us improve database stability.
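
Combining the diagnostic and cache advice above into one script might look like this (the cache and thread values are the ones suggested in this thread, not universal settings):

```
/* Diagnostics + tuning sketch for the upgraded 6.5 database */
SET MSG SUMMARY;      /* report parallel-calc task details in the app log */
SET CACHE ALL;        /* use the calculator cache as sized in essbase.cfg */
SET CALCPARALLEL 3;   /* set here rather than globally in essbase.cfg */
CALC ALL;
```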

  • Restrict Planning Admin from seeing all Essbase calc scripts

Hi - I have a few people who are administrators in Planning, and they can see all of my calc scripts in Essbase. Is there a way to block them from seeing them? I use those calc scripts to run things that they do not need to run.
    I am using Planning 3.5.1, Essbase 6.5
    Thanks,
    Cindy

    Hi Cindy,
I assume the calc scripts you want to block from the Planning admins are within the applications they administer, correct?
The only way I can think to do this is to place the scripts in another directory on the server, or on a share you have access to, and use ESSCMD scripts to execute them as a batch job, e.g.:
RUNCALC 3 C:\SERVER_DIR\mycalcs\calc
This presumes the directory is on your Essbase server and you schedule the job there. It also assumes your Planning admins do not have access to your Essbase server. Of course, if this were Unix, the path to the script would be different.
You could also keep the scripts only on your local machine.
    Regards,
    John A. Booth
    http://www.metavero.com
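
A minimal ESSCMD batch sketch of that approach (the server name, credentials, and app/db names below are placeholders, and the script path is the one from the example above):

```
LOGIN "essserver" "admin" "password";
SELECT "PlanApp" "PlanDb";
RUNCALC 3 "C:\SERVER_DIR\mycalcs\calc";
EXIT;
```

With RUNCALC mode 3, the calc script is read from the file path given rather than from the database's calc script objects, so it never appears in the application where the Planning admins could see it.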
