Essbase inconsistent calc performance

Edited by: user610131 on Aug 23, 2012 11:32 AM

Similar Messages

  • Essbase Agg Calc script Running Inconsistently

    Hi All,
    We are seeing inconsistent completion times for one of our calc scripts that simply aggregates a single Entity dimension. It runs periodically throughout the day on an already aggregated database. The normal completion time is 20 minutes, but we have observed that some runs can take up to 7 hours. The issue persists even when there are no users in the system. We had the SAN and the Essbase server monitored while running this calc, but no issues were found on either end. In the Essbase log, it appears that Essbase sits idle for a period of time while the calc is running. Has anyone experienced this before?
    ------------------------Calc script -------------------
    SET CACHE HIGH;
    SET LOCKBLOCK HIGH;
    SET MSG SUMMARY;
    SET NOTICE LOW;
    SET UPDATECALC OFF;
    SET AGGMISSG ON;
    SET CALCPARALLEL 4;
    SET CALCTASKDIMS 4;
    /* Baseline fix. */
    FIX (@RELATIVE("YearTotal",0), @RELATIVE("ACCountInc",0), @RELATIVE("AccountLine",0), @RELATIVE("AccountOther",0), "FY02", "Working")
         Agg("Entity") ;
    ENDFIX
    This has become a major issue, so your input will really help us.
    Thanks in advance

    There are a couple of unknowns here, but here are a few tips:
    1. Run the script in MaxL and make sure that you log users out and kill all existing app processes first (see the MaxL sketch after these tips). Even if this isn't doable in the long run, you want to test it this way to see whether the result is consistent. A lot of the time your process is simply waiting for other online processes to finish.
    2. Fragmentation of the BSO cube could be the cause. If you defragment the cube and the first run is fast but the second run is slow, then you have created a lot of blocks that shouldn't be there; that's your problem, and you need to tune the way you agg.
    3. Check the Essbase database statistics, especially the average clustering ratio, which should be close to 1.
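    For instance, a minimal MaxL sketch along the lines of tip 1 (the server, application, database, and calc script names are placeholders, not from this thread):
    login admin 'password' on essbase_server;
    alter application MyApp disable connects;
    alter system logout session on application MyApp;
    execute calculation MyApp.MyDb.AggEnt;
    alter application MyApp enable connects;
    logout;
    If the elapsed time is consistent run after run under those conditions, the variance is coming from concurrent activity on the server rather than from the script itself.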
    Daniel Poon

  • Essbase calc performance

    Dears,
    I have one calculation script that consists entirely of DATACOPY statements (about 26,000 lines), and the script takes almost 3-4 hours to run for one month of a year.
    How can we improve the performance, either with DATACOPY or some other function? Any suggestions from the experts would be very useful.
    sample code.

    Calculation script run time can usually be reduced. Can you give us more information about the actual DATACOPY script that takes 3-4 hours? (A sample extract of the script would be nice.)
    Thanks,
    Sreekumar Hariharan

  • 64-bit Essbase version 11 performance compared to 32-bit

    Hi :)
    I am interested to hear whether anyone can tell me about performance experience with a 64-bit Essbase version 11.x.x installation compared to a 32-bit version 9.3 installation. I am currently working with BSO cubes and would like to know whether we could aggregate/calculate cubes faster by switching to a newer version and a 64-bit installation.
    BR
    Michael

    You can address up to eight threads in parallel calc in 64 bit versus the four in 32 bit.
    And you have much more efficient access to memory.
    I know I have read of some people seeing slower performance post the move to 64 bit, but that's the exception, not the rule.
    You may have to reconfigure the block size, the caches, and maybe even the calcs to get the best performance (it's a good opportunity to examine your applications in any case), but regardless of what you end up doing, moving to 64-bit should bring significant speed increases.
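    For example (a sketch only; the right numbers depend on your outline and hardware), in a calc script on the 64-bit box you could try:
    SET CALCPARALLEL 8;
    SET CALCTASKDIMS 2;
    or set it server-wide in essbase.cfg with CALCPARALLEL [appname [dbname]] 8, and then check the application log to confirm how many tasks Essbase actually ran in parallel.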
    Regards,
    Cameron Lackpour

  • FDM: Issue with launching Essbase agg calc script after FDM load completes

    Hi experts ,
    I am using an upsShell batch to run an FDM custom VB script to process a 12-month data file, and that part is running fine. All level 0 data is successfully loaded into Essbase.
    But now the problem is launching the agg calc script in Essbase. I tried two options but am having the following issues:
    1-
    If I put the agg script name in the validation entity and run the load Up-To-Consolidate, then FDM runs that agg script after each month's load (i.e., 12 times), but I just want to run the agg script once after the complete 12 months of data have been loaded from FDM to Essbase.
    Is there any way i can set Calc script to run after all data is loaded in Essbase ?
    2-
    If I call the Essbase batch (which calls MaxL to run the calc script) in the AftLoad event script, then again the script runs 12 times, once after each load. Can you suggest whether I can modify the VB code with an IF condition here (i.e., IF the period is 12 THEN call \\Essbase server\***\.Batch)? If possible, please provide sample code, as I am new to VB.
    Please suggest
    Thanks a lot !
    Vivek

    I guess you are using the Batch Loader from your custom script?
    Then you could use BatchAction to execute your MaxL batch when the batch processing finishes.
    That way you would have only one execution after your 12-period file is processed.
    I would suggest having a look at the "Batch-Load Single Multiload File (Up To Check) Process" section in the FDM API Guide.
    If your file is a multiload file, you could also use the MultiLoad action script.
    For example, in the multiload action script you could say something like:
    If Month(objLSItem.PstrTBPer) = 12 Then
    That code checks that the period being processed is December (assuming you load from Jan to Dec).
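    Fleshed out, a minimal VBScript sketch of that idea (the batch path here is hypothetical; point it at your own MaxL batch file):
    If Month(objLSItem.PstrTBPer) = 12 Then
        ' December period processed: fire the Essbase aggregation batch once
        Dim objShell
        Set objShell = CreateObject("WScript.Shell")
        ' Hypothetical path -- replace with the UNC path to your MaxL batch
        objShell.Run "\\EssbaseServer\Scripts\RunAggCalc.bat", 1, True
        Set objShell = Nothing
    End If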
    I hope that helps

  • Cannot see all the essbase dimensions when performed reverse engg.using ODI

    Hi,
    I performed the reverse-engineering process to get the dimensions from Essbase into the model. The first time, the session showed an error. When I right-clicked the session in the Operator and clicked restart, it ran successfully, but only the Accounts dimension got reversed. The rest of the dimensions are not seen in the model. Please suggest.

    I have to import my actuals data, which is in an Oracle database, into Planning. As I was told that ODI is not compatible with EPMA, I cannot load data into Planning using ODI. Since the data is stored in Essbase anyway, I want to use ODI to load the data into Essbase. For the metadata upload I am making use of interface tables.
    My problem is:
    1. Do I need a staging table to bring my actuals data into first, and then connect it to Essbase?
    2. If yes, what should the table format be, i.e., which columns?
    3. The user also wants this to be automated, and if there are any changes to the base tables (the tables with the actuals data in Oracle) they should be reflected in Essbase as well. So is there any lookup functionality in ODI?
    I am new to ODI so I have many doubts. Hope to see a reply soon.

  • CREATEBLOCKONEQ: calc performance issue.

    Hello Everyone,
    We've been using one of our calcs, but it takes a very long time to finish; it runs for almost a day. I can see that CREATEBLOCKONEQ is set to true for this calc. I understand that this setting applies to sparse dimensions; however, ProjCountz (Accounts) and BegBalance (Period) are members of dense dimensions in our outline. One flaw that I see is that ProjCountz data sits in every scenario, but we only want it in one scenario, so we will try to narrow the calc down to a single scenario. Other than that, do you see any major flaw in the calc?
    It's delaying a lot of things. Any help is appreciated. Thanks in advance.
    /* Set the calculator cache. */
    SET CACHE HIGH;
    /* Turn off Intelligent Calculation. */
    SET UPDATECALC OFF;
    /* Make sure missing values DO aggregate*/
    SET AGGMISSG ON;
    /*Utilizing Parallel Calculation*/
    SET CALCPARALLEL 6;
    /*Utilizing Parallel Calculation Task Dimensions*/
    SET CALCTASKDIMS 1;
    /*STOPS EMPTY MEMBER SET*/
    SET EMPTYMEMBERSETS ON;
    SET CREATEBLOCKONEQ ON;
    SET LOCKBLOCK HIGH;
    FIX("Proj_Countz")
    clearblock all;
    ENDFIX;
    Fix(@Relative(Project,0), "BegBalance", "FY11")
    "Proj_Countz"
    "Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
    ENDFIX;
    Fix("Proj_Countz")
    AGG("Project");
    ENDFIX;

    You are valuing a dense member (Proj_Countz) by dividing a dense member combination (Man-Months->YearTotal / Man-Months->YearTotal). There can be no block creation going on, as everything is in the block, so CREATEBLOCKONEQ isn't coming into play and isn't needed.
    The script is making three passes through the database.
    Pass #1 -- It touches every block in the database. This is going to be expensive.
    FIX("Proj_Countz")
    clearblock all;
    ENDFIX;
    Pass #2
    Fix(@Relative(Project,0), "BegBalance", "FY11")
    "Proj_Countz"
    "Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
    ENDFIX;
    Pass #3 -- It's calcing more than FY11. Why?
    Fix("Proj_Countz")
    AGG("Project");
    ENDFIX;
    Why not try this:
    FIX("FY11", "BegBalance", @LEVMBRS(whateverotherdimensionsyouhave))
    Fix(@Relative(Project,0))
    "Proj_Countz"
    "Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
    ENDFIX
    AGG(Project,whateverotherdimensionsyouhave) ;
    ENDFIX
    The clear of Proj_Countz is pointless unless Man-Months gets deleted. Actually, even if it does, Essbase should do a #Missing/#Missing and zap the value: the block will already exist if Proj_Countz is valued, the cells (Man-Months and YearTotal) will be there, and the Proj_Countz value will be cleared.
    I would also look at the parallelism of your calculation -- I don't think you're getting any with one taskdim.
    Regards,
    Cameron Lackpour

  • Calc Performance 5.0.2 vs 6.5

    We are upgrading from 5.0.2 patch 11 to 6.5. Before the conversion I ran a 'CALC ALL;' on the database, which took around 3 hours 15 minutes. Keeping the same settings and data, on version 6.5 the run time is 38 hours. The only exception is that I have added CALCPARALLEL 3 to the Essbase.cfg file. Can anyone tell me what I should look into? Settings I have checked: index cache is 120 MB, data cache is 200 MB, buffered I/O, calc cache is 200 KB. I am running this on an NT server with 3 GB of RAM and 4 processors. Thank you, Mrunal

    If you add the line 'SET MSG SUMMARY;' or 'SET MSG INFO;' before the 'CALC ALL;' and run the calc using ESSCMD, you will see a lot of useful information, such as whether parallel calc is working, on how many dimensions, and how many empty tasks there were. This information is also written to the application log file. Parallel calc won't work if the outline contains complex formulae, and it isn't beneficial if there are a large number of empty tasks.
    I've found it better to take CALCPARALLEL out of the essbase.cfg file and then, if I need it, add 'SET CALCPARALLEL 3;' to the calc scripts. However, that shouldn't explain your massive increase in calc times. If you have 3 GB of RAM then try increasing your data cache (try 1 GB), and add 'SET CACHE ALL;' to the calc script. Check your block density and block sizes under database/information. There could be any number of things that affect your calc time.
    I have also found that when upgrading, I re-create the index by exporting input data, clearing the data, unloading the application, deleting the (app).ind file as well as the (app).esm and (app).tct files, loading the application, loading the data, and calculating. This has helped us improve database stability.
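    Putting those suggestions together, a calc script along these lines (a sketch only; the cache and parallel settings need tuning for your own database) will at least show you what the calculator is doing:
    SET MSG SUMMARY;
    SET CACHE ALL;
    SET CALCPARALLEL 3;
    CALC ALL;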

  • Essbase I/O performance

    I am with Cambex Corporation. We develop server memory and storage products. I have seen OLAP surveys and benchmarks which indicate that database load and query performance is an issue for some customers, and that in some cases loading the OLAP database onto a solid-state (RAM-based) disk has had a significant impact on performance. See for example:
    http://www.dmreview.com/master.cfm?NavID=198&EdID=4085
    http://www.4pointstech.com/pdf/scm-benchmark.pdf
    I would be very interested in your view on the subject. Specifically:
    - Do you have an application/configuration of Essbase which is "I/O bound"?
    - Are you, or do you know of, any Essbase customer who has used a solid-state disk to alleviate a performance issue?
    I am investigating the market for high-capacity, low-cost solid-state disks as a new product opportunity for Cambex and am focusing on OLAP as one of the potential opportunities.
    Thank you,
    Arnie Epstein
    Senior Technical Marketing Consultant
    Cambex Corporation
    [email protected], ext 229 (office), 978-852-5825 (cell)

    Hi Dana.Kadi,
    This issue is probably more about the Exchange server. This forum is not specific to Exchange-related issues; there is a dedicated Exchange forum, and I suggest you ask there.
    ESE (store.exe) will grow its cache to consume almost all available RAM on the server if there is no other memory pressure on the system. For example, if the server contains 16 GB of physical memory and there is no other memory pressure, one could expect the store.exe process to grow to use up to 14 GB of memory (16 GB minus the 2 GB allocated to kernel mode). If you disable the ESE cache, the workload that was served from the RAM cache will instead generate more disk I/O.
    For the Exchange question, please post to the Exchange forum.
    http://social.technet.microsoft.com/Forums/en-us/home?filter=alltypes&sort=lastpostdesc&brandIgnore=true
    More information:
    Understanding the Mailbox Database Cache
    http://technet.microsoft.com/en-us/library/ee832793(v=exchg.141).aspx
    Thanks for your understanding and support

  • Calc performance on new machine

    We are migrating from 5.02 patch 13a to 6.5 over the next month. As part of this migration we have moved to a new Unix server with faster processors; the move also changes the OS from HP-UX 11.0 to 11i. We are seeing calc times INCREASE dramatically, with interesting memory usage (memory leak?). Calcs which previously ran in 1 hour have taken 11 hours to complete. The FIRST time the calc was run on the new machine, it processed in roughly equal time; then it gets progressively worse. Restarting the server seems to have no impact. We have "mirrored" the servers, as they are the same class of HP machines; the only changes are a faster chip and the 11i OS. Any thoughts? Thanks in advance.
    Glen Moser
    Consultant

    Just move the iPhoto Library. There are more details in iPhoto: Move your iPhoto library to a new location

  • Essbase MDX Query Performance Problem

    Hello,
    I'm running an analysis in OBIEE against Essbase cubes, but I don't know why OBIEE generates two MDX queries against Essbase. The first one returns in a reasonable time (5 minutes), but the second one never returns.
    With
    set [_Year] as '[Year].Generations(2).members'
    set [_Month] as '[Mês Caixa].Generations(2).members'
    set [_Product2] as 'Filter([Product].Generations(2).members, (([Product].CurrentMember.MEMBER_Name = "SPECIAL" OR [Product].CurrentMember.MEMBER_ALIAS = "SPECIAL") OR ([Product].CurrentMember.MEMBER_Name = "EXECUTIVE" OR [Product].CurrentMember.MEMBER_ALIAS = "EXECUTIVE")))'
    set [_Client Name] as 'Filter([Client Name].Generations(2).members, (([Client Name].CurrentMember.MEMBER_Name = "JOHN DOE" OR [Client Name].CurrentMember.MEMBER_ALIAS = "JOHN DOE")))'
    set [_Service Name] as 'Generate([Service Name].Generations(2).members, Descendants([Service Name].currentmember, [Service Name].Generations(4), leaves))'
    select
    { [Accounts].[Paid Amount]
    } on columns,
    NON EMPTY {crossjoin({[_Year]},crossjoin({[_Month]},crossjoin({[_Product2]},crossjoin({[_Client Name]},{[_Service Name]}))))} properties MEMBER_NAME, GEN_NUMBER, [Year].[MEMBER_UNIQUE_NAME], [Year].[Memnor], [Mês Caixa].[MEMBER_UNIQUE_NAME], [Mês Caixa].[Memnor], [Product].[MEMBER_UNIQUE_NAME], [Product].[Memnor], [Client Name].[MEMBER_UNIQUE_NAME], [Client Name].[Memnor], [Service Name].[Member_Alias] on rows
    from [cli.Client]
    With
    set [_Year] as '[Year].Generations(2).members'
    set [_Month] as '[Mês Caixa].Generations(2).members'
    set [_Product2] as 'Filter([Product].Generations(2).members, (([Product].CurrentMember.MEMBER_Name = "SPECIAL" OR [Product].CurrentMember.MEMBER_ALIAS = "SPECIAL") OR ([Product].CurrentMember.MEMBER_Name = "EXECUTIVE" OR [Product].CurrentMember.MEMBER_ALIAS = "EXECUTIVE")))'
    set [_Client Name] as 'Filter([Client Name].Generations(2).members, (([Client Name].CurrentMember.MEMBER_Name = "JOHN DOE" OR [Client Name].CurrentMember.MEMBER_ALIAS = "JOHN DOE")))'
    set [_Service Name] as 'Generate([Service Name].Generations(2).members, Descendants([Service Name].currentmember, [Service Name].Generations(4), leaves))'
    member [Accounts].[_MSCM1] as 'AGGREGATE({[_Product2]}, [Accounts].[Paid Amount])'
    select
    { [Accounts].[_MSCM1]
    } on columns,
    NON EMPTY {crossjoin({[_Year]},crossjoin({[_Month]},crossjoin({[_Client Name]},{[_Service Name]})))} properties MEMBER_NAME, GEN_NUMBER, [Year].[MEMBER_UNIQUE_NAME], [Mês Caixa].[MEMBER_UNIQUE_NAME], [Client Name].[MEMBER_UNIQUE_NAME], [Service Name].[Member_Alias] on rows
    from [cli.Client]
    Does anyone know why OBIEE generates these two queries, and how to optimize them, given that they are generated automatically by OBIEE?
    Thanks,

    Hi,
    I have been through the queries and understand that "_MSCM1" is being aggregated across Product on Paid Amount, from the query extract below:
    member [Accounts].[_MSCM1] as 'AGGREGATE({[_Product2]}, [Accounts].[Paid Amount])'
    If I am reading it right, there is an aggregation rule missing for [Paid Amount]; I think that is why the query has to aggregate _MSCM1 over "Paid Amount" itself, i.e., just like any other dimension.
    Could you please check this? That is also why I think BI is generating two queries. I am sorry if I got this wrong.
    Hope this helps.
    Thank you,
    Dhar

  • Inconsistent datastore performance

    Hi all
    Hope someone can help, ran into a bit of a wall here. We're running ESXi 5.1 on an HP Proliant DL120 G7 and Intel Cougar Point 6 SATA controller, with 4 identical WD 2TB drives.
    We've got 4 datastores configured, one on each of the 4 WD drives.
    The problem is that uploading say a 2GB file via vSphere's datastore browser takes incredibly long (at least 20 minutes) on 3 of the 4 datastores, with the odd one out completing in under 2 minutes. I cannot for the life of me figure out what exactly is different between the datastores, especially considering that they're all using what should be identical backend hardware.
    Just as a test, I'm seeing the same behaviour when I SSH to the ESXi host and copy something from one datastore to another - 3 of 4 datastores return write speeds of just over 9MB/s.
    Reading up on the Cougar Point 6 controller, though, apparently it has two 6 Gb/s ports and four 3 Gb/s ports, but there doesn't seem to be a lot of easily accessible info on it. I do not have the output immediately accessible, but all four disks are set at 3 Gb/s, so I doubt that could have anything to do with it.
    Apparently there was/is also a hardware bug with the B2 stepping version of this particular controller, though I have no idea how to identify whether this controller comes from the faulty batch.
    What I'm more interested in at this stage is whether there's a recommended way of identifying where the bottleneck is via ESXi. I'm baffled as to why one datastore's performance would be great while the others' is not so much.
    Any help appreciated.

    This is probably more a question for Intel or on an Intel forum instead of here, but here's the relevant output from esxcli hardware pci list:
    000:000:1f.2
       Address: 000:000:1f.2
       Segment: 0x0000
       Bus: 0x00
       Slot: 0x1f
       Function: 0x02
       VMkernel Name: vmhba0
       Vendor Name: Intel Corporation
       Device Name: Cougar Point 6 port SATA AHCI Controller
       Configured Owner: Unknown
       Current Owner: VMkernel
       Vendor ID: 0x8086
       Device ID: 0x1c02
       SubVendor ID: 0x103c
       SubDevice ID: 0x330d
       Device Class: 0x0106
       Device Class Name: SATA controller
       Programming Interface: 0x01
       Revision ID: 0x05
       Interrupt Line: 0x0a
       IRQ: 10
       Interrupt Vector: 0x98
       PCI Pin: 0x33
       Spawned Bus: 0x00
       Flags: 0x0201
       Module ID: 65
       Module Name: ahci
       Chassis: 0
       Physical Slot: 255
       Slot Description:
       Passthru Capable: false
       Parent Device:
       Dependent Device: PCI 0:0:31:2
       Reset Method: Function reset
       FPT Sharable: false
    Apparently the B2 stepping model of this type of controller was the faulty version. According to Intel, the B2 stepping model has a revision ID of 04h, with 05h being the B3 and therefore the working model. What I can't figure out is whether "Revision ID: 0x05" == 05h. I assume it probably is.
    If it is, then I'm probably barking up the wrong tree in suspecting the AHCI controller of being the root cause.
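    As for narrowing down the bottleneck from the ESXi side, the usual approach (just a sketch of the workflow, not a definitive procedure) is to watch latency in esxtop over SSH while repeating the copy test:
    # from an SSH session on the host
    esxtop          # press 'd' for the disk adapter view, 'u' for the per-device view
    # compare DAVG/cmd (device/array latency) and KAVG/cmd (VMkernel latency)
    # across the four disks while copying the same file to each datastore
    A large DAVG/cmd on the three slow disks would point at the disks or controller port rather than at anything in the VMkernel.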

  • Inconsistent query performance

    my query is this:
    select hz, hz_time from freq_logger where hz_time = (select max(hz_time) from freq_logger)
    hz is a float; hz_time is a timestamp based on the time the row is inserted. After about a million rows were inserted I started noticing slowdown in the app that uses this query. I went to Toad and the query executes in 125-200 ms, much more than the ~70 ms it needs to be at for the app to run smoothly. However, in SQL*Plus I got around 70 ms. Then I noticed that another PC was running this exact app without any noticeable slowdown. So I logged the query execution time within the app: on my PC it was around 200 ms, while the other PC was at 70 ms. Any ideas what could be causing this performance difference?
    thanks

    No
    Your watch is an inadequate analysis instrument. Moreover, you don't adhere to the following two posts:
    When your query takes too long ...
    and
    HOW TO: Post a SQL statement tuning request - template posting
    so this cannot be worked on.
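    For reference, the minimum those posts ask for is the execution plan and timing from both machines, e.g. (a sketch using the table from your question):
    SET TIMING ON
    EXPLAIN PLAN FOR
      SELECT hz, hz_time
      FROM   freq_logger
      WHERE  hz_time = (SELECT MAX(hz_time) FROM freq_logger);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Run that on both PCs and post the output, as the templates ask.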
    Sybrand Bakker
    Senior Oracle DBA

  • Inconsistent Compiz performance

    Hi all,
    I run both Ubuntu Hardy and Arch, and Compiz runs quite smoothly in the former (but I use it less, so this might not be a valid comparison). I noticed, however, that when I refresh Compiz in my Arch installation (start fusion-icon, enable/disable loose binding, etc.), animation performance improves, but it soon regresses to a very choppy state. The loose-binding/indirect-rendering options don't matter; performance always improves when Compiz is refreshed, and the regression cannot be prevented. I am using an Nvidia 8400 GS card.
    I wonder if there's a way to keep the performance as if Compiz had always just been refreshed? Is it the inner workings of Compiz or my Nvidia card that leads to the performance regression? The choppiness really bugs me, especially when I know that my card can perform.

    1. Identical execution plans are what caused the problem!
    Imagine you have a table with 50,000 employees and 10 managers, you have a nice index on job type, and a query that selects the managers.
    The execution plan will use the index and will run very fast.
    Now add another 50,000 employees (acquisition!) and 10,000 managers.
    If you didn't recollect statistics, the optimizer will assume that you still have 10 managers and will still use the index (same plan!), but selecting 10,010 managers through the index is much slower than selecting 10 managers through the index, so: same plan, bad performance.
    After you recollect stats, the optimizer will know that you have 10,010 managers and will do a full table scan to get them, which will be much faster than getting them through the index. So collecting stats leads to a better plan and better performance.
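    For example, a minimal sketch of recollecting statistics on one table (the schema and table names here are illustrative, not from the thread):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');
    END;
    /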

  • Inconsistent YTD Performance in EVDRE Reports

    Greetings,
    Currently using BPC 5.1 SP5. I have a report that has nested columns with measures as a dimension. On the rows there is a single dimension. This EVDRE works; I am able to toggle between periodic and YTD.
    I have created a report similar to the one described above. The difference is that I have nested multiple dimensions in the rows. This EVDRE only works for periodic: if I set the measures dimension to YTD, the report runs for an inordinate amount of time and I must terminate the application.
    Any ideas?
    Regards,
    Greg Lakin

    I suppose your application is periodic, and probably in your report you are also using some members with dimension formulas.
    If this is the case then it is clear why your report takes forever.
    You can check what kind of MDX query is generated, and that will explain why it is taking so long.
    Anyway, my expectation is that you are indeed touching some members with dimension formulas, and that this is causing the main issue.
    In any case, to be able to provide suggestions we need more information about the report.
    Regards
    Sorin Radulescu
