Calculator Cache.

Jay Ganesh. I have a few performance issues. I have not enabled the calculator cache or the dynamic calc cache. I am using Essbase 6.5.3, and the caches are set roughly as follows:
Index Cache = index file size
Data File Cache = data file size
Data Cache = 0.125 * data file cache
I/O = Buffered
Will modifying the above sizes and enabling the calc caches help improve performance? Please help. Thanks in advance. Cheers.

Your data file cache will not be used if you are using buffered I/O; it is only used with direct I/O.
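For reference, a minimal essbase.cfg sketch of the direct I/O route (the sizes are illustrative placeholders, not recommendations, and depending on the release the I/O mode may instead be set per database in the administration tools):

```
; essbase.cfg -- illustrative sketch only
DIRECTIO TRUE          ; make direct I/O the default, so the data file cache is actually used
CALCCACHE TRUE         ; allow the calculator cache bitmap to be used
CALCCACHEHIGH 1000000  ; bytes made available when a calc script issues SET CACHE HIGH;
```

With buffered I/O, leave the data file cache alone and size the index cache and data cache instead.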

Similar Messages

  • Is calculator cache used if there are no blocks created?

    Hi experts,
    In DBAG it is written -
    Essbase can create a bitmap, whose size is controlled by the size of the calculator cache, to record and track data blocks during a calculation. Determining which blocks exist using the bitmap is faster than accessing the disk to obtain the information, particularly if calculating a database for the first time or calculating a database when the data is sparse.
    If my calculation does not create any block (it is only dense calculation), but it reads from different blocks on the right hand side of assignment (using cross dim) then is calculator cache used? Is it better to turn off cache in the calculation?
    ~Debashis

    The very next lines in the DBAG entry you quoted from are...
    Essbase uses the calculator cache bitmap if the database has at least two sparse dimensions and either of these conditions is also met:
    You calculate at least one full sparse dimension.
    You specify the SET CACHE ALL command in a calculation script.
    So I would assume the answer is 'no', unless your dense-only calculation also contains 'SET CACHE ALL'.
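    So a dense-only calculation would only touch the bitmap if it asks for it explicitly. A minimal illustrative script (the member names are placeholders, not from the original post):

    ```
    SET CACHE ALL;   /* use the calculator cache even though no full sparse dim is calculated */
    FIX ("Actual")
        "Variance" = "Actual" - "Budget";   /* dense-only assignment across existing blocks */
    ENDFIX
    ```

    Whether the cache helps such a calc is worth testing; if no blocks are created, turning it off may cost nothing.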

  • Performance issue - Loading and Calculating

    Hi,
    I have 5 GB of data. It takes 1 hr to load and 30 min to calculate. I did the following things to improve performance:
    1) Sorted the data and loaded it in the order of largest sparse dimension first, followed by smallest sparse and then dense.
    2) Enabled parallel load, with 6 threads for preparing and 4 for writing.
    3) Increased the data file cache to 400 MB, the data cache to 50 MB, and the index cache to 100 MB.
    4) Calculated only 4 dimensions out of 9; of those, 2 are dense and 2 are sparse.
    5) Used parallel calculation with 3 threads and CALCTASKDIMS set to 2.
    But I am not getting any improvement. While doing the calculation I got the following messages in the logs; I feel that CALCTASKDIMS is not working:
    [Fri Jan  6 22:01:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012679)
    Calculation task schedule [2870,173,33,10,4,1]
    [Fri Jan  6 22:01:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012680)
    Parallelizing using [1] task dimensions. Usage of Calculator cache caused reduction in task dimensions
    [Fri Jan  6 22:33:54 2006]Local/tcm2006/tcm2006/biraprd/Info(1012681)
    Empty tasks [2434,115,24,10,2,0]
    Can anyone explain what the above log messages mean, and what else can be done to improve performance?
    Regards,
    prsan

    It is not a problem with your CALCTASKDIMS setting.
    Calculation task schedule [2870,173,33,10,4,1] indicates that your parallel calc can start with 2870 calculations in parallel, after which 173 can be performed in parallel, then 33, 10, 4, and 1.
    Empty tasks [2434,115,24,10,2,0] means that many tasks need no calculation, either because there is no data or because the blocks are marked clean by intelligent calc.
    The problem lies with your calculator cache setting. Try increasing the calculator cache settings in your cfg file and use the high cache setting in your calc script.
    Hope this works.
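    As a sketch of that suggestion (the byte value is an illustrative placeholder, not a recommendation):

    ```
    ; essbase.cfg -- illustrative size only
    CALCCACHE TRUE
    CALCCACHEHIGH 200000000   ; bytes made available by SET CACHE HIGH
    ```

    Then put SET CACHE HIGH; at the top of the calc script. The log line "Usage of Calculator cache caused reduction in task dimensions" is exactly what this change is trying to address.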

  • Cache size in cfg file & design aggregations

    Hi all,
    I am working with ASO on version 11. We are in production and hitting many bugs.
    Can anyone please help? When users generate reports, many intersections take a long time to retrieve, and some reports return errors in OBI through Essbase.
    The documentation I am referring to says:
    sets a calculator cache of up to 1,000,000 bytes for the duration of the calculation script
    sets a calculator cache of up to 200,000 bytes for the duration of the calculation script
    sets a calculator cache of 200,000 bytes to be used even when you do not calculate at least one full sparse dimension.
    But what I put in the essbase.cfg file is:
    CALCCACHE TRUE
    CALCCACHEDEFAULT 75000000
    CALCCACHEHIGH 199000000
    DYNCALCMAXSIZE 30M
    CALCLOCKBLOCK
    DYNCALCCACHE
    Please let me know the maximum sizes I can use for the above settings to get fast report retrieval.
    Also, in the application properties I set the pending cache size to 512 MB; is that a problem for retrieval?
    Any suggestions greatly appreciated.
    Edited by: user8815661 on 7 Apr. 2010 01:51

    I am doing design aggregation, but not with the default option; I am using the second option: design, materialize and save the aggregation.
    How can I do this using MaxL?
    The MaxL I am using is
    execute aggregate process on database
    but when you use the second option you have to save and mention the size, so I think this MaxL will not work for the second option.
    Please can anyone help with this?
    Edited by: user8815661 on 7 Apr. 2010 02:00

  • Remote bitmap cache is disabled

    Hi, I noticed this statement in the log file:
    "Remote bitmap cache is disabled"
    What does it mean?
    Does it mean that the calculation is not using the calculator cache?

    I get this message a lot. It is most likely to occur when the server does not respond within a system-given time; when I stress the server and try to calculate a database, it may occur.
    It also happened when I had a bad member calculation set and the database simply crashed while trying to calculate the member.
    It also occurs when the server is shutting down services for a back-up routine, something I experience a lot when calculating overnight.

  • End of file reached error

    Coherence 3.5
    We are getting following exception -
    Caused by: com.tangosol.io.lh.LHIOException: /local/home/beuser/cachestore/lh03861950282677193497~, Primary file, Reading, Group 25253, Frame 25253: End of file reached.
         at com.tangosol.io.lh.LHSubs.ReadFrame(LHSubs.java:2173)
         at com.tangosol.io.lh.LHSubs.GetPrimaryFrame(LHSubs.java:1583)
         at com.tangosol.io.lh.LHSubs.SetupKeyId(LHSubs.java:997)
         at com.tangosol.io.lh.LHSubs.FindKeyId(LHSubs.java:915)
         at com.tangosol.io.lh.JLHFile.updateRecord(JLHFile.java:433)
         at com.tangosol.io.lh.JLHFile.writeRecord(JLHFile.java:616)
         at com.tangosol.io.lh.LHBinaryStore.store(LHBinaryStore.java:128)
         at com.tangosol.net.cache.SerializationMap.putAll(SerializationMap.java:201)
         at com.tangosol.net.cache.OverflowMap.putOne(OverflowMap.java:3086)
         at com.tangosol.net.cache.OverflowMap.processFrontEvent(OverflowMap.java:2592)
         at com.tangosol.net.cache.OverflowMap.processEvent(OverflowMap.java:2448)
         at com.tangosol.net.cache.OverflowMap.prepareStatus(OverflowMap.java:2191)
         at com.tangosol.net.cache.OverflowMap.processDeferredEvents(OverflowMap.java:2763)
         at com.tangosol.net.cache.OverflowMap.evict(OverflowMap.java:3121)
         at com.tangosol.net.cache.OverflowMap.size(OverflowMap.java:390)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onSizeRequest(DistributedCache.CDB:24)
         ... 6 more
    Any pointers resolving this will be appreciated.
    Thanks.

    Here is the cache config -
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>RECORD</cache-name>
                   <scheme-name>example-distributed</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>cache-size-limit</param-name>
                             <param-value system-property="record.cache.size.limit">98566144</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>COUNTERS</cache-name>
                   <scheme-name>example-no-expiry</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>cache-unit-type</param-name>
                             <param-value system-property="counters.cache.unit.type">FIXED</param-value>
                        </init-param>
                        <init-param>
                             <param-name>cache-size-limit</param-name>
                             <param-value system-property="counters.cache.size.limit">100</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>RECORDKEYLIST</cache-name>
                   <scheme-name>example-distributed-with-backup</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>cache-size-limit</param-name>
                             <param-value system-property="recordkeylist.cache.size.limit">25165824</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>RECORDITEM</cache-name>
                   <scheme-name>example-distributed-with-backup</scheme-name>
                   <init-params>
                        <init-param>
                             <param-name>cache-size-limit</param-name>
                             <param-value system-property="recorditem.cache.size.limit">25165824</param-value>
                        </init-param>
                   </init-params>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <!--Distributed caching scheme.-->
              <distributed-scheme>
                   <scheme-name>example-distributed</scheme-name>
                   <service-name>DistributedCache</service-name>
                   <thread-count>12</thread-count>
                   <backup-count>0</backup-count>
                   <backing-map-scheme>
                        <overflow-scheme>
                             <scheme-ref>example-overflow</scheme-ref>
                        </overflow-scheme>
                   </backing-map-scheme>
                   <backup-storage>
                        <type>scheme</type>
                        <scheme-name>example-overflow</scheme-name>
                   </backup-storage>
                   <autostart>true</autostart>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>example-no-expiry</scheme-name>
                   <service-name>DistributedCache-no-expiry</service-name>
                   <thread-count>12</thread-count>
                   <backup-count>0</backup-count>
                   <backing-map-scheme>
                        <overflow-scheme>
                             <scheme-ref>example-overflow-no-expiry</scheme-ref>
                        </overflow-scheme>
                   </backing-map-scheme>
                   <backup-storage>
                        <type>scheme</type>
                        <scheme-name>example-overflow-no-expiry</scheme-name>
                   </backup-storage>
                   <autostart>true</autostart>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>example-distributed-with-backup</scheme-name>
                   <service-name>DistributedCache-with-backup</service-name>
                   <thread-count>12</thread-count>
                   <backup-count>1</backup-count>
                   <backing-map-scheme>
                        <overflow-scheme>
                             <scheme-ref>example-overflow-with-backup</scheme-ref>
                        </overflow-scheme>
                   </backing-map-scheme>
                   <backup-storage>
                        <type>scheme</type>
                        <scheme-name>example-overflow-with-backup</scheme-name>
                   </backup-storage>
                   <autostart>true</autostart>
              </distributed-scheme>
              <!--
                   Backing map scheme definition used by all the caches that require
                   size limitation and/or expiry eviction policies.
              -->
              <local-scheme>
                   <scheme-name>example-backing-map</scheme-name>
                   <eviction-policy>HYBRID</eviction-policy>
                   <unit-calculator>{cache-unit-type BINARY}</unit-calculator>
                   <high-units>{cache-size-limit 10}</high-units>
                   <expiry-delay>2h</expiry-delay>
                   <flush-delay>5m</flush-delay>
                   <cachestore-scheme></cachestore-scheme>
              </local-scheme>
              <local-scheme>
                   <scheme-name>example-backing-map-no-expiry</scheme-name>
                   <eviction-policy>HYBRID</eviction-policy>
                   <expiry-delay>0</expiry-delay> <!--A value of zero implies no expiry. -->
                   <cachestore-scheme></cachestore-scheme>
              </local-scheme>
              <local-scheme>
                   <scheme-name>example-backing-map-with-delay</scheme-name>
                   <eviction-policy>HYBRID</eviction-policy>
                   <unit-calculator>{cache-unit-type BINARY}</unit-calculator>
                   <high-units>{cache-size-limit 10}</high-units>
                   <expiry-delay>24h</expiry-delay>
                   <flush-delay>5m</flush-delay>
                   <cachestore-scheme></cachestore-scheme>
              </local-scheme>
              <!--
                   Overflow caching scheme with example eviction local cache in the
                   front-tier and the example LH-based cache in the back-tier.
              -->
              <overflow-scheme>
                   <scheme-name>example-overflow</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <scheme-ref>example-backing-map</scheme-ref>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <external-scheme>
                             <scheme-ref>example-lh</scheme-ref>
                        </external-scheme>
                   </back-scheme>
              </overflow-scheme>
              <overflow-scheme>
                   <scheme-name>example-overflow-no-expiry</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <scheme-ref>example-backing-map-no-expiry</scheme-ref>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <external-scheme>
                             <scheme-ref>example-lh</scheme-ref>
                        </external-scheme>
                   </back-scheme>
              </overflow-scheme>
              <overflow-scheme>
                   <scheme-name>example-overflow-with-backup</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <scheme-ref>example-backing-map-with-delay</scheme-ref>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <external-scheme>
                             <scheme-ref>example-lh</scheme-ref>
                        </external-scheme>
                   </back-scheme>
              </overflow-scheme>
              <!--External caching scheme using LH.-->
              <external-scheme>
                   <scheme-name>example-lh</scheme-name>
                   <lh-file-manager>
                        <directory>/local/home/beuser/cachestore</directory>
                   </lh-file-manager>
              </external-scheme>
         </caching-schemes>
    </cache-config>

  • Substitution variable in Dynamic Calc

    Hi,
    We are using Essbase 9.3.0 on Windows and are seeing this behavior in our BSO cubes.
    When we use a substitution variable in a Scenario member with Dynamic Calc (not store) setting, after the first retrieve, if we change the value of the substitution variable, the subsequent retrieves do not generate updated results.
    I suspect that the value is cached in the Dynamic Calculator Cache, and for some reason does not track changes in Substitution Variables to know that the value must be re-calculated. Here is what I see in the Application log -
    [Mon Aug 09 10:31:51 2010]Local/App1/db1/user1/Info(1020055)
    Spreadsheet Extractor Elapsed Time : [0.032] seconds
    [Mon Aug 09 10:31:51 2010]Local/App1/db1/user1/Info(1020082)
    Spreadsheet Extractor Big Block Allocs -- Dyn.Calc.Cache : [4] non-Dyn.Calc.Cache : [0]
    This says that 4 blocks were used from the Dynamic Calc Cache, and none from outside it. Does this mean that existing blocks were read and not re-populated?
    If I make a change to the formula, wherein I hard code the value of the sub var and perform the retrieve, then the value is updated. Subsequent retrieves, after restoring the formula still returns the updated results.
    My question is, is this expected behavior? Or am I doing something /reading something wrong?
    Thanks,
    Andy

    When a substitution variable's value is changed, the application concerned has to be restarted before member formulas or calc scripts pick up the new value.
    - Krish
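    A sketch of restarting the application from MaxL (the application name is a placeholder):

    ```
    alter system unload application App1;
    alter system load application App1;
    ```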

  • Retrieval performance becomes poor with dynamic calc members with formulas

    We are facing a retrieval performance issue on our partition cube.
    It was fine before we applied member formulas to 4 of the measures and made them dynamic calc.
    Retrieval time has increased from 1 sec to 5 sec.
    Here is the main formula on a member, and all these members are dynamic calc (having member formula)
    IF (@ISCHILD ("YTD"))
    IF (@ISMBR("JAN_YTD") AND @ISMBR ("Normalised"))
    "Run Rate" =
    (@AVG(SKIPNONE, @LIST (@CURRMBR ("Year")->"JAN_MTD",
    @RANGE (@SHIFT(@CURRMBR ("Year"),-1, @LEVMBRS ("Year", 0)), @LIST("NOV_MTD","DEC_MTD")))) *
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period")))) + "04";
    ELSE
    IF (@ISMBR("FEB_YTD") AND @ISMBR ("Normalised"))
    "Run Rate" =
    (@AVG (SKIPNONE, @RANGE (@SHIFT(@CURRMBR ("Year"),-1, @LEVMBRS ("Year", 0)),"DEC_MTD"),
    @RANGE (@CURRMBR ("Year"), @LIST ("JAN_MTD", "FEB_MTD"))) *
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period")))) + "04";
    ELSE
    "Run Rate"
    =(@AVGRANGE(SKIPNONE,"Normalised Amount",@CURRMBRRANGE("Period",LEV,0,-14,-12))*
    @COUNT(SKIPNONE,@RSIBLINGS(@CURRMBR ("Period"))))
    + "Normalised"->"04";
    ENDIF;
    ENDIF;
    ELSE 0;
    ENDIF
    Period is dense
    Year is dense
    Measures (Normalised) is dense
    remaining dimensions are all sparse
    block size: 112 KB
    index cache: 10 MB
    retrieval buffer: 70 KB
    dynamic calculator cache max: 200 MB
    Please note that this is a partition cube, retrieving data from 2 ASO and 1 BSO underlying cubes.

    I received the following from Hyperion. I had the customer add the following line to their essbase.cfg file, and it improved their Analyzer retrieval time from 30 seconds to 0.4 seconds:
    CalcReuseDynCalcBlocks FALSE
    This is an undocumented setting (to be documented in Essbase v6.2.3). Here is a brief explanation from development: this setting turns off a method of reusing dynamically calculated values during retrievals. The method is on by default and can speed up retrievals that involve a large number of dynamically calculated blocks which are each required to compute several other blocks; this may happen when there is a big hierarchy of sparse dynamic calc members. However, a large dynamic calculator cache size or a large value of CALCLOCKBLOCK may adversely affect retrieval performance when this method is used. In such cases, turn the method off by setting CalcReuseDynCalcBlocks to FALSE in the essbase.cfg file. Only retrievals are affected by this setting.
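    As described above, the change is a single line in essbase.cfg, followed by an Essbase server restart so the setting takes effect:

    ```
    ; essbase.cfg
    CalcReuseDynCalcBlocks FALSE   ; turn off reuse of dyn-calc blocks during retrievals
    ```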

  • Set createnonmissingblk on/off issue

    EXAMPLE:
    four dimensions in outline:
    account (dense) (members: A1,A2)
    time(dense) (member: T1,T2)
    entity (sparse) (member: E1,E2)
    product(sparse) (member:P1,P2)
    I need to create blocks to enter data, with a business rule (BR) like this:
    FIX("E1")
      SET CREATENONMISSINGBLK ON;
      "P1"=1;
       CLEARDATA "P1";
      SET CREATENONMISSINGBLK OFF;
    ENDFIX;
    FIX("E1","A1","A2","T1","T2")
    "P1"=100;
    ENDFIX;
    QUESTION:
    "P1"=100 cannot be set to Essbase in the above BR, only if I delete the syntax:"SET CREATENONMISSINGBLK OFF;" then I can set 100 to "P1".
    WAHT I THINK:
    with the blow script,
    FIX("E1")
    SET CREATENONMISSINGBLK ON;
      "P1"=1;
       CLEARDATA "P1";
    I have already make the block P1->E1 an existing block, then my "SET CREATENONMISSINGBLK OFF;" does not effect the existing block. but why cannot me add "SET CREATENONMISSINGBLK OFF;"?? I don't understand.

    FIX("E1")
      SET CREATENONMISSINGBLK ON;
      "P1"=1;
       CLEARDATA "P1";
      SET CREATENONMISSINGBLK OFF;
    ENDFIX;
    It is because of CLEARDATA: here CLEARDATA is removing the block again!
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1012668)
    Calculating [ PEriod(P1)] with fixed members [Entity(E1)]
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1012672)
    Calculator Information Message:
    Maximum Number of Lock Blocks: [100] Blocks
    Completion Notice Messages: [Disabled]
    Calculations On Updated Blocks Only: [Enabled]
    Clear Update Status After Full Calculations: [Enabled]
    Calculator Cache: [Disabled]
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1012672)
    Calculator Information Message:
    Create Blocks on Equations: [Enabled]
    Create Non #Missing Blocks: [Enabled]
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1012677)
    Calculating in serial
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1012672)
    Calculator Information Message: Executing Block - [E1], [P1]
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1012672)
    Calculator Information Message:
    Total Block Created: [1.0000e+000] Blocks
    Sparse Calculations: [1.0000e+000] Writes and [0.0000e+000] Reads
    Dense Calculations: [0.0000e+000] Writes and [0.0000e+000] Reads
    Sparse Calculations: [0.0000e+000] Cells
    Dense Calculations: [0.0000e+000] Cells
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1012672)
    Calculator Information Message:
    Total #Missing Blocks Not Written Back: [0.0000e+000] Blocks
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1012555)
    Clearing data from [P1] partition with fixed members [Entity(E1)]
    [Tue Jun 11 03:23:08 2013]Local/testStel/as/hypadmin@Native Directory/4720/Info(1017018)
    Removed [1] data blocks
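    In other words, the block for E1->P1 is created but CLEARDATA immediately removes it again ("Removed [1] data blocks"), so by the time CREATENONMISSINGBLK is switched OFF there is no block left for the later assignment to land in. A sketch that keeps the block (note this seeds a 0 into cells the second FIX does not overwrite, so it is illustrative only):

    ```
    FIX("E1")
      SET CREATENONMISSINGBLK ON;
      "P1"=0;                      /* creates the block; no CLEARDATA afterwards */
      SET CREATENONMISSINGBLK OFF;
    ENDFIX;
    FIX("E1","A1","A2","T1","T2")
      "P1"=100;                    /* the block now exists, so the assignment sticks */
    ENDFIX;
    ```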
    Message was edited by: _RahulS_

  • Outline Order, Calc Script Performance, Substitution Variables

    Hi All,
    I am currently looking in to the performance side.
    This is mainly about the calculation script performance.
    There are a lot of questions in my mind and, as they say, you can get the results only by testing.
    1. Outline order should be from least sparse to most sparse
    (another reason: to accommodate as many sparse members as possible in the calculator cache). Correct me if I am wrong.
    2. Is the index entry created based on the outline order? For example, if my outline order is Scenarios, Products, Markets, will my index entry be like Scenario -> Products -> Markets?
    3. Does this order have to match the order of members in the FIX statement of a calculation script?
    4. I have 3 sparse dimensions: P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are mandatory in my calculation script. They are the first 3 parameters of the FIX statement, and since I am fixing on specific members, will placing these three dimensions as the first 3 sparse dimensions in the outline improve performance?
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically I think maybe it will, but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and this P, M, V combination is used in every FIX statement. Since my entire calculation is only on one P, one M, one V, can I put them in one FIX at the beginning and exclude them from the remaining FIX statements?
    5. I have a lot of cross-dimensional operations in my calc scripts for the accounts dimension (500+ members).
    Is there a way to reduce these?
    6. My cube statistics:
    Cube size: 80 GB+
    Block size: 18 KB (approx.)
    Block density: 0.03. This is what I am most worried about; it really hurts.
    This is one of the reasons why my calculation time is over 7 hours, and it is sometimes horrible when there is a huge amount of data (around 20+ hours).
    I look forward to your suggestions.
    It would be really appreciated if you could share your contact number so that I can get in touch with you. That would be a great help.

    I have provided some answers below:
    There are a lot of questions in my mind and, as they say, you can get the results only by testing.
    ----------------------------You are absolutely right here, but it helps to understand the underlying principles and best practices, as you seem to.
    1. Outline order should be from least sparse to most sparse
    (another reason: to accommodate as many sparse members as possible in the calculator cache). Correct me if I am wrong.
    ----------------------------This is one reason, but another is to manage disk I/O during calculations. Especially when performing the initial calculation of a cube, the order of sparse dimensions from smallest to largest will measurably affect your calc times. There is another consideration here, though. The smallest-to-largest (or least-to-most) sparse dimension argument assumes single-threading of the calculations. You can gain improvements in calc time by multi-threading, and Essbase will be able to make more effective use of multi-threading if the non-aggregating sparse dimensions are at the end of the outline.
    2. Is the index entry created based on the outline order? For example, if my outline order is Scenarios, Products, Markets, will my index entry be like Scenario -> Products -> Markets?
    ----------------------------Index entry, or block numbering, is indeed based on outline order. However, you do not have to put the members of a cross-dimensional expression in the same order.
    3. Does this order have to match the order of members in the FIX statement of a calculation script?
    ----------------------------No, it does not.
    4. I have 3 sparse dimensions: P (150 members), M (8 members), V (20 members).
    I use substitution variables for these three in the calculation script, and these three are mandatory in my calculation script. They are the first 3 parameters of the FIX statement, and since I am fixing on specific members, will placing these three dimensions as the first 3 sparse dimensions in the outline improve performance?
    --------------------------This will not necessarily improve performance in and of itself.
    In one way, I can say that a member from P, M, V becomes my key for the data.
    Theoretically I think maybe it will, but in practical terms I don't see any such thing. Correct me if my thinking is wrong.
    One more thing: I have a calc script with around 10 FIX statements, and this P, M, V combination is used in every FIX statement. Since my entire calculation is only on one P, one M, one V, can I put them in one FIX at the beginning and exclude them from the remaining FIX statements?
    --------------------------You would be well advised to do this and it would almost certainly improve performance. WARNING: There may be a reason for the multiple fix statements. Each fix statement is one pass on all of the blocks of the cube. If the calculation requires certain operations to happen before others, you may have to live with the multiple fix statements. A common example of this would be calculating totals in one pass and then allocating those totals in another pass. The allocation often cannot properly happen in one pass.
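    The consolidation described above can be sketched like this (all member and variable names are placeholders, not from the original post):

    ```
    /* One outer FIX on the sparse members common to every pass */
    FIX (&CurP, &CurM, &CurV)
        FIX (@RELATIVE("AccountsGroup1",0))    /* pass 1 */
            "X" = "A" + "B";
        ENDFIX
        FIX (@RELATIVE("AccountsGroup2",0))    /* pass 2 */
            "Y" = "X" * 2;
        ENDFIX
    ENDFIX
    ```

    Each inner FIX is still its own pass over the blocks selected by the outer FIX, but the common sparse restriction is stated once.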
    5. I have a lot of cross-dimensional operations in my calc scripts for the accounts dimension (500+ members).
    Is there a way to reduce these?
    -------------------------Without knowing more about the application, there is no way of knowing. Knowledge is power: you may want to look into taking the Calculate Databases class. It is a two-day class that could help you gain a better understanding of the underlying calculation principles of Essbase.
    6. My cube statistics:
    Cube size: 80 GB+
    Block size: 18 KB (approx.)
    Block density: 0.03. This is what I am most worried about; it really hurts.
    This is one of the reasons why my calculation time is over 7 hours, and it is sometimes horrible when there is a huge amount of data (around 20+ hours).
    ------------------------Your cube size is large and your block density is quite low, but there are too many other factors to consider to say that you should make changes based solely on these parameters. Too often we focus on block density and ignore other factors. (To use an analogy from current events, this would be like deciding which car to buy solely based on gas mileage. You could do that, but then how do you fit all four kids into the sub-compact you just bought?)
    Hope this helps.
    Brian

  • Force creation of a block

    Hi,
    I'm trying to force the creation of a block in a calc script. I just want a little subset of my script to create blocks to optimize the calculation time.
    So I tried using
    SET CREATEBLOCKONEQ ON;
    <script>
    SET CREATEBLOCKONEQ OFF;
    It didn't work so I tried
    SET CREATENONMISSINGBLK ON;
    <Script>
    SET CREATENONMISSINGBLK OFF;
    But it didn't work either.
    What am I doing wrong ?
    Thanks for your help,
    Cyril

    I'm not sure if anyone found a resolution to this one, but we're having a similar issue. We're attempting to finalize a script that will provide a specific allocation methodology, but we're running into this block-creation issue. Basically, when we run the script, the entries are not created. However, if we first push a zero to the intersection, through either a data form or a copy data, the script will populate the intersection. Any ideas?
    /* Set the calculator cache. */
    SET CACHE HIGH;
    /* Turn off Intelligent Calculation. */
    SET UPDATECALC OFF;
    /* Make sure missing values DO aggregate, thereby NOT PROTECTING parent level members (in calculated dimensions),
    but allowing Essbase to calculate quicker. */
    SET AGGMISSG ON;
    /*Utilizing Parallel Calculation*/
    SET CALCPARALLEL 3;
    /*Setting for the number of updates to the log file*/
    SET NOTICE LOW;
    /*Sets the max number of blocks that can be pulled at a time during a calc*/
    SET LOCKBLOCK HIGH;
    /*Specifies the number of sparse dimensions included in parallel calcs*/
    SET CALCTASKDIMS 7;
    /*Controls whether new blocks are created when a calculation formula assigns
    anything other than a constant to a member of a sparse dimension*/
    SET CREATEBLOCKONEQ ON;
    FIX(&CurrActFcst,&CurrActYear,&NextYear,&NextYearPlus1,Final,
         (@REMOVE(@RELATIVE("LTotal Locations",0),@LIST("L01010","L01011",@RELATIVE("LInactive - US Locations",0),"L05300",
         @RELATIVE("LInactive - Canada Locations",0),"L01060",     @RELATIVE("LInactive - Philippines Locations",0),"L21060","L01045","L01047",
         "L01049","L01050","L01570","L01053","L21000","L21001","L21002","L21003","L21004","L21005","L21006","L21007","L21008",
         "L21009","L21010","L21030","L21032","L21033","L21034","L21035","L21036","L21037","L21038","L21039","L21040",
         "L21041","L03121","L04002","L04100","L04101","L04102","L04108","L04200","L04215","L04220","L04252","L04253",
         "L04299","L04300","L02140","L02142","L21300","L02150",@RELATIVE("LKorea",0),"L02162","L02100","L02101","L02106",
         "L02155","L21270","L02130","L02133","L03100","L03101","L06001","L06200","L06210","L06220","L06225","L06100",
         "L06107","L06108","L06109","L06201","L06300","L02200","L01221","L01223","L01228","L01232","LHLDNG",
         @RELATIVE("LPercepta",0),@RELATIVE("LEnhansiv",0),"L01900",     @RELATIVE("LCorporate & Other",0)))))
    /* Big Fix, calculating for all level zero depts and level 0 clients, for the Location specified in the RTP*/
         FIX(@RELATIVE("DTotal Departments",0),@RELATIVE("CTotal Clients",0))
    /* Clears out previous allocation amount*/
    "ADepreciation Location CE Allocation Estimate" = #MISSING;
    "ASG&A Location CE Allocation Estimate" = #MISSING;
    "ADirect Cost Location CE Allocation Estimate" = #MISSING;
    "ADepreciation Location CE Allocation Offset" = #MISSING;
    "ASG&A Location CE Allocation Offset" = #MISSING;
    "ADirect Cost Location CE Allocation Offset" = #MISSING;
         ENDFIX
    AGG ("Client", "Department");
    /*Calculating Empty Seat Clients for locations that are not dedicated, that are empty, and where nothing has been
    entered to ASEAT*/
    FIX("C9002","No Currency")
    "ASEAT"(
    /*IF stmt to determine if location is dedicated*/
         IF(NOT @ISUDA("Location","Dedicated"))
              /*IF stmt to determine if location is an "Empty Site" and to see if ASEAT was populated*/
              IF(("ASEAT"->"No Currency"->"DTotal Departments"->"CTotal Clients"/
                   "APROD"->"No Currency"->"DTotal Departments"->"CTotal Clients") < .85 AND "ASEAT"== #MISSING)
              /*Populates ASEAT account for empty seat client*/
              "ASEAT" = (("APROD"->"No Currency"->"CTotal Clients"-> "DTotal Departments" * .85) -
                             "ASEAT"->"No Currency"->"CTotal Clients"-> "DTotal Departments");
              ENDIF
         ENDIF
         ENDFIX
    AGG ("Client", "Department");     
    FIX(@RELATIVE("DTotal Departments",0),@RELATIVE("CTotal Clients",0))
    "ADepreciation Location CE Allocation Estimate"(
    /*IF stmt to fix on dedicated sites*/
    IF(@ISUDA ("Location", "Dedicated"))
    /*Calculating allocation using assigned wrkstation for each client as numerator and the total assigned wrkstns for
         all clients as denominator*/
    "ADepreciation & Amortization"->"C0000" * ("ASEAT"->"No Currency"->"DTotal Departments"/
         "ASEAT"->"No Currency"->"DTotal Departments"->"CTotal Clients");
    /*IF stmt to determine if location is a "Full Site"*/
    ELSEIF(("ASEAT"->"No Currency"-> "DTotal Departments"->"CTotal Clients"/
              "APROD"-> "DTotal Departments"->"No Currency"->"CTotal Clients") >= .85)
    /*Calculating allocation using assigned wrkstation for each client as numerator and the total assigned wrkstns for
         all clients as denominator*/
    "ADepreciation & Amortization"->"C0000" * ("ASEAT"->"No Currency"->"DTotal Departments"/
    "ASEAT"->"No Currency"->"DTotal Departments"->"CTotal Clients");
    /*IF stmt to determine if location is an "Empty Site"*/
    ELSEIF(("ASEAT"->"No Currency"->"CTotal Clients"/"APROD"->"No Currency"->"CTotal Clients") < .85)
         /*Calculating Empty seat client:alloc numerator is plug to get total assigned wkstn up to 85% of prod wkstn,
         denom. is total prod wkstn * 85% */
         "ADepreciation & Amortization"->"C0000" * ("ASEAT"->"No Currency"->"DTotal Departments"/
         "ASEAT"->"No Currency"->"DTotal Departments"->"CTotal Clients");
    ELSE
    #MISSING;
    ENDIF)
    /*Calculates the SG&A Allocation Estimate*/
    "ASG&A Location CE Allocation Estimate" = "ASG&A"->"C0000" * ("ADirect Cost of Revenue"->"DTotal Departments"/
    "ADirect Cost of Revenue"->"CTotal Clients"->"DTotal Departments");
    "ADirect Cost Location CE Allocation Estimate"(
    /*IF stmt to fix on dedicated sites*/
    IF(@ISUDA ("Location", "Dedicated"))
    /*Calculating allocation using assigned wrkstation for each client as numerator and the total assigned wrkstns for
         all clients as denominator*/
    "ADirect Cost of Revenue"->"C9500" * ("ASEAT"->"DTotal Departments"->"No Currency"/
              "ASEAT"->"No Currency"->"DTotal Departments"->"CTotal Clients");
    /*IF stmt to determine if location is a "Full Site"*/
    ELSEIF(("ASEAT"->"No Currency"-> "DTotal Departments"->"CTotal Clients"/
              "APROD"-> "DTotal Departments"->"No Currency"->"CTotal Clients") >= .85)
    /*Calculating allocation using assigned wrkstation for each client as numerator and the total assigned wrkstns for
         all clients as denominator*/
    "ADirect Cost of Revenue"->"C9500" * ("ASEAT"->"DTotal Departments"->"No Currency"/
         "ASEAT"->"No Currency"->"DTotal Departments"->"CTotal Clients");
    /*IF stmt to determine if location is an "Empty Site"*/
    ELSEIF(("ASEAT"->"No Currency"->"DTotal Departments"->"CTotal Clients"/
              "APROD"->"DTotal Departments"->"No Currency"->"CTotal Clients") < .85)
    "ADirect Cost of Revenue"->"C9500" * ("ASEAT"->"No Currency"->"DTotal Departments"/
         "ASEAT"->"No Currency"->"DTotal Departments"->"CTotal Clients");
    ELSE
    #MISSING;
    ENDIF)
    ENDFIX
    AGG ("Client", "Department");
    /*-----------THIS SECTION CALCULATES THE ALLOCATION OFFSETS------------------------------------------------*/
    FIX(@RELATIVE("DTotal Departments",0))
    FIX("C0000")
    "ADepreciation Location CE Allocation Offset" = "ADepreciation Location CE Allocation Estimate"->"CTotal Clients" * -1;
    "ASG&A Location CE Allocation Offset" = "ASG&A Location CE Allocation Estimate"->"CTotal Clients" * -1;
    ENDFIX
    FIX("C9500")
    "ADirect Cost Location CE Allocation Offset" = "ADirect Cost Location CE Allocation Estimate"->"CTotal Clients" * -1;
    ENDFIX
    ENDFIX
         AGG ("Client", "Department");
    ENDFIX

  • CREATEBLOCKONEQ: calc performance issue.

    Hello Everyone,
We've been using one of the calcs, but it takes a heck of a lot of time to finish. It runs for almost a day. I can see that CREATEBLOCKONEQ is set to true for this calc. I understand that this setting works on sparse dimensions; however, Proj_Countz (Accounts) and BegBalance (Period) are members of dense dimensions in our outline. One flaw that I see is that Proj_Countz data sits in every scenario. However, we just want it in one scenario, so we will try to narrow the calc down to only one scenario. Other than that, do you see any major flaw in the calc?
It's delaying a lot of things. Any help appreciated. Thanks in advance.
    /* Set the calculator cache. */
    SET CACHE HIGH;
    /* Turn off Intelligent Calculation. */
    SET UPDATECALC OFF;
    /* Make sure missing values DO aggregate*/
    SET AGGMISSG ON;
    /*Utilizing Parallel Calculation*/
    SET CALCPARALLEL 6;
    /*Utilizing Parallel Calculation Task Dimensions*/
    SET CALCTASKDIMS 1;
    /*STOPS EMPTY MEMBER SET*/
    SET EMPTYMEMBERSETS ON;
    SET CREATEBLOCKONEQ ON;
    SET LOCKBLOCK HIGH;
    FIX("Proj_Countz")
    clearblock all;
    ENDFIX;
    Fix(@Relative(Project,0), "BegBalance", "FY11")
    "Proj_Countz"
    "Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
    ENDFIX;
    Fix("Proj_Countz")
    AGG("Project");
    ENDFIX;

You are valuing a dense member (Proj_Countz) by dividing a dense member combination (Man-Months->YearTotal / Man-Months->YearTotal). There can be no block creation going on, as everything is in the block. CREATEBLOCKONEQ isn't coming into play and isn't needed.
The code is making three passes through the database.
Pass #1 -- It is touching every block in the db. This is going to be expensive.
FIX("Proj_Countz")
clearblock all;
ENDFIX;
Pass #2:
Fix(@Relative(Project,0), "BegBalance", "FY11")
"Proj_Countz"
"Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
ENDFIX;
Pass #3 -- It's calcing more than FY11. Why?
Fix("Proj_Countz")
AGG("Project");
ENDFIX;
Why not try this:
FIX("FY11", "BegBalance", @LEVMBRS(whateverotherdimensionsyouhave))
Fix(@Relative(Project,0))
"Proj_Countz"
"Proj_Countz"="Man-Months"->YearTotal/ "Man-Months"->YearTotal;
ENDFIX
AGG(Project, whateverotherdimensionsyouhave);
ENDFIX
The clear of Proj_Countz is pointless unless Man-Months gets deleted. Actually, even if it does, Essbase should do a #Missing/#Missing and zap the value. The block will exist if Proj_Countz is valued, and the cells (MM and YT) will be there to clear the PC value.
I would also look at the parallelism of your calculation -- I don't think you're getting any with one task dimension.
    Regards,
    Cameron Lackpour

  • Reduce Calc Time

    Afternoon everyone,
We load data into our cube monthly, and when running the calc on the database it can take between 2 and 3 days to complete. I appreciate that calc time can be determined by a wide variety of factors (number of dense/sparse dims/members etc.) - but looking at things from a system resource view:
The server has 8 CPUs.
With total memory = 4194303 (according to server information within Application Manager).
When calcing, approx 1500000 of memory is used.
The start of the calc script defines the following parameters: 'SET CALCPARALLEL 4; SET UPDATECALC OFF;'
Would increasing the 'SET CALCPARALLEL' parameter from 4 to 6 be a viable approach to reducing calc time (especially given the amount of available resource on the server)?
The server won't be used for anything else during the calc.

    CL wrote:
    Are you running 64 bit or 32 bit Essbase?
    32 bit maxes out at 4 CPUs for parallel calc; 64 bit can go to 8.
    You might want to look at the number of task dimensions set for parallel calculations.
    See: http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_techref/calc/set_calctaskdims.htm
    And your calculator cache is going to impact parallel calcs as well.
    All of this can go up in smoke if you have calculations that require Essbase to calculate in serial, such as cross dimensionals.
    There are lots of other possibilities re performance.
    1) Could the SAN/local drives be faster?
    2) Do you need to calc the whole database (I have no idea what your db is, only that you mention a monthly calc -- is it for just one month?)
    3) Partitioning the db by month<--This is probably a really good place to look although nothing is free.
    4) Going to ASO
    There are others as well.
I appreciate that the above four thoughts are beyond your question; they're just food for thought.
    Regards,
Cameron Lackpour

ASO should be an option. It is a much, much faster rollup than BSO.
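One way to sanity-check whether raising CALCPARALLEL from 4 to 6 can matter at all is Amdahl's law: the serial portion of the calc (e.g. cross-dimensional formulas that force serial calculation, as noted above) caps the speedup no matter how many threads you add. A back-of-the-envelope sketch - the 80% parallel fraction below is an assumption for illustration, not a measurement from this cube:

```python
# Hedged sketch: Amdahl's-law estimate of parallel calc speedup.
# 'parallel_fraction' is an assumption you would have to measure;
# none of these numbers come from the thread.

def speedup(parallel_fraction, threads):
    """Ideal speedup for a workload that is only partly parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads)

# If 80% of the calc parallelizes, going from 4 to 6 threads
# only buys a modest improvement:
s4 = speedup(0.8, 4)   # ~2.5x
s6 = speedup(0.8, 6)   # ~3.0x
print(round(s4, 2), round(s6, 2))
```

If the measured gain is far below this ideal curve, the bottleneck is likely the serial fraction (or I/O), not the thread count.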

  • CalcCache settings

    Hi Experts,
I have recently set up a new System 9 environment (9.3.1). I currently have 2 servers: one 64-bit blade with 1 TB of space and 32 GB of RAM for our production Essbase server, and one 32-bit server with 500 GB of disk and 8 GB of RAM for our Shared Services environment. This is a massive improvement over our old 7.1.5 system, which only had 4 GB of RAM on a 32-bit platform. At the time we were limited in what we could do calculation-wise on the old server, but we no longer have those restrictions.
I have migrated all our applications to System 9 and found that retrievals aren't much better. I checked out the Essbase config file from our old 7.1.5 install; it had been set as follows:
    ; The following entry specifies the full path to JVM.DLL.
    ;Original JVM entry: JvmModuleLocation d:\Hyperion\Essbase\java\jre\bin\hotspot\jvm.dll
    JvmModuleLocation D:\Hyperion\common\JRE\Sun\1.4.2\bin\client\jvm.dll
    ;Override default Direct I/O to specifically set Buffered I/O access
    __SM__BUFFERED_IO TRUE
    __SM__WAITED_IO TRUE
    EssbaseLicenseServer @UKBSS016
    AnalyticServerId 1
    ;Set data block locking defaults
    CALCLOCKBLOCKHIGH 200000
    CALCLOCKBLOCKDEFAULT 100000
    CALCLOCKBLOCKLOW 5000
    SET LOCKBLOCK HIGH;
    ;Enable the calc cache and set defaults for memory use
    CALCCACHE TRUE
    CALCCACHELOW 5120
    CALCCACHEDEFAULT 10240
    CALCCACHEHIGH 20480
    CALCNOTICEHIGH 50
    CALCNOTICEDEFAULT 20
    CALCNOTICELOW 5
    ; Set Dynamic Calculator Cache
    DYNCALCCACHEMAXSIZE 512M
    DYNCALCCACHEWAITFORBLK FALSE
    ;DYNCALCCACHEBLKTIMEOUT 0.005
    ;DYNCALCCACHEBLKRELEASE TRUE
    ;DYNCALCCACHECOMPRBLKBUFSIZE 12540000
    ;Set maximum number of data load errors recorded to error file before logging stops
    DATAERRORLIMIT 1000000
I have tried to apply these settings to my new server, as its config file was empty with the exception of the Java module entry and whatnot - no specific performance settings.
I was wondering what sort of scope I have to increase these settings, and what would happen if I set any of the above options too high?
A lot of the issues arise from formulas in my outlines attached to dense dimensions that have cross-dimensional operators to sparse dimensions - so quite intensive. I have no choice about changing the design of the outline, with the exception of making a few of the dense upper levels dynamic calc, but I want to get the most out of the new machine, which is aimed at retrieval.
    Any suggestions/help would be great. If you need any more information them please ask
    Thanks in advance
    Eddie

Hi, yes that would be great.
I am basically running a CALC ALL on 4 of our databases, which contain a total of 5 dimensions listed below. These databases are then transparently partitioned into a larger database that contains additional functions, such as working out MAT etc.
    Year DB's
Dimension - Type - Members in dimension - Members stored
    Period - Dense - 836 - 775
    Fact - Dense - 152 - 118
    Promotion - Sparse - 48 - 47
    Brand - Sparse - 2801 - 2038
    Customer -Sparse - 5314 - 4845
The above is the yearly databases' dimension structure.
    Cache sizes are as follows:
    Index 16384kb
    data file cache 32768kb
    data cache 614400kb
    Statistics
    number of existing blocks: 141202
    Block Size 731600B
    Create blocks on equation is set on.
    Transparent Partitions DB
    Period - Dense - 836 - 775
    Fact - Dense - 152 - 118
    Currency - Sparse - 5 - 4
    Year - Sparse - 6 - 4
    Accumulation - 18 - 1
    Promotion - Sparse - 48 - 47
    Version - 57 - 30
    Brand - Sparse - 2801 - 2038
    Customer -Sparse - 5314 - 4845
    Index 153600kb
    data file cache 8192kb
    data cache 307200kb
    Statistics
    number of existing blocks: 0
    Block Size 731600B
    Create blocks on equation is set off.
    In the transparent model formula exists to work out MAT's
    any help would be greatly appreciated
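The yearly-DB figures posted above actually hang together, and a little arithmetic makes the sparsity concrete: block size should equal the product of the stored dense members times 8 bytes per cell, and comparing existing blocks to potential blocks (the product of stored sparse members) shows how sparse these cubes are. A quick cross-check using the numbers from the post:

```python
# Cross-checking the yearly-DB statistics quoted above.
# Stored member counts are taken from the post; 8 bytes per cell
# is the standard BSO cell size.

period_stored   = 775    # dense
fact_stored     = 118    # dense
promo_stored    = 47     # sparse
brand_stored    = 2038   # sparse
customer_stored = 4845   # sparse

# Block size = product of stored dense members * 8 bytes.
block_size = period_stored * fact_stored * 8
print(block_size)  # 731600 -- matches the reported "Block Size 731600B"

# Potential blocks = product of stored sparse members.
potential_blocks = promo_stored * brand_stored * customer_stored
existing_blocks = 141202
pct_existing = 100.0 * existing_blocks / potential_blocks
print(potential_blocks, round(pct_existing, 4))
```

With roughly 0.03% of potential blocks existing, the cube is extremely sparse, which is relevant when weighing CREATEBLOCKONEQ and calculator-cache bitmap settings for these databases.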

  • Performance and Products

    Hi, All Experts,
I have just started working with Essbase and have no idea what the basic required products are and what the add-on products are.
I am told our users have big performance problems. Each year they create new cubes in order to reflect different parameters such as year; some of the cubes even refer to individual months. Well, is this a usual way to use cubes?
What kind of products and services need to be installed in order to address the performance problems?
Having read several documents and discussions from this forum, I understand some of the settings are related to performance issues. But how do I know, say if I have 10 dimensions, what kind of setting combinations will maximize performance?
Thanks, and looking forward to your reply.
    Linda

Creating and maintaining Essbase cubes is an art, not a science. There are many factors that affect the performance of a cube, including dense/sparse combinations, dimension order, formulas and calc scripts, use of dynamic calculations, and cache settings. A generic question about how to improve your cubes is difficult to answer. As you can see, it depends on a number of factors and on what type of performance improvement you are trying to obtain. For example, is it calculation time that takes too long, retrieval time, or data load time? Each has different potential solutions, and in some cases they conflict with each other. Sometimes you have to compromise between two solutions to get the best possible result for both. I hate to say this, but you might consider having a qualified (not necessarily certified) consultant review your cubes with you to offer suggestions for improvement. Have specific goals in mind: faster calc, consolidation of cubes, etc.
