Essbase BSO data cache setting

can you guys post your BSO cube compressed data size (.pag files) and data cache setting? I am trying to get a better understanding of these two areas in terms of optimization. I am also curious to know the maximum data cache setting ever used on a BSO cube... (1 GB is what I know as of now)
I think the settings depend on a million things (available RAM on the server, performance checks, etc.), but just a quick check.
Thanks,
KK

Index and data caches work differently, and it is probably good to talk about that. When an Essbase database starts, it grabs all of the memory you have defined for the index cache, and nothing else can get at it. The data cache is grabbed as needed until it reaches the maximum. This was truly problematic in the old days, when we had much less memory to work with: if you started too many applications, you took all available memory right away. Man, I feel old talking about it :)

Similar Messages

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, and we are working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    1. Set the maximum number of blocks that Analytic Services can allocate to at least 500:
    - If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    - In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    - Stop and restart Analytic Server.
    2. Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    3. Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting:
    - Determine the block size.
    - Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my question is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it out, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply due to standard growth or to a recent change that has had an unexpectedly significant impact.
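A quick way to sanity-check the guidance above (set the data cache large enough to hold all the blocks CALCLOCKBLOCKHIGH allows) is a little arithmetic. A minimal Python sketch with purely illustrative numbers; the safety factor is my own assumption, to leave headroom for other blocks the calculator touches:

```python
def min_data_cache_kb(block_size_bytes, calclockblock_high, safety_factor=2.0):
    """Estimate the smallest data cache (in KB) able to hold the number of
    blocks that CALCLOCKBLOCKHIGH lets Essbase fix concurrently."""
    required_bytes = block_size_bytes * calclockblock_high * safety_factor
    return int(required_bytes / 1024) + 1

# Illustrative only: a 100 KB block and CALCLOCKBLOCKHIGH 500
print(min_data_cache_kb(102400, 500))  # -> 100001 (about 100 MB)
```

Any real sizing should of course start from your actual block size as reported in the EAS database statistics.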

  • Dynamic Calc Issue - CalcLockBlock or Data Cache Setting

    We recently started seeing an issue with a Dynamic scenario member in our UAT and DEV environments. When we tried to reference the scenario member in Financial Reports, we get the following error:
    Error executing query: The data form grid is invalid. Verify that all members selected are in Essbase. Check log for details.com.hyperion.planning.HspException
    In SmartView, if I try to reference that scenario member, I get the following:
    The dynamic calc processor cannot allocate more than [10] blocks from the heap. Either the CalcLockBlock setting is too low or the data cache size setting is too low.
    The dynamic calcs worked fine in both environments until recently, and no changes were made to Essbase, so I am not sure why they stopped working.
    I tried setting the CalcLockBlock values in the essbase.cfg file and increasing the data cache size. Even when I increased the CalcLockBlock settings, I would get the same error.
    When I increased the data cache size, it would just sit there and load and load in Financial Reporting and wouldn't show the report. In SmartView, it would give me an error that it had timed out and to try to increase the NetRetry and NetDelay values.

    Thanks for the responses guys.
    NN:
    I tried doubling the index cache and data cache settings, but it appears that when I do, it crashes my Essbase app. I also tried adding DYNCALCCACHEMAXSIZE and QRYGOVEXECTIME to essbase.cfg (without the cache settings, since they cause the crash), and still no luck.
    John:
    I already had those values set on my client machine; I tried setting them on the server as well, but no luck.
    The exact error message I get after increasing the cache settings is "Essbase Error (1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the olap.server.netConnectTry and/or olap.server.netDelay values in the essbase.properties. Restart the server and try again."
    From the app's essbase log:
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1023040)
    msg from remote site [[Wed Jun 06 10:07:44 2012]CCM6000SR-HUESS/PropBud/NOIStmt/admin/Error(1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the NetRetryCount and/or NetDelay values in the ESSBASE.CFG file. Update this file on both client and server. Restart the client and try again.]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1200467)
    Error executing formula for [Resident Days for CCPRD NOI]: status code [1042017] in function [@_XREF]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Warning(1080014)
    Transaction [ 0x10013( 0x4fcf63b8.0xcaa30 ) ] aborted due to status [1042017].
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1013091)
    Received Command [Process local xref/xwrite request] from user [admin@Native Directory]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008108)
    Essbase Internal Logic Error [1060]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008106)
    Exception error log [E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp] is being created...
    [Wed Jun 06 10:07:46 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008107)
    Exception error log completed E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp please contact technical support and provide them with this file
    [Wed Jun 06 10:07:46 2012]Local/PropBud///4340/Info(1002089)
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING

  • Essbase cubes - best cache setting

    We implemented Essbase about 2 years ago and have never reset our cache settings. We are experiencing slowness in our cubes. Could this be caused by our cache settings? Do we need to revisit our cache settings every so often? Thanks.
    Below are our current cache settings.
    Index Cache Setting 50242
    Index Cache Current Value 50242
    Data File Cache Setting 327680
    Data File Cache Current Value 0
    Data Cache Setting 300720
    Data Cache Current Value 87232
    Index Page Setting 8
    Index Page Current Value 8

    We struggled with cache settings for a while until we found a sweet spot, and we make periodic adjustments. Make sure you follow the published guidelines; it will save you a lot of pain. It looks like you have plenty of room on most of the cache settings and only max out the index cache. Below is an excerpt from the DBAG...
    Index Cache Recommended Value
    Combined size of all essn.ind files, if possible; as large as possible otherwise.  Do not set this cache size higher than the total index size, as no performance improvement results.
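That guideline can be computed directly from the file system by summing the essn.ind files. A hedged sketch in Python; the directory path is whatever your database folder happens to be, not something looked up through any Essbase API:

```python
import glob
import os

def recommended_index_cache_kb(db_dir):
    """Combined size (in KB) of all essn.ind index files in a BSO database
    directory -- the DBAG's recommended index cache size when RAM allows."""
    ind_files = glob.glob(os.path.join(db_dir, "ess*.ind"))
    return sum(os.path.getsize(p) for p in ind_files) // 1024
```

Per the excerpt above, setting the index cache any higher than this total buys no performance improvement.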

  • Essbase data cache setting not registering

    Hi,
    We have 64-bit Essbase. I changed the data and index cache settings (storage type Buffered I/O) in Essbase.
    After restarting the database, I see the value getting updated in the "Index cache current value" but the value in the "Data cache current value" is zero.
    These are Planning Essbase databases, and the behavior is the same for the native Essbase DBs.
    I have also tried restarting the Essbase service. The Essbase version is 11.1.1.3.
    Please help.
    Regards,
    Avi

    Index and data caches work differently, and it is probably good to talk about that. When an Essbase database starts, it grabs all of the memory you have defined for the index cache, and nothing else can get at it. The data cache is grabbed as needed until it reaches the maximum. This was truly problematic in the old days, when we had much less memory to work with: if you started too many applications, you took all available memory right away. Man, I feel old talking about it :)

  • Spatial Data Cache Setting

    Hi all,
    If I disable the cache, I mean if I set max_cache_size to 0 in mapviewerConfig, will it affect MapViewer performance or speed?
    I have developed a sample application to move point data on the web using MapViewer. After updating the SDO_GEOMETRY column in the database, I refresh the map. The map shows the moved point data only if I set max_cache_size to 0 in mapviewerConfig. Is there any MapViewer API available to clear the spatial data cache?
    Thanks,
    Sujnan

    This is likely a bug. It assumes a 2 GB limit.

  • Essbase BSO data recovery

    Hello
    In my organization, we have one Essbase BSO application. For some reason, the data in it got corrupted, and we had to restore the data from a file system backup. We created a new temporary Essbase application and copied the .otl from the existing application using the EAS console. The outline opens in the new temporary application. Then I stopped the application and copied the .pag, .ind, .esm, and .tct files from the file system backup to this application folder, but the application does not start. What could be the reason, and can anything be done about it?
    Thanks

    I renamed the .pag, .ind, and .esm files, but the application still doesn't start. I get this error in the EAS console:
    error: 1054001 Cannot load application appname with error number [1052003] - see server log file
    The server log file also says
    error: 1054001 Cannot load application appname with error number [1052003] - see server log file
    I also took the .otl file from the same backup and dropped it in the database folder, but the application is still not coming up.

  • OBIEE & Essbase - Reduce Data Cache in OBI Server

    Hi,
    We have a essbase ASO cube and using it as a source we have created a dashboard.
    But when we change the data in the cube (for example, what-if or online data analysis), the dashboard does not change, since the BI server caches the data.
    How do we reduce this cache period for a particular cube or dashboard?
    Thanks
    Nilaksha.

    Have you tried reducing the Cache Persistence Time setting?
    Hope it helps
    Thanks
    Prash

  • NumBroadcastThreads meaning for Data Cache setting

    Hello,
    I would like to ask some clarification on the meaning of the
    NumBroadcastThreads setting for the TCP kodo.RemoteCommitProvider.
    The documentation says this is the "The number of threads to create
    for the purpose of transmitting events to peers."
    My question is how best to set this value, and whether there are any risks (maybe losing some events?) when setting it wrongly.
    Thanks,
    Werner

    Laurent Goldsztejn <> wrote:
    From our docs, NumBroadcastThreads: You should increase this value as the number of concurrent tx increases. The max number of concurrent tx is a function of the size of the connection pool. Setting a value of 0 will result in behavior where the thread invoking commit will perform the broadcast directly. Defaults to 2. Laurent

    Hi, thanks,
    My question is how best to set this value, and whether there are any risks (maybe losing some events?) when setting it wrongly.
    Another doubt: if I set the NumBroadcastThreads value too low, can it be that, under high load, it takes a long time for the broadcasts to be sent?
    Thanks,
    Werner

  • Data Cache Settings

    Hello,
    I am coming up against a spreadsheet retrieval error very frequently, getting Essbase Error 1130203. I went through the error interpretation and found that the data cache might be causing it. Here are the details of the cache settings on my databases currently. Can somebody please help me understand if they are right or need some changes:
    DataBase A:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 100000
    Block Size: (B) 143880
    Number of existing blocks: 7266
    Page file size: 40034304
    DataBase B:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 300000
    Block Size: (B) 91560
    Number of existing blocks: 1912190
    Page file 1 size: 2147475456
    Page file 2 size: 500703056
    DataBase C:
    Data File Cache setting (KB) 300000
    Data cache setting (KB) 37500
    Block Size: (B) 23160
    Number of existing blocks: 26999863
    Page file size: 21 page files × 2 GB each = 42 GB
    If this is not the issue, then please let me know what might be causing it.
    Thanks!
    Edited by: user4958421 on Dec 15, 2009 10:43 AM

    Hi,
    1. For error no. 1130203, here are the possible problems and solutions, straight from the documentation.
    Try any of the following suggestions to fix the problem. Once you fix the problem, check to see if the database is corrupt.
    1. Check the physical memory on the server computer. In a Windows environment, 64 MB is the suggested minimum for one database. In a UNIX environment, 128 MB is the suggested minimum for one database. If the error keeps occurring, add more memory to the server computer.
    2. If you are on a UNIX computer, check the user limit profile.
    3. Check the block size of the database. If necessary, reduce the block size.
    4. Check the data cache and data file cache setting. If necessary, decrease the data cache and data file cache sizes.
    5. Make sure that the Analytic Services computer has enough resources. Consult the Analytic Services Installation Guide for a list of system requirements. If a resource-intensive application, such as a relational database, is running on the same computer, it may be using the resources that Analytic Services needs.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/
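As a rough cross-check of numbers like those posted above, the DBAG also gives a starting point of sizing the data cache at about 0.125 times the combined size of the compressed page (.pag) files. A small sketch using Database A's figures from the post; treat the factor as a starting heuristic, not a hard rule:

```python
def data_cache_guideline_kb(total_pag_bytes, factor=0.125):
    """DBAG-style starting point: a data cache of roughly 12.5% of the
    combined compressed .pag size, returned in KB."""
    return int(total_pag_bytes * factor // 1024)

# Database A: one 40034304-byte page file, posted data cache 100000 KB
print(data_cache_guideline_kb(40034304))  # -> 4887, so 100000 KB is generous
```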

  • Essbase Data Cache Full Error

    Hi, All,
    I am encountering an Essbase data cache full error while running some calc scripts. I have already tried setting the data cache size from 800 MB to 1.6 GB, and the index cache to around 100 MB. It is quite a large BSO Essbase database, with one dimension of over 1000 members, another of about 2500 members, and the last one of about 3000 members.
    I have three similar scripts and each is for different entities to aggregate on some of those dimensions.
    For example, I started by unloading the app/db and running calc script 1 for Entity 1, which completed successfully. However, when I continued with calc script 2 for Entity 2, it showed the "data cache full" error. Yet after I unloaded the app/db and then ran calc script 2 again, the calc script completed with no errors.
    I am running Essbase 11.1.1.3, 32-bit, on the AIX platform.
    Has anyone encountered this before? Is it a problem with Essbase RAM handling in this case?
    Thanks in advance

    Thank you John,
    We have found that it is the entity dimension that is responsible for this problem.
    I remember we encountered this kind of problem before, when we aggregated an application whose entity dimension hierarchy mixed shared and stored instances of the same level 0 members. To put it simply, there are three members under the "Entity" dimension member, representing different views of the entity hierarchy over the same level 0 members: the first has stored level 0 entity members, while the other two have shared ones. At that time, our client added another hierarchy with shared level 0 members, but they did not put this tree under the "Entity" dimension member directly; rather, they put it under the first child of "Entity", the one with stored level 0 members.
    It is a little bit confusing to describe the situation only by text. Anyway, at that time, the first hierarchy had both stored and shared instances of the same group of level 0 members, and the data cache always filled up when aggregating. After we moved the fourth hierarchy to another tree, so that under that hierarchy the level 0 members were all shared instances, the aggregation worked flawlessly.
    I wondered why this happened, and I suspect it is related to the detailed calculation logic of Essbase. Could you shed some light on this topic? Thank you with all my heart!
    Warm Regards,
    John

  • Possible to set CacheSize for the single-JVM version of the data cache?

    Hi -
    I'm using Kodo 2.3.2.
    Can I set the size for the data cache when I'm using a subclass of the single-JVM version of the
    data cache? Or is this only a feature for the distributed version?
    Here are more details...
    I'm using a subclass of LocalCache so I can display the cache contents.
    The kodo.properties contains these lines:
    com.solarmetric.kodo.DataCacheClass=com.siemens.financial.jdoprototype.app.TntLocalCache
    com.solarmetric.kodo.DataCacheProperties=CacheSize=10
    When my test program starts, it displays getConfiguration().getDataCacheProperties() and the value
    is displayed as "CacheSize=10".
    But when I load 25 objects by OID and display the contents of the cache, I see that all the objects
    have been cached (not just 10). Can you shed light on this?
    Thanks,
    Les

    The actual size of the cache is a bit more complex than just the CacheSize
    setting. The CacheSize is the number of hard references to maintain in the
    cache. So, the most-recently-used 10 elements will have hard refs to them,
    and the other 15 will be moved to a SoftValueCache. Soft references are not
    garbage-collected immediately, so you might see the cache size remain at
    25 until you run out of memory. (The JVM has a good deal of flexibility in
    how it implements soft references. The theory is that soft refs should stay
    around until absolutely necessary, but many JVMs treat them the same as
    weak refs.)
    Additionally, pinning objects into the cache has an impact on the cache
    size. Pinned objects do not count against the cache size. So, if you have
    15 pinned objects, the cache size could be 25 even if there are no soft
    references being maintained.
    -Patrick
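Patrick's hard-reference window plus soft-reference overflow can be mimicked in a toy model. This is only an illustration of the accounting he describes, not Kodo's actual implementation (and Python has no soft references, so the second tier here never shrinks the way a JVM's would):

```python
from collections import OrderedDict

class TwoTierCache:
    """Keeps the `hard_limit` most-recently-used entries strongly
    referenced; older entries are demoted to a second tier standing in
    for Kodo's SoftValueCache, whose entries linger until the JVM
    reclaims them."""

    def __init__(self, hard_limit):
        self.hard_limit = hard_limit
        self.hard = OrderedDict()  # strong refs, least-recently-used first
        self.soft = {}             # demoted entries; a JVM may reclaim these

    def put(self, oid, obj):
        self.hard[oid] = obj
        self.hard.move_to_end(oid)
        while len(self.hard) > self.hard_limit:
            old_oid, old_obj = self.hard.popitem(last=False)
            self.soft[old_oid] = old_obj

    def get(self, oid):
        if oid in self.hard:
            self.hard.move_to_end(oid)
            return self.hard[oid]
        return self.soft.get(oid)

    def observed_size(self):
        # Both tiers are visible when you dump the cache contents, which
        # is why loading 25 objects with CacheSize=10 can still show 25.
        return len(self.hard) + len(self.soft)

cache = TwoTierCache(hard_limit=10)
for i in range(25):
    cache.put(i, str(i))
print(len(cache.hard), cache.observed_size())  # -> 10 25
```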
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Realize essbase data cache through business rule

    Hi!
    Can anybody guide me on how to make use of the Essbase data cache through a business rule?
    Thanks

    Right,
    Actually, business rules run against Essbase cubes in every case.
    Remember, Planning and BRs add extra layers like prompts, forms, security, processes...
    Think of business rules as an extended version of calc scripts with some extra functionality. They ultimately run on Essbase cubes...
    Regards,
    Ahmet

  • Data Cache full - what is the solution

    Hello,
    I am using Essbase. In this database, 7 dimensions are sparse and 2 are dense. I am using parallel calculation with 3 threads. After loading the data, when the calculation starts, it gives the error "DATA CACHE FULL, INCREASE THE DATA CACHE". I increased the data cache and data file cache to twice their existing (default) sizes, but the calculation still did not complete.
    Can anyone suggest why this is happening and what has to be done? Do the needful.
    Regards
    R.Prasanna

    I have no concrete information that there are any risks, only suspicion based on the way most servers operate.
    You dedicated 4 GB of memory to one running application. Depending on how much physically exists on the server, this could result in a lot of paging taking place, adversely affecting performance across the entire server. In general, physical addressable memory caps out at 4 GB (although depending on the OS, 2 GB is the limit for application space). What all this means is that you are probably not leaving the OS any "room" to operate in; even though you only have one application, the OS still needs some space too.
    Again, "don't fix what ain't broken" applies here, but you might want to try a somewhat more modest setting and see if things actually improve. If you "only" allocated 2 or 3 GB instead of 4, would it continue to work, etc.?
    If anyone else out there has used a setting above 2 GB, I'd be interested in hearing how it went for you -- perhaps my fears are totally unfounded (though I still believe there's reason for caution).

  • Data Cache Vs Data Retreival buffers

    Hello All-
    I want to know how an increase or decrease of the data cache affects the size of the data retrieval buffer.
    I was going through dbag and it says
    *"When you retrieve data into Essbase Spreadsheet Add-in for Excel or use Report Writer to retrieve data, Essbase uses the retrieval buffer to optimize the retrieval"*
    Is there any interdependence between the data cache size and the data retrieval buffer?
    Moreover, in the DBAG I found the following in the section "Enabling Dynamic Retrieval Buffer Sizing":
    If a database has very large block size and retrievals include a large percentage of cells from each block across several blocks, consider setting the VLBREPORT option to TRUE in the Essbase configuration file essbase.cfg.
    Has anybody used this setting in the past, and can you let me know its pros and cons?
    Thanks!

    Hello Glenn-
    Thanks for the reply. Reading further in the DBAG, it states:
    When the VLBREPORT setting is TRUE, Essbase internally determines an optimized retrieval buffer size for reports that access more than 20% of the cells in each block across several blocks. This setting takes effect only if the outline does not include Dynamic Calc, Dynamic Time Series, or attribute members.
    What does the last sentence above mean? It seems a vague statement, as I can't imagine an outline that doesn't have Dynamic Calc or Dynamic Time Series members; moreover, my outline has 6 different attribute dimensions as well.
    If I understood it correctly, then if I put this setting in the config file while having any of the above constraints (Dynamic Calc, Dynamic Time Series, or attribute members) in the outline, it will not work?
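The 20% figure can be made concrete: a BSO cell is stored as an 8-byte value, so for a given block size you can estimate how many cells per block a report has to touch before the optimized buffer sizing applies. A sketch; the block size is simply the 143880-byte example from an earlier post on this page:

```python
def cells_per_block(block_size_bytes, cell_bytes=8):
    """Number of cells in a block, assuming the standard 8-byte BSO cell."""
    return block_size_bytes // cell_bytes

def vlbreport_threshold_cells(block_size_bytes, fraction=0.20):
    """Cells per block a retrieval must access before VLBREPORT's
    dynamic retrieval-buffer sizing kicks in (the DBAG's 20% figure)."""
    return int(cells_per_block(block_size_bytes) * fraction)

print(vlbreport_threshold_cells(143880))  # -> 3597 of 17985 cells
```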
    There is another quick question not related to this post i was going through my application configuration file and here is a copy of it:
    ; The following entry specifies the full path to JVM.DLL
    JvmModuleLocation E:\Hyperion\common\JRE\Sun\1.5.0\bin\client\jvm.dll
    SharedServicesLocation admin.companyname.int 58080
    AuthenticationModule CSS http://admin.companyname.int:58080/interop/framework/getCSSConfigFile
    AGENTDESC ESSBASE Service
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    The JvmModuleLocation, SharedServicesLocation, AuthenticationModule, and AGENTDESC lines: what is the purpose of those statements in the config file? Do we need them in our configuration file, or can we delete them?
    Thanks!
