Spatial Data Cache Setting

Hi all,
If I disable the cache, i.e. set max_cache_size to 0 in mapViewerConfig, will it affect MapViewer performance or speed?
I have developed a sample application to move point data on the web using MapViewer. After updating the SDO_GEOMETRY column in the database, I refresh the map. The map shows the moved point only if I set max_cache_size to 0 in mapViewerConfig. Is there a MapViewer API available to clear the spatial data cache?
Thanks,
Sujnan
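
The "Mapviewer - Spatial Data Cache" thread below uses MapViewer's XML admin interface for exactly this: a clear_theme_cache element inside a non_map_request. As a rough sketch (not an official answer), such a request could be posted from Java along the following lines. The /mapviewer/omserver endpoint and the xml_request parameter name are the usual MapViewer servlet conventions but should be treated as assumptions here, and the data source and theme names are placeholders:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class ClearThemeCache {
        public static void main(String[] args) throws Exception {
            // Hypothetical MapViewer server URL; adjust host, port and context path.
            String server = "http://localhost:8888/mapviewer/omserver";

            // Same style of admin request as used later in this thread (clear_theme_cache).
            String xml =
                "<?xml version=\"1.0\" standalone=\"yes\"?>"
              + "<non_map_request>"
              + "  <clear_theme_cache data_source=\"vicmap\" theme=\"THEME_PARCEL\" />"
              + "</non_map_request>";

            String body = "xml_request=" + URLEncoder.encode(xml, "UTF-8");

            HttpURLConnection conn = (HttpURLConnection) new URL(server).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }

            // Print the server's XML response so the result of the request is visible.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }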

Similar Messages

  • Mapviewer - Spatial Data Cache

    I am trying to clear the spatial data cache in two different versions (mv10 & mv11ea) of MapViewer without success. To confirm that the cache is being cleared, I have enabled report_stats as below in the mapViewerConfig.xml for both versions:
        <spatial_data_cache   max_cache_size="64"
                              report_stats="true"
        />
    After a few requests I note the following in the log:
    Cache group PARCEL_VIEW_SHAPE_82473_PDT_GEOM statistics:
       capacity: 262144
           size: 145988
    load factor: 0.95
        # of chains: 88837
    max chain depth: 8
    avg chain depth: 1.6433242905546113
    empty bucket %: 0.6611137390136719
    total mem size : 28169KB
    Knowing (assuming) that this cache group is populated by a single theme that references the PARCEL_VIEW table, I then issue the following via the Admin section of the MapViewer control:
    <?xml version="1.0" standalone="yes"?>
    <non_map_request>
      <clear_theme_cache data_source="vicmap" theme="THEME_PARCEL" />
    </non_map_request>
    Then, after waiting patiently for the next set of statistics to appear in the log (BTW, is there a way to change the frequency from 10 minutes to something more regular?), I notice that the information for the cache group has not changed.
    Am I following the correct steps here? If I wish to clear the spatial cache, should I be monitoring these statistics?
    All advice most welcome.
    Ross.

    Hi Ross,
    We'll review the statistics reported and check why it is not changing. The frequency is currently hard-coded (5 minutes), and there is no parameter in the configuration file to change that. We may consider this in the future.
    Joao

  • Spatial data cache only 2048MB ??

    Why can the spatial data cache only be less than 2048 MB?
    If I set any value larger than this, it shows a negative number for the cache size in the get_stats XML request (and the same appears in the logs when MapViewer is starting).

    This is likely a bug. It assumes a 2 GB limit.
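
    A plausible explanation for the negative numbers, assuming (not confirmed from the MapViewer source) that the configured size in MB is converted to bytes in a 32-bit signed int, is a plain integer overflow once the value exceeds 2048 MB. A minimal Java illustration:

    public class CacheSizeOverflow {
        public static void main(String[] args) {
            // Assumption, not confirmed from MapViewer code: a cache size in MB converted
            // to bytes in a 32-bit signed int overflows past 2048 MB, which would produce
            // exactly the kind of negative values reported above.
            int maxCacheMb = 2049;
            int maxCacheBytes = maxCacheMb * 1024 * 1024;   // 2049 MB exceeds 2^31 - 1 bytes
            System.out.println(maxCacheBytes);              // prints -2146435072
            System.out.println(2048L * 1024 * 1024 - 1);    // largest byte count an int can hold: 2147483647
        }
    }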

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, and we are working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my question is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply due to standard growth or to a recent change that has had an unexpectedly significant impact.
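
    To make the last step of the support doc concrete ("set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting"), here is a small worked example. The 143,880-byte block size is only an illustrative figure (it happens to be the block size reported for "DataBase A" in the "Data Cache Settings" thread below); substitute your own database's block size from its statistics:

    public class DataCacheSizing {
        public static void main(String[] args) {
            // Illustrative figures: block size in bytes and the CALCLOCKBLOCKHIGH
            // value suggested by the support document quoted above.
            long blockSizeBytes = 143_880;
            int calcLockBlockHigh = 500;

            // Rule of thumb from the doc: the data cache must be able to hold at
            // least as many blocks as CALCLOCKBLOCKHIGH allows Essbase to fix.
            long minDataCacheBytes = blockSizeBytes * calcLockBlockHigh;
            System.out.printf("Minimum data cache: %d KB (%.1f MB)%n",
                    minDataCacheBytes / 1024, minDataCacheBytes / (1024.0 * 1024.0));
            // => about 70,253 KB (~68.6 MB) for these numbers
        }
    }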

  • Dynamic Calc Issue - CalcLockBlock or Data Cache Setting

    We recently started seeing an issue with a Dynamic scenario member in our UAT and DEV environments. When we try to reference the scenario member in Financial Reports, we get the following error:
    Error executing query: The data form grid is invalid. Verify that all members selected are in Essbase. Check log for details.com.hyperion.planning.HspException
    In SmartView, if I try to reference that scenario member, I get the following:
    The dynamic calc processor cannot allocate more than [10] blocks from the heap. Either the CalcLockBlock setting is too low or the data cache size setting is too low.
    The dynamic calcs worked fine in both environments up until recently, and no changes were made to Essbase, so I am not sure why they stopped working.
    I tried to set the CalcLockBlock settings in the essbase.cfg file and increase the data cache size. When I increased the CalcLockBlock settings, I would get the same error.
    When I increased the data cache size, it would just sit there and load and load in Financial Reporting and wouldn't show the report. In SmartView, it would give me an error that it had timed out and to try to increase the NetRetry and NetDelay values.

    Thanks for the responses guys.
    NN:
    I tried to double the Index Cache Setting and the Data Cache setting, but it appears when I do that, it crashes my Essbase app. I also tried adding the DYNCALCCACHEMAXSIZE and QRYGOVEXECTIME to essbase.cfg (without the Cache settings since it is crashing), and still no luck.
    John:
    I already had those values set on my client machine; I tried to set them on the server as well, but no luck.
    The exact error message I get after increasing the cache settings is: "Essbase Error (1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the olap.server.netConnectTry and/or olap.server.netDelay values in the essbase.properties. Restart the server and try again."
    From the app's essbase log:
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1023040)
    msg from remote site [[Wed Jun 06 10:07:44 2012]CCM6000SR-HUESS/PropBud/NOIStmt/admin/Error(1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the NetRetryCount and/or NetDelay values in the ESSBASE.CFG file. Update this file on both client and server. Restart the client and try again.]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1200467)
    Error executing formula for [Resident Days for CCPRD NOI]: status code [1042017] in function [@_XREF]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Warning(1080014)
    Transaction [ 0x10013( 0x4fcf63b8.0xcaa30 ) ] aborted due to status [1042017].
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1013091)
    Received Command [Process local xref/xwrite request] from user [admin@Native Directory]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008108)
    Essbase Internal Logic Error [1060]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008106)
    Exception error log [E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp] is being created...
    [Wed Jun 06 10:07:46 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008107)
    Exception error log completed E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp please contact technical support and provide them with this file
    [Wed Jun 06 10:07:46 2012]Local/PropBud///4340/Info(1002089)
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING

  • Essbase BSO data cache setting

    Can you guys post your BSO cube compressed data size (.pag files) and data cache setting? I am trying to get a better understanding of these two areas in terms of optimization. I am also curious to know the maximum data cache setting ever used on a BSO cube... (1 GB is what I know of as of now).
    I think the settings depend on a million things (available RAM on the server, performance checks, etc.), but just a quick check.
    Thanks,
    KK

    Index and data caches work differently, and it is probably good to talk about that. When an Essbase database starts, it will grab all of the memory you have defined for the index cache, and nothing else can get at it. The data cache is grabbed as it is needed until it reaches the maximum. It was truly problematic in the old days when we had much less memory to work with. If you started too many applications, you took all available memory right away. Man, I feel old talking about it :)

  • Essbase data cache setting not registering

    Hi,
    We have 64-bit Essbase. I changed the data and index cache settings (storage type Buffered IO) in Essbase.
    After restarting the database, I see the value getting updated in the "Index cache current value", but the value in the "Data cache current value" is zero.
    These are Planning Essbase databases. The behavior is the same for native Essbase databases.
    I have also tried restarting the Essbase service. The Essbase version is 11.1.1.3.
    Please help.
    Regards,
    Avi

    Index and data caches work differently, and it is probably good to talk about that. When an Essbase database starts, it will grab all of the memory you have defined for the index cache, and nothing else can get at it. The data cache is grabbed as it is needed until it reaches the maximum. It was truly problematic in the old days when we had much less memory to work with. If you started too many applications, you took all available memory right away. Man, I feel old talking about it :)

  • Disabling Spatial data cache for one theme?

    How can I accomplish this? I have a crude editing application, and this cache is preventing the updates from being seen in the ThemeBasedFOI under Oracle Maps. ThemeBasedFOI.refresh() seems to have no effect.
    I have confirmed that disabling the entire cache resolves the problem, but I have another application that needs this cache on the mapviewer, so turning off the whole cache isn't a good option.
    So, can I mark a single theme to be ignored in this cache?

    You can set the caching mode for each geometry theme individually (NONE, NORMAL (the default), or ALL). Using MapBuilder, go to the Advanced panel in the theme editor and select NONE as the caching mode.
    Joao
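
    For the record, one way to confirm what MapBuilder actually saved: a theme's styling rules live in the USER_SDO_THEMES view, and after setting the caching mode to NONE the <styling_rules> element should (if I remember the format right) carry a caching="NONE" attribute. A quick JDBC sketch to inspect it; the connection details and theme name are placeholders, and it assumes the Oracle JDBC driver is on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ShowThemeCachingMode {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; connect as the schema that owns the theme.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 PreparedStatement ps = conn.prepareStatement(
                    "SELECT styling_rules FROM user_sdo_themes WHERE name = ?")) {
                ps.setString(1, "THEME_PARCEL");   // hypothetical theme name, as in this thread
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // After saving the theme in MapBuilder with caching mode NONE,
                        // the <styling_rules> element should carry caching="NONE".
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }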

  • NumBroadcastThreads meaning for Data Cache setting

    Hello,
    I would like to ask some clarification on the meaning of the
    NumBroadcastThreads setting for the TCP kodo.RemoteCommitProvider.
    The documentation says this is the "The number of threads to create
    for the purpose of transmitting events to peers."
    My question is how best to set this value and whether there are any risks
    (maybe losing some events?) when setting this value wrongly.
    Thanks,
    Werner

    Laurent Goldsztejn <> wrote:
    From our docs, NumBroadcastThreads: You should increase this value as the number of concurrent tx increases. The max number of
    concurrent tx is a function of the size of the connection pool. Setting a value of 0 will result in behavior where the thread
    invoking commit will perform the broadcast directly. Defaults to 2.
    Laurent

    Hi, thanks. My question is how best to set this value and whether there are any risks (maybe losing some events?) when setting this value wrongly.
    Another doubt: if I set the NumBroadcastThreads value too low, could it be that, under high load, it takes a long time for the broadcasts to be sent?
    Thanks,
    Werner
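
    To illustrate the trade-off Laurent describes (this is a generic sketch, not Kodo's actual implementation): commit events are handed to a small pool of broadcast threads, so with too few threads a burst of concurrent commits queues up and events go out late, and with 0 threads the committing thread itself pays the cost of the broadcast.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CommitBroadcaster {
        private final ExecutorService pool;   // null means "broadcast in the committing thread"

        public CommitBroadcaster(int numBroadcastThreads) {
            this.pool = numBroadcastThreads > 0
                    ? Executors.newFixedThreadPool(numBroadcastThreads)
                    : null;
        }

        public void onCommit(String event) {
            Runnable send = () -> System.out.println("broadcasting " + event + " to peers");
            if (pool == null) {
                send.run();          // NumBroadcastThreads=0: the commit thread pays the network cost
            } else {
                pool.submit(send);   // otherwise commit returns immediately; under load, events may queue
            }
        }

        public static void main(String[] args) {
            CommitBroadcaster broadcaster = new CommitBroadcaster(2);
            for (int i = 0; i < 5; i++) {
                broadcaster.onCommit("tx-" + i);
            }
            broadcaster.pool.shutdown();   // let the demo JVM exit once queued events are sent
        }
    }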

  • Data Cache Settings

    Hello,
    I am getting a spreadsheet retrieval error very frequently, with Essbase Error 1130203. I went through the error interpretation and found that the data cache might be causing it. Here are the details of the cache settings on my
    databases currently. Can somebody please help me understand if they are right or need some changes:
    DataBase A:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 100000
    Block Size: (B) 143880
    Number of existing blocks: 7266
    Page file size: 40034304
    DataBase B:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 300000
    Block Size: (B) 91560
    Number of existing blocks: 1912190
    Page file size: 2147475456 (page file 1)
    Page file size: 500703056 (page file 2)
    DataBase C:
    Data File Cache setting (KB) 300000
    Data cache setting (KB) 37500
    Block Size: (B) 23160
    Number of existing blocks: 26999863
    Page file size: 21 page files = 21 * 2 GB = 42 GB
    If this is not the issue, then please let me know what might be causing it.
    Thanks!

    Hi,
    1. For error no 1130203, here are the possible problems and solutions, straight from document.
    Try any of the following suggestions to fix the problem. Once you fix the problem, check to see if the database is corrupt.
    1. Check the physical memory on the server computer. In a Windows environment, 64 MB is the suggested minimum for one database. In a UNIX environment, 128 MB is the suggested minimum for one database. If the error keeps occurring, add more memory to the server computer.
    2. If you are on a UNIX computer, check the user limit profile.
    3. Check the block size of the database. If necessary, reduce the block size.
    4. Check the data cache and data file cache setting. If necessary, decrease the data cache and data file cache sizes.
    5. Make sure that the Analytic Services computer has enough resources. Consult the Analytic Services Installation Guide for a list of system requirements. If a resource-intensive application, such as a relational database, is running on the same computer, the resource-intensive application may be using the resources that Analytic Services needs.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/
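
    For a rough sense of scale (not a tuning recommendation), it can help to work out how many blocks the data cache can actually hold. Using the figures reported above for DataBase C:

    public class DataCacheBlockCount {
        public static void main(String[] args) {
            // Figures reported above for "DataBase C" (data cache in KB, block size in bytes).
            long dataCacheKb = 37_500;
            long blockSizeBytes = 23_160;

            // Rough capacity check: how many uncompressed blocks fit in the data cache.
            long blocksInCache = (dataCacheKb * 1024) / blockSizeBytes;
            System.out.println("Blocks the data cache can hold: " + blocksInCache);
            // => about 1,658 blocks, against 26,999,863 existing blocks in the database
        }
    }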

  • Essbase cubes - best cache setting

    We implemented Essbase about 2 years ago. We have never reset our cache settings. We are experiencing slowness in our cubes. Could this be caused by our cache settings? Do we need to reset our cache settings every so often? Thanks.
    Below are our current cache settings.
    Index Cache Setting 50242
    Index Cache Current Value 50242
    Data File Cache Setting 327680
    Data File Cache Current Value 0
    Data Cache Setting 300720
    Data Cache Current Value 87232
    Index Page Setting 8
    Index Page Current Value 8

    We struggled with cache settings for a while until we found a sweet spot, and we make periodic adjustments. Make sure you follow the published guidelines; it will save you a lot of pain. It looks like you have plenty of room on most of the cache settings and only max out the index cache. Below is an excerpt from the DBAG...
    Index Cache Recommended Value
    Combined size of all essn.ind files, if possible; as large as possible otherwise.  Do not set this cache size higher than the total index size, as no performance improvement results.

  • Spatial Data Cache stats

    The MapViewer 10.1.2 UG talks about turning on the report_stats option in the conf file to dump out the cache stats every 5 minutes. Where does this data get dumped to? I assumed it would be the default log file, but I don't see this happening. What am I missing?

    Thanks, I switched the logging to finest and now I'm seeing the information being written. Another question: every time it reports, it always shows:
    MapViewer Spatial Data Cache info (upper limit=524288kb):
    In Memory Cache Size: 0 kb
    Nothing ever seems to be saved in cache. Why is this?

  • Possible to set CacheSize for the single-JVM version of the data cache?

    Hi -
    I'm using Kodo 2.3.2.
    Can I set the size for the data cache when I'm using a subclass of the single-JVM version of the
    data cache? Or is this only a feature for the distributed version?
    Here are more details...
    I'm using a subclass of LocalCache so I can display the cache contents.
    The kodo.properties contains these lines:
    com.solarmetric.kodo.DataCacheClass=com.siemens.financial.jdoprototype.app.TntLocalCache
    com.solarmetric.kodo.DataCacheProperties=CacheSize=10
    When my test program starts, it displays getConfiguration().getDataCacheProperties() and the value
    is displayed as "CacheSize=10".
    But when I load 25 objects by OID and display the contents of the cache, I see that all the objects
    have been cached (not just 10). Can you shed light on this?
    Thanks,
    Les

    The actual size of the cache is a bit more complex than just the CacheSize
    setting. The CacheSize is the number of hard references to maintain in the
    cache. So, the most-recently-used 10 elements will have hard refs to them,
    and the other 15 will be moved to a SoftValueCache. Soft references are not
    garbage-collected immediately, so you might see the cache size remain at
    25 until you run out of memory. (The JVM has a good deal of flexibility in
    how it implements soft references. The theory is that soft refs should stay
    around until absolutely necessary, but many JVMs treat them the same as
    weak refs.)
    Additionally, pinning objects into the cache has an impact on the cache
    size. Pinned objects do not count against the cache size. So, if you have
    15 pinned objects, the cache size could be 25 even if there are no soft
    references being maintained.
    -Patrick
    Patrick Linskey [email protected]
    SolarMetric Inc. http://www.solarmetric.com
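
    A generic illustration (not Kodo's actual code) of the scheme Patrick describes: the most-recently-used CacheSize entries are held by hard references, and anything evicted from that window drops to a soft reference that the JVM clears only under memory pressure, which is why more than CacheSize objects can still appear to be cached.

    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class HardSoftCache<K, V> {
        private final int cacheSize;
        private final Map<K, V> hard;                       // LRU window of hard references
        private final Map<K, SoftReference<V>> soft = new HashMap<>();

        public HardSoftCache(int cacheSize) {
            this.cacheSize = cacheSize;
            this.hard = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    if (size() > HardSoftCache.this.cacheSize) {
                        // Demote the eldest entry to a soft reference instead of dropping it.
                        soft.put(eldest.getKey(), new SoftReference<>(eldest.getValue()));
                        return true;
                    }
                    return false;
                }
            };
        }

        public void put(K key, V value) {
            soft.remove(key);
            hard.put(key, value);
        }

        public V get(K key) {
            V v = hard.get(key);
            if (v != null) return v;
            SoftReference<V> ref = soft.get(key);
            v = (ref == null) ? null : ref.get();
            if (v != null) put(key, v);                     // promote back into the hard window
            return v;
        }

        public static void main(String[] args) {
            HardSoftCache<Integer, String> cache = new HardSoftCache<>(10);
            for (int oid = 0; oid < 25; oid++) {
                cache.put(oid, "object-" + oid);
            }
            // All 25 may still be reachable: 10 via hard refs, the rest via soft refs.
            System.out.println(cache.get(0));
        }
    }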

  • Keep Oracle Spatial data in Coherence Caches?

    Can I keep Oracle Spatial data in Coherence Caches?

    You can store Oracle Spatial data in Coherence caches. But creating spatial indexes in Coherence needs too much effort, I guess. How would you create the Spatial geocoding package, map symbols, map styling rules, and Spatial network model?

  • TimesTen and Geo-spatial data

    Can TimesTen support geo-spatial data (and datatypes) as supported in Oracle? For example, if one created a set of Oracle tables that held geo-spatial data, could TimesTen be used in its "caching" form to speed geo-spatial queries?
    Thanks!
    --rick grehan

    No, sorry. TimesTen does not support geo-spatial datatypes or functions.
    Chris
