BDB cache size settings

Hi,
I have one question about bdb cache.
If I set je.maxMemory=1073741824 in je.properties to limit the BDB cache to 1 GB, is it possible for the actual size of the BDB cache to stay larger than 1 GB for a long period?
Thanks,
Yu

To add to what Linda said, you are correct that it is not a good idea to make the JE cache size too large relative to the heap size. Some extra room is needed for three reasons:
1) JE's calculations of memory size are approximate.
2) Your application may use variable amounts of memory, in addition to JE.
3) Java garbage collection will not perform well in some circumstances, if there is not enough free space.
The last reason is the big variable. To find out how it impacts performance, and how much extra room you need, you'll really have to go through a Java GC tuning exercise using a system similar to your production machine, and do a lot of testing.
I would certainly never leave less than 10 or 20% of the heap free, and with large heaps where GC is active you will probably need more free space than that.
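As a concrete illustration of that sizing (the heap number below is only an example, not a recommendation): with the 1 GB cache from the original question set in je.properties,

# je.properties - cap the JE cache at 1 GB
je.maxMemory=1073741824

a heap of roughly 1.3 GB (for example -Xmx1300m) leaves about 20-25% of the heap free for the application and for GC; the right amount of headroom is exactly what the GC tuning exercise described above has to establish.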
--mark

Similar Messages

  • Does bdb cache use JVM heap memory?

    I'm using BDB Java API. Do I need to set the heap size to be bigger than the db cache size?

    Hi Mimi,
    BDB cache size and JVM heap size are used for two different things. The cache size determines how much of the database's data can be held and modified in memory. If the cache size is not optimal, there will be too much I/O activity and performance will suffer.
    The JVM heap size is used by the JVM to store the objects that the application creates. If the heap size is too small and the application creates too many objects, the GC thread will be very active and will affect the performance of the application.
    Whether the heap size needs to be bigger than the cache size will depend upon how the application works.
    -Debsubhra Roy

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID linking it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
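    For readers unfamiliar with BDB's named sub-databases, here is a minimal sketch of how two of the databases above could share one physical file (the file name "webalizer.db" and the dbenv handle are assumptions for illustration; error handling omitted):
    DB *urls, *urls_hits;
    db_create(&urls, dbenv, 0);
    urls->open(urls, NULL, "webalizer.db", "urls", DB_BTREE, DB_CREATE, 0664);
    db_create(&urls_hits, dbenv, 0);
    /* secondary keyed by hit count; duplicates allowed since many URLs share a count */
    urls_hits->set_flags(urls_hits, DB_DUPSORT);
    urls_hits->open(urls_hits, NULL, "webalizer.db", "urls.hits", DB_BTREE, DB_CREATE, 0664);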
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at a speed of 20K records/sec.
    BDB is configured as a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And all of that happens after most of the analysis has been done, while just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed was as good as stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec (a sketch of this idea follows below).
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use them are generated. This improved speed from 4K rec/sec to 14K rec/sec.

    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I have these queries:
    1. What was the % of clean pages that you specified?
    2. At what interval was your thread calling memp_trickle?
    This would give me a rough idea about how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
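    A minimal sketch of that trickle-thread idea in the C API (shown with POSIX threads for illustration; the 20% target and 1-second interval are tuning knobs, not values from this thread; error handling omitted):
    #include <db.h>
    #include <pthread.h>
    #include <unistd.h>

    static volatile int keep_running = 1;

    /* Background thread: periodically ask the memory pool to write dirty pages
       so that roughly 20% of the cache stays clean. */
    static void *trickle_thread(void *arg)
    {
        DB_ENV *dbenv = (DB_ENV *)arg;
        int nwrote;

        while (keep_running) {
            (void)dbenv->memp_trickle(dbenv, 20, &nwrote);
            sleep(1);
        }
        return NULL;
    }

    /* started once after the environment is opened:
       pthread_create(&tid, NULL, trickle_thread, dbenv); */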

  • Individual cache size too large: maximum is 10TB

    Hi All,
    When the application tries to open the environment, the following error occurs:
    individual cache size too large: maximum is 10TB
    All BDB parameters are default. Environment created with the following flags: DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_THREAD | DB_REGISTER | DB_RECOVER;
    BDB version 4.8.24. OS Debian Linux 2.6.26-2-amd64
    What can cause this error?
    Modification of cache size in DB_CONFIG does not help.
    Thanks

    No, cache parameters are not set, they are default. So it is strange.
    Here is the code:
    env_flags = DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK | DB_INIT_LOG |
    DB_INIT_MPOOL | DB_THREAD | DB_REGISTER | DB_RECOVER;
    rc = db_env_create(&dbenv, 0);
    rc = dbenv->set_lk_detect(dbenv, DB_LOCK_MINWRITE);
    dbenv->set_event_notify(dbenv, db_event_callback);
    rc = dbenv->open(dbenv, work_dir, env_flags, 0);
    error checks are omitted.
    It works fine on a machine running Fedora x64 with 1GB RAM; the error occurs on Debian x64 with 4GB RAM.
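    For reference, the cache can also be set explicitly in code before DB_ENV->open, which rules out whatever default or DB_CONFIG value is being picked up (a sketch only; the 64MB figure is an arbitrary illustration, not a suggested value for this application):
    rc = db_env_create(&dbenv, 0);
    /* 64MB cache in one region, set before dbenv->open() */
    rc = dbenv->set_cachesize(dbenv, 0, 64 * 1024 * 1024, 1);
    rc = dbenv->open(dbenv, work_dir, env_flags, 0);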

  • Why does performance decrease as cache size increases?

    Hi,
    We have some very serious performance problems with our database. I have been trying to help by tuning the cache size, but the results are the opposite of what I expect.
    To create new databases with my data set, it takes about 8200 seconds with a 32 Meg cache. Performance gets worse as the cache size increases, even though the cache hit rate improves!
    I'd appreciate any insight as to why this is happening. 32 Meg does not seem like such a large cache that it would strain some system limitation.
    Here are some stats from db_stat -m
    Specified a 128 Meg cache size - test took 16076 seconds
    160MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    160MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    34M Requested pages found in the cache (93%)
    2405253 Requested pages not found in the cache
    36084 Pages created in the cache
    2400631 Pages read into the cache
    9056561 Pages written from the cache to the backing file
    2394135 Clean pages forced from the cache
    2461 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    40048 Current total page count
    40048 Current clean page count
    0 Current dirty page count
    16381 Number of hash buckets used for page location
    39M Total number of times hash chains searched for a page (39021639)
    11 The longest hash chain searched for a page
    85M Total number of hash chain entries checked for page (85570570)
    Specified a 64 Meg cache size - test took 10694 seconds
    80MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    80MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    31M Requested pages found in the cache (83%)
    6070891 Requested pages not found in the cache
    36104 Pages created in the cache
    6066249 Pages read into the cache
    9063432 Pages written from the cache to the backing file
    5963647 Clean pages forced from the cache
    118611 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    20024 Current total page count
    20024 Current clean page count
    0 Current dirty page count
    8191 Number of hash buckets used for page location
    42M Total number of times hash chains searched for a page (42687277)
    12 The longest hash chain searched for a page
    98M Total number of hash chain entries checked for page (98696325)
    Specified a 32 Meg cache size - test took 8231 seconds
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10812846)
    35981 Pages created in the cache
    10M Pages read into the cache (10808327)
    9200273 Pages written from the cache to the backing file
    9335574 Clean pages forced from the cache
    1498651 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    10012 Current clean page count
    0 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47429232)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118218066)
    vmstat says that a few minutes into the test, the box is spending 80-90% of its time in iowait. That worsens as the test continues.
    System and test info follows.
    We have 10 databases (in 10 files) sharing a database environment. We are using a hash table since we expect data accesses to be pretty much random.
    We are using the default cache type: a memory-mapped file. Using DB_PRIVATE did not improve performance.
    The database environment is created with these flags:
    DB_CREATE | DB_THREAD | DB_INIT_CDB | DB_INIT_MPOOL
    The databases are opened with only the DB_CREATE flag.
    There is only one process accessing the db. In my tests, only one thread accesses the db, doing only writes.
    We do not use transactions.
    My data set is about 550 Meg of plain ASCII text data: 13 million inserts and 2 million deletes. Key size is 32 bytes, data size is 4 bytes.
    BDB 4.6.21 on Linux, 1 Gig of RAM.
    Filesystem = ext3, page size = 4K.
    The test system is not doing anything else while I am testing.
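    For reference, a minimal sketch of the setup described above in the C API (the environment path and database file name are made up for illustration; error handling omitted):
    DB_ENV *dbenv;
    DB *dbp;

    db_env_create(&dbenv, 0);
    /* the 32 Meg cache from the fastest test above, in a single region */
    dbenv->set_cachesize(dbenv, 0, 32 * 1024 * 1024, 1);
    dbenv->open(dbenv, "/path/to/env",
        DB_CREATE | DB_THREAD | DB_INIT_CDB | DB_INIT_MPOOL, 0);

    db_create(&dbp, dbenv, 0);
    /* hash access method, matching the mostly random access pattern */
    dbp->open(dbp, NULL, "data1.db", NULL, DB_HASH, DB_CREATE, 0664);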

    Surprising result: I tried closing the db handles with DB_NOSYNC and performance
    got worse. Using a 32 Meg cache, it took about twice as long to run my test:
    15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
    Here is some data from db_stat -m when using DB_NOSYNC:
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10811882)
    44864 Pages created in the cache
    10M Pages read into the cache (10798480)
    7380761 Pages written from the cache to the backing file
    3452500 Clean pages forced from the cache
    7380761 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    5001 Current clean page count
    5011 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47428268)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118169805)
    It looks like not flushing the cache regularly is forcing a lot more
    dirty pages (and fewer clean pages) from the cache. Forcing a
    dirty page out is slower than forcing a clean page out, of course.
    Is this result reasonable?
    I suppose I could try to sync less often than I have been, but more often
    than never to see if that makes any difference.
    When I close or sync one db handle, I assume it flushes only that portion
    of the dbenv's cache, not the entire cache, right? Is there an API I can
    call that would sync the entire dbenv cache (besides closing the dbenv)?
    Are there any other suggestions?
    Thanks,
    Eric
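    Regarding the question about flushing the whole environment cache: the C API exposes DB_ENV->memp_sync for that. A minimal sketch, assuming an open dbenv handle as elsewhere in this thread:
    /* Passing NULL for the LSN flushes all modified pages in the
       environment's cache to their backing files. */
    int ret = dbenv->memp_sync(dbenv, NULL);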

  • Problems with increasing/decreasing cache size when live

    Hello,
    I have configured multiple environments which I'm compacting sequentially and to achieve this I allocate a bigger cache to the env currently being compacted as follows:
    Initialization:
    DB_ENV->set_cachesize(gbytes, bytes, 1); // Initial cache size.
    DB_ENV->set_cache_max(gbytes, bytes); // Maximum size.
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat, I can see that everything is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    But the problem is that over time memory is leaked (as if the extra memory of each env was not freed) and I'm totally sure that the problem comes from this code.
    I'm running Berkeley DB 4.7.25 on FreeBSD.
    Maybe some leak was fixed in newer versions and you could suggest a patch to me? Or am I not using the API correctly?
    Thanks!

    Hi,
    Thanks for providing the information.
    Unfortunately, I don't remember the exact test case I was running, so I did a new one with 32 envs.
    I set the following for each env:
    - Initial cache=512MB/32
    - Max=1GB
    Before open, I do:
    DBEnvironment->set_cachesize((u_int32_t)0, (u_int32_t)512*1024*1024/32, 1);
    DBEnvironment->set_cache_max(1*1024*1024*1024, 0);
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=1 and obytes=0
    After open, I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: cache_size=18644992 cache_ncache=1
    So here, the values returned by memp_stat are normal but get_cache_max is strange. Then after increasing the cache to the strange value returned by get_cache_max (gbytes=0, obytes=9502720), I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: outlinks cache_size=27328512 cache_ncache=54
    with cache_size being: ((ui64)sp->st_gbytes * GIGA + sp->st_bytes);.
    So cache is actually increased...
    I tried to reproduce this case by opening 1 env as follows.
    //Before open
    DbEnv->set_cachesize(); 512MB, 1 cache
    DbEnv->set_cache_max; 1GB
    //After open
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    //Decrease the cache size
    DbEnv->set_cachesize(); 9MB(9502720B), 1 cache
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    All of these results are expected, since when the cache is resized after the DbEnv is open, the size is rounded to the nearest multiple of the region size. Region size means the size of each region specified initially. Please refer to the BDB doc: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Here the region size is 512MB/1 cache = 512MB, and I don't think you can resize the cache smaller than 1 region.
    Since you are opening 32 envs at the same time with a 512MB cache and a 1GB maximum for each, whether each env can allocate as much as was specified for its cache when it is opened depends on the system. My guess is that the number 9502720 returned by get_cache_max after opening the env reflects what the system and the application's request actually allowed, i.e. the cache size you could get when opening the env.
    And for the case listed in the beginning of the post:
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat I can see that everything is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    When env1 is finishing soon, what numbers do you set in set_cachesize to decrease the cache, including the number of caches and the cache size?

    When decreasing the cache, I do:
    env->GetDbEnv()->set_cachesize((u_int32_t)0, (u_int32_t)20973592, 0);
    I mean, in all cases I simply set the cache size to its original value (obtained after open through get_cachesize) when decreasing, and to its max value (obtained through get_cache_max; plus I do something like cacheMaxSize * 0.75 if < 500MB) when increasing.

    I can reproduce this case, and I think the result is expected. When using DbEnv->set_cachesize() to resize the cache after the env is opened, the ncache parameter is ignored. Please refer to the BDB doc here: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Hence I don't think you can decrease the cache size by setting the number of caches to 0.
    Hope it helps.
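    For context, a minimal C sketch of the initial-size/maximum-size pattern discussed in this thread, using the 512MB initial / 1GB maximum figures from the test above (the environment path is an illustration; error handling omitted):
    DB_ENV *dbenv;
    db_env_create(&dbenv, 0);

    /* Before open: 512MB initial cache in one region, 1GB ceiling. */
    dbenv->set_cachesize(dbenv, 0, 512 * 1024 * 1024, 1);
    dbenv->set_cache_max(dbenv, 1, 0);

    dbenv->open(dbenv, "/path/to/env", DB_CREATE | DB_INIT_MPOOL, 0);

    /* After open: grow toward the maximum. The ncache argument is ignored
       here, and the new size is rounded to a multiple of the region size. */
    dbenv->set_cachesize(dbenv, 1, 0, 0);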

  • Page & cache size performance tuneup

    Hi
    I am doing a performance evaluation of BDB. Please help me find answers to the queries below.
    1. Page size: do I need to choose the page size based on my XML document size? Is there any relation (formula) between page size and XML document size to get optimum memory usage?
    2. Cache size: does the cache size need to be equal to or larger than the document size to minimize query response time? Could you please suggest an optimum cache size for a 1MB XML document?
    3. I started with BDB version 2.3.10, but I read in this forum that there is some performance improvement in a newer version. Which version should I use for my evaluation? Is the latest (4.6.21) the best (most stable)?
    4. Are there any other parameters (other than page and cache size) that I need to tune to get optimum memory usage and minimal CPU utilization?
    Is there any reference document where I can get more details on BDB performance?
    Thanks,
    Santhosh

    Hi Santhosh,
    It’s hard to give solid suggestions without knowing more about your application, what you are measuring and what your performance requirements are. What language are you implementing in?
    Is query response time most important, or document insertion or updates?
    I am going to request that you respond to this Performance Questionnaire and answer as many questions as you can at this time. Send the questionnaire to me at Ron dot Cohen at Oracle.
    http://forums.oracle.com/forums/ann.jspa?annID=426
    In addition to the information requested, you can see from the questionnaire that the utility db_stat -m is useful for looking at a number of things, including the effectiveness of the cache size you have.
    Have you taken any measurements yet? I would suggest going with the default pagesize but using a cachesize larger than the default. I don’t know how much real memory you have but for a first measurement you could try a cachesize of 100MB-500MB (or larger) depending on your workload and how much memory you have available. I am not recommending that as a final cache size, just giving you a number to start with.
    http://tinyurl.com/2mfn6f
    You will likely find a lot of improvements in performance can be obtained by your indexing strategy. This may be where you get the best results. You may want to spend some time reviewing that and the documentation on indexes:
    http://tinyurl.com/2522sc
    Also, take a look in the same document at the indexing sections.
    Berkeley DB XML 2.3 (Berkeley DB 4.5.20) should be fine to start (though you may have read on this forum about the speed improvements in Berkeley DB XML 2.4 which is currently in test mode).
    Please do respond to the survey, send it to me and we will try to help you further.
    Ron
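    If the environment is configured through a DB_CONFIG file in the environment home directory, a starting cache along the lines Ron suggests could be expressed as below (256MB is just an illustrative value within the 100MB-500MB range mentioned above; the set_cachesize syntax is described in the documentation quoted later on this page):
    set_cachesize 0 268435456 1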

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, and we are working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    I have done some Googling and found that we need to add something to the essbase.cfg file, as described below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it out, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500  CALCLOCKBLOCKDEFAULT 200  CALCLOCKBLOCKLOW 50 
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    Support doc is saying to change your config file so those settings can be made available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.

  • In EJB3 entities, what is the equiv. of key-cache-size for PK generation?

    We have an oracle sequence which we use to generate primary keys. This sequence is set to increment by 5.
    e.g.:
    create sequence pk_sequence increment by 5;
    This is so weblogic doesn't need to query the sequence on every entity bean creation, it only needs to query the sequence every 5 times.
    With CMP2 entity beans and automatic key generation, this was configured simply by having the following in weblogic-cmp-rdbms-jar.xml:
    <automatic-key-generation>
    <generator-type>Sequence</generator-type>
    <generator-name>pk_sequence</generator-name>
    <key-cache-size>5</key-cache-size>
    </automatic-key-generation>
    This works great: the IDs created are 10, 11, 12, 13, 14, 15, 16, etc., and weblogic only needs to hit the sequence once for every 5 IDs.
    However, we have been trying to find the equivalent with the EJB3-style JPA entities:
    We've tried
    @SequenceGenerator(name = "SW_ENTITY_SEQUENCE", sequenceName = "native(Sequence=pk_sequence, Increment=5, Allocate=5)")
    @SequenceGenerator(name = "SW_ENTITY_SEQUENCE", sequenceName = "pk_sequence", allocationSize = 5)
    But with both configurations, the autogenerated IDs are 10, 15, 20, 25, 30, etc - weblogic seems to be getting a new value from the sequence every time.
    Am I missing anything?
    We are using weblogic 10.3

    If you are having a problem it is not clear what it is from what you have said.  If you have sugestions for improving some shortcomings you see in Flash CC then you should submit them to:
    Adobe - Wishlist & Bug Report
    http://www.adobe.com/cfusion/mmform/index.cfm?name=wishform

  • Java.sql.SQLException: Statement cache size has not been set

    All,
    I am trying to create a lightweight SQL layer. It uses JDBC to connect to the database via weblogic. When my application connects to the database using JDBC alone (outside of weblogic), everything works fine. But when the application goes via weblogic, I am able to run Statement objects successfully, yet when I try to run PreparedStatements I get the following error:
    java.sql.SQLException: Statement cache size has not been set
    at weblogic.rjvm.BasicOutboundRequest.sendReceive(BasicOutboundRequest.java:108)
    at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:138)
    at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_812_WLStub.prepareStatement(Unknown Source)
    I have checked the StatementCacheSize and it is 10. Is there any other setting that needs to be set for this to work? Has anybody seen this error before? Any help will be greatly appreciated.
    Thanks.

    Pooja Bamba wrote:
    I just noticed that I did not copy the jdbc log fully earlier. Here is the log:
    JDBC log stream started at Thu Jun 02 14:57:56 EDT 2005
    DriverManager.initialize: jdbc.drivers = null
    JDBC DriverManager initialized
    registerDriver: driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    Oracle Jdbc tracing is not avaliable in a non-debug zip/jar file
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    DriverManager.getDriver("jdbc:oracle:oci:@devatl")
    trying driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    getDriver returning driver[className=oracle.jdbc.driver.OracleDriver,oracle.jdbc.driver.OracleDriver@12e0e2f]
    registerDriver: driver[className=weblogic.jdbc.jts.Driver,weblogic.jdbc.jts.Driver@c0a150]
    registerDriver: driver[className=weblogic.jdbc.pool.Driver,weblogic.jdbc.pool.Driver@17dff15]
    SQLException: SQLState(null) vendor code(17095)
    java.sql.SQLException: Statement cache size has not been set
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
         at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:269)
    Hi. Ok. This is an Oracle driver bug/problem. Please show me the pool's definition
    in the config.xml file. I'll bet you're defining the pool in an unusual way. Typically
    we don't want any driver-level pooling to be involved. It is superfluous to the functionality
    we provide, and can also conflict.
    Joe
         at oracle.jdbc.driver.OracleConnection.prepareCallWithKey(OracleConnection.java:1037)
         at weblogic.jdbc.wrapper.PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
         at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection.prepareCallWithKey(Unknown Source)
         at weblogic.jdbc.rmi.internal.ConnectionImpl_weblogic_jdbc_wrapper_PoolConnection_oracle_jdbc_driver_OracleConnection_WLSkel.invoke(Unknown Source)
         at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:477)
         at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:420)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:353)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:144)
         at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:415)
         at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:30)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
    SQLException: SQLState(null) vendor code(17095)

  • Client cache size reverts to default

    (SCCM 2012 SP1 with CU5)
    I've used a powershell script to set the client cache size to 20GB, using get-wmiobject and the Put() function.  I've deployed this to a test collection of PCs using a CI with remediation, and it's both checking and remediating correctly.
    However, while the 'compliance' is reported at near 100%, when manually checking the clients, many have reset to 5120MB. This affects both Windows 7 and 8.x based clients.
    To take one client specifically, at the time of writing this the CI reports it ran and remediated at 11:45am - it is now 8pm, and the cache is back to 5120.
    Triggering the CI to evaluate again sets the cache to 20GB (verified via get-wmiobject), but I'm sure if I check again at some point tomorrow it will have reverted to 5120 again.
    I can't see any evidence in any client log that indicates what could be causing this.  I've looked into how I can see what has made the change in WMI via the Analytic/Debug logs in Event Viewer, but the tracing available needs to be run at the
    time the change is made - and I don't know exactly when that is.
    Any ideas?!
    Thanks, Bob
    Edit: The CI is set to evaluate every day at 9am. Client push properties on the primary site now include SMSCACHESIZE=20480, but this is a recent change - when the existing clients were installed this was not present, so they installed at 5120MB.

    Changing the value directly in WMI is unsupported. You need to use the UIResource.UIResourceMgr COM object to perform this task on the client side:
    http://msdn.microsoft.com/en-us/library/cc145211.aspx
    PowerShell examples are available at
    http://www.david-obrien.net/2013/02/07/how-to-configure-the-configmgr-client/
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Question of Berkeley DB "cache size"

    quote:
    Set the size of the shared memory buffer pool, that is, the size of the cache.
    The cache should be the size of the normal working data set of the application, with some small amount of additional memory for unusual situations. (Note: the working set is not the same as the number of pages accessed simultaneously, and is usually much larger.)
    The default cache size is 256KB, and may not be specified as less than 20KB. Any cache size less than 500MB is automatically increased by 25% to account for buffer pool overhead; cache sizes larger than 500MB are used as specified. The current maximum size of a single cache is 4GB. (All sizes are in powers-of-two, that is, 256KB is 2^18 not 256,000.)
    The database environment's cache size may also be set using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_cachesize", one or more whitespace characters, and the cache size specified in three parts: the gigabytes of cache, the additional bytes of cache, and the number of caches, also separated by whitespace characters. For example, "set_cachesize 2 524288000 3" would create a 2.5GB logical cache, split between three physical caches. Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.
    This method configures a database environment, including all threads of control accessing the database environment, not only the operations performed using a specified Environment handle.
    This method may not be called after the environment has been opened. If joining an existing database environment, any information specified to this method will be ignored.
    This method may be called at any time during the life of the application.
    Parameters:
    cacheSize The size of the shared memory buffer pool, that is, the size of the cache.
    The question:
    I have a host with 16GB of total memory.
    I don't understand what this part of the document means.
    What is the maximum cache size that can be set?
    4GB? 16GB?
    Or cacheCount (4) * 4GB = 16GB?
    My Email: [email protected]

    What version of Berkeley DB are you using?
    I'm a little confused about what you are quoting. Most of your quote seems to be from DB_ENV->set_cachesize(), but set_cachesize does not have a parameter named cacheSize. The parameters for set_cachesize are gbytes, bytes and ncache.
    You use set_cachesize to specify the logical cache that you can optionally split into more than one physical region. The maximum size of the logical cache is 4GB and there is only one logical cache. You specify the total size of the logical cache with the gbytes and bytes parameters. If you set ncache to a value greater than 1, you split this logical cache into separate physical regions. So, for example, if you specify (gbytes=2, bytes=0, ncache=1) you will have a logical cache of 2GB that internally is split into 2 separate physical regions of 1GB each.
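    As a concrete sketch of those parameters in the C API, mirroring the "set_cachesize 2 524288000 3" DB_CONFIG example quoted above (error handling omitted):
    /* 2GB + 500MB logical cache, split across 3 physical cache regions;
       must be called before the environment is opened. */
    dbenv->set_cachesize(dbenv, 2, 524288000, 3);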
    You can read more about the memory pool cache in the Reference Guide sections "Selecting a cache size" and "Configuring the memory pool".
    If you have other Berkeley DB questions that are not specific to replication, you should direct them to the general Berkeley DB forum where you will have the benefit of a wider set of Berkeley DB experts:
    Berkeley DB
    Paula Bingham
    Oracle

  • Trying to change the cache size of FF3.6 from 75MB to a larger size, but it only applies on a per-session basis. I checked about:config and the changes have applied, but when I restart FF it has reset itself to 75 :(

    As per the question, I tried to up the cache from 75MB to 300MB but it resets after I restart Firefox; I have tried various other cache sizes but to no avail.
    -=EDIT=-
    It must be something to do with the profile, as when I set up a new profile in the profile manager the cache size problem no longer appears. But now, how do I repair my profile?

    OK, nothing in that text file helped, but the original file it was based on pointed me in the direction that it might be an extension. The only extensions I have are NoScript and FasterFox Lite...
    I have now traced the fault to FasterFox. If you are not familiar with FasterFox, it speeds up internet connections in Firefox. Several of the options are presets, but when I selected custom it gave me the option of a cache setting, which was set to 75MB.
    I have now changed that cache setting in FasterFox to 300MB and it is now persistent in Firefox on restart.
    Hopefully this information will be helpful to other people in the future who suffer the same problem.
    Thanks for your help TonyE, it's greatly appreciated.

  • Queries regarding cache sizes

    To optimize the calculation script I have set the compression type of my cube to RLE (the calculation script previously ran in about 6 minutes and now takes 2 minutes; the data file exported using DATAEXPORT is the same).
    The maximum index cache has been set to 4097152 KB (i.e. 3.9 GB). Is it OK to set the index cache so high even though my index file size is less than 1 GB?
    1) How do I conclude that the maximum value for the data cache is 36000000 KB? What factors do I need to take into consideration?
    2) Data cache maximum is 36000000 KB (i.e. 34.33 GB). Is that a practical approach?
    Regards
    Shenna

    Hi,
    Index Cache:
    The doc suggests having 1 MB of index cache for buffered I/O and 10 MB of index cache for direct I/O.
    While you can use this recommendation to start with- You're the right person to arrive at the actual figure by doing some trials relevant to your environment.
    Data Cache:
    Again, the doc suggests: data cache = 0.125 * the value of the data file cache size,
    where the suggested data file cache size = the combined size of all essn.pag files, if possible; otherwise as large as possible. (The data file cache setting is not used if Essbase is set to use buffered I/O.)
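    To make that formula concrete with illustrative numbers (not from this thread): if the combined size of all essn.pag files were 8 GB, the suggested data file cache would be about 8 GB and the data cache about 0.125 * 8 GB = 1 GB, which suggests the 34.33 GB figure above is far larger than this guideline would produce.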
    It's prudent to do trials independently for each of the caches!
    It's worth reading all the posts of the thread @ Understanding Buffered I/O and Direct I/O to understand experts' opinions !
    Best of luck :)
    - Natesh

  • Callable Statement Cache Size

    Hi all,
    while using some dynamic stored procedures I get the following error:
    [BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
    I'm using WL8.1 and Sql Server 2000.
    The stored procedure contains two different queries where the table name is a stored procedure parameter.
    The first time it works great; after that I always get this error.
    Reading the BEA docs I found:
    There may be other issues related to caching prepared statements that are not
    listed here. If you see errors in your system related to prepared statements,
    you should set the prepared statement cache size to 0, which turns off prepared
    statement caching, to test if the problem is caused by caching prepared statements.
    If I set the prepared statement cache size to 0 everything works great, but that does not seem like the best way.
    Should we expect Bea to solve this problem?
    Or whatever else solution?
    such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
    dynamically ?
    thks in advance
    Leonardo

    Caching works well for DML and that's what it is supposed to do. But it looks like you are doing DDL, which means your tables might be getting created/dropped/altered, which effectively invalidates the cache. So you should try to turn the cache off.
    "leonardo" <[email protected]> wrote in message
    news:40b1bb75$1@mktnews1...
    >
    >
    Hi all,
    while using some dinamyc store procedures I get in the following error:
    [BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
    I'm using WL8.1 and Sql Server 2000.
    Store procedure contains two different queries where table name is a storeprocedure's
    parameter.
    The first time it works great, after that I always have this error:
    Reading bea doc's I found
    There may be other issues related to caching prepared statements that arenot
    listed here. If you see errors in your system related to preparedstatements,
    you should set the prepared statement cache size to 0, which turns offprepared
    statement caching, to test if the problem is caused by caching preparedstatements.
    If I set prepared statement cache size to 0 everything works great butthat does
    not seem the better way.
    Should we expect Bea to solve this problem?
    Or whatever else solution?
    such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
    dynamically ?
    thks in advance
    Leonardo
