Question about Berkeley DB "cache size"

quote:
Set the size of the shared memory buffer pool, that is, the size of the cache.
The cache should be the size of the normal working data set of the application, with some small amount of additional memory for unusual situations. (Note: the working set is not the same as the number of pages accessed simultaneously, and is usually much larger.)
The default cache size is 256KB, and may not be specified as less than 20KB. Any cache size less than 500MB is automatically increased by 25% to account for buffer pool overhead; cache sizes larger than 500MB are used as specified. The current maximum size of a single cache is 4GB. (All sizes are in powers-of-two, that is, 256KB is 2^18 not 256,000.)
The database environment's cache size may also be set using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_cachesize", one or more whitespace characters, and the cache size specified in three parts: the gigabytes of cache, the additional bytes of cache, and the number of caches, also separated by whitespace characters. For example, "set_cachesize 2 524288000 3" would create a 2.5GB logical cache, split between three physical caches. Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.
This method configures a database environment, including all threads of control accessing the database environment, not only the operations performed using a specified Environment handle.
This method may not be called after the environment has been opened. If joining an existing database environment, any information specified to this method will be ignored.
This method may be called at any time during the life of the application.
Parameters:
cacheSize The size of the shared memory buffer pool, that is, the size of the cache.
The question:
I have a host whose total memory is 16GB.
I don't understand what this documentation means.
What is the maximum cache size that can be set?
4GB? 16GB?
Or cacheCount (4) * 4GB = 16GB?

What version of Berkeley DB are you using?
I'm a little confused about what you are quoting. Most of your quote seems to be from DB_ENV->set_cachesize(), but set_cachesize does not have a parameter named cacheSize. The parameters for set_cachesize are gbytes, bytes and ncache.
You use set_cachesize to specify the logical cache that you can optionally split into more than one physical region. The maximum size of the logical cache is 4GB and there is only one logical cache. You specify the total size of the logical cache with the gbytes and bytes parameters. If you set ncache to a value greater than 1, you split this logical cache into separate physical regions. So, for example, if you specify (gbytes=2, bytes=0, ncache=2) you will have a logical cache of 2GB that internally is split into 2 separate physical regions of 1GB each.
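For concreteness, here is a minimal sketch of that example using the C++ API (the environment path and flags are illustrative; error handling via exceptions is omitted):
#include <db_cxx.h>

DbEnv env(0);
// A 2GB logical cache split into 2 physical regions of 1GB each.
// This must be configured before DbEnv::open(), and it is ignored
// when joining an existing environment.
env.set_cachesize(2 /* gbytes */, 0 /* bytes */, 2 /* ncache */);
env.open("/path/to/env", DB_CREATE | DB_INIT_MPOOL, 0);
The equivalent DB_CONFIG line, per the documentation quoted above, would be "set_cachesize 2 0 2".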
You can read more about the memory pool cache in the Reference Guide sections "Selecting a cache size" and "Configuring the memory pool".
If you have other Berkeley DB questions that are not specific to replication, you should direct them to the general Berkeley DB forum where you will have the benefit of a wider set of Berkeley DB experts:
Berkeley DB
Paula Bingham
Oracle

Similar Messages

  • Cache size question...

    howdy,
    just a quick one.
    i've noticed in the new firefox (1.5) there is an option in preferences to limit the cache size. however, there seems to be no option in safari.
    :: is there a method to set the cache size in safari? ::
    any info is greatly appreciated.
    cheers...
    ryan

    ryan,
    I am Terminally challenged, so I use CLIX and it has both Caches Off/Caches On for Safari.
    I do not see a command for setting a cache limit.
    Caches Off (rm -fr ~/Library/Caches/Safari;ln -s /dev/null ~/Library/Caches/Safari)
    Caches On (rm ~/Library/Caches/Safari)
    ;~)

  • Jinitiator Question - JAR Cache Size Location

    Using Jinitiator 1.3.1.22 - on Windows 2000 Pro
    Does anyone know where this setting is stored on the PC when set in the Java Console? I have looked at the properties121222 in the .jinit folder under the User profile in Docs and Settings and it isn't in there!

    Thanks Francois...I know that bit. I want to know where the value for the default is actually derived from for my installation. According to the Forms Services Deployment Guide 'The default cache size for Oracle JInitiator is 20000000. This is set for you when you install Oracle JInitiator.'
If you override the default in the Jini Configuration Panel, its value will appear in a text file in the user's .jini folder. Where is it held before that?!
When Jini is installed at my workplace, the default for the JAR cache is 50MB. No one knows why or how it differs from what the documentation states! This is what I am trying to get to the bottom of!

  • Berkeley DB Tree Size

    Hi,
I am using Berkeley DB to store a large number of keys/values. I set the database type to BTREE, e.g. DbConfig.setType(DatabaseType.BTREE);
I want to ask whether the speed of retrieving a key/value from the Berkeley database defined above depends on the key/value size. In other words, if the value size was, for example, 2MB and over time it increased to 4MB, does this affect retrieval performance? If yes, then please tell me how to overcome this problem.
    Best regards,
    Ahmad.

    Hi Ahmad,
    Is this a hypothetical question, or did you experience any problems with retrieving records?
    Will you use transactions in your application? What release of Berkeley DB are you using and what API?
    How many separate processes does your application have? How many threads of control in each process? What is the number of threads of control which will simultaneously retrieve records from the database?
    Is the database configured to support record numbers?
    Is the database configured for duplicate data items?
    What do the keys look like?
How is the data retrieved (partial retrieval, cursors, etc.)? How are the keys sorted? What is the cache size?
    Related docs:
    Retrieving Btree records by logical record number: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/bt_recnum.html
    Retrieving records: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am/get.html
    Retrieving records with a cursor: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am/curget.html
    Retrieving records in bulk: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/get_bulk.html
    Partial record storage and retrieval: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/partial.html
    Retrieved key/data permanence for C/C++: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/perm.html
    Access method tuning (there is a paragraph for large key/data items): http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/tune.html
    The page size implicitly sets the size of an overflow record. Overflow records are key or data items that are too large to fit on a normal database page because of their size, and are therefore stored in overflow pages. Overflow pages are pages that exist outside of the normal database structure. For this reason, there is often a significant performance penalty associated with retrieving or modifying overflow records.
Usually, having large records will churn the cache, since a significant part of the cache will have to be flushed in order to retrieve a new data item. I think that if you want to test this, you'll have to do it in a multi-threaded program.
    Also, performance will be better if all of the pages that make up a large record are contiguous, and that happens by default if the records are inserted in order.
    Bogdan Coman
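
    (Aside, not from the thread: the partial-record retrieval mentioned in the links above looks roughly like this in the C++ API. The db handle, key, and sizes here are hypothetical; the point is that only the requested byte range of a large value is fetched.)
    Dbt key((void *)"some-key", 8);   // hypothetical 8-byte key
    Dbt data;
    // Request only bytes [0, 65536) of the stored value instead of
    // the whole multi-megabyte record.
    data.set_flags(DB_DBT_PARTIAL);
    data.set_doff(0);       // offset into the value
    data.set_dlen(65536);   // number of bytes wanted
    int ret = db->get(NULL, &key, &data, 0);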

  • Problems setting cache size

    Hi,
    I hope I'm in the right category for this question:
    I'm reading in a csv table and putting it in one primary database (DB_QUEUE)
    and several secondary databases (DB_BTREE). I'm using Berkeley DB 4.7
    with C++. For the primary database I leave the standard cache size; for the secondary
    databases I want to use 16MB:
    unsigned long cache_byte = (1024*1024*16);
    sec[a]->set_cachesize(0, cache_byte, 1);
    sec[*] are the secondary databases,
    and the cache size is set before opening the databases.
    The problem is that when I run the program it allocates more and more memory,
    but it should just use a little more than a * 16MB (one cache per secondary database).
    Can somebody help me?
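
    (Aside, not from the thread: when a Db handle is not opened inside a shared DbEnv, each handle gets its own private memory pool, and per the documentation quoted at the top, caches under 500MB are grown by about 25% for overhead. A hypothetical C++ sketch of the usual alternative, one environment cache shared by every database; the names and sizes are made up.)
    #include <db_cxx.h>

    DbEnv env(0);
    env.set_cachesize(0, 64 * 1024 * 1024, 1);  // one shared 64MB cache
    env.open("/path/to/env", DB_CREATE | DB_INIT_MPOOL, 0);

    Db primary(&env, 0);       // created inside the env: uses the shared pool
    primary.set_re_len(64);    // DB_QUEUE requires a fixed record length
    primary.open(NULL, "primary.db", NULL, DB_QUEUE, DB_CREATE, 0);

    Db secondary(&env, 0);     // no per-handle set_cachesize needed
    secondary.open(NULL, "sec.db", NULL, DB_BTREE, DB_CREATE, 0);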

    Welcome to the forums !
    You might get a better/faster response in the Berkeley DB Forum
    Berkeley DB
    HTH
    Srini

  • Page & cache size performance tuneup

    Hi
    I am doing a performance evaluation of BDB. Please help me find answers to the queries below.
    1. Page size: do I need to choose the page size based on my XML document size? Is there any relation (formula) between page size and XML document size to get optimum memory usage?
    2. Cache size: does the cache size need to be equal to or larger than the doc size to minimize the query response time? Could you please suggest an optimum cache size for a 1MB XML document?
    3. I have started with BDB version 2.3.10, but I read in this forum that there is some performance improvement in a newer version. What version should I use for my evaluation? Is the latest (4.6.21) the best (most stable)?
    4. Are there any other parameters (other than page and cache size) I need to tune to get optimum memory usage and minimal CPU utilization?
    Is there any reference document where I can get more details on BDB performance?
    Thanks,
    Santhosh

    Hi Santhosh,
    It’s hard to give solid suggestions without knowing more about your application, what you are measuring and what your performance requirements are. What language are you implementing in?
    Is query response time most important, or document insertion or updates?
    I am going to request that you respond to this Performance Questionnaire and answer as many questions as you can at this time. Send the questionnaire to me at Ron dot Cohen at Oracle.
    http://forums.oracle.com/forums/ann.jspa?annID=426
    In addition to the information requested, you can see from the questionnaire that the utility db_stat -m is useful for looking at a number of things, including the effectiveness of the cache size you have.
    Have you taken any measurements yet? I would suggest going with the default page size but using a cache size larger than the default. I don't know how much real memory you have, but for a first measurement you could try a cache size of 100MB-500MB (or larger), depending on your workload and how much memory you have available. I am not recommending that as a final cache size, just giving you a number to start with.
    http://tinyurl.com/2mfn6f
    You will likely find a lot of improvements in performance can be obtained by your indexing strategy. This may be where you get the best results. You may want to spend some time reviewing that and the documentation on indexes:
    http://tinyurl.com/2522sc
    Also, take a look in the same document at the indexing sections.
    Berkeley DB XML 2.3 (Berkeley DB 4.5.20) should be fine to start (though you may have read on this forum about the speed improvements in Berkeley DB XML 2.4 which is currently in test mode).
    Please do respond to the survey, send it to me and we will try to help you further.
    Ron

  • Trying to change the cache size of FF3.6 from 75MB to a larger size; it only applies on a per-session basis. I checked about:config and the changes have applied, but when I restart FF it has reset itself to 75 :(

    As per the question, I tried to up the cache from 75MB to 300MB but it resets after I restart Firefox; I have tried various cache sizes but to no avail.
    -=EDIT=-
    It must be something to do with the profile, as when I set up a new profile in the manager the cache size problem no longer appears. But now, how do I repair my profile?

    OK, nothing in that text file helped, but the original file that it was based on pointed me in the direction that it might be an extension. The only extensions I have are NoScript and FasterFox Lite....
    I have now traced the fault to FasterFox... if you are not familiar with FasterFox, it speeds up internet connections in Firefox... several of the options are presets... but when I selected custom it gave me the option of a cache setting, which was set to 75MB.
    I have now changed that cache setting in FasterFox to 300MB and it is now persistent in Firefox on restart.
    Hopefully this information will be helpful to other people in the future who suffer the same problem.
    Thanks for your help TonyE, it's greatly appreciated

  • New FAQ Entry on JVM Parameters for Large Cache Sizes

    I've posted a new FAQ entry (http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#60) on JVM parameters for large cache sizes. The text of it is as follows:
    What JVM parameters should I consider when tuning an application with a large cache size?
    If your application has a large cache size, tuning the Java GC may be necessary. You will almost certainly be using a 64-bit JVM (i.e. -d64), the -server option, and setting your heap size with -Xmx and -Xms. Be sure that you don't set the cache size too close to the heap size, so that your application has plenty of room for its data and to avoid excessive full GCs. We have found that the Concurrent Mark Sweep GC is generally the best in this environment since it yields more predictable GC results. This can be enabled with -XX:+UseConcMarkSweepGC.
    Best practice dictates that you disable System.gc() calls with -XX:+DisableExplicitGC.
    Other JVM options which may prove useful are -XX:NewSize (start with 512m or 1024m as a value), -XX:MaxNewSize (try 1024m as a value), and -XX:CMSInitiatingOccupancyFraction=55. NewSize is typically tuned in relationship to the overall heap size so if you specify this parameter you will also need to provide a -Xmx value. A convenient way of specifying this in relative terms is to use -XX:NewRatio. The values we've suggested are only starting points. The actual values will vary depending on the runtime characteristics of the application.
    You may also want to refer to the following articles:
    * Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning
    * The most complete list of -XX options for Java 6 JVM
    * My Favorite Hotspot JVM Flags

    First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be related to a timeout in the ODBC driver (especially in light of the comment that it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    This indicates the driver or the DB abends the connection due to a timeout.
    Check the wait_timeout MySQL variable on the server and increase it.

  • Can't increase cache size; though I set it to 500MB it shows 27.65MB. What should I do??

    I am unable to increase the cache size. Whatever I put in the setting, it says the max cache limit is 27.65MB. I have 3GB RAM and a 200GB hard disk.

    Mark the question as solved. Please!

  • BerkeleyDB cache size and Solaris

    I am having problems trying to scale up an application that uses BerkelyDB-4.4.20 on Sun Sparc servers running Solaris 8 and 9.
    The application has 11 primary databases and 7 secondary databases.
    In different instances of the application, the size of the largest primary database
    ranges only from 2MB to 10MB, but those will grow rapidly over the
    course of the semester.
    The servers have 4-8 GB of RAM and 12-20 GBytes of swap.
    Succinctly, when the primary databases are small, the application runs as expected.
    But as the primary databases grow, the following, counterintuitive phenomenon
    occurs. With modest cache sizes, the application starts up, but throws
    std::exceptions of "not enough space" when it attempts to delete records
    via a cursor. The application also crashes randomly returning
    RUN_RECOVERY. But when the cache size is increased, the application
    will not even start up; instead, it fails and throws std::exceptions which say there
    is insufficient space to open the primary databases.
    Here is some data from a server that has 4GB RAM with 2.8 GBytes free
    (according to "top") when the data was collected:
    DB_CONFIG (set_cachesize)   db_stat -m (Pool / Ind. cache)   Result
    0 67108864 1                80 MB / 8 KB                     Starts, but crashes and can't delete by cursor because of insufficient space
    0 134217728 1               160 MB / 8 KB                    Same as the case above
    0 268435456 1               320 MB / 8 KB                    Doesn't start; says there is not enough space to open a primary database
    0 536870912 1               512 MB / 16 KB                   Doesn't start; not enough space to open a primary database (mentions a different primary database than before)
    1 073741884 1               1GB 70MB / 36 KB                 Doesn't start; not enough space to open a primary database (mentions a different primary database than previously)
    2 147483648 1               2GB 140MB / 672 KB               Doesn't start; not enough space to open a primary database (mentions a different primary database than previously)
    I should also mention that the application is written in Perl and uses
    the Sleepycat::Db Perl module to interface with the BerkeleyDB C++ API.
    Any help on how to interpret this data and, if the problem is the
    interface with Solaris, how to tweak that, will be greatly appreciated.
    Sincerely,
    Bill Wheeler, Department of Mathematics, Indiana University, Bloomington.

    Having found answers to my questions, I think I should document them here.
    1. On the matter of the error message "not enough space": this message
    apparently originates from Solaris. When a process (e.g., an Apache child)
    requests additional (virtual) memory (via either brk or mmap) such that the
    total (virtual) memory allocated to the process would exceed the system limit
    (set by the setrlimit command), the Solaris kernel rejects the request
    and returns the error ENOMEM. Somewhat cryptically, the text for this error
    is "not enough space" (in contrast, for instance, to "not enough virtual
    memory").
    Apparently, when the BerkeleyDB cache size is set too large, a process
    (e.g., an Apache child) that attempts to open the environment and databases
    may request a total memory allocation that exceeds the system limit.
    Then Solaris will reject the request and return the ENOMEM error.
    Within Solaris, the only solutions are apparently
    (i) to decrease the cache size or
    (ii) to increase the system limit via the setrlimit command.
    2. On the matter of the DB_RUNRECOVERY errors, the cause appears
    to have been the use of the DB_TXN_NOWAIT flag in combination with
    code that was mishandling some of the resulting, complex situations.
    Sincerely,
    Bill Wheeler
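
    (Aside, not from the thread: point 1 can be checked programmatically. A minimal C++ sketch using the POSIX getrlimit/setrlimit calls referred to above; which limit governs mmap-backed allocations varies by platform.)
    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        struct rlimit rl;
        // RLIMIT_DATA caps brk()-based allocations; RLIMIT_AS, where
        // available, caps total virtual memory including mmap().
        if (getrlimit(RLIMIT_DATA, &rl) == 0)
            std::printf("data segment: cur=%llu max=%llu\n",
                        (unsigned long long)rl.rlim_cur,
                        (unsigned long long)rl.rlim_max);
        rl.rlim_cur = rl.rlim_max;   // raise the soft limit to the hard limit
        if (setrlimit(RLIMIT_DATA, &rl) != 0)
            std::perror("setrlimit");
        return 0;
    }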

  • Does buffer cache size matter during the imp process?

    Hi,
    sorry for the maybe naive question, but I can't imagine why Oracle needs a buffer cache (larger = better?) during inserts only (imp process with no index creation).
    As far as I know, inserts are done via the PGA area (direct inserts).
    Please clarify this for me.
    DB is 10.2.0.3 if that matters :).
    Regards.
    Greg

    Surprising result: I tried closing the db handles with DB_NOSYNC and performance
    got worse. Using a 32 Meg cache, it took about twice as long to run my test:
    15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
    Here is some data from db_stat -m when using DB_NOSYNC:
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10811882)
    44864 Pages created in the cache
    10M Pages read into the cache (10798480)
    7380761 Pages written from the cache to the backing file
    3452500 Clean pages forced from the cache
    7380761 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    5001 Current clean page count
    5011 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47428268)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118169805)
    It looks like not flushing the cache regularly is forcing a lot more
    dirty pages (and fewer clean pages) from the cache. Forcing a
    dirty page out is slower than forcing a clean page out, of course.
    Is this result reasonable?
    I suppose I could try to sync less often than I have been, but more often
    than never to see if that makes any difference.
    When I close or sync one db handle, I assume it flushes only that portion
    of the dbenv's cache, not the entire cache, right? Is there an API I can
    call that would sync the entire dbenv cache (besides closing the dbenv)?
    Are there any other suggestions?
    Thanks,
    Eric
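
    (Aside, not from the thread: on the last question, the C/C++ API does provide a way to flush the entire environment cache rather than one database's pages. A sketch, assuming an already-open DbEnv *env.)
    // Flush every modified page in the environment's cache to the
    // backing files; a NULL LSN means "all dirty pages".
    env->memp_sync(NULL);

    // A gentler alternative: write dirty pages until at least 20%
    // of the cache is clean.
    int nwrote;
    env->memp_trickle(20, &nwrote);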

  • Question about Berkeley DB Primary Index/Secondary Index in DPL package

    We are using DPL package for BDB access.
    My question is about memory consumption by the primary or secondary indexes. We are using BDB only for read purposes.
    E.g., let's say we are using BDB to store 250GB of raw data (which will translate to .jdb files containing all the indexes). Each box has around 32GB of memory.
    1) When we call store.getPrimaryIndex(....) or create a secondary index, how much memory will these indexes take up?
    2) Is it advisable to keep these indexes in memory all the time?
    Thanks,

    I have a couple caveats to add on DbCacheSize.
    1) It is an estimate, and for ordinary databases (without duplicate keys) is an upper bound.
    2) For databases with duplicate keys (in the DPL these are MANY_TO_ONE and MANY_TO_MANY secondary indices), it is incorrect and can be used as a lower bound.
    So it gives only a rough estimate. If you need an accurate number, write a DPL-based program that creates a realistic data set and look at EnvironmentStats.getCacheTotalBytes to see how much memory is used prior to any eviction occurring, which is indicated by EnvironmentStats.getNEvictPasses. Use a machine that has as much memory as possible, set the Java heap size and the JE cache size to the maximum possible, and create the data set as close as possible to the size you expect in production.
    "2) Is it advisable to keep these Indexes in memory all the time?" Yes. The FAQ has multiple entries on this topic in the performance section.
    --mark

  • Time measuring and cache size

    Hi,
    This has been posted in C forum, but not much activity there.
    I have two questions.
    1. Is it possible to obtain the level 1 and 2 cache size from within a C/C++ program? You can do that with fpversion on the command line.
    2. If I have a multi-threaded program, I want to measure the time taken from within it. Now I use getrusage; however, it includes the time for all child threads. How do I get the time for the main thread only? The command-line tool times seems to be able to do that. I do not want wall-clock time but CPU time. This is possible on SGI.
    Thanks in advance.
    Erling

    "1. Is it possible to obtain the level 1 and 2 cache size from within a C/C++ program? You can do that with fpversion on the command line." Yes!
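
    (Aside, not from the thread: there is no portable C/C++ call for this, but on Linux with glibc the sysconf extension below reports cache sizes. These names are, to my knowledge, not available on Solaris, where fpversion is the usual route, so treat this sketch as platform-specific.)
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // glibc-specific sysconf names; not part of POSIX.
        long l1 = sysconf(_SC_LEVEL1_DCACHE_SIZE);
        long l2 = sysconf(_SC_LEVEL2_CACHE_SIZE);
        std::printf("L1 data cache: %ld bytes, L2 cache: %ld bytes\n", l1, l2);
        return 0;
    }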

  • BDB cache size settings

    Hi,
    I have one question about bdb cache.
    If I set je.maxMemory=1073741824 in je.properties to limit the BDB cache to 1GB, is it possible for the real size of the BDB cache to be larger than 1GB over a long period?
    Thanks,
    Yu

    To add to what Linda said, you are correct that it is not a good idea to make the JE cache size too large relative to the heap size. Some extra room is needed for three reasons:
    1) JE's calculations of memory size are approximate.
    2) Your application may use variable amounts of memory, in addition to JE.
    3) Java garbage collection will not perform well in some circumstances, if there is not enough free space.
    The last reason is a large variable. To find out how this impacts performance, and how much extra room you need, you'll really have to go though a Java GC tuning exercise using a system that is similar to your production machine, and do a lot of testing.
    I would certainly never use less than 10 or 20% free space, and with large heaps where GC is active, you will probably need more free space than that.
    --mark

  • Problems with increasing/decreasing cache size when live

    Hello,
    I have configured multiple environments which I'm compacting sequentially, and to achieve this I allocate a bigger cache to the env currently being compacted, as follows:
    Initialization:
    DB_ENV->set_cachesize(gbytes, bytes, 1); // Initial cache size.
    DB_ENV->set_cache_max(gbytes, bytes); // Maximum size.
    While live, the application decreases the cache of the current env when finished, and then increases the cache of the next env, using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat I can see that everything is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    But the problem is that over time memory is leaked (as if the extra memory of each env was not freed), and I'm totally sure that the problem comes from this code.
    I'm running Berkeley DB 4.7.25 on FreeBSD.
    Maybe some leak was fixed in newer versions and you could suggest a patch to me? Or am I not using the API correctly?
    Thanks!

    Hi,
    Thanks for providing the information.
    Unfortunately, I don't remember the exact test case I was doing, so I did a new one with 32 envs.
    I set the following for each env:
    - Initial cache=512MB/32
    - Max=1GB
    Before open, I do:
    DBEnvironment->set_cachesize((u_int32_t)0, (u_int32_t)512*1024*1024/32, 1);
    DBEnvironment->set_cache_max(1*1024*1024*1024, 0);
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=1 and obytes=0
    After open, I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: cache_size=18644992 cache_ncache=1
    So here, the values returned by memp_stat are normal but get_cache_max is strange. Then after increasing the cache to the strange value returned by get_cache_max (gbytes=0, obytes=9502720), I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: outlinks cache_size=27328512 cache_ncache=54
    with cache_size being: ((ui64)sp->st_gbytes * GIGA + sp->st_bytes);.
    So cache is actually increased...
    I try to reproduce this case by opening 1 env as follows.
    //Before open
    DbEnv->set_cachesize(); 512MB, 1 cache
    DbEnv->set_cache_max; 1GB
    //After open
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    //Decrease the cache size
    DbEnv->set_cachesize(); 9MB(9502720B), 1 cache
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    This result is expected, since when resizing the cache after the DbEnv is open, it is rounded to the nearest multiple of the region size. Region size means the size of each region specified initially. Please refer to the BDB doc: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Here the region size is 512MB/1 cache = 512MB. And I don't think you can resize the cache smaller than 1 region.
    Since you are opening 32 envs at the same time with a 512MB cache and 1GB maximum for each, whether each env can allocate as much as specified for its cache when it is opened depends on the system. I guess the number 9502720 returned by get_cache_max after opening the env is probably based on the system and the app's request: the cache size you could actually get when opening the env.
    And for the case listed in the beginning of the post
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat I can see that everyting is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    When env1 is finishing soon, what numbers do you set in set_cachesize to decrease the cache, including the number of caches and cache size?
    When decreasing the cache, I do:
    env->GetDbEnv()->set_cachesize((u_int32_t)0, (u_int32_t)20973592, 0);
    I mean, in all cases I simply set the cache size to its original value (obtained after open through get_cachesize) when decreasing, and set it to its max value (obtained through get_cache_max; plus I do something like cacheMaxSize * 0.75 if < 500MB) when increasing.
    I can reproduce this case, and I think the result is expected. When using DbEnv->set_cachesize() to resize the cache after the env is opened, the ncache parameter is ignored. Please refer to the BDB doc here: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Hence I don't think you can decrease the cache size by setting the number of caches to 0.
    Hope it helps.
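
    (To summarize the thread's conclusions as a hedged C++ sketch: the sizes are illustrative; after DbEnv::open the ncache argument is ignored, and requested sizes are rounded to a multiple of the initial region size.)
    DbEnv env(0);
    env.set_cachesize(0, 20 * 1024 * 1024, 1);   // initial 20MB in 1 region
    env.set_cache_max(1, 0);                     // allow growth up to 1GB
    env.open("/path/to/env", DB_CREATE | DB_INIT_MPOOL, 0);

    // Later, while live: only the total size can change; the third
    // (ncache) argument is ignored once the env is open.
    env.set_cachesize(0, 512 * 1024 * 1024, 0);  // grow before compacting
    env.set_cachesize(0, 20 * 1024 * 1024, 0);   // shrink back afterwards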
