Maximum entries in Cache

What is the maximum number of entries we can populate in a cache?
I have a huge report which generates around 167 entries in the cache.
But when cache entries go beyond 1000, the previous entries are removed and new entries are generated.
So I want to know: what is the maximum limit I can set for cache entries?
And is there a way I can permanently keep entries in my cache?

Hi,
In the NQSConfig.INI file, under the [CACHE] section, you can find the default cache settings:
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
MAX_CACHE_ENTRY_SIZE = 1 MB;
MAX_CACHE_ENTRIES = 1000;
If you want to allow more entries, edit MAX_CACHE_ENTRIES in this file; see the illustrative snippet below.
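For example, a minimal illustrative edit (the storage and entry values here are assumptions sized for a larger report set, not documented recommendations):

[ CACHE ]
ENABLE = YES;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 1000 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
MAX_CACHE_ENTRY_SIZE = 4 MB;
MAX_CACHE_ENTRIES = 5000; // raised from the default 1000

Restart the BI Server after editing NQSConfig.INI for the change to take effect. Note that entries can still be purged when the storage limit or entry cap is reached, so the cache is not a permanent store.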
Regards,
Diney

Similar Messages

  • How to remove an entry from the cache without triggering the erase in the cachestore

    Hi guys,
    I have a really special requirement, as in the topic: I have a distributed cache. When I put an object in the cache, it triggers the save method in the cachestore to store it in the DB; when I remove an entry, it triggers the erase method to delete it from the DB. Now I need another operation which just removes the entry from the cache and won't delete it from the DB. I checked the eviction method, but I think it's for a local cache. How could I use this on a near cache or distributed cache?
    Can anyone help me?
    Thanks a lot.
    best,
    DFJ

    Let me post my code:
    package com.admin.processor;

    import java.io.IOException;

    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofWriter;
    import com.tangosol.io.pof.PortableObject;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    public class ValueEvictor extends AbstractProcessor implements PortableObject {

        @Override
        public Object process(InvocableMap.Entry entry) {
            // remove(true) is a synthetic removal: the entry leaves the cache
            // without the CacheStore's erase() being called.
            entry.remove(true);
            return null;
        }

        @Override
        public void readExternal(PofReader pofReader) throws IOException {
            // no state to deserialize
        }

        @Override
        public void writeExternal(PofWriter pofWriter) throws IOException {
            // no state to serialize
        }

        public static void main(String[] args) {
            AccountKey key = new AccountKey("Test0015678", 24);
            AccountEntity account = new AccountEntity();
            account.setDescription("testDescription");
            account.setName("hello");
            account.setParentID(18);
            account.setKey(key);
            NamedCache namedCache = CacheFactory.getCache(account.getClass().getName());
            //namedCache.put(account.getKey(), account);
            namedCache.invoke(key, new ValueEvictor());
        }
    }

  • Global Load Balancing / Failover... what about DNS entries being cached?

    It is my understanding that DNS is used to provide data center redundancy. How does one resolve the problem of DNS entries being cached across the Internet? For example, if I fail over to my secondary datacenter, the IP addresses of my primary datacenter will likely still be cached in DNS servers across the Internet. What are some options for datacenter redundancy that can overcome these DNS propagation delays?
    Thanks!

    The only option that you have is to run with a low TTL.
    Unfortunately, there are applications out there that don't handle a low TTL very well. Microsoft Internet Explorer, for example, needs to be restarted for it to do another name lookup. The same holds true for some proxies, etc.
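    (A side note with a minimal Java sketch, since long-running JVM clients are another example of this: the JVM caches successful name lookups itself, so even a low DNS TTL may not be honored unless the JVM's own cache TTL, the networkaddress.cache.ttl security property, is lowered. The hostname below is a placeholder.)

    import java.net.InetAddress;
    import java.security.Security;

    public class DnsTtlDemo {
        public static void main(String[] args) throws Exception {
            // Tell the JVM to keep successful lookups for at most 30 seconds,
            // so a DNS-based failover becomes visible to this process quickly.
            // Must be set before the first lookup is performed.
            Security.setProperty("networkaddress.cache.ttl", "30");
            InetAddress addr = InetAddress.getByName("www.example.com"); // placeholder host
            System.out.println(addr.getHostAddress());
        }
    }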
    -A

  • Maximum entries in Fast Entry

    Hello SAP gurus,
    I have a very small query; I could not find the answer, which is why I am posting it here.
    Can you please tell me whether it is true that a maximum of 20 entries can be posted through the Fast Entry screen? In my system, only 20 entries are accepted.
    If not, how can we change it?
    thanks,
    neha

    Hi,
    You can add 20 records on one screen, but to add more records you will have to scroll down. The maximum number of records you can enter in one session is 999.
    Regards,
    Waqas Rashid

  • Pacman: could not remove entry from cache

    Hi,
    I've been getting an "error: could not remove entry 'openbox' from cache" message while trying to uninstall openbox. Why?

    Any third-party software that doesn't install by drag-and-drop into the Applications folder, and uninstall by drag-and-drop to the Trash, is a system modification.
    Whenever you remove system modifications, they must be removed completely, and the only way to do that is to use the uninstallation tool, if any, provided by the developers, or to follow their instructions. If the software has been incompletely removed, you may have to re-download or even reinstall it in order to finish the job.
    I never install system modifications myself, and I don't know how to uninstall them. You'll have to do your own research to find that information.
    Here are some general guidelines to get you started. Suppose you want to remove something called “BrickMyMac” (a hypothetical example). First, consult the product's Help menu, if there is one, for instructions. Finding none there, look on the developer's website, say www.brickmymac.com. (That may not be the actual name of the site; if necessary, search the Web for the product name.) If you don’t find anything on the website or in your search, contact the developer. While you're waiting for a response, download BrickMyMac.dmg and open it. There may be an application in there such as “Uninstall BrickMyMac.” If not, open “BrickMyMac.pkg” and look for an Uninstall button.
    You generally have to reboot in order to complete an uninstallation.
    If you can’t remove software in any other way, you’ll have to erase and install OS X. Never install any third-party software unless you're sure you know how to uninstall it; otherwise you may create problems that are very hard to solve.
    You may be advised by others to try to remove complex system modifications by hunting for files by name, or by running "utilities" that purport to remove software. I don't give such advice. Those tactics often will not work and may make the problem worse.

  • AE 5.2 Configuring User Defaults Parameter ID (PID) - maximum entries?

    Hello,
    I am configuring Access Enforcer 5.2 User Defaults and am setting up a number of PID parameters.
    I have successfully defined 23 out of 28 parameters, and now I fail whenever I attempt to add the 24th entry.
    Does anyone know if there is a limit on defining these parameters in AE 5.2? Has anyone exceeded 23 entries successfully?
    I am somewhat new to working with AE and am wondering if I am missing something.
    I appreciate any feedback.
    Thanks
    Jerry
    The following is the error message, which is also somewhat misleading:
    Internet Explorer cannot display the webpage
       Most likely causes:
    You are not connected to the Internet.
    The website is encountering problems.
    There might be a typing error in the address.

    Hello Listeners,
    I have received feedback from SAP: they have identified this issue as a bug and are presently developing a fix. The tentative date is the 3rd week of June, for support pack GRC-SAE 5.2 SP9.
    Following is the response from SAP:
        Hello Customer,
    The GRC Development Team has identified this as a bug, and the fix has been targeted for the next support pack, GRC-SAE 5.2 SP9, and other relevant versions. The tentative release date of the mentioned support pack is the 3rd week of June 2008.
    I am marking this question as "answered", although the fix is still forthcoming.
    Jerry Synoga
    Ryerson
    630-758-2021

  • Ldif2db virtual memory error Directory Server enterprise 6

    Hello,
    I installed Directory Server EE 6 on a Solaris 10 SPARC machine with 8 GB of RAM. This is a testing environment. The installation, startup, and tools like dcc/webconsole all work fine.
    I created a new DS instance.
    Next I copied a 5.2 instance from an older testing server, and ran the dsmig utility on it, directing the utility to migrate the 5.2 instance to the new instance of 6.0 that I had created.
    All parts of the migration worked except the data import. So I tried manually doing an export from 5.2 to ldif, then an import into 6.0. I received this error:
    root@WEB_ZONE_vmpwd1# ./ldif2db -n userRoot -i /vmpwd1_d_p01/portal/userRoot.ldif
    importing data ...
    [08/Aug/2008:09:01:01 -0700] - Waiting for 6 database threads to stop
    [08/Aug/2008:09:01:02 -0700] - All database threads now stopped
    [08/Aug/2008:09:01:03 -0700] - import userRoot: Index buffering enabled with bucket size 9
    [08/Aug/2008:09:01:03 -0700] - import userRoot: Beginning import job...
    [08/Aug/2008:09:01:03 -0700] - import userRoot: Processing file "/vmpwd1_d_p01/portal/userRoot.ldif"
    [08/Aug/2008:09:01:03 -0700] - ERROR<5132> - Resource Limit - conn=-1 op=-1 msgId=-1 - Memory allocation error realloc of 100 bytes failed; errno 0
    The server has probably allocated all available virtual memory. To solve this problem, make more virtual memory available to your server, or reduce the size of the server's `Maximum Entries in Cache' (cachesize) or `Maximum DB Cache Size' (dbcachesize) parameters.
    can't recover; calling exit(1)
    Any ideas?
    The only forum posts I could find about this message pertained to DS 5.2 and were written in 2004.
    There is nothing running on the server except DS 6 and its tools.

    Update:
    Well, I tried something different. I created a new 6.0 instance, migrated just the schema, and tried an ldif2db of the 5.2 data into the 6.0 instance. That failed because it did not have the suffixes set up.
    So I tried a migrate-data, and it created the suffixes and imported the data into 6.0.
    While I am still curious what could have caused the error above, my immediate problem of getting 5.2 data into a 6.0 instance is taken care of.

  • 5.1 directory server try to allocate 1.7G virtual memory

    I am running iPlanet 5.1 on Solaris 8. The server crashed with the following error:
    [12/Sep/2003:17:42:03 +0000] - malloc of 1719227497 bytes failed; errno 11
    The server has probably allocated all available virtual memory. To solve this problem, make more virtual memory available to your server, or reduce the size of the server's `Maximum Entries in Cache' (cachesize) or `Maximum DB Cache Size' (dbcachesize) parameters.
    can't recover; calling exit(1)
    The server is configured with just a 64 MB cache. Why does the directory server try to allocate 1.7 GB of virtual memory?
    Is this a known bug in 5.1? Is there any way to fix it?

    I had the same problem with DS 5.1 SP2.
    In my case it is trying to allocate 4.2 GB of virtual memory.
    ps

  • The specified cache entry could not be loaded during initialization

    The OBIEE server is failing every few minutes of operation.
    I saw this in NQServer.log:
    The specified cache entry could not be loaded during initialization.
    Cache entry, '/home/oracleadmin/OracleBIData/cache/NQS_rprt_734086_39693_00000001.TBL', is invalid due to either non-existing file name, size does not match, exceeding maximum space limit nor exceeding maximum entry limit, and it's been removed.
    I checked the directory and there is no space limitation.
    Can someone please clarify?
    The server/Config/NQSConfig.INI is as follows:
    [ CACHE ]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "/home/oraceladmin/OracleBIData/cache" 2000 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;

    It's more than enough if you are using it for .tmp and .tbl file storage. Have you checked WORK_DIRECTORY_PATHS in the NQSConfig.INI file? Is it on a different mount point? Usually we will create a separate mount point and use that as the storage location.
    b. Instead of using SAPurgeAllCache(), can I add a shell script which deletes the files from the cache directory and then executes run-sa.sh?
    You need to understand the difference between the OBIEE RPD cache and the .tbl and .tmp files. SAPurgeAllCache() will purge the cache from the RPD; it won't purge .tmp or .tbl files. Those files are stored in a physical location: when a report starts executing, a .tmp file starts growing, and after the report finishes it is purged by the OBIEE server automatically.
    See this blog thread for some insight into this topic:
    Re: TMP Files under /OracleBIData/tmp
    Regards,
    Sandeep

  • ICM Statistics - Suppressed Cache Entries

    Well, I've done the usual search in SAP xSearch and Google, not to mention checking the application help, but I can't seem to find the answer.
    We have a CRM ABAP 7.0 system (kernel 7.01).  When I display the ICM statistics for the HTTP Server Cache (txn SMICM), I see:
      Cache Statistics
       Cache Size (Bytes)                         52,428,800
       Occupied Cache Memory (Bytes)              52,422,656
       Maximum Number of Cache Entries                10,000
       Current Number of Cache Entries                 5,968
       Number of Suppressed Cache Entries             55,593
       Total Number of Cache Accesses             27,134,021
       Number of Cachable Accesses                28,224,213
       Number of Cache Hits                       23,749,222             84.14  %
       Number of Failed Cache Accesses             4,474,991             15.86  %
       Number of Cache Hits in Memory             23,749,222             84.14  %
       Maximum Number of UFO Entries                   2,000
       Current Number of UFO Entries                       0
       Number of Accesses to UFO List                    416
       Number of UFO Hits                                 44
    What, pray tell, are "Suppressed Cache Entries", and what are their effects on the server cache performance? Do I need to 'care' about these suppressed entries?
    Regards,
    bryan

    Thanks to Ashutosh and Clébio for responding to my query.
    Ashutosh, the link you supplied was interesting, and it is possible that web error pages, such as "400 session timed out", could be the so-called suppressed cache pages.
    Clébio, as for "the number of entries in the cache that have been deleted to release space for new ones," I think the message relevant to that action is the CCMS alert "EvictedEntries". I'm not certain, as that is another alert that isn't very well documented.
    bryan

  • XSLT Caching

    Hi,
    I am using the JAXP Templates object to cache my XSLT files. Can my code pass parameters to a cached XSLT file?
    Thanks,
    Java-Junkie
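    (The short answer is yes: a stylesheet compiled once into a JAXP Templates object still accepts per-use parameters, because every newTransformer() call returns a fresh Transformer you can call setParameter() on. A minimal sketch, with made-up file and parameter names:)

    import javax.xml.transform.*;
    import javax.xml.transform.stream.*;
    import java.io.File;

    public class CachedXsltDemo {
        public static void main(String[] args) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();

            // Compile once and keep; Templates is immutable and thread-safe.
            Templates cached = factory.newTemplates(new StreamSource(new File("report.xsl")));

            // Per use: cheap Transformer creation, then parameters for this run.
            Transformer t = cached.newTransformer();
            t.setParameter("title", "Quarterly Report");
            t.transform(new StreamSource(new File("data.xml")),
                        new StreamResult(new File("out.html")));
        }
    }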

    I am using the following code to handle my transforms. Can you suggest how I would cache the XSLT files?
    import javax.xml.transform.*;
    import javax.xml.transform.stream.*;
    import java.util.*;
    import java.io.*;

    public class TransformerPool {

        /** Source control version number. */
        public final static String SOURCE_VERSION_INFO = "%R%";
        /** Minimum number of transformers per transformation. */
        public final static int MIN_INSTANCES = 1;
        /** Maximum number of transformers per transformation. */
        public final static int MAX_INSTANCES = 5;

        // class members
        private static TransformerFactory tFactory;
        private static TransformerPool thisTP;

        // data members
        private Map transformers;
        private int defaultMinInstances;
        private int defaultMaxInstances;

        private class PoolEntry {
            private int minInstances;
            private int maxInstances;
            private int currentInstances;
            private int hits;
            private int waitStates;
            private int waitIter;
            private StreamSource xslDocument;
            private boolean busy[];
            private Transformer processors[];

            public PoolEntry(TransformerPool source, String xslDocument) {
                init(source);
                initTransformer(xslDocument);
            }

            public PoolEntry(TransformerPool source, StreamSource streamSource) {
                init(source);
                initTransformer(streamSource);
            }

            public void init(TransformerPool source) {
                minInstances = source.defaultMinInstances;
                maxInstances = source.defaultMaxInstances;
                currentInstances = minInstances;
                hits = 0;
                waitStates = 0;
                waitIter = 0;
            }

            public void initTransformer(String xslDocument) {
                initTransformer(new StreamSource(new StringReader(xslDocument)));
            }

            public void initTransformer(StreamSource xslDocument) {
                this.xslDocument = xslDocument;
                busy = new boolean[maxInstances];
                processors = new Transformer[maxInstances];
                for (int i = 0; i < maxInstances; i++) {
                    busy[i] = false;
                    processors[i] = i < currentInstances ? newTransformer() : null;
                }
            }

            private Transformer newTransformer() {
                synchronized (TransformerPool.tFactory) {
                    try {
                        // Compile the stylesheet once; Templates is the cached, thread-safe form.
                        Templates cachedXSLT = TransformerPool.tFactory.newTemplates(xslDocument);
                        return cachedXSLT.newTransformer();
                    } catch (TransformerConfigurationException e) {
                        return null;
                    }
                }
            }

            public boolean transform(Source xml, Result result, Properties params) {
                boolean isBusy = false;
                boolean allNull = false;
                Transformer xform = null;
                int index = -1;
                do {
                    if (isBusy) {
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {
                            // ignore and retry
                        }
                    }
                    synchronized (this) {
                        allNull = true;
                        for (int i = 0; i < maxInstances; i++) {
                            if (processors[i] != null) {
                                allNull = false;
                            }
                            if (processors[i] != null && busy[i] == false) {
                                index = i;
                                xform = processors[i];
                                busy[i] = true;
                                isBusy = false;
                                break;
                            }
                        }
                    }
                    if (allNull) { // there's nothing we can do; fail
                        return false;
                    }
                    if (index == -1) {
                        waitIter++;
                        isBusy = true;
                    }
                } while (isBusy);
                // we should have a transformer now
                try {
                    String paramName;
                    xform.clearParameters();
                    if (params != null) {
                        for (Enumeration e = params.propertyNames(); e.hasMoreElements(); ) {
                            paramName = (String) e.nextElement();
                            xform.setParameter(paramName, params.get(paramName));
                        }
                    }
                    // (debug logging via the poster's internal Message class omitted
                    // so the class compiles standalone)
                    xform.transform(xml, result);
                } catch (Exception e) {
                    e.printStackTrace();
                    return false;
                } finally {
                    if (xform != null) {
                        synchronized (this) {
                            busy[index] = false;
                            // increment counters
                            if (isBusy) {
                                waitStates++;
                            }
                            hits++;
                        }
                    }
                }
                return true;
            }
        }

        /** Create a new {@link TransformerPool}. */
        private TransformerPool() {
            if (tFactory == null) {
                tFactory = TransformerFactory.newInstance();
            }
            transformers = new HashMap();
            defaultMinInstances = MIN_INSTANCES;
            defaultMaxInstances = MAX_INSTANCES;
        }

        /**
         * Create a new {@link TransformerPool}.
         * @return A {@link TransformerPool} instance.
         */
        public static synchronized TransformerPool getInstance() {
            if (thisTP == null) {
                thisTP = new TransformerPool();
            }
            return thisTP;
        }

        private synchronized PoolEntry newEntry(StreamSource xsl) {
            return xsl == null ? null : new PoolEntry(this, xsl);
        }

        public synchronized void dump(PrintWriter out) {
            PoolEntry entry;
            String key;
            out.println("Default instances: " + defaultMinInstances +
                " (minimum), " + defaultMaxInstances + " (maximum)");
            out.println("Transformers: " + transformers.size());
            out.println();
            for (Iterator iter = transformers.keySet().iterator(); iter.hasNext(); ) {
                key = (String) iter.next();
                entry = (PoolEntry) transformers.get(key);
                out.println("Transformer: " + key);
                out.println("  Instances: " + entry.minInstances + " (minimum), " +
                    entry.maxInstances + " (maximum), " + entry.currentInstances +
                    " (current)");
                out.println("  Hits: " + entry.hits + " (" + entry.waitStates + " busy)");
                for (int i = 0; i < entry.maxInstances; i++) {
                    out.println("  (" + i + ") " + entry.processors[i] + " " +
                        (entry.busy[i] ? "busy" : "not busy"));
                }
                out.println();
            }
            out.flush();
        }

        /**
         * Add a new transformation.
         * @param name Transformation name as a String.
         * @param xsl Transformation XSLT document as a StreamSource.
         */
        public synchronized void addTransformation(String name, StreamSource xsl) {
            PoolEntry entry = newEntry(xsl);
            if (entry != null) {
                transformers.put(name, entry);
            }
        }

        /**
         * Remove a given transformation.
         * @param name Transformation name as a String.
         */
        public synchronized void removeTransformation(String name) {
            transformers.remove(name);
        }

        /**
         * Determines if a given transformation exists.
         * @param name Transformation name as a String.
         * @return <code>true</code> if there is a transformation by that
         * name, otherwise <code>false</code>.
         */
        public synchronized boolean isTransformation(String name) {
            return transformers.containsKey(name);
        }

        /**
         * Transform an XML document using a named transformation.
         * @param name Transformation name as a String.
         * @param xml XML document to transform as a Source.
         * @param result Transformed document as a Result.
         * @return <code>true</code> if the transformation succeeded or
         * <code>false</code> if the transformation couldn't be completed
         * for any reason.
         */
        public synchronized boolean transform(String name, Source xml, Result result) {
            // find the entry
            PoolEntry entry = (PoolEntry) transformers.get(name);
            if (entry == null) {
                return false;
            }
            // transform
            return entry.transform(xml, result, null);
        }

        /**
         * Transform an XML document using a named transformation.
         * @param name Transformation name as a String.
         * @param xml XML document to transform as a Source.
         * @param result Transformed document as a Result.
         * @param params Collection of transformation parameters as Properties.
         * @return <code>true</code> if the transformation succeeded or
         * <code>false</code> if the transformation couldn't be completed
         * for any reason.
         */
        public synchronized boolean transform(String name, Source xml, Result result, Properties params) {
            // find the entry
            PoolEntry entry = (PoolEntry) transformers.get(name);
            if (entry == null) {
                return false;
            }
            // transform
            return entry.transform(xml, result, params);
        }
    }

  • Filter API: will it hit the database if not found in cache?

    I wrote a sample program using the Filter API (i.e. LikeFilter, EqualsFilter) which queries the cache (i.e. the map in memory) based on the given criteria. But what if it doesn't find the entries in the cache? Will it then query the database with similar criteria?
    I don't think so... Please correct me if I am wrong.
    I need to implement pre-loading of the cache, and the link http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/preloadcache.htm#CACCFCFJ uses the Filter API. I am not sure if I can rely on pre-loading the cache from the database.
    Any sample example of preloading data from the database into the cache would be greatly appreciated.
    Regards,
    Bansi

    Then you won't need the Invocable code (anyway, it's owned by my employer ;-)
    All you need to do is write a simple program something like this (a sketch follows below):
    1. Query everything from the database: "select * from table"
    2. Iterate through the ResultSet, converting the rows into objects
    3. Cache.put() these objects into Coherence.
    Now some optimisations:
    1. You have a single query going to the database and will be connecting to a single cluster member. You could divide the query up by some logical division (e.g. Customer / Date etc.) and then have many clients putting in concurrently, but that raises the complexity.
    2. It is also possible to store List tuples in Coherence; I once worked on a very nice project along those lines.
    3. Cache.putAll() is much faster than put(), and you should batch these into sizes of 1000 for maximum performance.
    However, I would ignore these optimisations for now (though 3 is the most useful).
    You will have to manually kick off this process when you restart your cluster (or put it in your startup script). People generally write a cluster restart script, though JavaServiceWrapper is very nice for restarting nodes that run out of memory. When you get really complicated you can use something like FabricServer to dynamically control your cluster, but let's get the basics working first.
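    (A minimal sketch of steps 1–3 above; the cache name, table, columns, and JDBC URL are all placeholders, and a real preloader would build proper domain objects rather than simple key/value strings:)

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.HashMap;
    import java.util.Map;

    public class CachePreloader {
        private static final int BATCH_SIZE = 1000; // batch size per the advice above

        public static void main(String[] args) throws Exception {
            NamedCache cache = CacheFactory.getCache("accounts"); // placeholder cache name
            try (Connection con = DriverManager.getConnection("jdbc:your-db-url"); // placeholder URL
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("select * from accounts")) { // step 1
                Map<Object, Object> batch = new HashMap<>();
                while (rs.next()) {
                    // step 2: convert each row into a cacheable key/value pair
                    batch.put(rs.getLong("id"), rs.getString("name"));
                    if (batch.size() >= BATCH_SIZE) {
                        cache.putAll(batch); // step 3: putAll() in batches beats repeated put()
                        batch.clear();
                    }
                }
                if (!batch.isEmpty()) {
                    cache.putAll(batch); // flush the final partial batch
                }
            }
        }
    }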
    Best, Andrew.

  • Caching data in J2EE

    Hi,
    Is it possible to refresh cached data without restarting the WebLogic server?
    I am using OSCache to cache a value from a properties file, for example
    name="Bob". Hence I will get the name as "Bob" through the cache.
    Then the user updates the name to "Chandler" in the properties file, not in the DB.
    Now I need to retrieve the updated name "Chandler" from the cache without restarting the server.
    Please shed some light on this.
    Thanks.

    Judging by your other thread, I believe the person mentioning that was talking about a JMX management interface. You'll have to look at your server's documentation for how to use it, and you should also look for built-in scheduling capabilities. There is no standard way to do these things; it depends on what software you use and on your requirements. If you use JBoss by any chance, I can give you some pointers if you tell me which version it is.
    What about something less invasive, like not caching values endlessly but only for a certain amount of time? There will be some lag until the value is updated in the cache, but eventually it will happen. Most cache implementations have a way to put a maximum age on cache entries; how to get the data back into the cache is usually something you have to facilitate yourself. Lazy loading is usually the best way (is it loaded already? Return it. If not, load it, cache it, return it); a sketch follows below.
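    (A minimal sketch of that lazy-load-with-max-age idea, hand-rolled rather than OSCache-specific; the loader and names are illustrative:)

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ExpiringCache {
        private static class Entry {
            final String value;
            final long loadedAt = System.currentTimeMillis();
            Entry(String value) { this.value = value; }
        }

        private final Map<String, Entry> cache = new ConcurrentHashMap<>();
        private final long maxAgeMillis;

        public ExpiringCache(long maxAgeMillis) {
            this.maxAgeMillis = maxAgeMillis;
        }

        public String get(String key) {
            Entry e = cache.get(key);
            // Loaded already and still fresh? Return it.
            if (e != null && System.currentTimeMillis() - e.loadedAt < maxAgeMillis) {
                return e.value;
            }
            // Otherwise: load it, cache it, return it.
            String value = loadFromPropertiesFile(key); // illustrative loader
            cache.put(key, new Entry(value));
            return value;
        }

        private String loadFromPropertiesFile(String key) {
            // Placeholder: re-read the (possibly updated) properties file here.
            return System.getProperty(key, "");
        }
    }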

  • BerkeleyDB shared cache

    Hello,
    I'm using BerkeleyDB 4.7.25 (via the BerkeleyDB Perl module) on a FreeBSD system in a forking daemon, with a shared cache environment. Could someone, looking at the output below, tell me whether these statistics are good for the cache? You'd think that "Requested pages found in the cache (99%)" looks good, but I see no shared memory segments being used of any kind, nor can I tell how much shared memory is being used, if any.
    Thanks,
    - Mark
    {root} % /usr/local/bin/db_stat -m -h /var/db/smtpd/
    20MB 1KB 752B Total cache size
    1 Number of caches
    1 Maximum number of caches
    20MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    231751 Requested pages found in the cache (99%)
    410 Requested pages not found in the cache
    1 Pages created in the cache
    410 Pages read into the cache
    3430 Pages written from the cache to the backing file
    0 Clean pages forced from the cache
    0 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    411 Current total page count
    411 Current clean page count
    0 Current dirty page count
    2053 Number of hash buckets used for page location
    232572 Total number of times hash chains searched for a page
    2 The longest hash chain searched for a page
    248006 Total number of hash chain entries checked for page
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    0 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    483 The number of page allocations
    0 The number of hash buckets examined during allocations
    0 The maximum number of hash buckets examined for an allocation
    0 The number of pages examined during allocations
    0 The max number of pages examined for an allocation
    0 Threads waited on page I/O

    Hi Mark,
    Even if I'm not sure I can help you with everything (especially the Perl configuration), I didn't want your post to remain unanswered.
    liarafan wrote:
    Could someone, looking at the output below, tell me whether these statistics are good for the cache?
    Your statistics look like the ones every Berkeley DB user dreams of. I'm not sure what your application scenario looks like (maybe you can tell us more) or what amount of data you used for the test (number of records, key/data size) compared to the size of the cache, but the statistics look great. Is the cache size corresponding to the size of a normal working data set?
    liarafan wrote:
    I see no shared memory segments being used of any kind; or how much shared memory is being used, or if, even.
    Each of the BDB subsystems within a database environment is described by one or more regions ( http://www.oracle.com/technology/documentation/berkeley-db/db/ref/env/region.html ). The regions contain all of the per-process and per-thread shared information, including mutexes, that comprise a Berkeley DB environment. For example, one of the shared memory segments will be the transactional information for the system, and one (or more) of them will be the cache. What Berkeley DB product/flags are you using?
    Please let me know if this helped and if you have other questions.
    Thanks,
    Bogdan

  • View video files in my browser cache

    How do I locate and view video files in my browser's cache? I'm running WinXP and Firefox 9.0.1.
    thanks,
    Steve

    Thanks, I get something like this:
    Information about the Cache Service
    Memory cache device
    Number of entries: 5313
    Maximum storage size: 24576 KiB
    Storage in use: 11379 KiB
    Inactive storage: 11378 KiB
    List Cache Entries
    Disk cache device
    Number of entries: 16022
    Maximum storage size: 819200 KiB
    Storage in use: 387148 KiB
    Cache Directory: C:\Documents and Settings\Steve Langdon\Local Settings\Application Data\Mozilla\Firefox\Profiles\gjjx1ixq.default\Cache
    List Cache Entries
    Offline cache device
    Number of entries: 0
    Maximum storage size: 512000 KiB
    Storage in use: 0 KiB
    Cache Directory: C:\Documents and Settings\Steve Langdon\Local Settings\Application Data\Mozilla\Firefox\Profiles\gjjx1ixq.default\OfflineCache
    But isn't the music and/or video I already watched or listened to already on my hard drive?
