Why doesn't the cache size increase beyond 63 MB (when browsing longer files), and why does it then disappear?

During streaming/browsing of larger files, the cache entries formed for those files (in my Firefox cache folder) stay at a maximum of 63 MB. I want to increase them to the full length of the file I browse/stream. I already increased the offline storage to 1024 MB from Mozilla Tools > Network > Offline Storage. Let me know what I can do.

In general theory, one now has the Edit button for their posts, until someone/anyone Replies to it. I've had Edit available for weeks, as opposed to the old forum's ~ 30 mins.
That, however, is in theory. I've posted, and immediately seen something that needed editing, only to find NO Replies, yet the Edit button is no longer available, only seconds later. Still, in that same thread, I'd have the Edit button from older posts, to which there had also been no Replies even after several days/weeks. Found one that had to be over a month old, and Edit was still there.
Do not know the why/how of this behavior. At first, I thought that maybe there WAS a Reply, that "ate" my Edit button, but had not Refreshed on my screen. Refresh still showed no Replies, just no Edit either. In those cases, I just Reply and mention the [Edit].
Also, it seems that the buttons get very scrambled at times, and Refresh does not always clear that up. I end up clicking where I "think" the right button should be and hope for the best. Seems that when the buttons do bunch up they can appear at random around the page, often three atop one another, and maybe one way the heck out in left-field.
While I'm on a roll, it would be nice to be able to switch between Flattened and Threaded Views on the fly. Each has a use, and having to go to Options and then come back down to the thread is a very slow process. Jive is probably incapable of this, but I can dream.
Hunt

Similar Messages

  • Suddenly, on a Dell XPS 15z laptop, when reading email in Firefox or IE the zoom goes from 100% serially down to an unreadable 35%. The size can be increased with the Ctrl and +/= keys, but then it repeats the same downsizing routine.


    Hi Alex,
    Take a look at this document, and select the appropriate scenario and try the troubleshooting steps.  Please let me know if it helps you resolve the problem.
    Good luck!

  • BerkeleyDB cache size and Solaris

    I am having problems trying to scale up an application that uses BerkeleyDB 4.4.20 on Sun SPARC servers running Solaris 8 and 9.
    The application has 11 primary databases and 7 secondary databases.
    In different instances of the application, the size of the largest primary database
    ranges only from 2 MB to 10 MB, but those will grow rapidly over the
    course of the semester.
    The servers have 4-8 GB of RAM and 12-20 GBytes of swap.
    Succinctly, when the primary databases are small, the application runs as expected.
    But as the primary databases grow, the following, counterintuitive phenomenon
    occurs. With modest cache sizes, the application starts up, but throws
    std::exceptions of "not enough space" when it attempts to delete records
    via a cursor. The application also crashes randomly returning
    RUN_RECOVERY. But when the cache size is increased, the application
    will not even start up; instead, it fails and throws std::exceptions which say there
    is insufficient space to open the primary databases.
    Here is some data from a server that has 4GB RAM with 2.8 GBytes free
    (according to "top") when the data was collected:
    DB_CONFIG set_cachesize    db_stat -m Pool    Ind. Cache    Result
    0 67108864 1               80 MB              8 KB          Starts, but crashes and can't delete by cursor because of insufficient space
    0 134217728 1              160 MB             8 KB          Same as the case above
    0 268435456 1              320 MB             8 KB          Doesn't start; says there is not enough space to open a primary database
    0 536870912 1              512 MB             16 KB         Doesn't start; not enough space to open a primary database (although it mentions a different primary database than before)
    1 073741884 1              1 GB 70 MB         36 KB         Doesn't start; not enough space to open a primary database (again a different primary database than previously)
    2 147483648 1              2 GB 140 MB        672 KB        Doesn't start; not enough space to open a primary database (again a different primary database than previously)
    I should also mention that the application is written in Perl and uses
    the Sleepycat::Db Perl module to interface with the BerkeleyDB C++ API.
    Any help on how to interpret this data and, if the problem is the
    interface with Solaris, how to tweak that, will be greatly appreciated.
    Sincerely,
    Bill Wheeler, Department of Mathematics, Indiana University, Bloomington.

    Having found answers to my questions, I think I should document them here.
    1. On the matter of the error message "not enough space", this message
    apparently originates from Solaris. When a process (e.g., an Apache child)
    requests additional (virtual) memory (via either brk or mmap) such that the
    total (virtual) memory allocated to the process would exceed the system limit
    (set by the setrlimit command), then the Solaris kernel rejects the request
    and returns the error ENOMEM. Somewhat cryptically, the text for this error
    is "not enough space" (in contrast, for instance, to "not enough virtual
    memory").
    Apparently, when the BerkeleyDB cache size is set too large, a process
    (e.g., an Apache child) that attempts to open the environment and databases
    may request a total memory allocation that exceeds the system limit.
    Then Solaris will reject the request and return the ENOMEM error.
    Within Solaris, the only solutions are apparently
    (i) to decrease the cache size or
    (ii) to increase the system limit via the setrlimit command.
    2. On the matter of the DB_RUNRECOVERY errors, the cause appears
    to have been the use of the DB_TXN_NOWAIT flag in combination with
    code that was mishandling some of the resulting, complex situations.
    Sincerely,
    Bill Wheeler
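    A minimal C sketch of option (ii) above, raising the per-process limit before the environment is opened; note that RLIMIT_DATA is an assumption here, since the post does not say which limit actually applies on that box:
        #include <stdio.h>
        #include <sys/resource.h>   /* getrlimit, setrlimit */

        /* Raise the soft data-segment limit to the hard limit before the
         * BerkeleyDB environment is opened, so a large cache allocation is
         * not rejected by the kernel with ENOMEM ("not enough space").
         * Which rlimit actually bites (RLIMIT_DATA here) may vary. */
        static int raise_data_limit(void)
        {
            struct rlimit rl;

            if (getrlimit(RLIMIT_DATA, &rl) != 0) {
                perror("getrlimit");
                return -1;
            }
            rl.rlim_cur = rl.rlim_max;   /* soft limit up to the hard limit */
            if (setrlimit(RLIMIT_DATA, &rl) != 0) {
                perror("setrlimit");
                return -1;
            }
            return 0;
        }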

  • Cache Size error

    We have a few users that occasionally receive the following:
    OLAP_error (1200601): Not enough memory for formula execution. Set MAXFORMULACACHESIZE configuration parameter to [2112]KB and try again.
    Our Essbase admin is suggesting that, rather than increasing MAXFORMULACACHESIZE, we reduce the maximum number of rows that are allowed to be returned. Thoughts on that?
    2 other questions:
    Are there any issues with increasing the MAXFORMULACACHESIZE to a much larger number than what the error message recommends (let's say 9000 KB for the sake of this discussion)? In the DBAG I think it says it will only use what is needed.
    Are there any issues with setting the maximum rows allowed to be returned to a very high number (such as 1 million rows, to reflect the maximum number of rows Excel can handle)?

    The answer to both of your questions is "no": there won't be any problem if you change the cache size or increase the row limit. But in practical conditions there will be no reports in any financial organization retrieving a million rows, so it is better to split the workbook for faster retrieval and better performance.

  • Why can't I increase the cache size in FF 4?

    OK, first of all, to be honest I really don't like FF 4. I see absolutely no advantages over the previous versions and I can't find anything to like about FF 4..
    One big and annoying factor (besides the whole UI and layout) is: why can't I increase the cache size?
    If I choose to manage it myself and override what the developers did, I can go to 1024 and that's it. If I want 5 GB of cache, I'm out of luck because some coder decides how I want to manage my disk space? They're kidding, right?
    What the heck is the matter with these guys?
    This thing is so bad that I've resisted upgrading; I didn't have a choice when I upgraded to Ubuntu 11.04, it came with the package... Yuk... I've been a FF user since late 2003 or early 2004... I'm about to part ways with this thing if they don't fix it... Even IE sucks less than this thing...
    None of my Windows machines are going to get 4 loaded if I can help it.. About 1 million people agree with me...

    correction.. It's not a million.. it's legion at this point..
    Google "I hate FF 4" and you'll get 40,900,000 results... I don't trust Google further than I can throw them, but 41 million people complaining about this thing? What a freakin travesty...

  • Why does performance decrease as cache size increases?

    Hi,
    We have some very serious performance problems with our
    database. I have been trying to help by tuning the cache size,
    but the results are the opposite of what I expect.
    To create new databases with my data set, it takes about
    8200 seconds with a 32 Meg cache. Performance gets worse
    as the cache size increases, even though the cache hit rate
    improves!
    I'd appreciate any insight as to why this is happening.
    32 Meg does not seem like such a large cache that it would
    strain some system limitation.
    Here are some stats from db_stat -m
    Specified a 128 Meg cache size - test took 16076 seconds
    160MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    160MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    34M Requested pages found in the cache (93%)
    2405253 Requested pages not found in the cache
    36084 Pages created in the cache
    2400631 Pages read into the cache
    9056561 Pages written from the cache to the backing file
    2394135 Clean pages forced from the cache
    2461 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    40048 Current total page count
    40048 Current clean page count
    0 Current dirty page count
    16381 Number of hash buckets used for page location
    39M Total number of times hash chains searched for a page (39021639)
    11 The longest hash chain searched for a page
    85M Total number of hash chain entries checked for page (85570570)
    Specified a 64 Meg cache size - test took 10694 seconds
    80MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    80MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    31M Requested pages found in the cache (83%)
    6070891 Requested pages not found in the cache
    36104 Pages created in the cache
    6066249 Pages read into the cache
    9063432 Pages written from the cache to the backing file
    5963647 Clean pages forced from the cache
    118611 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    20024 Current total page count
    20024 Current clean page count
    0 Current dirty page count
    8191 Number of hash buckets used for page location
    42M Total number of times hash chains searched for a page (42687277)
    12 The longest hash chain searched for a page
    98M Total number of hash chain entries checked for page (98696325)
    Specified a 32 Meg cache size - test took 8231 seconds
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10812846)
    35981 Pages created in the cache
    10M Pages read into the cache (10808327)
    9200273 Pages written from the cache to the backing file
    9335574 Clean pages forced from the cache
    1498651 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    10012 Current clean page count
    0 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47429232)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118218066)
    vmstat says that a few minutes into the test, the box is
    spending 80-90% of its time in iowait. That worsens as
    the test continues.
    System and test info follows
    We have 10 databases (in 10 files) sharing a database
    environment. We are using a hash table since we expect
    data accesses to be pretty much random.
    We are using the default cache type: a memory mapped file.
    Using DB_PRIVATE did not improve performance.
    The database environment created with these flags:
    DB_CREATE | DB_THREAD | DB_INIT_CDB | DB_INIT_MPOOL
    The databases are opened with only the DB_CREATE flag.
    There is only one process accessing the db. In my tests,
    only one thread accesses the db, doing only writes.
    We do not use transactions.
    My data set is about 550 Meg of plain ASCII text data.
    13 million inserts and 2 million deletes. Key size is
    32 bytes, data size is 4 bytes.
    BDB 4.6.21 on linux.
    1 Gig of RAM
    Filesystem = ext3 page size = 4K
    The test system is not doing anything else while I am
    testing.
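    For concreteness, a minimal C sketch of the environment setup described above (single cache region, CDB environment with the flags listed); the home path is a placeholder and error handling is trimmed:
        #include <db.h>   /* Berkeley DB C API */

        /* Open a CDB environment with a 32 MB memory pool, matching the
         * flags listed above.  "/path/to/env" is a placeholder. */
        static DB_ENV *open_env(void)
        {
            DB_ENV *dbenv;

            if (db_env_create(&dbenv, 0) != 0)
                return NULL;

            /* 0 GB + 32 MB, in a single cache region. */
            if (dbenv->set_cachesize(dbenv, 0, 32 * 1024 * 1024, 1) != 0 ||
                dbenv->open(dbenv, "/path/to/env",
                            DB_CREATE | DB_THREAD | DB_INIT_CDB | DB_INIT_MPOOL,
                            0) != 0) {
                dbenv->close(dbenv, 0);
                return NULL;
            }
            return dbenv;
        }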

    Surprising result: I tried closing the db handles with DB_NOSYNC and performance
    got worse. Using a 32 Meg cache, it took about twice as long to run my test:
    15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
    Here is some data from db_stat -m when using DB_NOSYNC:
    40MB 1KB 900B Total cache size
    1 Number of caches
    1 Maximum number of caches
    40MB 8KB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    26M Requested pages found in the cache (70%)
    10M Requested pages not found in the cache (10811882)
    44864 Pages created in the cache
    10M Pages read into the cache (10798480)
    7380761 Pages written from the cache to the backing file
    3452500 Clean pages forced from the cache
    7380761 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    10012 Current total page count
    5001 Current clean page count
    5011 Current dirty page count
    4099 Number of hash buckets used for page location
    47M Total number of times hash chains searched for a page (47428268)
    13 The longest hash chain searched for a page
    118M Total number of hash chain entries checked for page (118169805)
    It looks like not flushing the cache regularly is forcing a lot more
    dirty pages (and fewer clean pages) from the cache. Forcing a
    dirty page out is slower than forcing a clean page out, of course.
    Is this result reasonable?
    I suppose I could try to sync less often than I have been, but more often
    than never to see if that makes any difference.
    When I close or sync one db handle, I assume it flushes only that portion
    of the dbenv's cache, not the entire cache, right? Is there an API I can
    call that would sync the entire dbenv cache (besides closing the dbenv)?
    Are there any other suggestions?
    Thanks,
    Eric
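    On the last question above, about syncing the whole environment cache without closing it, a minimal sketch of the two memory-pool calls in the C API that do this, DB_ENV->memp_sync and DB_ENV->memp_trickle; whether either actually helps this particular workload is untested here:
        #include <stdio.h>
        #include <db.h>

        /* Flush every dirty page in the environment's cache to the
         * backing files (the whole mpool, not just one database). */
        static int flush_all(DB_ENV *dbenv)
        {
            return dbenv->memp_sync(dbenv, NULL);
        }

        /* Alternatively, keep writes incremental: ensure at least
         * `pct` percent of the cache pages are clean. */
        static int trickle(DB_ENV *dbenv, int pct)
        {
            int nwrote;
            int ret = dbenv->memp_trickle(dbenv, pct, &nwrote);

            if (ret == 0)
                printf("memp_trickle wrote %d pages\n", nwrote);
            return ret;
        }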

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, and we are working with the Essbase, EAS, and Shared Services components. One of our users tried to run the calc script of one application and faced this error.
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I have done some Googling and found that we need to add something to the essbase.cfg file, like the following.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my question is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.
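    As a rough worked example of the data cache sizing step mentioned above (the numbers are assumptions, not taken from Naveen's database): with a block size of 24 KB and SET LOCKBLOCK HIGH mapping to CALCLOCKBLOCKHIGH 2000, the data cache would need room for at least 2000 x 24 KB = 48,000 KB (about 47 MB) of locked blocks, on top of what the rest of the calculation already needs.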

  • Increase the size of the cache using the cache.size= number of pages ?

    Hi All,
    I am getting this error when I do load testing.
    I have Connection pool for Sybase database that I am using in my JPD. I am using Database control of weblogic to call the Sybase Stored procedure.
    I got following exception when I was doing load testing with 30 concurrent users.
    Any idea why this exception is coming ?
    thanks in advance
    Hitesh
    javax.ejb.EJBException: [WLI-Core:484047]Tracking MDB failed to acquire resources.
    java.sql.SQLException: Cache Full. Current size is 2069 pages. Increase the size of the cache using the cache.size=<number of pages>
         at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
         at com.pointbase.net.netJDBCPrimitives.handleJDBCObjectResponse(Unknown Source)
         at com.pointbase.net.netJDBCConnection.prepareStatement(Unknown Source)
         at weblogic.jdbc.common.internal.ConnectionEnv.makeStatement(ConnectionEnv.java:1133)
         at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:917)
         at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:905)
         at weblogic.jdbc.wrapper.Connection.prepareStatement(Connection.java:350)
         at weblogic.jdbc.wrapper.JTSConnection.prepareStatement(JTSConnection.java:479)
         at com.bea.wli.management.tracking.TrackingMDB.getResources(TrackingMDB.java:86)
         at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:141)
         at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:115)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:370)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:262)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2678)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)

    hitesh Chauhan wrote:
    Hi All,
    I am getting this error when I do load testing.
    I have Connection pool for Sybase database that I am using in my JPD. I am using Database control of weblogic to call the Sybase Stored procedure.
    I got following exception when I was doing load testing with 30 concurrent users.
    Any idea why this exception is coming ?
    thanks in advance
    Hitesh
    Hi. Please note below: the stack trace and exception are coming from the
    PointBase DBMS, nothing to do with Sybase. It seems to be an issue
    with a configurable limit for PointBase that you are exceeding.
    Please read the PointBase configuration documents, and/or configure
    your MDBs to use Sybase.
    Joe
    >
    javax.ejb.EJBException: [WLI-Core:484047]Tracking MDB failed to acquire resources.
    java.sql.SQLException: Cache Full. Current size is 2069 pages. Increase the size of the cache using the cache.size=<number of pages>
         at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
         at com.pointbase.net.netJDBCPrimitives.handleJDBCObjectResponse(Unknown Source)
         at com.pointbase.net.netJDBCConnection.prepareStatement(Unknown Source)
         at weblogic.jdbc.common.internal.ConnectionEnv.makeStatement(ConnectionEnv.java:1133)
         at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:917)
         at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:905)
         at weblogic.jdbc.wrapper.Connection.prepareStatement(Connection.java:350)
         at weblogic.jdbc.wrapper.JTSConnection.prepareStatement(JTSConnection.java:479)
         at com.bea.wli.management.tracking.TrackingMDB.getResources(TrackingMDB.java:86)
         at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:141)
         at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:115)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:370)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:262)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2678)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)
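    For reference, the cache.size named in the exception is a PointBase setting counted in pages, so the immediate workaround would be to raise it well above the 2069 pages reported (for example, cache.size=8192) in the PointBase configuration properties; the exact property file name and location vary by install, so check the PointBase documentation that ships with WebLogic, or follow Joe's advice and point the tracking MDBs at Sybase instead.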

  • How to increase partial cache size on Lookup stuck at 4096 MB SSIS 2012

    Hello,
    After converting from SSIS 2008 to SSIS 2012, I am facing a major performance slowdown while loading fact data.
    When we used 2008, one file used to take around 2 hours on average; now, after converting to 2012, it took 17 hours to load one file.
    This is the current scenario: we load data into staging and select everything from staging (28 million rows), then use a lookup for each dimension. I believe it is taking a very long time due to one dimension table, which has 89 million rows.
    With the lookup, we are currently using partial cache because full cache caused the system to run out of memory.
    [Screenshot of the Lookup Transformation Editor for this lookup omitted.]
    Does anyone know how to increase the partial cache size on 64-bit? I am stuck at 4096 MB and cannot increase it. In 2008, I had a 200,000 MB partial cache size.
    Any ideas with how to make this faster?
    Thanks in advance!

    Hi Sarora, 
    First of all, you may want to re-think whether that huge dimension is really a dimension, or whether you can re-design your model to lower its cardinality.
    Then, why are you using Partial Cache? 
    Partial cache will end up querying your huge dimension directly a lot of times, potentially as many times as the No Lookup mode.
    If you can afford to load the whole dimension in memory, it'd be MUCH better in terms of performance. If you are loading the whole dimension, try querying just the surrogate key and the business key columns; you'll find the amount of memory used will be much lower.
    I am not sure how much memory you have available or how much memory the Lookup used to take, but with the amounts you talked about you were probably loading the whole table in memory. You can roughly estimate the amount of memory the Lookup will take in Full Lookup mode using
    (ColumnWidth1 + ColumnWidth2 + ... + ColumnWidthN) * NumberOfRows
    Regards.
    Pau
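    As a rough worked example of that estimate (the column widths are assumptions, not taken from the actual dimension): with a 4-byte surrogate key and an 8-byte business key, the 89-million-row dimension comes to roughly (4 + 8) * 89,000,000 ≈ 1.07 GB, which is why trimming the lookup query down to just those two columns may make full cache feasible again.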

  • How to increase cache size in firefox 6.02 mac os x

    I just want to be able to increase the cache size for larger files. As it is now, large files are not being cached, and the limit of 1024 MB is very limiting.

    See if this reference helps:
    http://www.cyberciti.biz/faq/linux-unix-apache-increase-php-upload-limit/

  • Why has the ARGUMENT$ segment size increased?

    Hello everyone,
    I imported a dump file of about 800 MB in size, and after the import completed my database size became more than 10 GB, with the ARGUMENT$ segment occupying more than 5 GB.
    So, I would like to know:
    1. Why did it work this way?
    2. Since this is a SYSTEM tablespace segment, the SYSTEM tablespace has ultimately grown; is there any way to reduce the SYSTEM tablespace?
    The query result below is for your reference.
    Thanks in advance
    select owner,segment_name,segment_type ,bytes/(1024*1024) size_m from dba_segments where tablespace_name = 'SYSTEM' and bytes/(1024*1024) > 2 order by size_m desc;
    OWNER    SEGMENT_NAME    SEGMENT_TYPE    SIZE_M
    SYS      ARGUMENT$       TABLE             5431
    SYS      I_ARGUMENT1     INDEX             4366
    SYS      I_ARGUMENT2     INDEX             2453

    These threads on reducing the SYSTEM tablespace should help:
    Shrink/Reduce System Tablespace!
    System Tablespace have grown too large!
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:153612348067

  • Can't increase cache size: though I set it to 500 MB it shows 27.65 MB; what should I do?

    I am unable to increase the cache size. Whatever I put in the setting, it says the max cache limit is 27.65 MB. I have 3 GB of RAM and a 200 GB hard disk.

    Mark the question as solved. Please!

  • Problems with increasing/decreasing cache size when live

    Hello,
    I have configured multiple environments which I'm compacting sequentially and to achieve this I allocate a bigger cache to the env currently being compacted as follows:
    Initialization:
    DB_ENV->set_cachesize(gbytes, bytes, 1); // Initial cache size.
    DB_ENV->set_cache_max(gbytes, bytes); // Maximum size.
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat, I can see that everything is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    But the problem is that over time memory is leaked (as if the extra memory of each env was not freed) and I'm totally sure that the problem comes from this code.
    I'm running Berkeley DB 4.7.25 on FreeBSD.
    Maybe some leak was fixed in newer versions and you could suggest a patch to me? Or am I not using the API correctly?
    Thanks!

    Hi,
    Thanks for providing the information.
    Unfortunately, I don't remember the exact test case I was doing, so I did a new one with 32 envs.
    I set the following for each env:
    - Initial cache=512MB/32
    - Max=1GB
    Before open, I do:
    DBEnvironment->set_cachesize((u_int32_t)0, (u_int32_t)512*1024*1024/32, 1);
    DBEnvironment->set_cache_max(1*1024*1024*1024, 0);
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=1 and obytes=0
    After open, I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: cache_size=18644992 cache_ncache=1
    So here, the values returned by memp_stat are normal but get_cache_max is strange. Then after increasing the cache to the strange value returned by get_cache_max (gbytes=0, obytes=9502720), I have the following:
    DBEnvironment->get_cache_max(&gbytes, &obytes); // gives gbytes=0 and obytes=9502720
    memp_stat: outlinks cache_size=27328512 cache_ncache=54
    with cache_size being: ((ui64)sp->st_gbytes * GIGA + sp->st_bytes);.
    So cache is actually increased...
    I tried to reproduce this case by opening 1 env as follows.
    //Before open
    DbEnv->set_cachesize(); 512MB, 1 cache
    DbEnv->set_cache_max; 1GB
    //After open
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    //Decrease the cache size
    DbEnv->set_cachesize(); 9MB(9502720B), 1 cache
    DbEnv->get_cachesize; 512MB, 1cache
    DbEnv->get_cache_max; 1GB
    memp_stat: cache:512MB, ncache:1, cache_max:1GB
    All the results are expected, since when resizing the cache after the DbEnv is open, the size is rounded to the nearest multiple of the region size. Region size means the size of each region specified initially. Please refer to the BDB doc: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Here the region size is 512MB/1 cache = 512MB, and I don't think you can resize the cache smaller than 1 region.
    Since you are opening 32 envs at the same time, each with a 512MB cache and a 1GB maximum, whether an env can allocate as much as specified for the cache when it is opened depends on the system. I am guessing the number 9502720 you got from get_cache_max after opening the env is based on the system and the application's request, i.e. the cache size you could actually get when opening the env.
    And for the case listed in the beginning of the post
    While live, application decreases cache of current env when finished and then increases cache of next env using:
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Decrease cache size of current env to initial size
    DB_ENV->set_cachesize(gbytes, obytes, 0); // Increase cache size of next env to max size.
    When I print statistics about the memory pool using DB_ENV->memp_stat I can see that everyting is going normally:
    memp_stat: env1 ncache= 8 cache_size=20973592 // env1 is current env
    memp_stat: env2 ncache= 1 cache_size=20973592
    and then after changing current env:
    memp_stat: env1 ncache= 1 cache_size=20973592
    memp_stat: env2 ncache= 8 cache_size=20973592 // env2 is now current env
    When env1 is finishing soon, what numbers do you set in set_cachesize to decrease the cache, including the number of caches and cache size?
    When decreasing the cache, I do:
    env->GetDbEnv()->set_cachesize((u_int32_t)0, (u_int32_t)20973592, 0);
    I mean, in all cases I simply set the cache size to its original value (obtained after open through get_cachesize) when decreasing, and set it to its max value when increasing (obtained through get_cache_max; plus I do something like cacheMaxSize * 0.75 if < 500MB).
    I can reproduce this case, and I think the result is expected. When using DbEnv->set_cachesize() to resize the cache after the env is opened, the ncache parameter is ignored. Please refer to the BDB doc here: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envset_cachesize.html. Hence I don't think you can decrease the cache size by setting the number of caches to 0.
    Hope it helps.
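    To summarise the behaviour discussed above, a minimal C sketch of the documented pattern (the 64 MB / 1 GB figures are illustrative only): the cache and its ceiling are set before DB_ENV->open, and any resize after open ignores the ncache argument and is rounded to whole regions.
        #include <db.h>

        /* Configure an environment whose cache can be grown at run time.
         * Sizes are illustrative, not taken from the original setup. */
        static int setup_and_resize(DB_ENV *dbenv, const char *home)
        {
            int ret;

            /* Before open: 64 MB initial cache in one region, 1 GB ceiling. */
            if ((ret = dbenv->set_cachesize(dbenv, 0, 64 * 1024 * 1024, 1)) != 0 ||
                (ret = dbenv->set_cache_max(dbenv, 1, 0)) != 0 ||
                (ret = dbenv->open(dbenv, home,
                                   DB_CREATE | DB_INIT_MPOOL | DB_THREAD, 0)) != 0)
                return ret;

            /* After open: grow toward the ceiling.  The last argument (ncache)
             * is ignored now, and the new size is rounded up to whole regions,
             * so the cache cannot shrink below one region (64 MB here). */
            return dbenv->set_cachesize(dbenv, 0, 512 * 1024 * 1024, 0);
        }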

  • Why does rotating a layer cause image size to increase?

    I was working on the top layer of a PSD document and did a custom rotation of 45 degrees so I could get a better painting angle.  When I rotated the layer back I noticed my image size had increased from 10 x 8 inches to 18.3 x 18.3.  When I tried to resize the image back to 10 x 8 it made my image fat and squatty.  Any ideas what I did wrong when rotating?  I just wanted a better wrist angle for stroking lines...

    This is an example of using Image>Resize>Reveal All.
    If you rotate a layer, using free transform or Image>Rotate>Free Rotate Layer,
    then select the move tool, you should see a bounding box well outside the original image.
    This is the part of the photo you can't see after rotating the layer.
    If you then were to go to Image>Image Size>Reveal All, Elements makes those areas visible.
    The only thing about rotating images or layers at angles other than 90-degree increments
    is that it might degrade the image some.
    MTSTUNER

  • Swapping and Database Buffer Cache size

    I've read that setting the database buffer cache size too large can cause swapping and paging. Why is this the case? More memory for SQL data would not seem to be a problem, unless it is the proportion of the database buffer cache to the rest of the SGA that matters.

    Well, I am always a defender of the large DB buffer cache. Setting a bigger DB buffer cache alone will not in any way hurt Oracle performance.
    However, as the buffer cache grows, the time to determine 'which blocks
    need to be cleaned' increases. Therefore, at a certain point the benefit of a
    larger cache is offset by the time to keep it sync'd to the disk. After that point,
    increasing the buffer cache size can actually hurt performance. That's the reason why Oracle has checkpoints.
    A checkpoint performs the following three operations:
    1. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk.
    It's the DBWR process that writes all modified database blocks back to the datafiles.
    2. The latest SCN is written (updated) into the datafile header.
    3. The latest SCN is also written to the controlfiles.
    The following events trigger a checkpoint.
    1. Redo log switch
    2. LOG_CHECKPOINT_TIMEOUT has expired
    3. LOG_CHECKPOINT_INTERVAL has been reached
    4. DBA requires so (alter system checkpoint)
