Proxy 4 - Cache size keeps growing

I may have a wrong cache setting somewhere, but I can't find it. I am running Proxy 4.0.2 (for Windows).
Under Cache settings, I have "Cache Size" set to 800 MB. Under "Cache Capacity" I have it set to 1 GB (500 MB-2 GB).
The problem is that the physical cache size on the hard drive keeps growing and growing, and is starting to fill the partition. At last count, the "cache" directory that holds the cache files was using 5.7 GB of space and still growing.
Am I misunderstanding something? I thought the cache would stop at a given maximum physical size, but the cache directory on the hard drive is now close to 6 GB and still growing day by day. When is it going to stop growing, or how do I stop it and put a cap on the physical size it can grow to on the hard drive?
Thanks

Until 4.0.3 is out, you can use this script.
Warning: it is experimental; run it on a copy of the cache first to make sure that it does what you want.
The first argument is the size in MB that you want to remove; for example, assuming you saved the script as gc.pl, "perl gc.pl 1000" removes roughly 1 GB.
I assume your cache directory is "./cache"; if it is not, change the variable $cachedir to the correct value.
==============cut-here==========
#!/usr/bin/perl
use strict;
use File::stat;

my $cachedir = "./cache";
my $gc_size;    # bytes still to be removed
my $verbose = 0;

# Delete one cache file; stop as soon as enough bytes have been freed.
sub gc_file {
    my $file = shift;
    my $sb = stat($file);
    $gc_size -= $sb->size;
    unlink $file;
    print "$gc_size more after $file\n" if $verbose;
    exit 0 if $gc_size < 0;
}

sub main {
    my $size = shift;
    $gc_size = $size * 1024 * 1024;    # argument is in MB
    opendir(DIR, $cachedir) || die "can't opendir $cachedir: $!";
    my @sects = grep {/^s[0-9]\.[0-9]{2}$/} readdir(DIR);
    closedir DIR;
    foreach my $sect (@sects) {
        opendir(CDIR, "$cachedir/$sect") || die "can't opendir $cachedir/$sect: $!";
        my @ssects = grep {/^[A-F0-9]{2}$/} readdir(CDIR);
        closedir CDIR;
        foreach my $ssect (@ssects) {
            opendir(SCDIR, "$cachedir/$sect/$ssect") || die "can't opendir $cachedir/$sect/$ssect: $!";
            my @files = grep {/^[A-Z0-9]{16}$/} readdir(SCDIR);
            closedir SCDIR;
            foreach my $file (@files) {
                gc_file("$cachedir/$sect/$ssect/$file");
            }
        }
    }
}

main($ARGV[0]) if $ARGV[0];
=============cut-end==========

On your second problem, the easiest way to recover a corrupted partition is to list the sections in that partition and delete the ones that look like the odd ones out.
E.g.:
$ ls ./cache
s4.00 s4.01 s4.02 s4.03 s4.04 s4.05 s4.06 s4.07 s4.08 s4.09 s4.10 s4.11 s4.12 s4.13 s4.14 s4.15 s0.00
Here s0.00 is the odd one out, so remove the s0.00 section. Also keep an eye on the relative sizes of the sections: if the section to be removed is larger than the rest of the sections combined, you might not want to remove it.
WARNING: whatever you do, do it on a copy first.

Similar Messages

  • Index size keeps growing while table size unchanged

    Hi Guys,
    I've got some simple, standard B-tree indexes that keep acquiring new extents (e.g. 4 MB per week) while the base table size has stayed unchanged for years.
    The base tables are working tables with DML activity and nearly the same number of records daily.
    I've analysed the schema in the test environment.
    Those indexes do not fulfil the usual criteria for a rebuild, namely:
    - deleted entries represent 20% or more of the current entries
    - the index depth is more than 4 levels
    May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
    Grateful if someone can give me some advice.
    Thanks a lot.
    Best regards,
    Timmy

    Please read the documentation. COALESCE is available in 9.2.
    Here is a demo for coalesce in 10G.
    YAS@10G>truncate table t;
    Table truncated.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                         65536
    TIND                      65536
    YAS@10G>insert into t select level from dual connect by level<=10000;
    10000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     196608
    We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
    YAS@10G>delete from t where mod(id,2)=0;
    5000 rows deleted.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    Table size is the same, but the index size got bigger.
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................               6
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              29
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    We have 29 full blocks. Let's coalesce.
    YAS@10G>alter index tind coalesce;
    Index altered.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................              13
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              22
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    The index size is still the same, but now we have 22 full and 13 empty blocks.
    Insert another 5000 rows with higher key values.
    YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        262144
    TIND                     327680
    Now the index did not get bigger, because it could use the free blocks for the new rows.
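    To answer the rebuild part of the question: unlike COALESCE, a rebuild recreates the index segment at its compact size, so it can actually return space. A sketch in the same session style (the resulting size depends on your data):
    YAS@10G>alter index tind rebuild;
    Index altered.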

  • Time Machine keeps growing in size or backing up a size far greater than it should

    Apologies if this is covered elsewhere, but after hours of searching I still haven't found a resolution that actually fixes my problem.
    When my Time Machine tries to back up, the size of the backup just keeps growing and growing - a problem which has been reported several times, but none of the solutions are fixing it for me. I've completely reset Time Machine (disconnected the external hard drive, deleted the preferences from ~/Library/Preferences, then set it up again with my exclusions) and the problem just reappeared.
    I've gone through this routine several times and in most cases had no joy. On one occasion I thought the problem had been resolved, as it did the initial backup and then a couple of hourly backups, but then it suddenly returned a message saying it had insufficient space (trying to back up 470 GB but the disk only has 300 GB available), which was just ridiculous as the full backup size was under 50 GB.
    I've verified the disk on several occasions and it always comes up clean. In a fit of desperation I've even tried reformatting the external drive, with no improvement.
    I'm at the point of deleting Time Machine from my laptop and going back to the old Backup program. Can anyone offer anything else I could try?

    Hi, and welcome to the forums.
    Have you Verified your internal HD (and Repaired any externals that are also being backed-up)?
    If so, it's probably something damaged or corrupted in your installation of OSX. I'd suggest downloading and installing the 10.6.4 "combo" update. That's the cleverly-named combination of all the updates to Snow Leopard since it was first released, so installing it should fix anything that's gone wrong since then, such as with one of the normal "point" updates. Info and download available at: http://support.apple.com/kb/DL1048 Be sure to do a +Repair Permissions+ via Disk Utility (in your Applications/Utilities folder) afterwards.
    If that doesn't help, reinstall OSX from your Snow Leopard Install disc (that won't affect anything else), then apply the "combo" again.

  • How do I change proxy settings so it doesn't keep asking me "authentication req. The proxy web2.ucsd.edu is requesting a username and password. The site says: ucsd Squid Proxy-cache"?

    I changed my proxy settings to access a restricted school website. I don't know how to change them back to normal! Every time I'm browsing the internet, Authentication Required windows pop up, like 4-7 times a day, at random. It says "the proxy web2.ucsd.edu:3128 is requesting a username and password. The site says: UCSD Squid proxy-cache" and makes me put in a username and password every time. So annoying. How do I make the settings go back to default?

    1. Open firefox
    2. Go to "Tools" tab
    3. Go to "Options"
    4. Click on "Advanced"
    5. Open "Network" tab
    6. Click on "Settings"
    7. Select "No Proxy"
    8. Click "OK"

  • Swapping and Database Buffer Cache size

    I've read that setting the database buffer cache size too large can cause swapping and paging. Why is this the case? More memory for sql data would seem to not be a problem. Unless it is the proportion of the database buffer to the rest of the SGA that matters.

    Well, I am always a defender of the large DB buffer cache. Setting a bigger db buffer cache alone will not in any way hurt Oracle performance.
    However, as the buffer cache grows, the time to determine which blocks need to be cleaned increases. Therefore, at a certain point the benefit of a larger cache is offset by the time to keep it sync'd to the disk. After that point, increasing the buffer cache size can actually hurt performance. That's the reason why Oracle has checkpoints.
    A checkpoint performs the following three operations:
    1. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk.
    It's the DBWR process that writes all modified database blocks back to the datafiles.
    2. The latest SCN is written (updated) into the datafile header.
    3. The latest SCN is also written to the controlfiles.
    The following events trigger a checkpoint.
    1. Redo log switch
    2. LOG_CHECKPOINT_TIMEOUT has expired
    3. LOG_CHECKPOINT_INTERVAL has been reached
    4. The DBA requests one explicitly (alter system checkpoint), as shown below
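    For example, a checkpoint can be forced manually from SQL*Plus:
    SQL> alter system checkpoint;
    System altered.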

  • Sparsebundle keeps growing rapidly

    I have 2 Macs with 500 GB drives backing up to a 2 TB Time Capsule, about 250 GB of used space on each.  The sparse bundle for one keeps growing until it hogs all the space.  That sparse bundle now occupies 1.5 TB, will not allow further backups, and has deleted all but the most recent backup.  The other one stays at about 370 GB, but still contains several months of backups.  Why does one keep growing, and what can I do to correct this problem?
    Thanks.
    Ray R

    It sounds like more than the internal HD is getting backed-up.  Are there other drives in or connected to the "rogue" Mac?  
    See what it shows for Estimated size of full backup under the exclusions box in Time Machine Preferences > Options.  If that's larger than the 250 GB or so that's on the internal HD, exclude the other drive(s) you don't want backed-up.
    It's also possible that sparse bundle is damaged.   A clue may be lurking in your logs.  Use the widget in #A1 of Time Machine - Troubleshooting to display the backup messages from your logs.  Locate the most recent backup from that Mac, then copy and post all the messages here.
    Either way, your best bet might be to delete the sparse bundle for that Mac, per #Q5 in Using Time Machine with a Time Capsule (via Ethernet if at all possible, as that will take quite a while).
    The next backup of that Mac will, of course, be a full one (so do it via Ethernet, too).   Then see what the next one does.  If it's a full backup, see #D7 in Time Machine - Troubleshooting.

  • DB size is growing gradually

    My database size is growing gradually; what could be the reason, and how do I fix it? The OS is Windows XP.
    Hey, it's the D:\oracle\product\10.2.0\admin\orcl\bdump folder: orcl_mmon_8640.trc files are getting written at a rapid pace!
    What should I do now?
    Here goes couple of information:
    SQL> select * from v$sgainfo;
    NAME                                      BYTES RES
    Fixed SGA Size                          1250428 No
    Redo Buffers                            7135232 No
    Buffer Cache Size                     415236096 Yes
    Shared Pool Size                      180355072 Yes
    Large Pool Size                         4194304 Yes
    Java Pool Size                          4194304 Yes
    Streams Pool Size                             0 Yes
    Granule Size                            4194304 No
    Maximum SGA Size                      612368384 No
    Startup overhead in Shared Pool        37748736 No
    Free SGA Memory Available                     0
    11 rows selected.
    SQL> show parameter sga;
    NAME                                 TYPE        VALUE
    lock_sga                             boolean     FALSE
    pre_page_sga                         boolean     FALSE
    sga_max_size                         big integer 584M
    sga_target                           big integer 584M
    SQL>
    Edited by: user1945932 on Feb 2, 2011 1:46 PM
    Edited by: user1945932 on Feb 2, 2011 2:25 PM

    If you are asking about the memory usage you see using operating system utilities, it could be a memory leak. It would be helpful if you posted more details about which patch level of the OS, and the exact patch level of Oracle. The latter can be seen with:
    SYS@TPRD> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for HPUX: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Notice how much easier it is to read when we use code tags before and after the information?

  • New cache size set, then FF restores it again to 75...

    Well, I've set a new cache size (the default is 75) to 250 MB, but after a few minutes, or when I close and re-open FF, the cache is 75 again instead of the new 250 MB. Any way to solve this? Some config in about:config, or in a .ini file? Any advice would be cool.
    Thx and regards!

    The default value is 640000; what value should I type to have 250 MB or more of cache size?
    Also, is there another tweak I should do to change the value and keep the number I want?
    Sorry for my bad English, it is not my native language.

  • Proxy cache

    Hello,
    I am having problems with a programming assignment. It's a proxy cache assignment; the code has been written and I need to fill in the blanks, but I am confused about how to start (a sketch of typical fills follows the listings below).
    Here's the Proxy cache code
    /**
     * ProxyCache.java - Simple caching proxy
     * $Id: ProxyCache.java,v 1.3 2004/02/16 15:22:00 kangasha Exp $
     */
    import java.net.*;
    import java.io.*;
    import java.util.*;

    public class ProxyCache {
        /** Port for the proxy */
        private static int port;
        /** Socket for client connections */
        private static ServerSocket socket;

        /** Create the ProxyCache object and the socket */
        public static void init(int p) {
            port = p;
            try {
                socket = /* Fill in */;
            } catch (IOException e) {
                System.out.println("Error creating socket: " + e);
                System.exit(-1);
            }
        }

        public static void handle(Socket client) {
            Socket server = null;
            HttpRequest request = null;
            HttpResponse response = null;

            /* Process request. If there are any exceptions, then simply
             * return and end this request. This unfortunately means the
             * client will hang for a while, until it timeouts. */

            /* Read request */
            try {
                BufferedReader fromClient = /* Fill in */;
                request = /* Fill in */;
            } catch (IOException e) {
                System.out.println("Error reading request from client: " + e);
                return;
            }

            /* Send request to server */
            try {
                /* Open socket and write request to socket */
                server = /* Fill in */;
                DataOutputStream toServer = /* Fill in */;
                /* Fill in */
            } catch (UnknownHostException e) {
                System.out.println("Unknown host: " + request.getHost());
                System.out.println(e);
                return;
            } catch (IOException e) {
                System.out.println("Error writing request to server: " + e);
                return;
            }

            /* Read response and forward it to client */
            try {
                DataInputStream fromServer = /* Fill in */;
                response = /* Fill in */;
                DataOutputStream toClient = /* Fill in */;
                /* Fill in */
                /* Write response to client. First headers, then body */
                client.close();
                server.close();
                /* Insert object into the cache */
                /* Fill in (optional exercise only) */
            } catch (IOException e) {
                System.out.println("Error writing response to client: " + e);
            }
        }

        /** Read command line arguments and start proxy */
        public static void main(String args[]) {
            int myPort = 0;
            try {
                myPort = Integer.parseInt(args[0]);
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Need port number as argument");
                System.exit(-1);
            } catch (NumberFormatException e) {
                System.out.println("Please give port number as integer.");
                System.exit(-1);
            }

            init(myPort);

            /** Main loop. Listen for incoming connections and spawn a new
             * thread for handling them */
            Socket client = null;
            while (true) {
                try {
                    client = /* Fill in */;
                    handle(client);
                } catch (IOException e) {
                    System.out.println("Error reading request from client: " + e);
                    /* Definitely cannot continue processing this request,
                     * so skip to next iteration of while loop. */
                    continue;
                }
            }
        }
    }

    And the HTTPRequest code
    /**
     * HttpRequest - HTTP request container and parser
     * $Id: HttpRequest.java,v 1.2 2003/11/26 18:11:53 kangasha Exp $
     */
    import java.io.*;
    import java.net.*;
    import java.util.*;

    public class HttpRequest {
        /** Help variables */
        final static String CRLF = "\r\n";
        final static int HTTP_PORT = 80;
        /** Store the request parameters */
        String method;
        String URI;
        String version;
        String headers = "";
        /** Server and port */
        private String host;
        private int port;

        /** Create HttpRequest by reading it from the client socket */
        public HttpRequest(BufferedReader from) {
            String firstLine = "";
            try {
                firstLine = from.readLine();
            } catch (IOException e) {
                System.out.println("Error reading request line: " + e);
            }

            String[] tmp = firstLine.split(" ");
            method = "GET";
            URI = "HTTP/";
            version = "1.1";

            System.out.println("URI is: " + URI);

            if (!method.equals("GET")) {
                System.out.println("Error: Method not GET");
            }

            try {
                String line = from.readLine();
                while (line.length() != 0) {
                    headers += line + CRLF;
                    /* We need to find host header to know which server to
                     * contact in case the request URI is not complete. */
                    if (line.startsWith("Host:")) {
                        tmp = line.split(" ");
                        if (tmp[1].indexOf(':') > 0) {
                            String[] tmp2 = tmp[1].split(":");
                            host = tmp2[0];
                            port = Integer.parseInt(tmp2[1]);
                        } else {
                            host = tmp[1];
                            port = HTTP_PORT;
                        }
                    }
                    line = from.readLine();
                }
            } catch (IOException e) {
                System.out.println("Error reading from socket: " + e);
                return;
            }

            System.out.println("Host to contact is: " + host + " at port " + port);
        }

        /** Return host for which this request is intended */
        public String getHost() {
            return host;
        }

        /** Return port for server */
        public int getPort() {
            return port;
        }

        /**
         * Convert request into a string for easy re-sending.
         */
        public String toString() {
            String req = "";
            req = method + " " + URI + " " + version + CRLF;
            req += headers;
            /* This proxy does not support persistent connections */
            req += "Connection: close" + CRLF;
            req += CRLF;
            return req;
        }
    }

    And the HTTPResponse code
    /**
     * HttpResponse - Handle HTTP replies
     * $Id: HttpResponse.java,v 1.2 2003/11/26 18:12:42 kangasha Exp $
     */
    import java.io.*;
    import java.net.*;
    import java.util.*;

    public class HttpResponse {
        final static String CRLF = "\r\n";
        /** How big is the buffer used for reading the object */
        final static int BUF_SIZE = 8192;
        /** Maximum size of objects that this proxy can handle. For the
         * moment set to 100 KB. You can adjust this as needed. */
        final static int MAX_OBJECT_SIZE = 100000;
        /** Reply status and headers */
        String version;
        int status;
        String statusLine = "";
        String headers = "";
        /* Body of reply */
        byte[] body = new byte[MAX_OBJECT_SIZE];

        /** Read response from server. */
        public HttpResponse(DataInputStream fromServer) {
            /* Length of the object */
            int length = -1;
            boolean gotStatusLine = false;

            /* First read status line and response headers */
            try {
                String line = /* Fill in */;
                while (line.length() != 0) {
                    if (!gotStatusLine) {
                        statusLine = line;
                        gotStatusLine = true;
                    } else {
                        headers += line + CRLF;
                    }
                    /* Get length of content as indicated by
                     * Content-Length header. Unfortunately this is not
                     * present in every response. Some servers return the
                     * header "Content-Length", others return
                     * "Content-length". You need to check for both
                     * here. */
                    if (line.startsWith("GET") ||
                        line.startsWith("HTTP/1.1")) {
                        String[] tmp = line.split(" ");
                        length = Integer.parseInt(tmp[1]);
                    }
                    line = fromServer.readLine();
                }
            } catch (IOException e) {
                System.out.println("Error reading headers from server: " + e);
                return;
            }

            try {
                int bytesRead = 0;
                byte buf[] = new byte[BUF_SIZE];
                boolean loop = false;

                /* If we didn't get Content-Length header, just loop until
                 * the connection is closed. */
                if (length == -1) {
                    loop = true;
                }

                /* Read the body in chunks of BUF_SIZE and copy the chunk
                 * into body. Usually replies come back in smaller chunks
                 * than BUF_SIZE. The while-loop ends when either we have
                 * read Content-Length bytes or when the connection is
                 * closed (when there is no Content-Length in the
                 * response). */
                while (bytesRead < length || loop) {
                    /* Read it in as binary data */
                    int res = /* Fill in */;
                    if (res == -1) {
                        break;
                    }
                    /* Copy the bytes into body. Make sure we don't exceed
                     * the maximum object size. */
                    for (int i = 0;
                         i < res && (i + bytesRead) < MAX_OBJECT_SIZE;
                         i++) {
                        /* Fill in */
                    }
                    bytesRead += res;
                }
            } catch (IOException e) {
                System.out.println("Error reading response body: " + e);
                return;
            }
        }

        /**
         * Convert response into a string for easy re-sending. Only
         * converts the response headers, body is not converted to a
         * string.
         */
        public String toString() {
            String res = "";
            res = statusLine + CRLF;
            res += headers;
            res += CRLF;
            return res;
        }
    }
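    A minimal sketch of how the networking blanks above are typically filled in, using only standard java.net / java.io calls. The variable names match the skeleton; everything here is an illustrative guess under those assumptions, not the assignment's official solution:

    // In init(): create the listening socket on the chosen port.
    socket = new ServerSocket(port);

    // In main()'s loop: block until the next client connects.
    client = socket.accept();

    // In handle(): wrap the client socket so the request can be parsed.
    BufferedReader fromClient =
        new BufferedReader(new InputStreamReader(client.getInputStream()));
    request = new HttpRequest(fromClient);

    // Connect to the origin server named in the request and forward it.
    server = new Socket(request.getHost(), request.getPort());
    DataOutputStream toServer = new DataOutputStream(server.getOutputStream());
    toServer.writeBytes(request.toString());

    // Read the server's reply and relay it: headers first, then the body.
    DataInputStream fromServer = new DataInputStream(server.getInputStream());
    response = new HttpResponse(fromServer);
    DataOutputStream toClient = new DataOutputStream(client.getOutputStream());
    toClient.writeBytes(response.toString());
    toClient.write(response.body, 0, response.body.length);

    // In HttpResponse, the header line would come from fromServer.readLine(),
    // the body read would be: int res = fromServer.read(buf);
    // and the copy loop body: body[i + bytesRead] = buf[i];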

  • Mod_wl_ohs_0202.log keeps growing

    Grid Control 11.1.0.1.0 installed on Red Hat 5.2. The repository database is Oracle 11.2.0.2 on the same Linux box.
    The file /u01/app/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs1/mod_wl_ohs_0202.log keeps growing - 6.5 GB after 6 months. I renamed the file and created an empty mod_wl_ohs_0202.log, but the old file still gets written to. Not sure if I should remove the file.
    What is the best practice for managing this file, to prevent it from growing too big?
    Thanks

    please check article-ID
    11G Grid Control Performance: Webtier log - mod_wl_ohs.log in the OMS Home is very Large in Size and not Rotated [ID 1271676.1]
    in MOS...
    HTH
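    As a side note on why the renamed log kept growing: the OHS process still holds the original file open, so it keeps writing through the old file handle regardless of the file's name. To reclaim the space without restarting anything, truncate the log in place instead of renaming it, e.g.:
    $ cat /dev/null > /u01/app/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs1/mod_wl_ohs_0202.log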

  • Is there a solution to Rapidweaver Stacks Cache size problems?

    Is there a solution to Rapidweaver Stacks Cache size problems?
    My website http://www.optiekvanderlinden.be has about 100 pages and is about 250 MB in size. Every time I open RapidWeaver with the Stacks plugin, the ~/Library/Caches/com.yourhead.YHStacksKit folder grows to several gigabytes in size. If I don't manually empty the com.yourhead.YHStacksKit cache daily, AND empty the Trash before I turn off my Mac, my Mac still has problems and stalls at startup the next day, sometimes for 5 minutes and more before it really boots. If I have been working on such a large site for a few weeks without deleting that cache, then my iMac doesn't start up anymore, and the only thing left to do is completely reinstall the OS X operating system. This problem doesn't happen on the days I use all my other programs but don't use RapidWeaver.
    Does anyone know a solution and / or what is the cause of this?

    Once more:
    I just launched my project in RapidWeaver, and WITHOUT making any changes to my pages, the cache/com.yourhead.YHStacksKit folder filled itself with 132 folders (84 MB in total), most of them containing images of specific pages etc. The largest folder in the cache is the one with the images of a page containing a catalogue of my company; this folder is 7 MB. It seems that just by opening a project, different folders are immediately created containing most of the images in the project, plus some folders with a log.txt file. Still, I don't see why it fills up a huge cache even without changes to the project. I have 2 different sites, so 2 different projects, and I get this with both of them.

  • BerkeleyDB cache size and Solaris

    I am having problems trying to scale up an application that uses BerkelyDB-4.4.20 on Sun Sparc servers running Solaris 8 and 9.
    The application has 11 primary databases and 7 secondary databases.
    In different instances of the application, the size of the largest primary database ranges only from 2 MB to 10 MB, but those will grow rapidly over the course of the semester.
    The servers have 4-8 GB of RAM and 12-20 GBytes of swap.
    Succinctly, when the primary databases are small, the application runs as expected.
    But as the primary databases grow, the following, counterintuitive phenomenon
    occurs. With modest cache sizes, the application starts up, but throws
    std::exceptions of "not enough space" when it attempts to delete records
    via a cursor. The application also crashes randomly returning
    RUN_RECOVERY. But when the cache size is increased, the application
    will not even start up; instead, it fails and throws std::exceptions which say there
    is insufficient space to open the primary databases.
    Here is some data from a server that has 4GB RAM with 2.8 GBytes free
    (according to "top") when the data was collected:
    set_cachesize (DB_CONFIG)   Pool (db_stat -m)   Ind. Cache   Result
    0 67108864 1                80 MB               8 KB         Starts, but crashes and can't delete by cursor because of insufficient space
    0 134217728 1               160 MB              8 KB         Same as the case above
    0 268435456 1               320 MB              8 KB         Doesn't start; says there is not enough space to open a primary database
    0 536870912 1               512 MB              16 KB        Doesn't start; not enough space to open a primary database (although it mentions a different primary database than before)
    1 073741884 1               1 GB 70 MB          36 KB        Doesn't start; not enough space to open a primary database (again a different primary database than previously)
    2 147483648 1               2 GB 140 MB         672 KB       Doesn't start; not enough space to open a primary database (again a different primary database than previously)
    I should also mention that the application is written in Perl and uses
    the Sleepycat::Db Perl module to interface with the BerkeleyDB C++ API.
    Any help on how to interpret this data and, if the problem is the
    interface with Solaris, how to tweak that, will be greatly appreciated.
    Sincerely,
    Bill Wheeler, Department of Mathematics, Indiana University, Bloomington.

    Having found answers to my questions, I think I should document them here.
    1. On the matter of the error message "not enough space": this message apparently originates from Solaris. When a process (e.g., an Apache child) requests additional (virtual) memory (via either brk or mmap) such that the total (virtual) memory allocated to the process would exceed the system limit (set via setrlimit), the Solaris kernel rejects the request and returns the error ENOMEM. Somewhat cryptically, the text for this error is "not enough space" (in contrast, for instance, to "not enough virtual memory").
    Apparently, when the BerkeleyDB cache size is set too large, a process (e.g., an Apache child) that attempts to open the environment and databases may request a total memory allocation that exceeds the system limit. Then Solaris will reject the request and return the ENOMEM error.
    Within Solaris, the only solutions are apparently
    (i) to decrease the cache size, or
    (ii) to increase the system limit via setrlimit (see the example at the end of this message).
    2. On the matter of the DB_RUNRECOVERY errors, the cause appears
    to have been the use of the DB_TXN_NOWAIT flag in combination with
    code that was mishandling some of the resulting, complex situations.
    Sincerely,
    Bill Wheeler
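    For solution (ii), a hedged shell example of raising the per-process limit before starting the application; ulimit is the shell front-end to setrlimit, the -d (data segment) value is in KB, and the increase must stay within the hard limit:
    $ ulimit -d            # show the current data-segment limit, in KB
    $ ulimit -d 4194304    # raise it to 4 GB for processes started from this shell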

  • JRE 1.6.0_05 Plugin settings ignore cache size

    Hi!
    I just tried to reduce the cache size of my Java plugin; however, this setting is ignored by the plugin. I set the size to 50 MB, but the plugin still caches temporary files in the default folder (C:\Dokumente und Einstellungen\mas\Anwendungsdaten\Sun\Java\Deployment\cache ...) - actually about 100 MB.
    Does anybody know why, or is this just a bug?
    Is it okay that the plugin caches all files of an applet several times, although it's always the same applet (same version, same host, same everything...)?
    Thank you in advance!
    Best regards
    Marc

    Hey lynchmob,
    Try these steps to correct the problem, you need to be logged in as an administrator:
    1. Go to the group policy editor. You can get there by typing gpedit.msc into the Run dialog.
    2. Navigate to computer configuration->administrative templates->windows components->internet explorer.
    3. Disable "make proxy settings per-machine (rather than per-user)".
    4. Login with the user account and go to Internet Options.
    5. Go to the Connection tab.
    6. Click on the Lan Settings button.
    7. You may notice that the proxy settings are not correct. Change the proxy settings to be whatever is required for your proxy.
    8. Configure Java to use browser proxy settings.
    9. Open the java console.
    10. Set debug level to 5.
    11. Press 'p' to reload proxy settings. Use the trace messages to verify correct proxy settings were loaded.

  • Other keeps growing

    The Other category under capacity keeps growing, even after deleting cache history, bookmarks, songs and emails.
    When I first upgraded to 4.0.1 it went from 20 MB to 200 MB... now, after trying to fix all the problems the new software has caused, it has grown to 500 MB.
    Can anyone explain what is happening?
    and how to fix it?

    What is "Other" and What Can I Do About It?- Apple Support Communities
    What is the Other on my iPhone and How to Remove It

  • CONFUSED with Cache size versus Increment by in Sequence

    Dear all,
    I have a sequence that has the following attributes
    MINVALUE 1
    MAXVALUE 9999999999999
    CACHE SIZE 20
    INCREMENT BY 1
    and tested with SELECT * FROM all_sequences and verified the above attributes.
    The issue is that, when used, it started at 21 and has been incrementing by 20. Is it because of the cache size overriding the increment? I am totally confused. Please help...
    Thank you,

    As long as the Sequence is in the Library Cache in the Shared Pool, you will get increments of 1.
    However, as with any other object in the Shared Pool, a Sequence can get "aged out" of the Shared Pool if Oracle has insufficient memory to allocate for new SQLs. If it does get aged out, at the next load back into memory, it will come back with a value of 21. If you keep hitting the Sequence and keep it "busy" (i.e. hot), it will return 22, 23, 24 etc. If you use it infrequently and your shared pool size is small, it might be aged out of the shared pool and, at the next call, return with a value of 40!
    See MetaLink Note#61760.1 on using DBMS_SHARED_POOL.KEEP to "pin" Sequences and other objects (a sketch of the call follows at the end of this message). If you "keep" too many objects in the shared pool, you may end up getting ORA-4031 errors!
    Note : Even if you pin your shared pool you can still "lose" values when :
    a. SHUTDOWN, STARTUP cycle happens
    b. Transactions allocate a sequence but do a rollback (a sequence increment does not get rolled-back)
    Edited by: Hemant K Chitale on Jun 5, 2009 10:53 AM
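    A sketch of the pinning call described in that note; the owner and sequence name here are made-up placeholders, and 'Q' is the flag DBMS_SHARED_POOL uses for sequences:
    SQL> exec DBMS_SHARED_POOL.KEEP('SCOTT.MY_SEQ', 'Q');
    PL/SQL procedure successfully completed.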
