Shared memory: Apache memory usage in Solaris 10

Hi people, I have set up a project for the apache user ID and set the new equivalent of shmmax for the user via projadd. In Apache I crank StartServers up to 100, but the RAM is soon exhausted - Apache appears not to use shared memory under Solaris 10. Under the same version of Apache on Solaris 9 I can fire up 100 StartServers with little RAM usage. Any ideas what can cause this / what else I need to do? Thanks!

a) How or why does Solaris choose to share memory between processes from the same program invoked multiple times, if that program has not been specifically coded to use shared memory?

Take a look at 'pmap -x' output for a process.
Basically it depends on where the memory comes from. If it's a page loaded from disk (executable, shared library), then the page begins life shared among all programs using the same page. So a small program with lots of shared libraries mapped may have a large memory footprint but have most of it shared.
If a page is written to, then a new copy is created that is no longer shared. If the program requests memory (malloc()), then the heap is grown and it gathers more private (non-shared) page mappings.
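For example, a quick way to see that split for a single Apache child (a minimal sketch; pgrep piped through head is just one way to pick a PID):

# In 'pmap -x' output the Anon column is private (heap/copy-on-write) memory;
# the rest of the resident set is potentially shared with other processes.
pmap -x `pgrep httpd | head -1`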
Simply: if we run pmap / ipcs we can see a shared memory reference for our Oracle database and LDAP server. There is no entry for Apache. But the total memory usage is far, far less than all the Apache processes' individual memory totted up (all 100 of them, in prstat), so there is some hidden sharing going on somewhere that Solaris (2.9) is doing, but not showing in pmap or ipcs. (Virtually no swap is being used.)

pmap -x should be showing you exactly which pages are shared and which are not.
b) Under Solaris 10, each Apache process takes up precisely the memory reported in prstat - add up the 100 Apache memory figures and you get the total RAM in use. Crank up the number of processes any further and you get out-of-memory errors, so it looks like prstat is pretty accurate here. The question is: why is Apache not 'shared' on Solaris 10 when it is on Solaris 9? We set up all the usual project details for this user (in /etc/project), but I'm now guessing that these project tweaks, where you explicitly set the shared memory for a user, only take effect for programs explicitly coded to use shared memory, e.g. the Oracle database, which correctly shows up a shared memory reference in ipcs.
We can fire up thousands of Apaches on the 2.9 system without running out of memory - both machines have the same RAM! But the binary versions of Apache are exactly the same, and the config directives are identical.
Please tell me there is something really simple we have missed!

On Solaris 10, do all the pages for one of the apache processes appear private? That would be really, really unusual.
Darren
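One rough way to answer that is to total the resident pages against the private (Anon) pages across all the children: if the two sums are close, the pages really are private. A minimal sketch, assuming the processes are named httpd and that the 'total Kb' summary line of pmap -x lists Kbytes, RSS, Anon, Locked in that order:

# Sum RSS and Anon (private) figures over every Apache process.
for pid in `pgrep httpd`; do
    pmap -x $pid | tail -1
done | awk '{rss += $4; anon += $5}
        END {print "RSS total:", rss, "Kb; private:", anon, "Kb"}'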

Similar Messages

  • Shared memory usage

    hi group,
    I have a scenario where I have to use the shared memory concept in BI routines. Here I have to write ABAP code and check against a file, and I want to take that file from shared memory and then check it. I also don't want to fetch the data from a table, but want to fetch this data from some file using shared memory. I am totally new to the shared memory concept and the objects used there. Can anyone guide me through the step-by-step procedure for this? Also, if a help file is available, that would be a great help.
    thanks in advance..

    A program which may help you is "ipcs". Specifically, "ipcs -am" run as root
    will give you information on all of the shared memory segments in use. The
    size of the segment is part of the data. This should give you a picture of
    what is going on in the system.
    Alan
    Sun Developer Technical Support
    http://www.sun.com/developers/support
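    For example (a minimal sketch; run as root so every segment is visible):

    # -a = all details, -m = shared memory only; the SEGSZ column is the
    # size of each segment in bytes.
    ipcs -am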

  • Shared memory usage of Producer Consumer Problem

    Hai,
    I am able to write the Producer-Consumer problem as inter-thread communication. But I don't know how to write it using shared memory (i.e. I wish to start the producer and the consumer as different processes). Please help me; sorry for the temp file idea.
    Thanks
    MOHAN

    I have read that with the shiny new JDK 1.4 you would be able to use shared memory, but I have to confess I'm not sure whether I'm mixing up different pieces of information (lots of stuff read, you know?). Try reviewing the JDK 1.4 SE new features.
    Also, consider "traditional" approaches like RMI, sockets, and things like those.
    Regards.

  • Shared memory in solaris 10 with oracle

    Hi, I am a newbie to Solaris and I have some questions on shared memory and Oracle on Solaris.
    My questions might seem odd; however, please do read and try to answer. Thanks in advance.
    1) If a Solaris server has, say, 40 GB of RAM, what would be the maximum size of a shared memory segment on this machine?
    (I believe that if the server has 40 GB, the max shared memory size is 10 GB, i.e. one fourth of RAM, but I am not sure.)
    2) What is the maximum size of a shared memory segment in Solaris that a root user can define?
    (I know it's somewhere near 14 GB, but I am not very sure.)
    3) Assume I have created a user X and allocated, say, a 10 GB shared memory limit for this user.
    If I log in to Solaris as X, can I then increase the size of the shared memory that this user can use?
    I have a situation where the root user created a user named DBA and allocated some 15 GB for this DBA user as the max SHM limit.
    Now the DBA user has set the max limit for shared memory to 1 TB, which is causing a hell of a lot of problems in the system.
    I am not very sure of the concept. I am new to this product and facing this problem. Please advise.
    Thanks
    Krishnakanth (Simply KK)

    Not sure why your "oracle" user (the owner who will be creating the instance) has been assigned the project user.root. I would say create a separate project, maybe "dba", and give the user who will be creating the Oracle instance access to this project.
    and then try to issue the command:
    prctl -n project.max-shm-memory -v 8gb -r -i project dba
    and check to see if you are still facing a problem?
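    Putting that together, a minimal sketch (the project name "dba", the user "oracle", and the 8gb value are only examples):

    # Create the project and cap its total System V shared memory:
    projadd -U oracle -K "project.max-shm-memory=(privileged,8gb,deny)" dba
    # Or, as above, change a live project (-r replaces the current value):
    prctl -n project.max-shm-memory -v 8gb -r -i project dba
    # Verify:
    prctl -n project.max-shm-memory -i project dba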

  • Solaris 10 - Restrict memory usage

    Hello,
    I use Solaris 10 Release 6/06 on a SPARC system.
    I need to restrict memory usage for users.
    Unfortunately, for the moment, we can't increase the amount of RAM.
    So, as a first step, I decided to use projects and max-shm-memory. But this only covers shared memory segments, not real memory usage.
    rcapd uses swap instead of RAM, so it does not solve my issue.
    How can I limit the memory usage of a user? Is it possible on Solaris 10 (without zones)?
    Thx.
    Guillaume

    Does ulimit help you? This applies to the shell rather than a user...
    ulimit(1): http://docs.sun.com/app/docs/doc/816-5165/ulimit-1?l=en&a=view&q=ulimit
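    For example, from the user's login shell (a sketch; exact option support varies by shell, and the values are only illustrative):

    ulimit -d 524288     # max data (heap) segment, in kilobytes
    ulimit -v 1048576    # max total virtual memory, in kilobytes
    ulimit -a            # review all current limits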

  • Solaris 10 shared memory config/ora 11g

    The Oracle 11g install guide for SPARC Solaris 10 is very confusing with regard to shared memory, and my system does not seem to be using memory correctly: lots of swapping on an 8GB real memory system.
    The doc says to set /etc/system to:
    shmsys:shminfo_shmmax project.max-shm-memory 4294967296
    but implies that this is not used.
    Then, the doc states to set a project shared memory value of 2GB:
    # projmod -sK "project.max-shm-memory=(privileged,2G,deny)" group.dba
    Why is this number different?
    By setting it to 2G as documented, Oracle did not work at all, and so I found Note:429191.1 on Solaris 10 memory, which hints that these numbers should be big:
    % prctl -n project.max-shm-memory -r -v 24GB -i project oracle_dss
    % prctl -n project.max-shm-memory -i project oracle_dss
    project: 101: oracle_dss
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                    RECIPIENT
    project.max-shm-memory
            privileged     24.0GB       -   deny                              -
            system         16.0EB     max   deny                              -
    Is there some logic to getting Solaris 10 and Oracle 11 to hold hands? The install doc does not seem to contain it.

    system does not seem to be using memory correctly, lots of swapping on an 8GB real memory system.
    We could start (for example) with this question: how big is your SGA, i.e. how much of the 8GB of RAM does your SGA take?
    The doc says to set /etc/system to:
    shmsys:shminfo_shmmax project.max-shm-memory 4294967296
    but implies that this is not used.
    From the documentation:
    In Solaris 10, you are not required to make changes to the /etc/system file to implement the System V IPC. Solaris 10 uses the resource control facility for its implementation. However, Oracle recommends that you set both resource control and /etc/system/ parameters. Operating system parameters not replaced by resource controls continue to affect performance and security on Solaris 10 systems.
    Then, the doc states to set a project shared mem value of 2GB:
    # projmod -sK "project.max-shm-memory=(privileged,2G,deny)" group.dba
    Why is this number different?
    It's the doc's example of how "To set the maximum shared memory size to 2 GB".
    By setting it to 2G as documented oracle did not work at all
    The docs say:
    On Solaris 10, verify that the kernel parameters shown in the following table are set to values greater than or equal to the recommended value shown.
    If your SGA was greater than 2G, I'm not wondering why "oracle did not work at all".
    So for a 4GB SGA (for example) you need to allow allocation of 4GB of shared memory.
    Note: shmsys:shminfo_shmmax != project.max-shm-memory. "project.max-shm-memory" is a replacement for "shmsys:shminfo_shmmax", but the functions of these parameters differ:
    "project.max-shm-memory resource control limits the total amount of shared memory of one project, whereas previously, the shmsys:shminfo_shmmax parameter limited the size of a single shared memory segment."
    Relevant link to Sun docs: http://docs.sun.com/app/docs/doc/819-2724/chapter1-33
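    So the practical rule is to size the resource control to your SGA rather than to the doc's 2 GB example. A sketch for a 4 GB SGA, using the group.dba project from the install guide:

    # Persistent: -s writes the change to /etc/project, -K sets the keyed attribute.
    projmod -sK "project.max-shm-memory=(privileged,4gb,deny)" group.dba
    # Confirm the value the project actually has:
    prctl -n project.max-shm-memory -i project group.dba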

  • Solaris 10, Oracle 10g, and Shared Memory

    Hello everyone,
    We've been working on migrating to Solaris 10 on all of our database servers (I'm a UNIX admin, not a DBA, so please be gentle) and we've encountered an odd issue.
    Server A:
    Sun V890
    (8) 1.5Ghz CPUs
    32GB of RAM
    Server A was installed with Solaris 10 and the Oracle data and application files were moved from the old server (the storage hardware was moved between servers). Everything is running perfectly, and we're using the resource manager to control the memory settings (not /etc/system)
    The DBAs then increase the SGA of one of the DBs on the system from 1.5GB to 5GB and it fails to start (ORA-27102). According to the information I have, the maximum shared memory on this system should be 1/4 of RAM (8 GB, which actually works out to 7.84 GB according to prctl). I verified the other shared memory/semaphore settings are where they should be, but the DB would not start with a 5 GB SGA. I then decided to just throw a larger max shared memory segment at it, so I used projmod to increase project.max-shm-memory to 16GB for the project Oracle runs under. The DB now starts just fine. I cut it back down to 10GB for project.max-shm-memory and the DB starts ok. I ran out of downtime window, so I couldn't continue refining the settings.
    Running 'ipcs -b' and totalling up the individual segments showed we were using around 5GB on the test DB (assuming my addition is correct).
    So, the question:
    Is there a way to correlate the SGA of the DB(s) to what I need to set project.max-shm-memory to? I would think 7.84GB would be enough to handle a DB with a 5GB SGA, but it doesn't appear to be. We have some 'important' servers getting upgraded soon and I'd like to be able to refine these numbers / settings before I get to them.
    Thanks for your time,
    Steven

    To me, setting a massive shared memory segment just seems inefficient. I understand that Oracle is only going to take up as much memory (in general) as the SGA. And I've been searching for any record of really large shared memory segments causing issues, but haven't found much (I'm going to contact Sun to get their comments).
    The issue I am having is that it doesn't make sense that the DB with a 5GB SGA is unable to start up when there is an 8GB max shared memory limit, but 10GB (and above) seems to work. Does it really need double the size of the SGA when starting up, even though 'ipcs' shows it's only using the SGA amount of shared memory? I have plans to cut it down to 4GB and test again, as that is Oracle's recommendation. I also plan to run the DB startup through truss to get a better handle on what it's trying to do. And, if it comes down to it, I'll just set a really big max shared memory limit; I just don't want it to come back and cause an issue down the road.
    The current guidance on Metalink still seems to suggest a 4GB shared memory segment (I did not get a chance to test this yet with the DB we're having issues with).
    I can't comment on how the DBA specifically increased the SGA, as I don't know what method they use.
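    One more check that may help before the next downtime window: ask a running instance process which limit it actually inherited, since a process keeps the project it started under. A sketch (pgrep -f ora_pmon is just one way to find an instance PID):

    pid=`pgrep -f ora_pmon`
    prctl -n project.max-shm-memory -i process $pid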

  • Solaris process memory usage increase but not forever

    On Solaris 10 I have a multithreaded process with a strange behaviour. It manages complicated C++ structures (RWTVal or RWPtr). These structures are built from data stored in a database (using Pro*C). Each hour the process looks for new information in the database, builds new structures in memory, and frees the older data. But each time it repeats this procedure, the process memory usage increases by several MB (12-16MB). The process's memory usage starts from 100M and grows to near 1.4G. Up to this point, it seems the process has memory leaks. But the strange behaviour is that after this point, the process stops growing. When I try to look for memory leaks (using the Purify tool) the process doesn't grow and no significant leaks are shown. Did anyone find a similar behaviour, or can anyone explain what could be happening?

    markza wrote:
    Hi, thanks for responding
    Ja, I guess that's possible, but to do it all row by row seems ridiculous, and it'll be so time consuming and sluggish surely. I mean, for a month's worth of data (which is realistic) that's 44640 individual queries. If push comes to shove, then I'll have to try that for sure.
    You can see by the example that I'm saving it to a text file, in csv format. So it needs to be a string array; a cluster won't be of much help I don't think.
    The only other way I can think of is to break it up into more manageable chunks... maybe pull each column separately in a for loop and build up a 2D array like that until the spreadsheet storing.
    You only do 1 query, but instead of Fetching All (as the Select does) you'll use the cursor to step through the data.
    You can use Format to String or Write Spreadsheet File with doubles.
    You can break it down to get the data day by day instead of a full month at once.
    /Y

  • Shared memory (System V-style) - High usage of phys memory and page outs

    Hi!
    I get a much higher usage of physical memory when I use shared memory than I would expect. Please, I would really need someone to confirm my conclusions, so that I can revise my ignorance in this subject.
    In my experiments I create a shared memory segment of 200 MB and I have 7 processes attaching to it. My workstation (ws) has 1 GB of RAM.
    I expect to see what I see when I attach to the shared memory segment in terms of virtual size, i.e. SIZE in prstat. After attaching (mapping it) to the process all 7 processes are about ~203 MB each and this makes sense. RSS, in prstat, is about 3 MB for each process and this is ok to me too.
    It is what I see when each of the 7 processes starts to write a simple string like 'Hello!' in parallel to each page in their shared memory segment that I get surprised. I run out of memory on my ws after a while and the system starts wildly paging out physical memory to disk so that the next 'Hello!' can be written to the next page in the shared memory segment. It seems that each page written to in the shared memory chunk is mapped into each process's private address space. This means that the shared memory is not physically shared, just virtually shared. Is this correct?
    Can memory be physically shared, so that my 7 processes only use 200 MB? ISM? DISM?
    I create the shared memory segment in a C program with the following calls:
    shmid = shmget(key, SHM_SIZE, 0644 | IPC_CREAT);
    data = shmat(shmid, (void *)0, 0);
    Thanks in advance
    /Sune

    Your problem seemed reasonable. What were you doing wrong?
    Darren

  • Solaris 8 memory usage

    Is there a tool like McDougall's prtmem that will show accurate (or more accurate) memory usage than vmstat's freemem shows?

    tzzhc4 wrote:
    prtmem was part of the MEMTOOLS package you just listed; I believe it relies on a kernel module in that package and doesn't work on any of the newer kernel revisions.
    But it certainly works on 8, right? And that's the OS you were referring to, so I assumed you were thinking of something else.
    From that page:
    System Requirements:     SPARC/Solaris 2.6
                   SPARC/Solaris 7
                   SPARC/Solaris 8
                   SPARC/Solaris 9
                   x86 /Solaris 8
                   x86 /Solaris 9
    So if that's what you want to use, go for it!
    I thought freemem didn't include pages that had an identity, so there could be more memory free than was actually listed in freemem.
    What do you mean by 'identity'? Most pages are either allocated/reserved by a process (in use) or used by the disk cache. Under Solaris 7 and earlier, both reduced the 'freemem' number. Under 8 and later, only the first one does.
    Darren
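    If you want a breakdown rather than just freemem, the kernel debugger can classify pages on Solaris 9 and later (a sketch; run as root, and I don't believe this dcmd exists on stock Solaris 8):

    # ::memstat reports kernel, anon, exec/libs, page cache, and free pages.
    echo ::memstat | mdb -k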

  • Very high memory usage..possible memory leak?  Solaris 10 8/07 x64

    Hi,
    I noticed yesterday that my machine was becoming increasingly slow, where once it was pretty snappy. It's a Compaq SR5250NX with 1GB of RAM. Upon checking vmstat, I noticed that the "Free" column was ~191MB. Now, the only applications I had open were FireFox 2.0.11, GAIM, and StarOffice. I closed all of them, and the number reported in the "Free" column became approximately 195MB. "Pagefile" was about 5.5x that size. There were no other applications running and it's a single user machine, so I was the only one logged in. System uptime: 9 days.
    I logged out and logged back in to see if that had an effect. It did not. Rebooting, obviously, fixed it. Now with only FireFox, GAIM, and a terminal open, vmstat reports "Free" as ~450MB. I've noticed if I run vmstat every few seconds, the "Free" total keeps going down. Example:
    unknown% vmstat
    kthr      memory            page            disk          faults      cpu
    r b w   swap  free  re  mf pi po fr de sr cd s0 s1 s2   in   sy   cs us sy id
    0 0 0 870888 450220  9  27 10  0  1  0  8  2 -0 -0 -0  595 1193  569 72  1 28
    unknown% vmstat
    kthr      memory            page            disk          faults      cpu
    r b w   swap  free  re  mf pi po fr de sr cd s0 s1 s2   in   sy   cs us sy id
    0 0 0 870880 450204  9  27 10  0  1  0  8  2 -0 -0 -0  596 1193  569 72  1 28
    unknown% vmstat
    kthr      memory            page            disk          faults      cpu
    r b w   swap  free  re  mf pi po fr de sr cd s0 s1 s2   in   sy   cs us sy id
    0 0 0 870828 450092  9  27 10  0  1  0  8  2 -0 -0 -0  596 1193  570 71  1 28
    unknown%
    Output of prstat -u Kendall (my username) is as follows:
       PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
      2026 Kendall   124M   70M sleep   59    0   0:01:47 1.4% firefox-bin/7
      1093 Kendall    85M   77M sleep   59    0   0:07:15 1.1% Xsun/1
      1802 Kendall    60M   15M sleep   59    0   0:00:08 0.1% gnome-terminal/2
      1301 Kendall    93M   23M sleep   49    0   0:00:30 0.1% java/14
      1259 Kendall    53M   15M sleep   49    0   0:00:32 0.1% gaim/1
      2133 Kendall  3312K 2740K cpu1    59    0   0:00:00 0.0% prstat/1
      1276 Kendall    51M   12M sleep   59    0   0:00:11 0.0% gnome-netstatus/1
      1247 Kendall    46M   10M sleep   59    0   0:00:06 0.0% metacity/1
      1274 Kendall    51M   13M sleep   59    0   0:00:05 0.0% wnck-applet/1
      1249 Kendall    56M   17M sleep   59    0   0:00:07 0.0% gnome-panel/1
      1278 Kendall    48M 9240K sleep   59    0   0:00:05 0.0% mixer_applet2/1
      1245 Kendall  9092K 3844K sleep   59    0   0:00:00 0.0% gnome-smproxy/1
      1227 Kendall  8244K 4444K sleep   59    0   0:00:01 0.0% xscreensaver/1
      1201 Kendall  4252K 1664K sleep   59    0   0:00:00 0.0% sdt_shell/1
      1217 Kendall    55M   16M sleep   59    0   0:00:00 0.0% gnome-session/1
       779 Kendall    47M 2208K sleep   59    0   0:00:00 0.0% gnome-volcheck/1
       746 Kendall  5660K 3660K sleep   59    0   0:00:00 0.0% bonobo-activati/1
      1270 Kendall    49M   10M sleep   49    0   0:00:00 0.0% clock-applet/1
      1280 Kendall    47M 8904K sleep   59    0   0:00:00 0.0% notification-ar/1
      1199 Kendall  2928K  884K sleep   59    0   0:00:00 0.0% dsdm/1
      1262 Kendall    47M 2268K sleep   59    0   0:00:00 0.0% gnome-volcheck/1
    Total: 37 processes, 62 lwps, load averages: 0.11, 0.98, 1.63
    System uptime is 9 hours, 48 minutes. I'm just wondering why the memory usage seems so high to do... nothing. It's obviously a real problem, as the machine turned very slow when vmstat was showing 195MB free.
    Any tips, tricks, advice, on which way to go with this?
    Thanks!

    Apologies for the delayed reply. School has been keeping me nice and busy.
    Anyway, here is the output of prstat -Z:
       PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
      2040 Kendall      144M   76M sleep   59    0   0:04:26 2.0% firefox-bin/10
    28809 Kendall     201M  193M sleep   59    0   0:42:30 1.9% Xsun/1
      2083 Kendall      186M   89M sleep   49    0   0:02:31 1.2% java/58
      2260 Kendall       59M   14M sleep   59    0   0:00:00 1.0% gnome-terminal/2
      2050 Kendall       63M   21M sleep   49    0   0:01:35 0.6% realplay.bin/4
      2265 Kendall     3344K 2780K cpu1    59    0   0:00:00 0.2% prstat/1
    29513 Kendall     71M   33M sleep   39    0   0:07:25 0.2% gaim/1
    28967 Kendall     56M   18M sleep   59    0   0:00:24 0.1% gnome-panel/1
    29060 Kendall     93M   24M sleep   49    0   0:02:58 0.1% java/14
    28994 Kendall     51M   13M sleep   59    0   0:00:23 0.1% wnck-applet/1
    28965 Kendall     49M   14M sleep   59    0   0:00:33 0.0% metacity/1
       649 noaccess   164M   46M sleep   59    0   0:09:54 0.0% java/23
    28996 Kendall     51M   12M sleep   59    0   0:00:50 0.0% gnome-netstatus/1
      2264 Kendall    1352K  972K sleep   59    0   0:00:00 0.0% csh/1
    28963 Kendall  9100K 3792K sleep   59    0   0:00:03 0.0% gnome-smproxy/1
    ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
         0       80  655M  738M    73%   1:18:40 7.7% global
    Total: 80 processes, 322 lwps, load averages: 0.27, 0.27, 0.22
    Sorry about the bad formatting, it's copied from the terminal.
    In any event, we can see that FireFox is sucking up 145MB (??!?!!? crazy...), XSun 200MB, and java 190MB. I'm running Java Desktop System (Release 3), so I assume that is what accounts for the high memory usage re: the java process. But XSun, 200MB?
    Is this normal and I just need to toss another gig in, or what?
    Thanks
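    Before buying RAM, it may also help to rank processes by what they actually hold resident and to check swap reservations (a minimal sketch):

    prstat -s rss -n 10    # top 10 processes sorted by resident set size
    swap -s                # swap allocated / reserved / available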

  • How do I remove a DB from shared memory in Solaris 10?

    I'm having trouble removing an in-memory database placed in shared memory. I set the SHM key and cache size, and then open an environment with the flags: DB_CREATE | DB_SYSTEM_MEM | DB_INIT_MPOOL | DB_INIT_LOG | DB_INIT_LOCK | DB_INIT_TXN. I also set the flag DB_TXN_NOSYNC on the DbEnv. At the end, after closing all Db and DbEnv handles, I create a new DbEnv instance and call DbEnv::remove. That's when things get weird.
    If I have the force flag set to 0, then it throws an exception saying "DbEnv::remove: Device busy". The shared memory segments do not get removed in this case (checking with `ipcs -bom`).
    When the force flag is set to 1, the shared memory is released, but the program crashes saying "Db::close: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery".
    What am I doing wrong?

    This is curious, since a simple program similar to what is described is known to work. I've modified the standard C++ sample program examples/cxx/EnvExample.cpp to use an in-memory database, DB_SYSTEM_MEM, and DB_TXN_NOSYNC. The "Device busy" symptom occurs if the close of the environment handle is bypassed. I have not been able to reproduce the DB_RUNRECOVERY error.
    How does the program's use of Berkeley DB differ from what is provided in EnvExample.cpp?
    Is it possible to send me the relevant portions of it?
    Regards,
    Charles Koester
    Oracle Berkeley DB

  • Solaris 10 Kernel memory usage

    We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and ftp servers running in the various zones.
    I understand that using ZFS will increase kernel memory usage, however I am a bit concerned at this point.
    root@servername:~/zonecfg #mdb -k
    Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm nfs ]
    ::memstat
    Page Summary         Pages      MB   %Tot
    Kernel             4108442   16048    49%
    Anon               3769634   14725    45%
    Exec and libs         9098      35     0%
    Page cache           29612     115     0%
    Free (cachelist)     99437     388     1%
    Free (freelist)     369040    1441     4%
    Total              8385263   32754
    Physical           8176401   31939
    Out of 32GB of RAM, 16GB is being used by the kernel. Is there a way to find out how much of that kernel memory is due to ZFS?
    It just seems an excessively high amount of our memory is going to the kernel, even with ZFS being used on the server.

    root@servername:~ #mdb -k
    Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip hook neti sctp arp usba uhci fcp fctl qlc nca lofs zfs random fcip crypto logindmux ptm nfs ]
    ::memstat
    Page Summary         Pages      MB   %Tot
    Kernel             4314678   16854    51%
    Anon               3538066   13820    42%
    Exec and libs         9249      36     0%
    Page cache           29347     114     0%
    Free (cachelist)     89647     350     1%
    Free (freelist)     404276    1579     5%
    Total              8385263   32754
    Physical           8176401   31939
    :quit
    root@servername:~ #kstat -m zfs
    module: zfs instance: 0
    name: arcstats class: misc
    c 12451650535
    c_max 33272295424
    c_min 1073313664
    crtime 175.759605187
    deleted 26773228
    demand_data_hits 89284658
    demand_data_misses 1995438
    demand_metadata_hits 1139759543
    demand_metadata_misses 5671445
    evict_skip 5105167
    hash_chain_max 15
    hash_chains 296214
    hash_collisions 75773190
    hash_elements 995458
    hash_elements_max 1576353
    hits 1552496231
    mfu_ghost_hits 4321964
    mfu_hits 1263340670
    misses 11984648
    mru_ghost_hits 474500
    mru_hits 57043004
    mutex_miss 106728
    p 9304845931
    prefetch_data_hits 10792085
    prefetch_data_misses 3571943
    prefetch_metadata_hits 312659945
    prefetch_metadata_misses 745822
    recycle_miss 2775287
    size 12451397120
    snaptime 2410363.20494097
    So it looks like our kernel is using 16GB and ZFS is using ~12GB for its ARC cache. Is 4GB of kernel memory for other stuff normal? It still seems like a lot of memory to me, but I don't know how all the zones affect the amount of memory the kernel needs.
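    As a cross-check, the ARC size can be read directly with kstat instead of mdb (a minimal sketch):

    # kstat -p prints name/value pairs; arcstats:size is the ARC size in bytes.
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:size | awk '{printf "%.0f MB\n", $2 / 1048576}'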

  • How to specify maximum memory usage for Java VM in Tomcat?

    Does anyone know how to set the memory usage for the Java VM, such as the "-Xmx256m" parameter, in Tomcat?
    I'm using Tomcat 3.x with the Apache web server on the Sun Solaris platform. I already tried to add the following line to tomcat.properties:
    wrapper.bin.parameters=-Xmx512m
    However, it seems to me that this doesn't work. So what happens if my servlet consumes a large amount of memory, exceeding the default 64MB memory boundary of the Java VM?
    Any idea will be appreciated.
    Haohua

    With some help we found the fix. You have to set the -Xms and -Xmx at installation time when you install Tomcat 4.x as a service. Services do not read system variables. Go to the command prompt in windows, and in the directory where tomcat.exe resides, type "tomcat.exe /?". You will see jvm_options as part of the installation. Put the -Xms and -Xmx variables in the proper place during the install and it will work.
    If you can't uninstall and reinstall, you can apply this registry hack that dfortae sent to me on another thread.
    =-=-=-=-=-=
    You can change the parameters in the Windows registry. If your service name is "Apache Tomcat" The location is:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Apache Tomcat\Parameters
    Change the JVM Option Count value to the new value with the number of parameters it will now have. In my case, I added two parameters -Xms100m and -Xmx256m and it was 3 before so I bumped it to 5.
    Then I created two more String values. I called the first one I added 'JVM Option Number 4' and the second 'JVM Option Number 5'. Then I set the value inside each. The first one I set to '-Xms100m' and the second I set to '-Xmx256m'. Then I restarted Tomcat and observed when I did big processing the memory limit was now 256 MB, so it worked. Hope this helps!
    =-=-=-=-=
    I tried this and it worked. I did not want to have to go through the whole reinstallation process, so this was best for me.
    Thanks to all who helped on this.
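    For the original Solaris setup, where Tomcat starts from a shell script rather than as a Windows service, the heap options go into the startup environment instead of the registry. A sketch for Tomcat 4.x and later (Tomcat 3.x only reads wrapper.bin.parameters in tomcat.properties, as noted above):

    # Bourne-shell style, e.g. in the script that launches Tomcat:
    CATALINA_OPTS="-Xms100m -Xmx256m"
    export CATALINA_OPTS
    ./catalina.sh start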

  • Clear memory usage

    Hello,
    I have created a web app. I'm using Ubuntu 10.04, Netbeans 6.9 and Tomcat 6 server.
    My application is based on Apache Lucene search. (I'm not sure, maybe Lucene is causing my problem, but I doubt it.)
    My problem is:
    When I search for some data, e.g. a word, which is a String, I store the results in ArrayLists. But every time I send a request to the servlet, I clear the ArrayLists by doing this:
    ArrayList.clear();
    But my memory usage is rising on every search. And finally, I get my system using full memory.
    Mem:
                 total       used       free     shared    buffers     cached
                  4025       3860        164          0         54       1267
    What could cause this?
    Maybe I have to clear "cached" somehow?

    peliukasss wrote:
    My problem is:
    When I search for some data, for e.g a word which is String, I store the results in ArrayLists. But everytime I send a request to servlet, I clear arraylists by doing this:
    ArrayList.clear();
    That's most likely unnecessary, unless you're keeping references to those arraylists needlessly somewhere. You're not storing them into the session are you?
    But my memory usage is rising on every search. And finally, I get my system using full memory.
    Mem:
                 total       used       free     shared    buffers     cached
                  4025       3860        164          0         54       1267
    What could cause this?
    The operating system uses the memory for caching and such if it can, because free memory has no intrinsic value. It can and will get rid of some cache if memory is required.
    Maybe I have to clear "cached" somehow?
    No, that's what the operating system keeps cached. Java has nothing to do with that.
    The actual used memory is seen on the second line, which counts out the buffers and cache, such as on my machine:
                 total       used       free     shared    buffers     cached
    Mem:          3837       3766         71          0         51       1640
    -/+ buffers/cache:       2074       1762
    Swap:        11240        378      10862
    Even though it might seem I have only 71 megabytes of free memory, there's actually 1.7GB free.
    You can give Tomcat less memory if you don't want it to go that high; look into the startup options. But there's nothing bizarre about what you're seeing.
