Solaris 10 Usage
Hello people.
I am really new to Solaris, and I am interested in finding out how many users out there use Solaris for desktop computers (not servers).
I ask this because I intend to make a website with a lot of free software to be used on a Solaris machine.
So if anyone can help me with this, maybe even tell me whether this website would be a good idea, I would appreciate it.
Thank you
"a website with alot of free software"
Perhaps something comparable to:
http://www.sunfreeware.com/
or maybe
http://www.sun.com/download/
or maybe
http://sourceforge.net/search/?type_of_search=soft&words=solaris
Those are a few sites that come to mind at the moment.
Similar Messages
-
Where is the download page for 9/04?
Yes, I know that 10 is out, but I'm not ready to move to it yet for various reasons. So, anyone have the magic StoreId & PartDetailId to get the dumb sunstore to take me to the FREE downloads for Solaris 9 x86 & sparc? It would seem that the links for 9 have gone missing, while the links for 8 are still there. Note, this is for personal use, so please don't tell me to contact my sun rep, because I have none. Also note that I do not want, nor can I necessarily afford, to shell out $100 for a media kit. At a very minimum, can anyone tell me what exact filenames are for the x86 & sparc cdrom isos? Thanks in advance.
Looks like you're SOL. According to the Solaris FAQ:
Q. Will free Solaris usage licenses be available for previous versions of Solaris?
A. None, free usage licenses begin with Solaris 10.
In other words, if you want to use a previous version of Solaris, you're screwed. Nice attitude, Sun. Forcing people to go to the next version of Solaris even if they're not ready. How very Microsoft-ish of you. -
Shared memory: apache memory usage in solaris 10
Hi people, I have set up a project for the apache user ID and set the new equivalent of shmmax for the user via projadd. In apache I crank up StartServers to 100, but the RAM is soon exhausted - apache appears not to use shared memory under Solaris 10. Under the same version of apache on Solaris 9 I can fire up 100 apache StartServers with little RAM usage. Any ideas what can cause this / what else I need to do? Thanks!
a) How or why does Solaris choose to share memory between processes from the same program invoked multiple times, if that program has not been specifically coded to use shared memory?
Take a look at 'pmap -x' output for a process.
Basically it depends on where the memory comes from. If it's a page loaded from disk (executable, shared library) then the page begins life shared among all programs using the same page. So a small program with lots of shared libraries mapped may have a large memory footprint but have most of it shared.
If the page is written to, then a new copy is created that is no longer shared. If the program requests memory (malloc()), then the heap is grown and it gathers more private (non-shared) page mappings.
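The shared-vs-private split that 'pmap -x' reports can be totted up mechanically: on Solaris 8 and later the Anon column counts the private pages, so shared is roughly RSS minus Anon. A minimal sketch - the sample output below is illustrative, not from a real run:

```shell
#!/bin/sh
# Sum the RSS and Anon columns from pmap -x style output;
# shared pages ~= RSS - Anon. The sample data is illustrative only.
cat <<'EOF' > /tmp/pmap.sample
00010000     744     744       -       - r-x--  httpd
000CA000      88      88      40       - rwx--  httpd
FF280000    1212    1148       -       - r-x--  libc.so.1
FF3E0000      24      24      24       - rwx--  ld.so.1
EOF
awk '{ rss += $3; anon += ($4 == "-" ? 0 : $4) }
     END { printf "rss=%dK anon(private)=%dK shared=%dK\n", rss, anon, rss - anon }' \
    /tmp/pmap.sample
```

On a live box you would feed it real output, e.g. `pmap -x <pid> | awk 'NR>2 ...'`, skipping the header lines.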
Simply: if we run pmap / ipcs we can see a shared memory reference for our Oracle database and LDAP server. There is no entry for apache. But the total memory usage is far, far less than all the apache procs' individual memory totted up (all 100 of them, in prstat). So there is some hidden sharing going on somewhere that Solaris (2.9) is doing, but not showing in pmap or ipcs. (Virtually no swap is being used.)
pmap -x should be showing you exactly which pages are shared and which are not.
b) Under Solaris 10, each apache process takes up precisely the memory reported in prstat - add up the 100 apache memory details and you get the total RAM in use. Crank up the number of procs any more and you get out-of-memory errors, so it looks like prstat is pretty good here. The question is: why on Solaris 10 is apache not 'shared', but it is on Solaris 9? We set up all the usual project details for this user (in /etc/project), but I'm guessing now that these project tweaks, where you explicitly set the shared memory for a user, only take effect for programs explicitly coded to use shared memory, e.g. the Oracle database, which correctly shows up with a shared memory reference in ipcs.
We can fire up thousands of apaches on the 2.9 system without running out of memory - both machines have the same RAM! But the binary versions of apache are exactly the same, and the config directives are identical.
Please tell me that there is something really simple we have missed!
On Solaris 10, do all the pages for one of the apache processes appear private? That would be really, really unusual.
Darren -
Is Solaris 9 Intel mature enough for serious usage?
Hello everyone,
I'm a Solaris newbie. I downloaded Solaris 8 Intel and installed it a few times, with and without success, since Sun first released version 8 for free download. I had successfully installed Solaris 8 on a Compaq ProLiant server and an IBM Intel xSeries (with bad video), but failed on a Dell PowerEdge (due to the lack of RAID and SCSI drivers, or I didn't know where to get the drivers for my hardware).
The reason I posed the question is that the System Requirements note states that the Solaris 9 x86 OE Customer Early Access software is not suitable for deployment on any production systems.
Is the current release of version 9 a stable, final release? I'd rather not waste my bandwidth, time (it'd take me three days to download 3 ISO images), or CDs on beta releases.
Please advise me.
Thanks.
Trent
Depends on what you mean by "serious usage". Right now, it provides an electronic place for me to dump my schoolwork instead of bringing disks or blank CDs.
Overall, it's an alright system, though Solaris 9 was a little more difficult to set up on my old computer (probably more because of the computer itself than the software).
But I did it: installed Solaris 9 with 32 MB of RAM on a Pentium MMX 166. -
Solaris 10 - Restrict memory usage
Hello,
I use Solaris 10 Release 6/06 on SPARC system.
I need to restrict the memory usage for users.
Unfortunately, for the moment, we can't increase the amount of RAM.
So, as a first step, I decided to use projects and max-shm-memory. But that covers only shared memory segments, not real memory usage.
rcapd uses swap instead of RAM ... it does not solve my issue.
How can I limit the memory usage of a user? Is it possible on Solaris 10 (without zones)?
Thx.
Guillaume
Does ulimit help you? This applies to the shell rather than a user...
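For completeness, a sketch of what ulimit gives you. Note that it is a per-process limit inherited by children, not a true per-user cap - a user can simply start another login shell without it:

```shell
# Cap the virtual address space of a shell and its children to 512 MB
# (ulimit -v takes the value in KB). Illustrative only; a determined
# user can evade this by starting a fresh login shell.
sh -c 'ulimit -v 524288; ulimit -v'
```

A real per-user RSS cap on Solaris 10 without zones generally comes back to rcapd with rcap.max-rss in the project which, as noted above, pushes pages to swap rather than refusing allocations.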
[ulimit (1)|http://docs.sun.com/app/docs/doc/816-5165/ulimit-1?l=en&a=view&q=ulimit] -
Solaris process memory usage increase but not forever
On Solaris 10 I have a multithreaded process with strange behaviour. It manages complicated C++ structures (RWTVal or RWPtr). These structures are built from data stored in a database (using Pro*C). Each hour the process looks for new information in the database, builds new structures in memory, and frees the older data. But each time it repeats this procedure, the process memory usage increases by several MB (12-16MB). The process's memory usage starts from 100M and grows to near 1.4G. Up to this point, it seems the process has memory leaks. But the strange behaviour is that after this point, the process stops growing. When I look for memory leaks (using the Purify tool) the process doesn't grow and no significant leaks are shown. Has anyone seen similar behaviour, or can anyone explain what could be happening?
markza wrote:
Hi, thanks for responding
Yeah, I guess that's possible, but to do it all row by row seems ridiculous, and it'll surely be time-consuming and sluggish. I mean, for a month's worth of data (which is realistic) that's 44640 individual queries. If push comes to shove, then I'll have to try that for sure.
You can see from the example that I'm saving it to a text file, in CSV format. So it needs to be a string array; a cluster won't be of much help, I don't think.
The only other way I can think of is to break it up into more manageable chunks... maybe pull each column separately in a for loop and build up a 2D array like that until the spreadsheet storing.
You only do 1 query, but instead of Fetching All (as the Select does) you'll use the cursor to step through the data.
You can use Format to String or Write Spreadsheet File with doubles.
You can break it down to get the data day by day instead of a full month at once.
/Y
LabVIEW 8.2 - 2014
"Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
G# - Free award winning reference based OOP for LV -
Is there a tool like McDougall's prtmem that will show accurate (or more accurate) memory usage than vmstat's freemem will show?
tzzhc4 wrote:
prtmem was part of the MEMTOOLS package you just listed; I believe it relies on a kernel module in that package and doesn't work on any of the newer kernel revisions.
But it certainly works on 8, right? And that's the OS you were referring to, so I assumed you were thinking of something else.
From that page:
System Requirements: SPARC/Solaris 2.6
SPARC/Solaris 7
SPARC/Solaris 8
SPARC/Solaris 9
x86 /Solaris 8
x86 /Solaris 9
So if that's what you want to use, go for it!
I thought freemem didn't include pages that had an identity, so there could be more memory free than was actually listed in freemem.
What do you mean by 'identity'? Most pages are either allocated/reserved by a process (in use) or used by the disk cache. Under Solaris 7 and earlier, both reduced the 'freemem' number. Under 8 and later, only the first one does.
Darren -
Hi,
I have an unusual problem with Solaris 9 servers I am responsible for. Occasionally a server will suffer from very poor performance, characterized by disk throughput ramping up to 100%. I figured there must be a process responsible for hogging the disk, so I first tried shutting down as many processes as I could. The problem is that the servers are distributed around the world, and I need to be careful not to stop so many processes that the server becomes unavailable. This did not yield a solution, so I found some tools to list disk usage by process. I found that when the disk usage is at 100%, there was very little process disk activity, so it was not possible to identify a guilty process. Strangely, one of the partitions affected (according to iostat) is the backup partition. The only solution I have found so far is to do a reboot.
Has anyone seen this problem before?
Thanks
Nick
Issue fixed. Kernel memory consumption caused by too many outgoing TCP connections. See http://forum.sun.com/thread.jspa?threadID=26311&tstart=0
-
Hi everybody,
I'm a newbie and I created four databases on Oracle 9i under Solaris. Each database uses about 400 MB of memory. Is that a normal value? Basically there is no data in the databases yet. I created them as general databases with the wizard.
Thanks for any comments or hints.
Best regards,
Ingo
Michael,
If you are looking for the amount of physical memory on your system,
use the 'prtconf' command. If you would like to get some virtual
memory page statistics, use the 'vmstat' command. If you would like to
find out about the memory allocated to a particular process, use
'ps -e -opid,vsz,rss,args'. The only problem is the different behavior on
different versions of Solaris. Check the man pages to find out about
the usage of each command.
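The per-process figures from the ps invocation above can be rolled up per user with awk; a quick sketch (the sample ps lines are illustrative, not from a real system):

```shell
#!/bin/sh
# Sum RSS (KB) per user from `ps -e -ouser,pid,vsz,rss,args`-style
# output. The sample data below is illustrative only.
cat <<'EOF' > /tmp/ps.sample
oracle   2101 412344 98120 ora_pmon
oracle   2103 410100 87400 ora_dbw0
webservd 2200  51200 12300 httpd
EOF
awk '{ rss[$1] += $4 }
     END { for (u in rss) printf "%s %dK\n", u, rss[u] }' /tmp/ps.sample | sort
```

Bear in mind the caveat from the pmap discussion elsewhere in this thread: summed RSS double-counts shared pages, so the total overstates real usage.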
A good tool is 'prtmem' in the MemTool package which can be
downloaded anonymously from playground.sun.com in /pub/memtool/
or go to www.solarisinternals.com . The MemTool tools are reported to
work on Solaris 2.6 to 2.8. The 'prtmem' tool will break down the
memory usage as follows.
1) Total Memory - approx. total memory
2) Kernel Memory - memory allocated to the kernel
3) Application - amount of anonymous pages in memory
4) Executable & libs - amount of executable and shared library pages
in memory
5) File Cache - amount of file cache not on free list
6) Free, file cache - amount of file cache on free list
7) Free, free - memory that is free
Please note that the "Free, free" number is usually very close, if not equal, to zero. This is due to the VM system's use of free memory to cache files.
The ultimate resource for this type of info, from a kernel perspective, is
the "Solaris Internals" book written by Mauro and McDougall. I
recommend it highly!
HTH,
Rob -
High CPU usage reported on Solaris 8
Hello,
In my OEM 10g Grid Control / Hosts view, I have high CPU Util% reported for one of my Solaris servers (pegged at 100% right now); a remote node running the 10g agent.
However, when I look at the box and use the native Unix commands (uptime, sar, vmstat, iostat, etc.), I cannot see why OEM is complaining. When I run "top" on the box, it does show I/O wait @ 97-99%, but that value is kinda bogus; there is no I/O queued or pending, so I'm wondering if the Oracle agent is somehow using that bogus value of I/O wait to decide on total CPU usage? Anyone seen this, and know a fix?
I ran 'nmupm foo' to get the usage, and it looks like either "osLoad" or "osCpuUsage" would be what it's executing, although the text inside both of those perl scripts says they are only for Linux or Tru64. I'm not sure what to make of the outputs:
$ nmupm osLoad
em_result=0.06|0.05|0.06|843677.000000|178|8|279360901.000000|1223751574.000000|313790808.000000|98206630.000000|2580027199.000000|8192.000000|4194304.000000|560026113789.000000|4234766618533.000000|1227715348459.000000
$ nmupm osCpuUsage
em_result=1|54665464.000000|22348544.000000|597678197.000000|379255063.000000
em_result=2|93177600.000000|25722959.000000|752622995.000000|182421611.000000
em_result=3|118210934.000000|29060067.000000|639495186.000000|267178975.000000
em_result=4|47736833.000000|21075094.000000|590237308.000000|394895925.000000
I found the metric browser line in emd.properties and uncommented it; now I'll try to figure out where the 'metric browser' is, and how I see what it's doing... if you happen to catch my reply, and could be a bit more specific about where this is, that'd be great. -
Very high memory usage..possible memory leak? Solaris 10 8/07 x64
Hi,
I noticed yesterday that my machine was becoming increasingly slow, where once it was pretty snappy. It's a Compaq SR5250NX with 1GB of RAM. Upon checking vmstat, I noticed that the "Free" column was ~191MB. Now, the only applications I had open were FireFox 2.0.11, GAIM, and StarOffice. I closed all of them, and the number reported in the "Free" column became approximately 195MB. "Pagefile" was about 5.5x that size. There were no other applications running and it's a single user machine, so I was the only one logged in. System uptime: 9 days.
I logged out and back in, to see if that had an effect. It did not. Rebooted and, obviously, that fixed it. Now with only Firefox, GAIM, and a terminal open, vmstat reports "Free" as ~450MB. I've noticed that if I run vmstat every few seconds, the "Free" total keeps going down. Example:
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870888 450220 9 27 10 0 1 0 8 2 -0 -0 -0 595 1193 569 72 1 28
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870880 450204 9 27 10 0 1 0 8 2 -0 -0 -0 596 1193 569 72 1 28
unknown% vmstat
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr cd s0 s1 s2 in sy cs us sy id
0 0 0 870828 450092 9 27 10 0 1 0 8 2 -0 -0 -0 596 1193 570 71 1 28
unknown%
Output of prstat -u Kendall (my username) is as follows:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
2026 Kendall 124M 70M sleep 59 0 0:01:47 1.4% firefox-bin/7
1093 Kendall 85M 77M sleep 59 0 0:07:15 1.1% Xsun/1
1802 Kendall 60M 15M sleep 59 0 0:00:08 0.1% gnome-terminal/2
1301 Kendall 93M 23M sleep 49 0 0:00:30 0.1% java/14
1259 Kendall 53M 15M sleep 49 0 0:00:32 0.1% gaim/1
2133 Kendall 3312K 2740K cpu1 59 0 0:00:00 0.0% prstat/1
1276 Kendall 51M 12M sleep 59 0 0:00:11 0.0% gnome-netstatus/1
1247 Kendall 46M 10M sleep 59 0 0:00:06 0.0% metacity/1
1274 Kendall 51M 13M sleep 59 0 0:00:05 0.0% wnck-applet/1
1249 Kendall 56M 17M sleep 59 0 0:00:07 0.0% gnome-panel/1
1278 Kendall 48M 9240K sleep 59 0 0:00:05 0.0% mixer_applet2/1
1245 Kendall 9092K 3844K sleep 59 0 0:00:00 0.0% gnome-smproxy/1
1227 Kendall 8244K 4444K sleep 59 0 0:00:01 0.0% xscreensaver/1
1201 Kendall 4252K 1664K sleep 59 0 0:00:00 0.0% sdt_shell/1
1217 Kendall 55M 16M sleep 59 0 0:00:00 0.0% gnome-session/1
779 Kendall 47M 2208K sleep 59 0 0:00:00 0.0% gnome-volcheck/1
746 Kendall 5660K 3660K sleep 59 0 0:00:00 0.0% bonobo-activati/1
1270 Kendall 49M 10M sleep 49 0 0:00:00 0.0% clock-applet/1
1280 Kendall 47M 8904K sleep 59 0 0:00:00 0.0% notification-ar/1
1199 Kendall 2928K 884K sleep 59 0 0:00:00 0.0% dsdm/1
1262 Kendall 47M 2268K sleep 59 0 0:00:00 0.0% gnome-volcheck/1
Total: 37 processes, 62 lwps, load averages: 0.11, 0.98, 1.63
System uptime is 9 hours, 48 minutes. I'm just wondering why the memory usage seems so high to do... nothing. It's obviously a real problem, as the machine turned very slow when vmstat was showing 195MB free.
Any tips, tricks, advice, on which way to go with this?
Thanks!
Apologies for the delayed reply. School has been keeping me nice and busy.
Anyway, here is the output of prstat -Z:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
2040 Kendall 144M 76M sleep 59 0 0:04:26 2.0% firefox-bin/10
28809 Kendall 201M 193M sleep 59 0 0:42:30 1.9% Xsun/1
2083 Kendall 186M 89M sleep 49 0 0:02:31 1.2% java/58
2260 Kendall 59M 14M sleep 59 0 0:00:00 1.0% gnome-terminal/2
2050 Kendall 63M 21M sleep 49 0 0:01:35 0.6% realplay.bin/4
2265 Kendall 3344K 2780K cpu1 59 0 0:00:00 0.2% prstat/1
29513 Kendall 71M 33M sleep 39 0 0:07:25 0.2% gaim/1
28967 Kendall 56M 18M sleep 59 0 0:00:24 0.1% gnome-panel/1
29060 Kendall 93M 24M sleep 49 0 0:02:58 0.1% java/14
28994 Kendall 51M 13M sleep 59 0 0:00:23 0.1% wnck-applet/1
28965 Kendall 49M 14M sleep 59 0 0:00:33 0.0% metacity/1
649 noaccess 164M 46M sleep 59 0 0:09:54 0.0% java/23
28996 Kendall 51M 12M sleep 59 0 0:00:50 0.0% gnome-netstatus/1
2264 Kendall 1352K 972K sleep 59 0 0:00:00 0.0% csh/1
28963 Kendall 9100K 3792K sleep 59 0 0:00:03 0.0% gnome-smproxy/1
ZONEID NPROC SWAP RSS MEMORY TIME CPU ZONE
0 80 655M 738M 73% 1:18:40 7.7% global
Total: 80 processes, 322 lwps, load averages: 0.27, 0.27, 0.22
Sorry about the bad formatting; it's copied from the terminal.
In any event, we can see that Firefox is sucking up 144MB (??!?!!? crazy...), Xsun 200MB, and java 190MB. I'm running Java Desktop System (Release 3), so I assume that is what accounts for the high memory usage re: the java process. But Xsun, 200MB?
Is this normal and I just need to toss another gig in, or what?
Thanks -
Problem in compilation of C++ code due to usage of regexec on Solaris.
Hi all,
I am using regexec as "err=regexec(&regexBuffer, m_data, maxMatches, regexMatches, eFlags)".
I am having a problem building this code on Solaris, although it works fine on Windows.
When I try to display individual values of the regex structure, I get the following compilation errors:
"common/src/fact/regexp.cpp", line 88: Error: allocated is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 89: Error: buffer is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 90: Error: can_be_null is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 91: Error: fastmap is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 92: Error: Unexpected ")" -- Check for matching parenthesis.
"common/src/fact/regexp.cpp", line 92: Error: Badly formed expression.
"common/src/fact/regexp.cpp", line 93: Error: newline_anchor is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 94: Error: no_sub is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 95: Error: not_bol is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 96: Error: not_eol is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 98: Error: regs_allocated is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 99: Error: syntax is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 100: Error: translate is not a member of boost::regex_tA.
"common/src/fact/regexp.cpp", line 101: Error: used is not a member of boost::regex_tA.
Help!!!
Regards
Ajit
hi,
If the data has been archived then there is no tool to get the archived data back in BW. So I suggest investigating further.
Thanks -
Hello to all.
I had an application (a C program) running on Solaris 2.4. Recently I upgraded to Solaris 8. I recompiled this application (everything went smoothly). However, I have a serious problem I cannot debug further. This application runs as a daemon. As time passes, it eats all my available swap space and never releases it. When I run out of swap space, the application crashes and has to be restarted.
Any ideas what may be the problem?
Sounds like a memory leak.
-
Swap space usage in solaris 2.6
Hi
I'm confused about how the OS uses swap space in Solaris 2.6. My system has the following configuration.
RAM - 2560 M.
Disk based memory for swap - 513 M (From 'top' / 'swap -l' command)
The 'df -k' over the "/tmp" directory shows around 1.8 GB total space for the "swap" volume mounted over "/tmp".
Does that mean only 513M of the 1.8 GB is allocated as a swap slice? BTW, this is the only swap slice our system has. So what's happening to the remaining part of the 1.8 GB? Is it getting wasted?
When exactly does the OS start using the disk memory? Is it that if the OS finds the available RAM is not sufficient for its memory operations, then it starts using the disk?
Any help in clearing my doubts would be highly appreciated.
Rgds
ravi
Hi
Thanks for the response. I understand the concept of anonymous memory. But what is confusing me is the "/tmp" directory. The "df -k" command over the "/tmp" directory is always showing used % as 1%. Also, the "swap -s" output never matches the output of "df -k". (I suppose both represent the virtual attributes of swap space, i.e. disk-backed memory + a portion of the RAM.)
for example following is the output of the above commands at a particular instance.
df -k
swap 1908728 1768 1906960 1% /tmp
tcsb.tcs.gs1::/users/tcsuser/tmp-> swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1t0d0s1 32,1 16 1049744 84048
tcsb.tcs.gs1::/users/tcsuser/tmp-> swap -s
total: 589008k bytes allocated + 98672k reserved = 687680k used, 1908920k available
Is there anything I'm missing here?
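One detail that reconciles the numbers above (an editorial aside): swap -l reports 512-byte blocks, and /tmp is tmpfs, drawing on a pool of unused RAM plus the physical swap slice. Converting the 'free' figure from the swap -l output quoted above:

```shell
# swap -l reports 512-byte blocks; convert the 'free' figure (84048
# blocks, from the output quoted above) to KB.
awk 'BEGIN { printf "%d KB\n", 84048 * 512 / 1024 }'
```

So the disk slice contributes only ~41 MB of free swap; the rest of the ~1.8 GB that df -k shows for /tmp is backed by RAM, which is why the slice isn't being "wasted".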
-ravi -
How to Maximize Memory usage for JDK 6 on Solaris 10.
Hi there,
I deployed JDK 6 & Tomcat 6 running on Solaris 10 on a Sun Fire X4100 M2 x64 machine with 8GB of RAM.
It's relatively fast. However, Java only allows me to set the max Java opts to 3GB. I don't understand it, and there aren't any extra applications; all are stock Solaris 10.
Java will not start if I set more than -J-Xmx3072m.
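A likely explanation for the 3GB ceiling (an editorial aside, assuming the default 32-bit JVM is in use): a 32-bit process has only a 4 GB virtual address space, and after code, thread stacks, and the native heap are carved out, roughly 3GB is left for -Xmx. JDK 6 on Solaris ships a 64-bit JVM selectable with -d64, which lifts the limit when the 64-bit packages are installed; the java invocation below is illustrative:

```shell
# The hard ceiling for any 32-bit process is its virtual address space:
awk 'BEGIN { printf "%.0f GB\n", 2^32 / 1024^3 }'
# With the 64-bit JVM installed, heaps beyond 3GB should start
# (illustrative invocation, not from the original post):
#   java -d64 -Xmx6g -jar app.jar
```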
Appreciate your advise and expertise.
regards,
jose (airqq)
Hello Neville:
I seem to be coming off as negative - not my usual approach. However.......
I am a retired webmaster and don't have Lion but I am impressed by your tip. Well written and researched. I think we should all do our bit to educate and encourage others to open applications and use the Terminal.
a different point of view from someone who has been posting in these forums for many years.
Almost all of the people who seek help here are not:
1. Retired webmasters.
2. Familiar with arcane terminal commands and, more importantly, with how much damage they can cause their system with a missing space or character. Apple has spent a lot of time developing OS X to a point where people do not need to learn about or understand a Unix-like command language.
I salute people who are technical enough to use terminal commands. It is, however, IMHO, a real disservice to suggest to non-techie people that it is ever necessary (or desirable) to operate at that level.
Having said all that, I respect your comment and point of view. I would reiterate, however, that solving non-existent problems is a waste of time. By non-existent, I mean that if there are no performance problems, it is irrelevant how much memory a program uses. My current base-level iMac has 4 GB of memory and a Terabyte of HD space.
Barry
Message was edited by: Barry Hemphill -
Solaris 10 Kernel memory usage
We have a 32 GB RAM server running about 14 zones. There are multiple databases, application servers, web servers, and ftp servers running in the various zones.
I understand that using ZFS will increase kernel memory usage, however I am a bit concerned at this point.
root@servername:~/zonecfg #mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip indmux ptm nfs ]
::memstat
Page Summary Pages MB %Tot
Kernel 4108442 16048 49%
Anon 3769634 14725 45%
Exec and libs 9098 35 0%
Page cache 29612 115 0%
Free (cachelist) 99437 388 1%
Free (freelist) 369040 1441 4%
Total 8385263 32754
Physical 8176401 31939
Out of 32GB of RAM, 16GB is being used by the kernel. Is there a way to find out how much of that kernel memory is due to ZFS?
It just seems that an excessively high amount of our memory is going to the kernel, even with ZFS being used on the server.
root@servername:~ #mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip hook neti sctp arp usba uhci fcp fctl qlc nca lofs zfs random fcip crypto logindmux ptm nfs ]
::memstat
Page Summary Pages MB %Tot
Kernel 4314678 16854 51%
Anon 3538066 13820 42%
Exec and libs 9249 36 0%
Page cache 29347 114 0%
Free (cachelist) 89647 350 1%
Free (freelist) 404276 1579 5%
Total 8385263 32754
Physical 8176401 31939
:quit
root@servername:~ #kstat -m zfs
module: zfs instance: 0
name: arcstats class: misc
c 12451650535
c_max 33272295424
c_min 1073313664
crtime 175.759605187
deleted 26773228
demand_data_hits 89284658
demand_data_misses 1995438
demand_metadata_hits 1139759543
demand_metadata_misses 5671445
evict_skip 5105167
hash_chain_max 15
hash_chains 296214
hash_collisions 75773190
hash_elements 995458
hash_elements_max 1576353
hits 1552496231
mfu_ghost_hits 4321964
mfu_hits 1263340670
misses 11984648
mru_ghost_hits 474500
mru_hits 57043004
mutex_miss 106728
p 9304845931
prefetch_data_hits 10792085
prefetch_data_misses 3571943
prefetch_metadata_hits 312659945
prefetch_metadata_misses 745822
recycle_miss 2775287
size 12451397120
snaptime 2410363.20494097
So it looks like our kernel is using 16GB and ZFS is using ~12GB for its ARC cache. Is 4GB of kernel memory for other stuff normal? It still seems like a lot of memory to me, but I don't know how all the zones affect the amount of memory the kernel needs.
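As a side note, the arcstats 'size' figure can be pulled and scaled without eyeballing: `kstat -p zfs:0:arcstats:size` emits a single parsable line. A sketch using awk over a sample line that reproduces the figure from the post above:

```shell
#!/bin/sh
# Convert the arcstats 'size' value (bytes) to GB. The sample line
# reproduces the figure from the kstat output quoted above.
cat <<'EOF' > /tmp/arc.sample
zfs:0:arcstats:size     12451397120
EOF
awk '/arcstats:size/ { printf "ARC size: %.1f GB\n", $2 / 1024^3 }' /tmp/arc.sample
```

On the live system, `kstat -p zfs:0:arcstats:size | awk '...'` gives the same result without the temp file.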