RAM allocation limit per process/user

Hi All,
My wife is using a Mac Pro with 2 GB of RAM at work to run a bioinformatics terminal application (mcl), and it seems she doesn't have enough RAM for the dataset she is trying to analyze.
So here are my questions:
Since this machine has a 64-bit CPU (Dual-Core Intel Xeon) and Tiger is supposed to be largely 64-bit, the terminal app should be able to allocate more than 4 GB of RAM if we upgrade the machine to, say, 6 GB, right?
I'm asking because the sysadmin says that even if the machine has more than 4 GB of RAM, one process cannot allocate more than 4 GB!
I personally doubt that he is right, but I'm not sure.
Also, do we need to upgrade to 10.5 to get full memory access, or will 10.4 be sufficient?
It would be great if somebody could clarify this. I tried to find answers to these questions in the forums but couldn't find any, so if this has already been answered somewhere, please forgive me :)
Thanks
Uli

I'll try to help you along a little.
Tiger's GUI frameworks are, roughly speaking, 32-bit, and your application also has to be compiled as a 64-bit binary to make full use of 4-32 GB.
Because it is a terminal application and doesn't run in the GUI, it is not (and should not be) affected by the 4 GB limit of Tiger's GUI frameworks.
The system can also cache and hold data in free memory, and Leopard is more efficient at this.
There is no real relationship between physical RAM and the number of cores; a larger L3 cache helps, but that is beyond your ability to control. More cores can actually hurt if a program can't or isn't written to use them (in fact it can thrash around some).
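If you want to sanity-check the sysadmin's 4 GB claim yourself, a throwaway C test program can show whether a single 64-bit process on that machine can address more than 4 GB. This is just a sketch, assuming gcc (the Xcode command-line tools) is installed; mcl itself must of course also be built as a 64-bit binary to benefit.

/* bigalloc.c - build as 64-bit:  gcc -m64 -o bigalloc bigalloc.c
 * In a 32-bit build size_t cannot even represent 6 GB, which is the point:
 * only a 64-bit process gets the larger address space. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t want = (size_t)6 * 1024 * 1024 * 1024;   /* 6 GB */
    char *p = malloc(want);

    if (p == NULL) {
        perror("malloc");
        return 1;
    }
    memset(p, 1, want);   /* touch the pages so they are really committed */
    printf("allocated and touched %zu bytes\n", want);
    free(p);
    return 0;
}

On a 2 GB machine the memset will mostly hit swap, so this only proves the address space is there, not that it will be fast; the dataset still needs real RAM to perform well.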
Memory performance, why having 8 DIMMs helps (2008 model, not 2006/7)
http://www.barefeats.com/harper12.html
Memory testing
http://www.barefeats.com/harper3.html
About 64-bit computing
http://www.geekpatrol.ca/blog/150/
http://arstechnica.com/articles/paedia/cpu/x86-64.ars
http://en.wikipedia.org/wiki/64-bit
http://www.geekpatrol.ca/2006/12/eight-core-mac-pro-benchmarks/
http://arstechnica.com/articles/columns/mac/mac-02172004.ars
Applications, systems, and hardware: Intel and others have begun to bring out better compilers to optimize code for multi-core systems.
You should consider 4 GB the minimum for a Mac Pro in any case, and that is for pretty much anyone/anything, not just demanding applications where a faster CPU, more memory, and even faster disk drives can help.
I would add 4 x 2 GB of RAM, for 10 GB total. Then make sure your applications are compatible with Leopard and consider moving to 10.5.3 later; hopefully 10.5.3 will show up on DVD.
Tiger and Leopard use free memory to hold data in cache, which helps. 2 GB is pretty much a minimum starvation diet.
Is your Mac Pro the stock 2.66 GHz? (Check About This Mac and System Profiler.)

Similar Messages

  • Limit dialog processes per user

    Hello,
    We would like to know if it possible to limit dialog processes per user.
    We have a user who runs many dialog processes without restriction, and we would like to limit the number of such processes to, for example, 5. These processes are RFC calls from an external Java program.
    Thanks and regards,
    Néstor.


  • How to increase the per-process file descriptor limit for JDBC connection 15

    If I need more than 15 JDBC connections, the only solution is to increase the per-process file descriptor limit. But how do I increase this limit? Do I modify the Oracle server or the JDBC software?
    I'm using JDBC thin driver connect to Oracle 806 server.
    From JDBC faq:
    Is there any limit on number of connections for jdbc?
    No. JDBC drivers don't have any scalability restrictions by themselves.
    It may be restricted by the number of 'processes' (in the init.ora file) on the server. However, nowadays we do get questions where, even when the number of processes is 30, users are not able to open more than 16 active JDBC-OCI connections when the JDK is running in the default (green) thread model. This is because the per-process file descriptor limit is exceeded. It is important to note that, depending on whether you are using OCI or THIN, and green vs. native threads, a JDBC SQL connection can consume anywhere from 1 to 4 file descriptors. The solution is to increase the per-process file descriptor limit.

    Maybe it is an OS issue, but the suggested solution is from the Oracle documentation. However, it does not provide a clear enough answer; it just states "The solution is to increase the per-process file descriptor limit."
    So now I know the solution, but not how to increase the limit.
    Please help.
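    For what it's worth, the limit the FAQ is talking about is an operating-system resource limit, not an Oracle or JDBC setting: you raise it in the shell that launches the JVM (typically with ulimit -n, or via the system-wide defaults in /etc/system on older Solaris) or, for your own programs, with setrlimit(). A minimal C sketch of the underlying call, just to illustrate the mechanism:

    /* fd_limit.c - sketch: inspect and raise this process's open file limit.
     * For a JVM you would normally raise the limit in the launching shell
     * (ulimit -n) before starting java; this shows the call the shell makes. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

        rl.rlim_cur = rl.rlim_max;      /* raise the soft limit up to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("soft limit raised to %llu\n", (unsigned long long)rl.rlim_cur);
        return 0;
    }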

  • How to limit mail size of outgoing messages per domain/user

    Hello,
    I want to limit the size of mail a user can send. For inbound mail there are attributes in the directory service: mailDomainMsgMaxBlocks and mailMsgMaxBlocks.
    But these limitations only enforce the size of incoming mail.
    I've read the Messaging Server admin guide, but all I could find is that I could limit the size of outgoing mail via a channel.
    For me this solution is not granular enough. I want to set the outgoing mail size limit per user. Furthermore, I don't know the impact of configuring 200+ channels to enforce individual outgoing mail size limits per hosted domain.
    Does anybody know a solution for this problem? Maybe I've overlooked something. Or can anybody at least tell me the performance impact of 200+ channels?
    Thank you very much,
    af_inet
    JES 2005Q1, Solaris10, V440

    Hi jay_plesset,
    thanks a lot for your clarifications.
    "Why would you need 200 channels? That sounds like a very strange setup. Setting outbound max size per user sounds like a very unusual demand, too."
    Let me explain: I have several departments who control their email settings in the messaging server via a webapp. They can add, delete and modify users, manage mailing lists, and so on. It's really a cool tool :-) I want this thing to be as granular as a dedicated mail server for each department would be. So it would be great if the webapp could simply write an attribute like maxMsgBlocks in the DS and the thing is done.
    I don't understand why you could do this for inbound but not for outbound. Is there a technical reason, or is it just because it seems strange? :-)
    "If you REALLY need something like that, 200 channels would need a POOL of at least 200 jobs in order to reliably run, and that's likely to need lots of system memory."
    I think that's not the way I want to go.
    "Consider a custom channel, or something like that, that does an LDAP lookup for setting message size on sending. You'd need to enforce smtp authentication, so you know who your messages are coming from, first, and then you could proceed from there. This is not a trivial setup."
    Sounds interesting. Any documentation hints for custom channels?
    Thanks again for your reply,
    af_inet

  • Max data segment that can be allocated per process

    What is the maximum data segment that can be allocated per process on solaris?
    (The equivalent of "maxdsiz" on hp-ux).
    Thank you,
    J

    It's configurable. Look at the rlimit max for the process. There should also be a maximum value you can set the rlimit to, defined somewhere in the sys/* headers (sorry, I can't track it down; my dev machine is kaput).
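    A minimal C sketch of what "look at the rlimit" means in practice (the shell equivalent in most shells is ulimit -d, reported in kilobytes):

    /* datasize.c - sketch: the Solaris counterpart of HP-UX maxdsiz is the
     * RLIMIT_DATA resource limit, see getrlimit(2). */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("data segment soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
        printf("data segment hard limit: %llu bytes\n", (unsigned long long)rl.rlim_max);
        return 0;
    }

    (If either value prints as a huge number it is RLIM_INFINITY, i.e. unlimited.)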

  • Kernel parameters -maximum threads per process

    How can we change the kernel parameters, and how can we increase the maximum number of threads allowed?
    How can we increase the maximum number of processes per user ID?

    There is no kernel parameter limiting the maximum number of threads allowed. If you are talking about user-level threads, you will run into virtual address space limitations at about 3000 threads per process, assuming a 32-bit address space, the default stack size of 1 MB per thread, and that you are not using the alternate thread library (see threads(3thr)) or Solaris 9. If you need more than this many threads at the same time, I suspect you are doing something incorrectly. Otherwise, try using a smaller stack size per thread.
    If you are running on Solaris 9, or using the alternate thread library, both give you a 1:1 thread model, i.e., each user thread has a corresponding kernel entity (LWP). In this case, you will cause your machine to hang by eating up all available space for LWPs. In either case, the question should be "how do I limit the number of threads per process?", since there is currently no limitation other than address space. In Solaris 9, you can use resource management to limit the number of LWPs (and therefore user threads) per process.
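    To illustrate the stack-size point, here is a minimal POSIX-threads sketch (plain C, compile with cc -o small_stack small_stack.c -lpthread) that gives each thread a much smaller stack so more of them fit into a 32-bit address space:

    /* small_stack.c - sketch: shrink the per-thread stack from the default
     * (about 1 MB) so more user threads fit in the address space. */
    #include <stdio.h>
    #include <pthread.h>

    static void *worker(void *arg)
    {
        return arg;                                  /* placeholder thread body */
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;

        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 64 * 1024); /* 64 KB instead of ~1 MB */

        if (pthread_create(&tid, &attr, worker, NULL) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }

    The 64 KB figure is only an example; pick a stack size safely larger than the deepest call chain in your threads.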

  • Session Limit per POD?

    Hi!
    I understand that a POD can hold multiple SOD instances
    - Each SOD instance has a limit on the number of concurrent sessions you can open (when transmitting WS calls)
    Now.
    ... If I have 5 SOD instances in a given POD
    ... and each of these 5 instances can open up to 10 concurrent sessions.
    ... this will mean theoretically in a multithreaded environment (transmitting to 5 instances concurrently),
    I can have up to 50 (5x10) concurrent open sessions.
    HOWEVER... is there such a thing as a session limit per POD (that we should take into consideration)?
    Meaning if we have 6 SOD instance (each with 10 max concurrent sessions) in a given POD ...
    ... and a POD have a limit of 30 max concurrent sessions
    ... we should NOT be running all 6 concurrently (because it will need 60 concurrent sessions which exceeds POD limit).
    To reiterate the question: is there such a thing as a maximum number of allowable sessions per POD?
    Thanks

    First, you should not "patch" the spfile directly but use the ALTER SYSTEM command.
    Note also that SESSIONS is a parameter derived from PROCESSES.
    To increase the SESSIONS parameter:
    1. Connect with SYSDBA privilege:
    sqlplus / as sysdba
    2. Change the PROCESSES parameter:
    SQL> alter system set processes=200 scope=spfile;
    3. Restart the instance:
    shutdown immediate
    startup
    4. Check the SESSIONS parameter:
    SQL> show parameter sessions;
    NAME                                 TYPE        VALUE
    java_max_sessionspace_size           integer     0
    java_soft_sessionspace_limit         integer     0
    license_max_sessions                 integer     0
    license_sessions_warning             integer     0
    logmnr_max_persistent_sessions       integer     1
    sessions                             integer     225
    shared_server_sessions               integer
    Before the change, I had PROCESSES set to 150 and SESSIONS to 170.
    To count all sessions in your instance:
    select count(*) from v$session;
    Please also make sure to give the exact Oracle error message number, if any.
    Message was edited by:
    Pierre Forstmann

  • Per Process system memlock and huge page

    Hello All,
    I noticed in our new environment that when the database starts, it dumps system resource information for the SGA. Some of this information confuses me.
    It says:
    Per process system memlock (soft) limit = 193G
    Expected per process system memlock (soft) limit to lock Shared Global Area into memory: 4096M
    Available system page sizes: 4K, 2048K
    Supported system pagesize: pagesize=4K, available_pages=configured, expected_pages=1048581, allocated_pages=1048581, no errors
    Reason for not supporting certain system pagesizes:
    2048K - Dynamic allocate and free memory regions
    Now I don't understand where that 193G is coming from!
    cat /proc/meminfo | grep -i Huge
    AnonHugePages:    407552 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    cat /etc/sysctl.conf | grep -i huge     says nothing.
    shmmni=4096
    shmall =1835008
    shmmax= 6388763852
    sem = 250 32000 100 128
    Could you please help me understand where the value 193g comes from?
    Oracle version: 12.1.0.2.0 on RHEL 6.6
    Regards,
    J_DBA_Sourav

    Hello,
    It is a normal file system with the database on top of it.
    GRID is not being used. As for what you asked me to post from limits.conf, only this is mentioned:
    * hard core 0
    Inside /etc/security/limits.d there is a file, 91-oracle.conf, where the following are set:
    oracle soft memlock 202457088     -> I calculated this number; if it is in KB then it is 193.07G, but I don't know what it is doing.
    oracle hard memlock 202457088
    oracle soft core unlimited
    oracle hard core unlimited
    oracle soft noproc 131072
    oracle hard noproc 131072
    oracle soft nofile 131072
    oracle hard nofile 131072
    There is also a generic file, $ORACLE_HOME/crs/install/s_crsconfig_defs, where CRS_LIMIT_MEMLOCK=unlimited and CRS_LIMIT_CORE=unlimited are mentioned, among lots of other parameters. I am not able to copy and paste it, as that operation is not allowed.
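    For what it's worth, that memlock entry is almost certainly where the 193g comes from: memlock values in limits.conf / limits.d are in KB, and 202457088 KB / 1024 / 1024 ≈ 193.07 GB, which matches the "Per process system memlock (soft) limit" in the startup dump. A tiny C sketch (purely illustrative) to confirm what the oracle account actually sees after login:

    /* memlock_check.c - sketch: print RLIMIT_MEMLOCK as the database would see it.
     * limits.d values are in KB; getrlimit() reports bytes. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("memlock soft limit: %.2f GB\n",
               (double)rl.rlim_cur / (1024.0 * 1024.0 * 1024.0));
        printf("memlock hard limit: %.2f GB\n",
               (double)rl.rlim_max / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }

    Run it (or simply ulimit -l) as the oracle user; if it reports about 193 GB, the value in 91-oracle.conf is the source.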
    Regards,
    J_DBA_Sourav

  • How to get correctly the percent of used CPU per process

    I'm trying to get the percentage of CPU used per process on Windows with Qt/C++. First I get a list of running processes, and then for each process I try to get its CPU usage. For most processes the result looks valid (it matches the Windows Task Manager), but for the AIDA64 process (which is running a CPU stress test in the background) I get strange values like 312%. What is wrong with my C++ code?
        sigar_t *sigarproclist;
        sigar_proc_list_t proclist;
        sigar_open(&sigarproclist);
        sigar_proc_list_get(sigarproclist, &proclist);
        for (size_t i = 0; i < proclist.number; i++) {
            sigar_proc_cpu_t cpu;
            /* two samples are needed: sigar computes percent from the delta */
            int status1 = sigar_proc_cpu_get(sigarproclist, proclist.data[i], &cpu);
            if (status1 == SIGAR_OK) {
                Sleep(50);
                int status2 = sigar_proc_cpu_get(sigarproclist, proclist.data[i], &cpu);
                if (status2 == SIGAR_OK) {
                    sigar_proc_state_t procstate;
                    sigar_proc_state_get(sigarproclist, proclist.data[i], &procstate);
                    qDebug() << procstate.name << cpu.percent * 100 << "%";
                }
            }
        }
        sigar_close(sigarproclist);

    You may need to scale (divide) by the number of cores. This is the code sigar uses on Windows to get the process CPU time:
    SIGAR_DECLARE(int) sigar_proc_time_get(sigar_t *sigar, sigar_pid_t pid,
                                           sigar_proc_time_t *proctime)
    {
        HANDLE proc = open_process(pid);
        FILETIME start_time, exit_time, system_time, user_time;
        int status = ERROR_SUCCESS;

        if (!proc) {
            return GetLastError();
        }
        if (!GetProcessTimes(proc,
                             &start_time, &exit_time,
                             &system_time, &user_time)) {
            status = GetLastError();
        }
        CloseHandle(proc);
        if (status != ERROR_SUCCESS) {
            return status;
        }
        if (start_time.dwHighDateTime) {
            proctime->start_time =
                sigar_FileTimeToTime(&start_time) / 1000;
        }
        else {
            proctime->start_time = 0;
        }
        proctime->user = FILETIME2MSEC(user_time);
        proctime->sys  = FILETIME2MSEC(system_time);
        proctime->total = proctime->user + proctime->sys;
        return SIGAR_OK;
    }
    The Windows API documentation indicates that the time here is a sum over all threads, so it needs to be scaled by the number of cores.
    We had to do something like this in our use of the Java bindings of the 1.6.4 release of SIGAR. I'm curious to know if this works for you.
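    For example, a small helper along these lines (just a sketch; it assumes the same sigar structures as the snippet above and uses GetSystemInfo() for the logical CPU count) brings a process that saturates every core of a 4-core box back to ~100% instead of ~400%:

    /* Scale sigar's per-process CPU fraction by the number of logical CPUs. */
    #include <windows.h>

    static double scaled_percent(double sigar_fraction)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);           /* dwNumberOfProcessors = logical CPU count */
        return (sigar_fraction * 100.0) / (double)si.dwNumberOfProcessors;
    }

    In the loop above you would then print scaled_percent(cpu.percent) instead of cpu.percent * 100.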
    Best,
    Vishal

  • RAM Allocation and Media Transcoding not working?

    So as much as I love Final Cut Pro X and all of its great new features, I've come across two big concerns that I can't seem to fix or find a solution to.
    1. I have heard that FCPX can basically use as much RAM as you can throw at it. To me this means that the more I get, the quicker things will render and transcode, and the smoother editing should be. Considering that my iMac (quad-core i7, 3.4 GHz) is the latest model and only came with 4 GB of RAM, I decided to upgrade. I recently bought and installed 32 GB (4 sticks of 8 GB). I know the official maximum says 16 GB, but Crucial claims to have done significant testing and says the i7 can in fact work with it, and it does show up and work so far. My main concern is that even with all of that RAM, FCPX doesn't seem to be using anywhere near that much. I tried giving it tasks such as transcoding some files and applying a lot of effects to things in the timeline, and it only seems to be using about 3 or 4 GB of RAM, which is nothing compared to the 32 GB it should be using. I literally have things waiting to be transcoded or rendered while I have plenty of free RAM just waiting to be used. To me this doesn't seem correct. Does FCPX have some sort of RAM allocation control, or does anyone know what I can do to make FCPX realize I have that much RAM and tell it to use it? Has anyone else had this issue? I was thinking of maybe erasing the preference files; maybe something in the preferences thinks the RAM is still only 4 GB?
    2. My second question is about transcoding. I've read a lot online saying it is best to transcode your footage to ProRes on import, to make sure it is at the optimal settings for editing in FCPX. My original footage is h.264 from my DSLR and works, but if converting it to ProRes makes it run even smoother, I would love to do so. I know this option comes up when you import, but I recently discovered that you can right-click files in the Event Browser and tell it to transcode after the fact. Since I have a lot of clips, I thought grabbing all of the clips at once and telling it to batch transcode would be helpful, especially considering how much RAM I have now, but it doesn't seem to be working. The first clip goes like normal, but then it just stops. The rest of the clips stay frozen and don't move at all. I can play and pause the transcoding operations in the render window, but still nothing continues; it just stays frozen. I've tried with smaller batches and it still doesn't work. I usually have to cancel that transcode and try again, but still nothing. Any ideas on why my transcoding just stops?
    Any ideas?
    -Mike

    M. Video Productions wrote:
    My main concern is that even with all of that RAM, FCPX doesn't seem to be using anywhere near that much. I tried giving it tasks such as transcoding some files and applying a lot of effects to things in the timeline, and it only seems to be using about 3 or 4 GB of RAM, which is nothing compared to the 32 GB it should be using. I literally have things waiting to be transcoded or rendered while I have plenty of free RAM just waiting to be used.
    If the program has enough RAM to do its assigned work, more RAM will not speed things up. RAM does not necessarily equate to speed of operation. Transcoding still requires disk write operations.

  • How to find out how much RAM  allocated to ASE sybase 15.7 on HP-unix

    Dear expert/Sir,
    How can I find out how much RAM is allocated to Sybase ASE 15.7? The operating system I am using is HP-UX. I ask because I have set max memory

    To add to Bret's reply, the following may help your understanding.
    SAP Sybase ASE maintains two other measures of memory besides max memory.
    You can observe them using the sp_configure "memory" procedure:
    1. Total logical memory: the amount of memory required by all objects that are allocated in the configuration file; for example, users, caches, open databases, and open objects.
    2. Total physical memory: the amount of memory that is actually in use at a given time. This fluctuates according to the number of users online and the number of open objects, among other factors.
    HTH
    Rajesh

  • Limit Item Process MM-SUS scenario

    Hello Team,
    I want to understand the limit item process in the MM-SUS scenario.
    What are the process steps, and which documents does the system create?
    Kindly share any reference document related to this process, as I need to implement it for my client.
    Regards
    Shailendra Tiwari

    I assume you are on MM-SUS.
    Did you post the invoice in MM and have it reflected in the SUS system, or did you post the invoice in SUS and it was not reflected in the ECC system?
    Note 891594 - SRM50-SUS: Invoice without items for a limit
    Note 1062825 - Errors in SUS ASN , SUS Invoice and SUS Confirmation
    Note 536597 - SRM-SUS: Invoice for limit order not possible
    muthu

  • Substitution - n-Level Output Limit Approval Process

    Hi,
    we are using a process-controlled workflow in SRM 7.0; n-Level Output Limit Approval Process; see BC-Sample process scheme 9C_BUS2121_EX03.
    Situation 1: a manager (approval limit 3,000) sets a user (without an approval limit entry in the OrgMgmt) as substitute. The substitute succeeds in approving a purchase order.
    The purchase order is approved despite the substitute's missing approval limit. Even the exception entry in SLG1 ("Calling class /SAPSRM/CL_WF_RULE_CONTXT_SC method PREV_APPROVAL_LIMIT raises error exception", Message no. /SAPSRM/BRF086) does not prevent the approval. The exception entry itself is fine; however, the approval is not supposed to succeed when such an exception occurs.
    Situation 2: a manager (approval limit 3,000) sets a user (approval limit 1,000) as substitute. The substitute can now approve up to 3,000 without an additional approver. This is not the expected result.
    Do you know any solutions for these problems?
    Best regards,
    Frank

    Hi,
    What is your business requirement? If you want to stop the SC from being ordered when the substitute's approval limit is less than the approval limit of the approver the system determined, try implementing the logic in BBP_DOC_CHECK_BADI: build the same logic there to find the substitute's approval limit, and if that limit is lower, raise an error.
    Saravanan

  • Trying to determine costs per concurrent user

    I have tried to investigate the services and their costs as well as I can, and tried to get help. However, I have very little experience with hosting and databases, so after all this research I'm left with a bunch of questions.
    Forgive me if this is too many questions for this kind of forum; if so, could you please refer me to where I can learn more?
    We're developing a multiplayer browser game in Unity3D. The game will communicate with an SQL database. I'm trying to determine how much it will cost me per Monthly Active User (MAU) and per Concurrent User (CCU). To do so I need some things clarified, in order to determine which feature will be the most limiting and therefore which performance level I need.
    Users will on average communicate with the database approximately two times a minute (defined as two queries per minute). Does this mean that I need a transaction rate of at least two per minute per CCU?
    How fast can you open and close a session? I'm trying to figure out whether I need to have one session open constantly per user connected to the game. If so, I suppose max sessions would be equal to CCU?
    If I need more CCU than one database of a given performance level supports, can I then just subscribe to more databases, which will automatically become copies of one another, with users divided between them automatically?
    As I understand your description of the Basic, Standard and Premium database tiers, the database will be able to work independently of additional web hosting. However, in that case I do not understand why there is no info regarding bandwidth limitations.
    Do I need your web hosting services along with the database subscription? If so, I suppose the bandwidth limitations of the web hosting apply to the database as well?
    Regarding the web hosting: besides maybe handling the database, the website's by far most demanding task will be to let users download a game of max 30 MB. How can I estimate how many users will be able to download simultaneously from a given VM instance, e.g. Basic 1?
    If the database depends on this website for communication with the game, how can I estimate how demanding that will be for a given VM instance?
    I realise that some of these questions may be difficult to give precise answers to but any help with getting closer to an answer is highly appreciated.

    This is by no means accurate and complete, but it will give you an idea of how to go about doing estimates:
    First you need to know which database tier you need: Basic, Standard or Premium.
    The tier you need depends on the processing power you need, which relates directly to the number of concurrent users you have at any given time and the processing requirements of the database.
    You can determine this only after you do some benchmarking of your app.
    Suppose you have 100 users and you need the S0 tier; then the cost per user will be (cost of S0)/100.
    Then you need the cost for your web service/site.
    Again, you need to know how powerful it needs to be, and you can determine this only after you do some benchmarking of your app.
    Suppose you need an A3 instance; then the cost per user will be (cost of A3)/100.
    Then you need the cost for data transfer from your web service to your users.
    Suppose every data transfer is 10 KB; then you have 20 KB/min.
    Suppose a user stays in your game for an hour; then you have 1.2 MB/user/session.
    Suppose the user plays 10 times a month; then you have 12 MB/user per month. From that you can calculate the cost per user.
    I suppose you can safely assume that bandwidth is not a limitation in Azure for your app for the moment.
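    If it helps, the data-transfer arithmetic above is easy to parameterise. A small C sketch using only the assumptions from this thread (10 KB per call, 2 calls per minute, 1-hour sessions, 10 sessions per month), not real measurements:

    /* traffic_estimate.c - back-of-the-envelope per-user traffic estimate. */
    #include <stdio.h>

    int main(void)
    {
        double kb_per_call      = 10.0;
        double calls_per_minute = 2.0;
        double session_minutes  = 60.0;
        double sessions_month   = 10.0;

        double kb_per_session = kb_per_call * calls_per_minute * session_minutes; /* 1200 KB */
        double kb_per_month   = kb_per_session * sessions_month;                  /* 12000 KB */

        printf("per session: %.1f MB, per month: %.1f MB per user\n",
               kb_per_session / 1000.0, kb_per_month / 1000.0);
        return 0;
    }

    Multiply the monthly figure by your user count and the egress price per GB to get the bandwidth part of the cost per user.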
    Frank

  • Analyzing per process, per filesystem IO

    Hi,
    Assume that by watching sar data for a certain interval I find that one device is used heavily. The next thing I would want to do is find the process generating most I/O at that time on that device.
    The things I know how to do are:
    1) Find which process is generating I/O, using the lwp_ru.inblock and lwp_ru.oublock structures (as mentioned in one of the previous answers)
    2) Find how many write operations (and other parameters) were sent to an I/O device (using kstat from that device).
    How difficult would it be to link the two? That is, to find how much a process was writing to a certain device (say sd0 or something similar). Could this be done by tracing the write calls? (But then, knowing only the file descriptor, how can you obtain the file location?)
    Just as the vminfo provider complements vmstat data, I would like some equivalent to complement iostat data. I realize things could be more complicated here (memory is one thing, devices are many, with different drivers and so on).
    Is there any free Sun documentation (on docs.sun.com) describing the Solaris kernel, memory system, I/O system and so on in more detail? DTrace makes you ask more questions about all of these.
    Thank you,
    Vlad Grama.

    G'Day,
    "Analyzing per process, per filesystem IO " reminded me of this command,
    # ./psio -f 10
         UID   PID  PPID %I/O    STIME TTY      TIME CMD
    brendan  6293  6281  2.1 06:32:50 pts/6   00:01 find /
           "     "     "  1.3  /dev/dsk/c0d0s0, /
           "     "     "  0.6  /dev/dsk/c0d0s3, /var
           "     "     "  0.2  /dev/dsk/c0d0p0
        root     3     0  0.0 12:09:33 ?       00:43 fsflush
           "     "     "  0.0  /dev/dsk/c0d0s3, /var
        root     0     0  0.0 12:09:32 ?       00:03 sched
    The psio command used prex to fetch its per-process I/O data. I've just rewritten it using DTrace; it can be found at http://www.brendangregg.com/psio.html
    I've also just started writing another DTrace program that may help solve your problem. It's also on the website but is short enough to paste here. (I started programming in DTrace about 18 hours ago, expect future versions of this code to be much, much, better):
    First some example output,
    # ./iosnoop.d
      UID   PID  PPID   SIZE DEV       BLOCK   VNODE      INODE CMD
      100  6253  6183   2048 26738691  16      0              0 vi /etc/motd
      100  6253  6183   8192 26738691  336     0              0 vi /etc/motd
      100  6253  6183   1024 26738691  2582    e0bd42c0     113 vi /etc/motd
      100  6253  6183   1024 26738691  2582    e0bd42c0     113 vi /etc/motd
      100  6253  6183   1024 26738691  2582    e0bd42c0     113 vi /etc/motd
      100  6253  6183   1024 26738691  2582    e0bd42c0     113 vi /etc/motd
      100  6253  6183   8192 26738691  336     0              0 vi /etc/motd
      100  6253  6183   1024 26738691  2582    e0bd42c0     113 vi /etc/motd
      100  6253  6183   1024 26738691  2582    e0bd42c0     113 vi /etc/motd
      100  6253  6183   1024 26738691  2582    e0bd42c0     113 vi /etc/motd
      100  6253  6183   1024 26738691  2582    e0bd42c0     113 vi /etc/motd
        0     3     0   2048 26738691  1952    0              0 fsflush
        0     3     0   2048 26738691  16      0              0 fsflush
        0     3     0   2048 26738688  16      0              0 fsflush
    And now the iosnoop.d program,
    #!/usr/sbin/dtrace -s
    /*
    ** iosnoop.d - A short program to print I/O events as they happen, with
    **      useful details such as UID, PID, inode, command, etc.
    **      Written in DTrace (Solaris 10).
    **
    ** USAGE:       ./iosnoop.d
    **
    ** 12-Mar-2004, ver 0.5. First release, check for newer versions.
    **
    ** Standard Disclaimer: This is freeware, use at your own risk.
    **
    ** ToDo: More details, modes of operation, process different I/O types...
    **
    ** 12-Mar-2004  Brendan Gregg   Created this.
    */

    #pragma D option quiet

    dtrace:::BEGIN
    {
            printf("%5s %5s %5s %6s %-9s %-7s %-10s %5s %s\n",
             "UID","PID","PPID","SIZE","DEV","BLOCK","VNODE","INODE","CMD");
    }

    fbt:genunix:bdev_strategy:entry
    {
            /* strategy: fetch and store user details */
            bufp = (buf_t *)arg0;
            dev = bufp->b_edev;
            blk = bufp->_b_blkno._f;
            str_uid[dev,blk] = curpsinfo->pr_euid;
            str_pid[dev,blk] = pid;
            str_ppid[dev,blk] = curpsinfo->pr_ppid;
            str_args[dev,blk] = (char *)curpsinfo->pr_psargs;
    }

    fbt:genunix:biodone:entry
    {
            /* biodone: fetch all values and print */
            bufp = (buf_t *)arg0;
            dev = bufp->b_edev;
            blk = bufp->_b_blkno._f;
            pagep = (page_t *)bufp->b_pages;
            vnodep = (int)pagep == 0 ? 0 : (vnode_t *)pagep->p_vnode;
            vnode =  (int)vnodep == 0 ? 0 : (int)vnodep;
            inodep = (int)vnodep == 0 ? 0 : (inode_t *)vnodep->v_data;
            inode =  (int)inodep == 0 ? 0 : inodep->i_number;
            suid = str_uid[dev,blk];
            spid = str_pid[dev,blk];
            sppid = str_ppid[dev,blk];
            sargs = str_args[dev,blk];
            printf("%5d %5d %5d %6d %-9d %-7d %-10x %5d %s\n",
             suid,spid,sppid,bufp->b_bcount,bufp->b_edev,
             bufp->_b_blkno._f,vnode,inode,stringof(sargs));
    }
    Both my strategies have been to target disk block I/O events, rather than kstat or proc structures. By using these probes I can read precise timestamps event by event, rather than reading a sum after the fact. It's fairly easy to dump block addresses and timestamps and then plot them in StarOffice or GNUplot, which can really illustrate the problem.
    Check for future versions of these tools, I've only just started with DTrace. (I'd better sleep now, it's 7am :)
    Brendan Gregg
    [Sydney, Australia]
