How to limit CPU resources per process

Solaris 8 OS
Sun 4800 Server
I have 4 CPU modules. There is a runaway process that consumes 25% of the CPU resources, which is one whole CPU in my situation.
Is there a way for me to limit the CPU consumption of user processes? I.e., a process can't use more than 5-10% of total CPU at a given time.
There is a command, 'limit', but it can only control the amount of CPU TIME. This is not applicable because users sometimes need to run processes that last for days.
There is a way to do this: by buying Solaris Resource Manager. This software costs over $10,000 and comes with a lot of things that I do not need.
There's got to be a way to limit this.
I have done some extensive research and found NO lead.
Any help would be greatly appreciated.

I was thinking about this a bit more. I remember that a while back Sun had a BATCH scheduling class available for download. Unfortunately, I can't find it anymore. Maybe Sun turned it into a for-purchase product. Anyway, the comments about priority are relevant here.
Just because a process is chewing up your CPU doesn't necessarily mean it's a bad thing. If the process is doing work, that shouldn't be considered a problem. If the job is impacting other jobs on the system that are considered more important, then it's a problem. Moving this application into a batch scheduling class would definitely take care of this, OR using nice to lower the process priority (setting its nice value to, say, 19) will also work. This would make it so that other processes with lower nice values (i.e., higher priority) would preempt this application off the CPU. A BATCH scheduling class would do the same thing.
OR MAYBE (I don't know if this is possible, and it's way out in left field) investigate changing the default scheduling class to IA so that all normal processes start up in the IA scheduling class, and put this abusive process in the TS scheduling class. This way, IA scheduling class apps will always preempt anything in the TS scheduling class - sorta like turning your TS scheduling class into a BATCH scheduling class. Of course, using nice is probably much easier.
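
For what it's worth, here is a minimal C sketch of the nice-based suggestion above. The PID is a placeholder for the runaway process, and running renice(1) or priocntl(1) from the shell achieves the same thing without any code:

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/resource.h>

    /* Drop a runaway process to the lowest priority so that interactive
     * work preempts it off the CPU.  Note this only changes relative
     * priority; it does not cap usage when the machine is otherwise idle. */
    int main(void)
    {
        pid_t runaway_pid = 12345;   /* placeholder: PID of the runaway process */

        if (setpriority(PRIO_PROCESS, runaway_pid, 19) != 0) {
            fprintf(stderr, "setpriority failed: %s\n", strerror(errno));
            return 1;
        }
        printf("process %ld moved to nice value 19\n", (long)runaway_pid);
        return 0;
    }

Like nice, this only changes scheduling priority relative to other work; it does not enforce a hard 5-10% cap the way a resource manager would.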

Similar Messages

  • How do you get the per process I/O through Sigar

    I want to know whether there is any way to gather the per-process I/O, i.e. the disk reads/writes performed solely by a particular process, through SIGAR. As far as I know, only system parameters relating to specific processes (like CPU and memory) can be gathered. I would be grateful if anyone could give me any idea about this. I wanted to do this in a system-independent way, so SIGAR seems to be the best choice. Thanks!

    Although iTunes has this field available, it has no method to automatically populate it. There are some third-party utilities available that will scan media files and calculate a BPM value (a quick Google search should locate candidates), and some of these may offer integration with iTunes. BeaTunes (https://www.beatunes.com/en/index.html) seems to be suitable - there's a trial version available for both Windows and Mac.
    It's possible that some specialist retailers may include BPM data in their media downloads, though I've not actually seen examples of this.

  • Limiting cpu for a process

    Hi,
    I see this type of question has been asked before - but there doesn't seem to be an answer.
    I want to limit how much (as a percentage of CPU) a process can use.
    I'm not talking about NICE or RENICE, as these only lower priority in relation to other processes; I want a process to run and never use more than 30% CPU.
    The reason is that I have Rosetta Stone language software, and when waiting for input (via Macromedia) it uses 100% CPU. After initially arguing with the vendor - who tried to blame my setup/machine etc. - they admitted it was a known issue with their software on the Mac.
    I like the software, but 100% CPU kicks the fan on quickly, which annoys me intensely - and is generally "not good".
    I also have the same issue with the Epson scan software - 100% CPU when it's doing nothing.
    I'm genuinely surprised there's nothing to limit a process. (A rough sketch of the duty-cycle approach that throttling tools use appears at the end of this thread.)

    You mean like a RAM check? Passed with flying colors. I also have a new harddrive. Still the same problems. I don't think I can check anything else. Is there RAM onboard the motherboard?
    Do you think a Mac store could test it for overheating problems?
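
    Back to the original question of capping a process at roughly 30% CPU: below is a rough C sketch of the stop/continue duty-cycle technique that third-party throttling tools such as cpulimit use. The PID and the run/stop intervals are placeholders, and pausing a GUI app this way can make it briefly unresponsive, so treat it as an illustration rather than a polished fix.

        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* Crude duty-cycle throttle: let the target process run for ~30% of
         * each 100 ms slice by alternating SIGCONT and SIGSTOP.  Real tools
         * measure actual usage instead of using a fixed duty cycle. */
        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                return 1;
            }
            pid_t pid = (pid_t)atol(argv[1]);

            for (;;) {
                kill(pid, SIGCONT);
                usleep(30000);          /* running for 30 ms */
                kill(pid, SIGSTOP);
                usleep(70000);          /* stopped for 70 ms */
            }
        }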

  • Weblogic spiking to 99% CPU resources while processing inbound JMS messages

    We have an Integration Gateway server that is receiving data into a JMS queue from SAP/TIBCO and a legacy system for processing. We have Weblogic 8.1 SP3 and the server is running ClarifyCRM 12.5 Integration Gateway components including Tuxedo. In the past few weeks, Weblogic has been hanging up multiple times trying to receive this data from these systems. Specifically, the java.exe process used by Weblogic spikes to 99% CPU and the server becomes completely unresponsive. So far the only workaround we have is shutting down Weblogic, moving the incoming messages stored in the JMS file store that are hanging the server to a different location so that they won't be processed by Weblogic, and then restarting Weblogic.
              What we see when this happens is that the hung up messages stay in the "Messages Pending" queue.
              A developer has mentioned that this may be happening because the threads handling the messages are hanging.
              I want to know if there is a tool from BEA to view and edit the JMS File Store on the file system to further analyze the incoming messages.

    I agree about the older version. However this particular version and service pack are the only ones that have been tested in our production and dev environments. So currently I don't have any options but to use it. However the system has been running relatively fine for the past 1-1.5 years.
              Do you happen to know the name of that tool that you mentioned below?
              Thanks in advance.

  • How to get correctly the percent of used CPU per process

    I'm trying to get the percent of CPU used per process on Windows with Qt/C++. First I get a list of running processes, and then for each process I try to get the CPU used. For most processes the result looks valid (they match the Task Manager in Windows), but for the AIDA64 process (which is running a CPU stress test in the background) I get strange values like 312%. What is wrong with my C++ code?
        sigar_t *sigarproclist;
        sigar_proc_list_t proclist;
        sigar_open(&sigarproclist);
        sigar_proc_list_get(sigarproclist, &proclist);
        for (size_t i = 0; i < proclist.number; i++)
        {
            sigar_proc_cpu_t cpu;
            // first call primes sigar's per-process CPU counters
            int status1 = sigar_proc_cpu_get(sigarproclist, proclist.data[i], &cpu);
            if (status1 == SIGAR_OK)
            {
                Sleep(50);
                // second call reports usage over the interval since the first call
                int status2 = sigar_proc_cpu_get(sigarproclist, proclist.data[i], &cpu);
                if (status2 == SIGAR_OK)
                {
                    sigar_proc_state_t procstate;
                    sigar_proc_state_get(sigarproclist, proclist.data[i], &procstate);
                    qDebug() << procstate.name << cpu.percent * 100 << "%";
                }
            }
        }
        sigar_close(sigarproclist);

    You may need to scale (divide) by the number of cores. This is the code SIGAR is using on Windows to get the process CPU:
    SIGAR_DECLARE(int) sigar_proc_time_get(sigar_t *sigar, sigar_pid_t pid,
                                           sigar_proc_time_t *proctime)
    {
        HANDLE proc = open_process(pid);
        FILETIME start_time, exit_time, system_time, user_time;
        int status = ERROR_SUCCESS;

        if (!proc) {
            return GetLastError();
        }
        if (!GetProcessTimes(proc,
                             &start_time, &exit_time,
                             &system_time, &user_time))
        {
            status = GetLastError();
        }
        CloseHandle(proc);
        if (status != ERROR_SUCCESS) {
            return status;
        }
        if (start_time.dwHighDateTime) {
            proctime->start_time =
                sigar_FileTimeToTime(&start_time) / 1000;
        }
        else {
            proctime->start_time = 0;
        }
        proctime->user = FILETIME2MSEC(user_time);
        proctime->sys  = FILETIME2MSEC(system_time);
        proctime->total = proctime->user + proctime->sys;
        return SIGAR_OK;
    }
    The windows api doc indicates that the time here is a sum over all threads and thus will need to be scaled by number of cores.
    We had to do something like this in our use of the Java bindings of the 1.6.4 release of SIGAR.  I'm curious to know if this works for you.
    Best,
    Vishal
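
    In case it helps, here is a minimal sketch of that scaling in C. normalize_cpu_percent is an illustrative helper, not part of SIGAR, and GetSystemInfo is used only to obtain the logical-CPU count on Windows:

        #include <windows.h>
        #include <stdio.h>

        /* Convert the raw per-process fraction returned by sigar_proc_cpu_get()
         * (a sum over all threads) into a percentage of the whole machine. */
        static double normalize_cpu_percent(double raw_fraction)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);   /* dwNumberOfProcessors = logical CPU count */
            return (raw_fraction * 100.0) / (double)si.dwNumberOfProcessors;
        }

        int main(void)
        {
            /* e.g. a raw value of 3.12 (the "312%" above) on a 4-core box
             * comes out as 78% of total capacity */
            printf("%.1f%%\n", normalize_cpu_percent(3.12));
            return 0;
        }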

  • How to increase the per-process file descriptor limit for more than 15 JDBC connections

    If I need more than 15 JDBC connections, the only solution is to increase the per-process file descriptor limit. But how do I increase this limit? Do I modify the Oracle server or the JDBC software?
    I'm using the JDBC thin driver to connect to an Oracle 8.0.6 server.
    From the JDBC FAQ:
    Is there any limit on the number of connections for JDBC?
    No. JDBC drivers don't have any scalability restrictions by themselves.
    It may be restricted by the number of 'processes' (in the init.ora file) on the server. However, nowadays we do get questions where, even when the number of processes is 30, users are not able to open more than 16 active JDBC-OCI connections when the JDK is running in the default (green) thread model. This is because the per-process file descriptor limit is exceeded. It is important to note that, depending on whether you are using OCI or Thin, or green vs. native threads, a JDBC SQL connection can consume anywhere from 1 to 4 file descriptors. The solution is to increase the per-process file descriptor limit.

    Maybe it is an OS issue, but the suggested solution comes from the Oracle documentation. However, it does not provide a clear enough answer; it just states "The solution is to increase the per-process file descriptor limit."
    Now I know the solution, but not how to increase it.....
    Please help.
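
    In case it is useful, here is a minimal C sketch of raising the per-process descriptor limit on a Unix-like system (the same mechanism behind the shell's 'ulimit -n'). In practice you would raise the limit in the shell that launches the JVM, or ask your administrator to raise the system-wide defaults:

        #include <stdio.h>
        #include <sys/resource.h>

        /* Raise this process's open-file (descriptor) soft limit up to the
         * hard limit.  Going above the hard limit requires administrator
         * changes to the system defaults. */
        int main(void)
        {
            struct rlimit rl;

            if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("getrlimit");
                return 1;
            }
            printf("soft=%ld hard=%ld\n", (long)rl.rlim_cur, (long)rl.rlim_max);

            rl.rlim_cur = rl.rlim_max;   /* raise soft limit to the hard limit */
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("setrlimit");
                return 1;
            }
            return 0;
        }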

  • How do I track down which (sub)VI is using all of my CPU resources?

    Hi,
    I'm having some trouble debugging a project...
    Problem:
    Every once in a while, all my VIs in the project seem to freeze up or execute extremely slowly. The machine's CPU load rises to 100% (99% for the LabVIEW.exe process). The normal CPU load with the application running is around 10-15%.
    Windows and LabVIEW are still responsive, albeit very sluggish. I can run the "Performance and memory" tool but I can't find any abnormalities.
    Question:
    Is there a way to find out what's causing the high CPU load? For example by showing the CPU load per VI in memory.
    Any input is appreciated.
    Kind regards,
    Pieter-Jan

    Thank you all for the replies. I was out of the office last week and didn't get a chance to reply until now.
    My first thought was also a loop with no timing in it, like billco suggested. The problem is finding which loop is causing the problem, as six separate loops run continuously and make a wide variety of subVI calls, some static, some dynamic.
    @altenbach: The profiler will only show VIs that have started and terminated execution. This means that if I start the profiler when the application has already frozen, I can see very little happening.
    I could start the profiler before starting any of the VIs, but the problem usually only occurs after hours or days of running. And when it freezes, I can no longer shut it down properly.
    I have attached two logs of the profiler: One during normal operation and one while the application was 'hanging'.
    @crossrulz: I've checked memory usage in the task manager, and it is not excessive.
    The project is a bit large to post here (around 300 VIs).
    Also, the total CPU load goes up to 100%, but when I tested with a single untimed loop I can only get it to go up to 50% because it's a dual core machine and a single loop can only take over one of the processor cores. This makes me question the theory of the untimed loop...
    Any ideas?
    Attachments:
    ProfileData_Hanging.txt ‏446 KB
    ProfileData_Normal.txt ‏443 KB

  • How many levels of CPU resources can be allocated by a simple resource plan? (Q No. 121)

    How many levels of CPU resources can be allocated by a simple resource plan?

    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/dbrm.htm#i1007878
    Up to eight consumer groups can be specified and the only plan directive that can be specified is for CPU.

  • How to find the cpu usage per query for a time period

    Hi All,
    Is there a way to find the cpu used per query for a given time period?
    DB:10.2.0.5
    OS:AIX
    Thanks

    user13364377 wrote:
    if there are multiple queries starting at the same time, then what to do?

    Handle: user13364377
    Status Level: Newbie (10)
    Registered: Jul 5, 2010
    Total Posts: 264
    Total Questions: 113 (84 unresolved)
    why so many unanswered questions?

    Clarify your question:
    same SQL multiple times from different sessions, or
    different SQLs from different sessions?

  • Lightroom Mobile Sync - Extremely High CPU Usage/Sync Process Causes LR To Lag

    Since my other thread doesn't seem to be getting any responses, I'm pasting what I've found here. Please keep in mind I am not a beginner with Lightroom and consider myself very familiar with Lightroom's features excluding the new mobile sync.
    1st message:
    I'm on Lr 5.5 and using the 30 day trial of Adobe CC to try syncing one collection of slightly more than 1000 images. Despite already having generated the Smart Previews, I can see my CPU crunching through image after image (the rolling hills pattern in the task manager) while doing the sync. I was assuming, since I had already created the Smart Previews, that the sync of this collection would begin immediately and be done by simply uploading all of the existing Smart Previews. The Smart Previews folder of the catalog is 871MB and has stayed the same despite the CPU obviously doing *something*. As it is now, the sync progress is incredibly slow, almost at a pace like it's actually exporting full-res JPGs from the RAW images (as a comparison only, I know this should not be what it's actually doing).
    Another side effect of this is that I'm basically unable to use my computer for other tasks due to the high CPU utilization.
    Win 7 x64 / Lightroom 5.5
    Intel i5 2500k OC'd 4.5GHz
    16GB RAM
    SSD for OS, separate SSD for working catalog and files
    2nd message:
    As a follow-up, Lightroom now thinks all 1026 photos are synced (as shown in the "All Sync Photographs" portion of the Catalog), though all images after the 832nd image show the per-image sync icon stuck at "Building Previews for Lightroom Mobile" and the status at the top left corner has been stuck at "Syncing 194 photos" for over 12 hours. Is there no option to force another sync via Lightroom Desktop and also force the iOS app to manually refresh (perhaps by pulling down on the collections view, like refreshing via the Mail app)?
    3rd message:
    One more update, I went into Preferences and deleted all mobile data, which automatically signed me out of Adobe CC and then I signed back in. Please keep in mind the Smart Previews were long generated before even starting the trial, and I also manually generated them again (it ran through quickly since it found they were already generated) many times. Now that I'm re-syncing my collection of 1026 images, I can clearly see Lightroom using the CPU to regenerate the Smart Previews which already exist. I have no idea why it's doing this except that it's making the process of uploading the Smart Previews extremely slow. I hope this time around it will at least sync all 1026 images to the cloud.
    4th message:
    All 1026 images synced just fine and I could run through my culling workflow on the iPad/iPhone perfectly. Now I'm on a new catalog (my current workflow unfortunately uses one catalog per event) and I see the same problem: Smart Previews already generated but when syncing, Lightroom seems to re-generate them again anyway (or take up a lot of CPU simply to upload the existing Smart Previews). Can anyone else chime in on what their CPU utilization is like during the sync process when Smart Previews are already created?
    New information:
    Now I'm editing a catalog of images that is synced to Lightroom Mobile and notice that my workflow has gotten even slower between photos (relative to what it was before, this is not a discussion about how fast/slow LR should perform). Obviously Lightroom is syncing the edited settings to the cloud, but I can see my CPU running intensively (all 4 cores) on every image I edit and the CPU utilization graph looks different than before I started using LR mobile sync. It still feels like every change isn't simply syncing an SQLite database change but re-generating a Smart Preview to go with it (I'm not saying this is definitely what's happening, but something is intensively using the CPU that wasn't prior to using LR Mobile).
    For example: I only update the tint +5 on an image. I see the CPU spike up to around 30-40%, then falls back down, then back up to 100%, then back down to another smaller spike while Lightroom says "Syncing 1 photo".  I've attached a screenshot of my CPU graph when doing this edit on just one image. During this entire time, if I try to move onto edit another image, the program is noticeably slower to respond than it was prior to using LR mobile, due to the fact that there appear to be much more CPU intensive tasks running to sync the previous edit. This is proven by un-syncing the collection and immediately the lag goes away.
    I'd be happy to test/try anything you have in mind, because it's my understanding that re-syncing photos that are edited that are already in the cloud should be simply updating the database file rather than require regenerating any Smart Previews or other image data. If indeed that's what it should be doing, then some other portion of LR is causing massive CPU usage. If this continues, I will probably not choose to proceed with a subscription despite the fact that i think LR mobile adds a lot of value and boosts my workflow significantly if it wasn't causing the program to lag so badly in the process.
    I know this message was incredibly long and probably tedious to read through so thanks in advance to anyone who gets through it
    -Jeff

    Thanks for reporting. Just passed along your info to some of our devs. One of the things that needs to be created (besides Smart Previews) during an initial sync is thumbnails + previews for the LrM app - Guido
    Hi Guido,
    Thanks for pointing this out. I realized the same thing when I tried syncing a collection for offline mode and found out the required space sounded more like Previews + Smart Previews rather than just the Smart Previews.
    greule wrote:
    Hi Jeff, are your images particularly large or do you make a lot of changes which you save to the original file as part of your workflow?
    The CPU usage is almost certainly from us uploading JPEG previews not the Smart Previews - particularly during develop edits as these force new JPEG previews to be sent from Lightroom desktop, but would not force new Smart Previews (unless the develop edits are modifying the original file making us think the Smart Preview is out of date) to be sent.
    Guido
    My images are full-resolution ~22mp Canon 5D Mark III RAW files so they're fairly large. Even if I only make one basic change such as exposure changes, I saw the issue. By "save to the original file" I'm assuming metadata changes such as timestamps, otherwise edits to the images aren't actually written to the original file. I'm only doing develop module edits so I shouldn't be touching the original file at all at this point in my workflow.
    I think it makes sense now that you mention that new JPEG previews need to be generated and sent to the cloud due to updated develop edits. My concern is that this seems to be done in real-time as opposed to how Lightroom Desktop works (which is to render a new Standard Preview or 1:1 Preview on demand, which means only one is being rendered at any given time while viewing it in Loupe View or possibly 2 in Compare View). If I edit, for example, 10 images quickly in a row, once the sync kicks in a few seconds later, editing the 11th image is severely hindered due to the previous 10 images' JPEG previews being rendered and sync'd to the cloud (I'm assuming the upload portion doesn't take much CPU, but the JPEG render will utilize CPU resources to the fullest if it can). Rendering Standard/1:1 Previews locally and being able to walk away while the process finishes works because it is at the start of my workflow, but having to deal with on-the-fly preview rendering while I'm editing greatly impacts my ability to edit. Perhaps there can be a way to limit max CPU utilization for background sync tasks?
    It may help to know that I'm running a dual-monitor setup, with Lightroom on a 27" 2560x1440 display maximized to fit the display (2nd display not running LR's 2nd monitor). Since I'm using a retina iPad, the optimal Standard Previews resolution should be the same at 2880 pixels.
    Thanks again for the help!

  • Kernel parameters -maximum threads per process

    How can we change the kernel parameters, and how can we increase the maximum number of threads allowed?
    How can we increase the maximum number of processes per user id?

    There is no kernel parameter limiting the maximum
    number of threads allowed. If you are talking about
    user level threads, you will run into virtual address
    space limitations at about 3000 for a process
    assuming 32-bit address space and default
    stack size of 1M per thread, and assuming you are
    not using the alternate thread library (see threads(3thr))
    or Solaris 9. If you need more than this many
    threads at the same time, I suspect you are doing something
    incorrectly. Otherwise, try using a smaller stack size
    per thread. If you are running on Solaris 9, or using
    the alternate thread library, both give you a 1x1
    thread model, i.e., each user thread has a corresponding
    kernel entity (lwp). In this case, you will cause
    your machine to hang by eating up all available
    space for lwp's. In either case, the question should be:
    "how do I limit the number of threads per process?", since
    there is currently no limitation other than space.
    In Solaris 9, you can use resource management to
    limit the number of lwp's (and therefore user threads)
    per process.
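
    A minimal C sketch of the smaller-stack suggestion above, using POSIX threads (the 64 KB figure and thread_main are illustrative; pick a stack size that fits your deepest call chain):

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>

        /* Placeholder for the real per-thread work. */
        static void *thread_main(void *arg)
        {
            (void)arg;
            return NULL;
        }

        int main(void)
        {
            pthread_attr_t attr;
            pthread_t tid;
            int rc;

            pthread_attr_init(&attr);
            /* 64 KB per thread instead of the 1 MB default mentioned above */
            pthread_attr_setstacksize(&attr, 64 * 1024);

            rc = pthread_create(&tid, &attr, thread_main, NULL);
            if (rc != 0) {
                fprintf(stderr, "pthread_create: %s\n", strerror(rc));
                pthread_attr_destroy(&attr);
                return 1;
            }
            pthread_join(tid, NULL);
            pthread_attr_destroy(&attr);
            return 0;
        }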

  • X200 Hardware Interrupts constantly at 40% of CPU resources

    Like many other ThinkPad owners, I started to experience excessive "Hardware Interrupts" using between 40-45% of my CPU resources (on both cores) on my X200. Note that Task Manager will _not_ show this; you will need something like Microsoft's "Process Explorer" to see it. This started about a week ago, following a slew of updates I had been rolling in from "System Update" on my Vista x64 Business.
    I searched the Internet high and low and found many similar experiences (not only with Lenovo models, and not only with Vista). Obviously, it is nearly impossible to find out what's causing this. There is a useful tool called "KrView" by Microsoft that will display more info about which driver is causing the interrupts, but unfortunately it does not run under Vista x64. There is also an on-board Vista tool that allows you to check for unsigned drivers, but that didn't seem to be the problem (all drivers were certified ok).
    Others have reported problems with their network drivers, where such interrupts would occur as soon as they turned on WiFi. Some were successful by removing their installation of Daemon Tools (which I have not installed, so that didn't help). Others suggested hard disk driver or CD-ROM driver issues.
    I thus tried to home in on the probable culprit, and found the following puzzling effect: this only occurs when my X200 is in the docking station! After a clean startup, it will be nice and idle for some time, but after some 5 minutes it will start to constantly show the 40-45% of "Hardware Interrupts." Once this happens, it almost never goes back down, even if you undock it. However, if you _start_ the system in an undocked state, the "Hardware Interrupts" do not appear!! I even tried to manually connect all the different pieces that I have attached to my docking station one by one to my undocked laptop (e.g., network cable, external keyboard and mouse, USB hard disk); none of them triggered this. But as soon as you redock the laptop, it will spike up again to its 40-50% "Hardware Interrupts".
    I _assume_ that it might be the display driver that causes this, as this is the only piece of hardware that I couldn't "test" on my standalone laptop: I use a DisplayPort cable to connect my 24" monitor, and the X200 does not have a connector for this... I tried installing the latest Intel graphics drivers but they won't install and tell me that I must use the OEM Lenovo versions. I also could not "roll back" the driver (option greyed out) and the old driver version doesn't seem to be available in the Lenovo post...
    Alternatively, it might be some sort of "docking station driver" that's causing this, although I'm not sure what that driver would be or how to update it :-(
    Any hints? (sorry, ultra-long post)

    And this is the kernrate log after I uninstalled the sata controller from control panel. No CPU load.
    I restarted the machine after removing the sata controller and it installed the controller again, it has prompted me to reboot again to finish the installation but I haven't done that yet. Until now, no high load has happened.
    C:\Documents and Settings\ctrlER>"C:\Program Files\KrView\Kernrates\Kernrate_i386_XP.exe"
    /==============================\
    < KERNRATE LOG >
    \==============================/
    Date: 2009/02/19 Time: 13:35:13
    Machine Name: TUTTLE
    Number of Processors: 2
    PROCESSOR_ARCHITECTURE: x86
    PROCESSOR_LEVEL: 6
    PROCESSOR_REVISION: 0f0a
    Physical Memory: 2535 MB
    Pagefile Total: 4422 MB
    Virtual Total: 2047 MB
    PageFile1: \??\C:\pagefile.sys, 2046MB
    OS Version: 5.1 Build 2600 Service-Pack: 3.0
    WinDir: C:\WINDOWS
    Kernrate User-Specified Command Line:
    C:\Program Files\KrView\Kernrates\Kernrate_i386_XP.exe
    Kernel Profile (PID = 0): Source= Time,
    Using Kernrate Default Rate of 25000 events/hit
    Starting to collect profile data
    ***> Press ctrl-c to finish collecting profile data
    ===> Finished Collecting Data, Starting to Process Results
    ------------Overall Summary:--------------
    P0 K 0:00:00.125 ( 1.2%) U 0:00:00.031 ( 0.3%) I 0:00:10.312 (98.5%) DPC 0:00:00.031 ( 0.3%) Interrupt 0:00:00.000 ( 0.0%)
    Interrupts= 1673, Interrupt Rate= 160/sec.
    P1 K 0:00:00.250 ( 2.4%) U 0:00:00.000 ( 0.0%) I 0:00:10.218 (97.6%) DPC 0:00:00.015 ( 0.1%) Interrupt 0:00:00.031 ( 0.3%)
    Interrupts= 1673, Interrupt Rate= 160/sec.
    TOTAL K 0:00:00.375 ( 1.8%) U 0:00:00.031 ( 0.1%) I 0:00:20.531 (98.1%) DPC 0:00:00.046 ( 0.2%) Interrupt 0:00:00.031 ( 0.1%)
    Total Interrupts= 3346, Total Interrupt Rate= 320/sec.
    Total Profile Time = 10468 msec
    BytesStart BytesStop BytesDiff.
    Available Physical Memory , 2158850048, 2162970624, 4120576
    Available Pagefile(s) , 4226367488, 4225921024, -446464
    Available Virtual , 2132660224, 2131611648, -1048576
    Available Extended Virtual , 0, 0, 0
    Total Avg. Rate
    Context Switches , 9745, 931/sec.
    System Calls , 29229, 2792/sec.
    Page Faults , 1707, 163/sec.
    I/O Read Operations , 72, 7/sec.
    I/O Write Operations , 37, 4/sec.
    I/O Other Operations , 1000, 96/sec.
    I/O Read Bytes , 24076, 334/ I/O
    I/O Write Bytes , 1380, 37/ I/O
    I/O Other Bytes , 1382780, 1383/ I/O
    Results for Kernel Mode:
    OutputResults: KernelModuleCount = 129
    Percentage in the following table is based on the Total Hits for the Kernel
    Time 493 hits, 25000 events per hit --------
    Module Hits msec %Total Events/Sec
    ntkrnlpa 395 10468 80 % 943351
    hal 37 10468 7 % 88364
    KSecDD 33 10468 6 % 78811
    NETw5x32 12 10468 2 % 28658
    win32k 7 10468 1 % 16717
    Tppwrif 2 10468 0 % 4776
    spiv 2 10468 0 % 4776
    avipbb 1 10468 0 % 2388
    TPHKDRV 1 10468 0 % 2388
    igxpmp32 1 10468 0 % 2388
    NDIS 1 10468 0 % 2388
    Ntfs 1 10468 0 % 2388
    ================================= END OF RUN ==================================
    ============================== NORMAL END OF RUN ==============================
    C:\Documents and Settings\ctrlER>
    C:\Program Files\KrView\Kernrates\Kernrate_i386_XP.exe -z ntkrnlpa
    Kernel Profile (PID = 0): Source= Time,
    Using Kernrate Default Rate of 25000 events/hit
    CallBack: Finished Attempt to Load symbols for 804d7000 \WINDOWS\system32\ntkrnlpa.exe
    Starting to collect profile data
    ***> Press ctrl-c to finish collecting profile data
    ===> Finished Collecting Data, Starting to Process Results
    ------------Overall Summary:--------------
    P0 K 0:00:00.156 ( 1.2%) U 0:00:00.078 ( 0.6%) I 0:00:12.296 (98.1%) DPC 0:00:00.031 ( 0.2%) Interrupt 0:00:00.015 ( 0.1%)
    Interrupts= 1736, Interrupt Rate= 139/sec.
    P1 K 0:00:00.171 ( 1.4%) U 0:00:00.031 ( 0.2%) I 0:00:12.328 (98.4%) DPC 0:00:00.015 ( 0.1%) Interrupt 0:00:00.015 ( 0.1%)
    Interrupts= 1735, Interrupt Rate= 138/sec.
    TOTAL K 0:00:00.328 ( 1.3%) U 0:00:00.109 ( 0.4%) I 0:00:24.625 (98.3%) DPC 0:00:00.046 ( 0.2%) Interrupt 0:00:00.031 ( 0.1%)
    Total Interrupts= 3471, Total Interrupt Rate= 277/sec.
    Total Profile Time = 12531 msec
    BytesStart BytesStop BytesDiff.
    Available Physical Memory , 2166996992, 2164838400, -2158592
    Available Pagefile(s) , 4227276800, 4225929216, -1347584
    Available Virtual , 2132660224, 2130460672, -2199552
    Available Extended Virtual , 0, 0, 0
    Total Avg. Rate
    Context Switches , 10583, 845/sec.
    System Calls , 30623, 2444/sec.
    Page Faults , 1704, 136/sec.
    I/O Read Operations , 45, 4/sec.
    I/O Write Operations , 26, 2/sec.
    I/O Other Operations , 725, 58/sec.
    I/O Read Bytes , 17252, 383/ I/O
    I/O Write Bytes , 280, 11/ I/O
    I/O Other Bytes , 1377388, 1900/ I/O
    Results for Kernel Mode:
    OutputResults: KernelModuleCount = 129
    Percentage in the following table is based on the Total Hits for the Kernel
    Time 502 hits, 25000 events per hit --------
    Module Hits msec %Total Events/Sec
    ntkrnlpa 397 12531 79 % 792035
    KSecDD 39 12531 7 % 77807
    hal 39 12531 7 % 77807
    NETw5x32 17 12531 3 % 33915
    win32k 4 12531 0 % 7980
    igxpmp32 2 12531 0 % 3990
    avgntflt 1 12531 0 % 1995
    mrxsmb 1 12531 0 % 1995
    NDIS 1 12531 0 % 1995
    ACPI 1 12531 0 % 1995
    ===> Processing Zoomed Module ntkrnlpa.exe...
    ----- Zoomed module ntkrnlpa.exe (Bucket size = 16 bytes, Rounding Down) -------
    Percentage in the following table is based on the Total Hits for this Zoom Module
    Time 397 hits, 25000 events per hit --------
    Module Hits msec %Total Events/Sec
    NtBuildNumber 322 12531 81 % 642406
    PoShutdownBugCheck 20 12531 5 % 39901
    ZwYieldExecution 11 12531 2 % 21945
    Kei386EoiHelper 5 12531 1 % 9975
    RtlIpv6StringToAddressA 5 12531 1 % 9975
    ProbeForRead 4 12531 1 % 7980
    KeTickCount 4 12531 1 % 7980
    KeSynchronizeExecution 4 12531 1 % 7980
    wctomb 4 12531 1 % 7980
    LsaDeregisterLogonProcess 3 12531 0 % 5985
    IoWMISetNotificationCallback 1 12531 0 % 1995
    SeTokenIsWriteRestricted 1 12531 0 % 1995
    PsSetProcessPriorityByClass 1 12531 0 % 1995
    PsEstablishWin32Callouts 1 12531 0 % 1995
    PoQueueShutdownWorkItem 1 12531 0 % 1995
    ObQueryNameString 1 12531 0 % 1995
    ExRaiseStatus 1 12531 0 % 1995
    mbstowcs 1 12531 0 % 1995
    PoSetPowerState 1 12531 0 % 1995
    PoStartNextPowerIrp 1 12531 0 % 1995
    MmTrimAllSystemPagableMemory 1 12531 0 % 1995
    MmIsThisAnNtAsSystem 1 12531 0 % 1995
    MmProtectMdlSystemAddress 1 12531 0 % 1995
    MmIsDriverVerifying 1 12531 0 % 1995
    KeRundownQueue 1 12531 0 % 1995
    ================================= END OF RUN ==================================
    ============================== NORMAL END OF RUN ==============================
    C:\Documents and Settings\ctrlER>
    Message Edited by ctrler on 02-19-2009 05:43 AM

  • SQL Developer occasionally hogs all CPU resources

    Client OS: Windows XP Professional
    Connecting to : Oracle 9.2.0.3 on Sun Solaris server
    I encountered a few instances where SQL Developer hogs all CPU resources on my Windows XP notebook.
    I usually have to kill the SQL Developer process.
    I have noticed this happening occasionally when I :
    1. hit Enter at the SQL Worksheet window
    2. hit F9 at the SQL Worksheet window - the query does not start
    How do I collect a dump or some kind of debug information when the above happens ?
    Who can I send the information to ?

    How do I collect a dump or some kind of debug information when the above happens?
    Start SQL Developer using sqldeveloper.exe in <sqldeveloper>\jdev\bin. When SQL Developer starts hogging the CPU, go to the console window and type Ctrl-Break. You should get a thread dump.
    Who can I send the information to?
    If you have a support contract, you can raise a TAR; otherwise post the thread dump here.

  • How to monitor CPU usage and performance on a Hyper-V server with several VM's

    I have a server that is running Windows 2008 64 bit Hyper-V, with 8 GB of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system.
    I have set up three Virtual Machines, all running Windows 2008 32 bit.
    Build server, running Team City
    Staging server
    SQL Server, running SQL Server 2005
    The trouble with the setup is that the host remains responsive at all times, even though the VMs are seemingly working at 100% CPU and are very sluggish and unresponsive. (I have asked a separate question about that.)
    So the question here is: what is the best way to monitor how the physical CPUs are actually utilized? The reason I am asking is that I am told I cannot reliably use Task Manager to monitor CPU usage in a VM.

    First, you have to remember that in Hyper-V the "host" is called the parent partition, and it really is just like a virtualized guest with special permissions and roles. Just like any other child/guest, when you open up Task Manager you cannot see the CPU usage of the other children on the server.
    Ben Armstrong has a good explanation of this here: http://blogs.msdn.com/virtual_pc_guy/archive/2008/02/28/hyper-v-virtual-machine-cpu-usage-and-task-manager.aspx
    To summarize his post, you need to check three things to get an accurate picture of CPU utilization:
    View the CPU usage on each guest - this is available through Hyper-V Manager or Performance Monitor.
    CPU usage due to context switching - this is the perfmon counter called % Hypervisor Run Time under Hyper-V Hypervisor Virtual Processor.
    Child partition worker process - vmwp.exe running on the parent partition (1 per child). This handles Hyper-V operations like saving state.
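
    As a small sketch of reading such a counter programmatically, here is the Windows PDH API in C. The counter path follows the "% Hypervisor Run Time" counter named above; the "_Total" instance is a placeholder, so check the actual instance names in Performance Monitor, and link against pdh.lib:

        #include <windows.h>
        #include <pdh.h>
        #include <stdio.h>

        /* Sample a perfmon counter twice (rate counters need two samples)
         * and print the formatted value.  Counter path and instance are
         * illustrative only. */
        int main(void)
        {
            PDH_HQUERY query;
            PDH_HCOUNTER counter;
            PDH_FMT_COUNTERVALUE value;

            if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS) {
                fprintf(stderr, "PdhOpenQuery failed\n");
                return 1;
            }
            PdhAddCounter(query,
                          TEXT("\\Hyper-V Hypervisor Virtual Processor(_Total)\\% Hypervisor Run Time"),
                          0, &counter);

            PdhCollectQueryData(query);   /* first sample */
            Sleep(1000);
            PdhCollectQueryData(query);   /* second sample, one second later */

            if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value) == ERROR_SUCCESS) {
                printf("%% Hypervisor Run Time: %.1f\n", value.doubleValue);
            }
            PdhCloseQuery(query);
            return 0;
        }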

  • How to limit CPU usage on webcam stream handling?

    Hi, I am working on a robotics project where I use Java and JMF for image processing which is passed through JNI to OpenCV. My project pages are at: http://robot.lonningdal.net
    I am having problems with performance - the VIA CN13000 board is not exactly a racer, but a good and cheap alternative for robotics projects. The two main CPU consumers are speech recognition and webcamera stream handling.
    After some profiling I see that 50% of the CPU is busy just decoding the stream, where I discard almost all of the pictures because my robot really only needs 1 frame per second to operate. I have looked at the JMF samples and found that you can adjust the framerate, but this doesn't work on my Logitech Pro 4000. If I could only limit the number of pictures the webcamera sent down the stream, I would free quite a lot of CPU resources.
    So my questions are: can anyone recommend a webcamera that does allow me to set this framerate through JMF to limit the actual stream from the camera (important not to confuse this with adjusting the visual framerate, which just discards pictures)? Or is there an alternative API I can use to grab single images, if webcameras support this feature? It's important that this doesn't have a lot of overhead processing per picture, but that I can e.g. do a getImage() call to the API and get the image immediately.
    Any help would be greatly appreciated. Thank you.

    Hi John,
    I saw your update to my original thread in
    http://forum.java.sun.com/thread.jspa?threadID=570463&start=0&tstart=0
    I'm always glad to hear people using the code :-)
    You can set the framerate at which JMF captures the video stream in the VideoFormat object.
    I've had some success with this approach before.
    So, assuming you still kept the setFormat() method from my code, here's a simple hard coded modification, where you set the framerate in the code.
    P.S. Just curious, whereabouts in the world are you?
    regards,
    Owen
    public void setFormat ( VideoFormat selectedFormat )
    {
        if ( formatControl != null )
        {
            player.stop();
            currentFormat = selectedFormat;
            // ... remainder of the original method ...
        }
    }

    replace with

    public void setFormat ( VideoFormat selectedFormat )
    {
        float frameRate = 2.0f;   // 2 frames per second, alter as you wish
        if ( formatControl != null )
        {
            player.stop();
            // rebuild the selected format with an explicit frame rate
            VideoFormat selectedFormatPlusFrameRate = new VideoFormat(selectedFormat.getEncoding(),
                                                                      selectedFormat.getSize(),
                                                                      selectedFormat.getMaxDataLength(),
                                                                      selectedFormat.getDataType(),
                                                                      frameRate);
            currentFormat = selectedFormatPlusFrameRate;
            // ... remainder of the original method ...
        }
    }

    Edited posted code: I had commented out player.stop(), but you really do need that.
