Limiting CPU for a process

Hi,
I see this type of question has been asked before - but there doesn't seem to be an answer.
I want to limit how much (as a percentage of CPU) a process can use.
I'm not talking about nice or renice, as those only lower priority relative to other processes; I want a process to run and never use more than 30% CPU.
The reason is I have Rosetta Stone language software, and when waiting for input (via Macromedia) it uses 100% CPU. After initially arguing with the vendor, who tried to blame my setup/machine etc., they admitted it was a known issue with their software on the Mac.
I like the software, but 100% CPU kicks the fan on quickly, which annoys me intensely, and is generally "not good".
I also have the same issue with Epson scan software - 100% CPU when it's doing nothing.
I'm genuinely surprised there's nothing to limit a process.

You mean like a RAM check? Passed with flying colors. I also have a new hard drive. Still the same problems. I don't think I can check anything else. Is there RAM onboard the motherboard?
Do you think a Mac store could test whether it has overheating problems?
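For what it's worth, a user-space workaround does exist: the third-party cpulimit tool caps a process by alternating SIGSTOP/SIGCONT so it only runs for a fraction of each period. A minimal Python sketch of the same idea (the PID, the 30% limit, and the 100 ms period are illustrative, not a definitive implementation):

```python
import os
import signal
import time

def slice_period(limit: float, period: float) -> tuple[float, float]:
    """Split one scheduling period into a run slice and a stop slice."""
    run = limit * period
    return run, period - run

def throttle(pid: int, limit: float = 0.30, period: float = 0.1) -> None:
    """Cap a process's average CPU use by duty-cycling it.

    The target runs for `limit` of each period and sits stopped for the
    rest, so on average it cannot exceed `limit` of one CPU.
    """
    run, pause = slice_period(limit, period)
    while True:
        os.kill(pid, signal.SIGCONT)   # let it run...
        time.sleep(run)
        os.kill(pid, signal.SIGSTOP)   # ...then freeze it again
        time.sleep(pause)
```

A 30% cap with a 100 ms period means 30 ms running followed by 70 ms stopped; shorter periods make the throttling smoother at the cost of more signal traffic.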

Similar Messages

  • How to limit CPU resources per process

    Solaris 8 OS
    Sun 4800 Server
    I have 4 CPU modules. There is a runaway process that consumes 25% of the CPU resources, which is one whole CPU in my situation.
    Is there a way for me to limit the CPU consumption of user processes? I.e., a process can't use more than 5-10% of total CPU at a given time.
    There is a command, 'limit', but it can only control the amount of CPU time. That is not applicable here, because users sometimes need to run processes that last for days.
    There is a way to do this: buying Solaris Resource Manager. That software costs over $10,000 and comes with a lot of things that I do not need.
    There's got to be a way to limit this.
    I have done extensive research and found no lead.
    Any help would be greatly appreciated.

    I was thinking about this a bit more. I remember a while back Sun had a BATCH scheduling class available for download. Unfortunately, I can't find it anymore. Maybe Sun turned it into a for-purchase product. Anyway, the comments about priority are relevant here.
    Just because a process is chewing up your CPU doesn't necessarily mean it's a bad thing. If the process is doing work, that shouldn't be considered a problem. If the job is impacting other jobs on the system that are considered more important, then it's a problem. Moving this application into a batch scheduling class would definitely take care of it; alternatively, using nice to set the process's nice value to, say, 19 will also work. That would let processes with lower nice values (higher priority) preempt this application off the CPU. A BATCH scheduling class would do the same thing.
    Or maybe, and I don't know if this is possible and it's way out in left field, investigate changing the default scheduling class to IA, so that all normal processes start up in the IA scheduling class, and put this abusive process in the TS scheduling class. That way, IA-class apps will always preempt anything in the TS scheduling class, sort of like turning your TS scheduling class into a BATCH class. Of course, using nice is probably much easier.
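    As a concrete illustration of the nice route (remember this only changes relative priority, it is not a hard percentage cap), a process can make itself maximally nice from Python; the target value of 19 follows the suggestion above:

    ```python
    import os

    # nice(0) adds nothing and simply reports the current nice value.
    before = os.nice(0)

    # Raise the nice value to 19 (lowest priority). Unprivileged processes
    # may always make themselves nicer, never less nice, so this needs no
    # special rights; the kernel clamps the result at 19.
    after = os.nice(19 - before)
    print("nice:", before, "->", after)
    ```

    With this in place, any ordinary-priority process will preempt the niced one whenever it wants the CPU, but the niced process still gets the whole machine when nothing else is runnable.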

  • How could i get the kernel and user cpu usage for each process

    Hi all,
    In order to monitor system CPU usage, I would like to write a script that gathers the kernel and user CPU usage for each process, like prstat or top does. Because they always miss short-lived kernel activity, prstat and top can't report precise CPU usage. I checked the dtrace syscall, proc and fbt providers, but can't work out which one is useful.
    Please provide your comments and suggestions.
    Thanks in advance

    mail2sleepy wrote:
    As I've studied dtrace for a while, and Sun seems to give this new feature a pretty high score, I want to know whether there's a probe that can do this, like writing a dtrace version of prstat.
    You can write a prstat without dtrace, because that's just polling at specific intervals and reading process structures from /proc. You could have dtrace fire a probe every 5 seconds and read the same thing, but that wouldn't really be using any features of dtrace. Trying to write it "in dtrace" doesn't make much sense.
    What you could do, and what would be harder via other methods, is fire a probe at process exit that displays the process information, including total CPU time. That would show exactly when processes exited. Doing that without dtrace would be very difficult.
    Darren
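    The polling approach Darren describes is easy to sketch. On Linux (Solaris's /proc is a different, binary interface, so treat this purely as an illustration of the idea), per-process user and kernel CPU time sit in fields 14 and 15 of /proc/<pid>/stat:

    ```python
    def cpu_ticks(stat_line: str) -> tuple[int, int]:
        """Return (utime, stime) in clock ticks from a /proc/<pid>/stat line.

        The comm field (field 2) is wrapped in parentheses and may contain
        spaces, so split on the last ')' before counting the remaining
        whitespace-separated fields.
        """
        rest = stat_line.rsplit(")", 1)[1].split()
        # rest[0] is field 3 (state); utime/stime are fields 14/15 overall,
        # which lands them at indices 11 and 12 after the split.
        return int(rest[11]), int(rest[12])

    # A poller would read this for every PID at a fixed interval, divide the
    # tick deltas by os.sysconf("SC_CLK_TCK"), and report per-process rates.
    # Anything that starts and exits between two polls is invisible -- the
    # exact short-lived blind spot described above.
    ```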

  • Doc2text process using 100% cpu for way too long

    I've run into this issue before, and have never received a satisfactory resolution.
    When we publish certain PDF documents via collab to the KD, the doc2text process on our portal servers (also our API servers) spin up to 100% CPU for about 90 seconds. This causes the publish from collab to fail, so we resort to directly uploading the file to the KD via content upload.
    1) Why is doc2text running on our portal servers in the first place? Is this a result of the portal server or the API service? If I move the API service off to another machine, will that remove this CPU hog from our front-end servers?
    2) Has anyone seen this before, or found a way around it? Is there a newer version of doc2text in 10gR3 that isn't so terrible?
    I'm preparing to move to 1-cpu virtuals in our new environment so any process that takes up 100% of the CPU is going to kill performance.

    Doc2text is native search client functionality used by portal, automation, ws, collab, publisher, etc. The portal components use it when indexing content to the search server. It is installed separately with each portal component. If you have multiple portal components installed on the same box, the ones that use doc2text will share it from the common folder where it is located. Doc2text uses different libraries for raw text extraction and processing, the processed information being sent in a proprietary format to the search server.
    Doc2text is a processor-heavy component, as it needs to execute the intensive tasks necessary to parse and extract raw data from file formats such as Microsoft Office and Adobe Acrobat. Some files are more complex to extract data from and take longer to complete.
    The best way to increase performance would be to install the various portal components onto their own machines, for example installing portal and automation on separate servers. That way, when you run a crawler job that imports files, running doc2text will not use CPU cycles on the portal web server.

  • High CPU usage for SERVER0 process in XI

    Hi,
    The SERVER0 process is showing high CPU usage. Previously this happened once due to a communication channel error, so we deactivated some unwanted communication channels in the alert config and CPU usage went down.
    After some weeks, CPU usage for the SERVER0 process is at times increasing again.
    Can anyone help with this case?
    Thank you,
    gopi.

    Hi Gopi,
    Check out Tim's reply and check your Java stack settings,
    Extended Memory parameters
    <i>The bottom line for J stacks is to keep everything in memory and out of swap. </i>
    Other tips:
    - you may try clearing the folder usr\sap\<SID>\DVEBMGS00\j2ee\cluster\server0\apps
    - try increasing virtual memory
    - increase heap size
    Regards,
    Prateek

  • Is a PowerBook G4 still good for what I need it for? 17" 1.67 GHz CPU, 2 GB RAM. Needed for word processing, Dreamweaver, Illustrator, Sculptris, other programming software, Internet browsing, and light video editing.

    Powerbook G4 1.67 ghz, 2Gb ram, OS X 10.5 Leopard. Is it good enough for word processing, Dreamweaver, flash, sculptris, some programming software, Internet browsing, and some basic video editing? It also includes iLife and iWork, what is actually included in these packages?

    This is the fastest of the G4 PowerBooks, which was a good performer when it was new. Graphics-intensive programs will not perform well by today's standards. In addition, up-to-date software programs need an Intel processor and more RAM. iLife includes iPhoto, iMovie, and GarageBand. iWork includes Pages, Numbers, and Keynote.
    http://www.apple.com/hk/en/iwork/
    http://www.apple.com/lae/ilife/

  • Which Mac Mini for word processing?

    Hello, my partner is looking to buy a computer to use exclusively for word processing.  Since she literally only wants to use a word processor – probably either MS Word or Nisus Writer – and nothing else (not even the internet), I suggested she might get a secondhand Mac Mini.
    I was wondering if anyone had any particular advice on what to look out for – otherwise it would just be a case of seeing what the cheapest device on offer is at a local second-hand computer shop.
    These would be the requirements:
    1.  It must be able to run at least Snow Leopard, in order to run a modern word processor.
    2.  It must be reliable, i.e. not liable to break/malfunction easily.  (My partner is writing a novel, so she doesn't want to lose it halfway through!)
    3.  It must be able to start up quickly.
    I'm assuming these are modest requirements, so I suppose I'm really asking whether there are any particularly dodgy models I should look out for, etc.
    Any thoughts welcome!
    Jon

    Adding an aftermarket SSD won't help keep the cost down, though.  To some extent, startup time often doesn't matter much, as many people never shut their Macs down.  My MBP has not been turned off for more than a few days in 4 years: I'll sleep it when away from it, and the hard drive and screen are timed out to sleep, but I don't bother shutting it down and rebooting often.  And the difference in boot time for an SSD versus an HDD is likely to be under one minute on any machine made in the last 3-5 years, so is it worth the cost, especially if you only rarely restart the machine?  To my mind the answer is no, but everybody is different.
    If I/O is an actual bottleneck or a high-activity issue on a machine, that is when an SSD becomes worth the extra cost, to my mind.  For word processing, I/O will not be a limitation at all, even with an older 5400 rpm conventional HDD, so using an SSD is largely paying for something without any real benefit.  Even in terms of reliability, SSDs are hardly error-free or immune to failure; in real-world use and in MTBF-type ratings, a good-quality conventional HDD is just as reliable as any decent SSD (and sometimes more so for certain types of failures under heavy read/write conditions).
    Buying used is fine, if you know the place you are buying from and can trust them to only offer decent items.  As long as you backup regularly so you can recover, a used item may suit your budget and needs just fine.
    In the tight-budget scenario, though, it is hard to beat some of the available Windows PC consumer machines, and many of those are pretty good quality.  MS Word is MS Word by and large, whether on OS X or Windows.  For the PC route, keep in mind the cost of decent AV (although many of the free tools are very good, and more than enough if you are educated and savvy about what to do and not do with email, web sites and links).

  • 3750 stack - high CPU for "hulc running con" every minute or two

    I have a stack of 4 x 3750 :
    Switch Ports Model              SW Version            SW Image
    *    1 28    WS-C3750G-24PS     12.2(53)SE            C3750-IPSERVICES-M
         2 12    WS-C3750G-12S      12.2(53)SE            C3750-IPSERVICES-M
         3 12    WS-C3750G-12S      12.2(53)SE            C3750-IPSERVICES-M
         4 28    WS-C3750G-24PS     12.2(53)SE            C3750-IPSERVICES-M
    And every minute or two (approximately), I see high CPU for "hulc running con" without doing anything.
    coeur#sh proc cpu sort
    CPU utilization for five seconds: 86%/28%; one minute: 49%; five minutes: 49%
     PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process 
     312   670849714   3362057     199535 44.22%  5.00%  4.96%   0 hulc running con 
      82   234351581 353049778        663  3.76%  5.99%  5.95%   0 HLFM address lea 
     277       11138      8537       1304  1.05%  1.70%  1.12%   1 Virtual Exec     
     142    10589757 353659360         29  0.90%  0.33%  0.22%   0 Hulc LED Process 
    coeur#sh proc cpu sort
    CPU utilization for five seconds: 95%/27%; one minute: 52%; five minutes: 49%
     PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process 
     312   670832227   3361981     199545 55.59%  8.95%  5.76%   0 hulc running con 
      82   234334259 353043921        663  3.99%  5.78%  6.02%   0 HLFM address lea 
     277        6286      7531        834  0.63%  1.06%  0.61%   1 Virtual Exec     
     106    15342490  19948859        769  0.47%  0.20%  0.15%   0 hpm counter proc 
     119     2034091  13865800        146  0.31%  0.04%  0.03%   0 LLDP Protocol    
     142    10588906 353653113         29  0.31%  0.21%  0.16%   0 Hulc LED Process 
    How can I find out what is doing a "show run"? Which debug can I use?
    Thanks

    We solved our problem: it was caused by routing in and out of the same L3 interface.
    Once we fixed that, our CPU dropped dramatically.
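    Incidentally, when digging through captured `show proc cpu sort` output like the samples above, a small script can pull out the busiest process automatically. A sketch, assuming the 12.2-style column layout shown in this thread:

    ```python
    import re

    # One data row: PID, Runtime(ms), Invoked, uSecs, 5Sec%, 1Min%, 5Min%,
    # TTY, then the process name (which may contain spaces).
    ROW = re.compile(
        r"^\s*(\d+)\s+\d+\s+\d+\s+\d+\s+"     # PID, Runtime, Invoked, uSecs
        r"([\d.]+)%\s+[\d.]+%\s+[\d.]+%\s+"   # 5Sec (captured), 1Min, 5Min
        r"\S+\s+(.*\S)"                       # TTY, process name
    )

    def top_process(show_proc_cpu: str) -> tuple[str, float]:
        """Return (process name, 5-second CPU%) for the busiest row."""
        best = ("", -1.0)
        for line in show_proc_cpu.splitlines():
            m = ROW.match(line)
            if m and float(m.group(2)) > best[1]:
                best = (m.group(3), float(m.group(2)))
        return best
    ```

    Run against the first sample in this thread, it picks out "hulc running con" at 44.22% as the top talker, which matches what the poster saw by eye.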

  • FBU Internal Job Queue Full for Synchronous Processing of Print Requests

    Hello All,
    We are getting a lot of errors in system log (SM21) with the
    Error : FBU Internal Job Queue Full for Synchronous Processing of Print Requests
    User :SAPSYS
    MNO:FBU
    =============================================================
    Documentation for system log message FB U :
    If a spool server is to process output requests in the order they were
    created using multiple work processes, the processing of requests for a
    particular device can only run on one work process. To do this an
    internal queue (limited size) is used.
    If too many requests are created for this device too quickly, the work
    process may get overloaded. This is recognized when, as in this case,
    the internal queue is exhausted.
    This can only be solved by reducing the print load or increasing
    processor performance or, if there is a connection problem to the host
    spooler, by improving the transfer of data to the host spooler.
    Increasing the number of spool work processes will not help, as
    requests for one device can only be processed by one work process. If
    processing in order of creation is not required, sequential request
    processing can be deactivated (second page of device configuration in
    Transaction SPAD). This allows several work processes to process
    requests from the same device thus alleviating the bottleneck.
    Enlarging the internal queue will only help if the overload is
    temporary. If the overload is constant, a larger queue will eventually
    also be overloaded.
    ===========================================================
    Can you please tell me how to proceed.
    Best Regards,
    Pratyusha

    Solution is here:
    412065 - Incorrect output sequence of output requests
    Reason and Prerequisites
    The following messages appear in the developer trace (dev_trc) of the SPO processes or in the syslog:
    S  *** ERROR => overflow of internal job queue [rspowunx.c   788]
    Syslog Message FBU:
    Internal job queue for synchronous request processing of output requests full
    The "request processing sequence compliance" on a spool server with several SPO processes only works as long as the server-internal job queue (see Note 118057) does not overflow. The size of this request queue is set with the rspo/global_shm/job_list profile parameter; the default value is 50 requests. However, if more output requests arrive at the spool server than can be processed (so that the internal request queue fills up), additional SPO processes are used to process the requests in parallel, and the output sequence of the requests is no longer guaranteed.
    Solution
    Increase the rspo/global_shm/job_list profile parameter to a much larger value. Unfortunately, the value actually required cannot be found by "trial and error", because this queue contains all the incoming output requests on a spool server, not just the "sequence-compliant" requests. A practical lower limit for this value is the maximum number of sequence-compliant output requests for the output device in question. If, for example, 1000 documents that should be output in sequence are sent from an application program to an output device, the queue must be able to hold 1000 entries so that it does not overflow even if the SPO processes work through the requests at their slowest.
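    The sizing argument in the note is easy to see in a toy model: with the default queue of 50 slots and arrivals outpacing service, the queue fills and every further request overflows. The arrival and service rates below are assumed numbers, purely for illustration:

    ```python
    from collections import deque

    def simulate_queue(arrivals_per_tick: int, service_per_tick: int,
                       capacity: int, ticks: int) -> int:
        """Toy model of the spool server's internal job queue.

        Returns how many requests overflowed; each overflow corresponds to
        the 'internal job queue full' syslog message and to the point where
        the output sequence is no longer guaranteed.
        """
        q = deque()
        dropped = 0
        for _ in range(ticks):
            for _ in range(arrivals_per_tick):
                if len(q) < capacity:
                    q.append(object())
                else:
                    dropped += 1          # queue full -> request overflows
            for _ in range(min(service_per_tick, len(q))):
                q.popleft()
        return dropped
    ```

    With 10 arrivals per tick against a service rate of 5, a 50-slot queue starts overflowing after nine ticks; a larger queue only helps if the overload ends before it fills, exactly as the note says about temporary versus constant overload.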

  • After installing 10.5.6, finder and quicklook server hog cpu for 10 minutes

    I installed 10.5.6. Now upon every restart the Finder and Quick Look Server use 60-80% of my cpu for 5-10 minutes. This means that I can't really do anything until these processes are done because they cause constant delays. This did not happen in the previous version of Leopard, and if it's intentional on the part of Apple, it's a mistake. Very very annoying. Any ideas how to get this to stop?

    stuartwgibson,
    How did you install 10.5.6?
    If you used "Software Update," you might want to try installing the Mac OS X 10.5.6 Combo Update.
    ;~)

  • How to handle CONCURRENT MANAGER PROCESSES that occupy too much CPU

    Product: AOL
    Date written: 2004-11-30
    How to handle CONCURRENT MANAGER PROCESSES that occupy too much CPU
    ================================================
    PURPOSE
    When watching the OS with a monitoring tool such as top/topas, you may sometimes see FNDLIBR processes taking an excessive share of the CPU. This can be partly relieved by adjusting the Concurrent Manager setup, which this note describes.
    Explanation
    First, check the number of processes and the sleep time.
    To reduce the CPU usage per process, reduce the number of processes and increase the sleep time.
    1. Select the System Administrator Responsibility
    2. (N)Concurrent -> Manager -> Define
    3. Look up the manager that is taking all the CPU.
    4. Click on the Work Shifts button. Here you can find the number of target
    processes and the sleep time.
    5. Lower the number of processes, or increase the sleep time.
    6. Restart the manager for the changes to take effect.
    Example
    If you assign 10 processes to a manager with a sleep time of 3, then 10 processes check every 3 seconds whether there is a job to handle. Such frequent checking can load the system, so increase the sleep time and reduce the number of processes doing the checking.
    Reference Documents
    <Note:114380.1>
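    The trade-off in the example can be put in numbers with a tiny helper (the figures are illustrative, matching the example above):

    ```python
    def wakeups_per_minute(processes: int, sleep_seconds: int) -> float:
        """Total manager wake-ups per minute.

        Each manager process wakes every `sleep_seconds` to look for jobs,
        so idle polling load scales with processes / sleep time.
        """
        return processes * (60 / sleep_seconds)

    # 10 processes polling every 3 s is 200 wake-ups a minute; dropping to
    # 5 processes every 30 s cuts the idle polling overhead twentyfold.
    ```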

  • Is ACR as good as Photoshop for jpeg processing?

    I have CS3 and use ACR 4.1 for processing my RAWs. I love its intuitive, easy workflow.
    I have many many jpeg files from my pre-RAW days that I'd like to improve by doing slight adjustments i.e. crop, adjust tone, WB, set white point, contrast, saturation, colour tweaking,resize, sharpen. I do this so they can be displayed online on a Zenfolio or pBase account, NOT for printing.
    I like ACR's workflow and myriad adjustment options, yet I wonder whether it is a better platform for JPEG processing than Photoshop. For ease of use it clearly is, but what about the end product?
    For best results, Photoshop requires me to create a new adjustment layer for each adjustment. I DO speed up the workflow by creating Actions that convert the files to 16 bits and create the new adjustment layers up front. I still have to apply batch processes to merge layers, resize, sharpen, convert back to 8 bit and save as JPEG, and I still have to go into each layer and make adjustments. ACR 4.1 is much easier because all the sliders are more accessible. But I wonder if it sacrifices results, because I don't know whether it is working in 8 or 16 bits. With ACR it is easier to recover shadow/highlight detail (though I am not sure if it is more effective).
    For the best results, am I still better off using PS for JPEG tweaking, despite its more cumbersome workflow, because I can work in 16 bits?
    I really like ACR's workflow. But from what little theory I do have, it sounds like PS's ability to work in 16 bits still gives it the edge. Or am I missing something here?

    "...if you open a JPEG...it is converted to 16-bit, linear ProPhoto RGB for editing. It's been shown (by Lightroom) ...that since the totality of the edits in CR are in linear ProPhoto, you could see a real benefit to processing camera JPEGs (as opposed to already-edited JPEGs saved out of, say, Photoshop) over doing similar edits in Photoshop."
    This is interesting and good news! Now my grasp of all this technical talk is limited so bear with me if I am not up to speed. I just want to make sure I understand this.
    ACR opens all 8-bit JPEGs automatically in a 16-bit ProPhoto workspace BEFORE the actual editing is performed?
    At the risk of hopelessly confusing myself even further, why would ACR be a better platform for JPEG processing with out-of-camera JPEGs but not previously edited JPEGs?
    Jeff, I know you've stated "it's been shown by Lightroom", but can you show me this? Where has it been shown? A link, a webpage? That way I can look at it myself, spare you trying to explain it to me, and (yes, this is pure self-interest) hopefully free up time for you to work on the ACR 4.1 book I am looking forward to.
    Thanks for the time you devote to helping us noobs pursue our passions!!

  • CPU Load By Process

    I need to get a running report of how much load each process is putting on the CPU.
    Is there a way to get a report covering four to eight hours that shows each process and how much CPU it is using?

    Not exactly what you asked for, but try Process Explorer: https://technet.microsoft.com/en-gb/sysinternals/bb896653.aspx?f=255&MSPPError=-2147217396
    Run it and enable the extra columns "CPU Time" and "CPU Cycles".
    It will show how much time each running process has been active for, and also the number of cycles it is demanding from the CPU.

  • High CPU - SNMP ENGINE process

    Based on the stack trace, how do I determine what object is being polled (i.e. what is causing the high CPU for SNMP ENGINE)?
    From the “show stack 295” output (SNMP ENGINE)
    router>show stack 295
    Process 295:  SNMP ENGINE
      Stack segment 0x68226DC0 - 0x68229CA0
      FP: 0x68229BD0, RA: 0x62A0D95C
      FP: 0x68229BF8, RA: 0x628276A8
      FP: 0x68229C68, RA: 0x629FBE08
      FP: 0x68229C80, RA: 0x629FBDEC
    Thanks

    Yeah, I back up srkala. Currently we don't have any external tool to decode the stacks.
    These stacks need to be decoded on the source code server for each particular IOS, and hence can't be made external.
    For any issue, you need to collect the stack and either post it to CSC, where we can check it, or open a TAC case to get it reviewed.
    Also, the decoded information doesn't reveal much by itself; it needs further processing, which is out of scope for anyone without some internal tools.
    -Thanks

  • Setting ram limits on individual programs/processes

    Is it possible to set ram limits on individual programs/processes?
    Specifically, can you limit the amount of RAM gunzip uses whenever it is run?

    That helps some, but the true desired effect is to do something similar to virtualizing memory -- I want to be able to say "process x is allowed to work within 512mb of memory," for example.
    Thanks for the pointer on setrlimit... Does it kill the process if it hits the hard limit? The man page sounds almost like it does what I want, but I guess not.
    Last edited by mrbug (2008-09-29 00:25:57)
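    On POSIX systems, Python exposes setrlimit directly, and the usual pattern for capping a specific program (rather than your whole shell) is to set the limit in the child between fork and exec. A sketch, assuming Linux and a purely illustrative 1 GiB cap; note that with RLIMIT_AS the process is not killed at the limit: allocations simply fail (malloc returns NULL, so Python raises MemoryError), which answers the hard-limit question above for the memory case.

    ```python
    import resource
    import subprocess
    import sys

    CAP = 1 * 1024 ** 3  # 1 GiB address-space cap (illustrative value)

    def run_capped(code: str, cap: int = CAP) -> subprocess.CompletedProcess:
        """Run a Python snippet in a child whose virtual address space is capped.

        preexec_fn runs in the child between fork and exec, so the limit
        applies only to the child; this parent process is untouched.
        """
        return subprocess.run(
            [sys.executable, "-c", code],
            preexec_fn=lambda: resource.setrlimit(resource.RLIMIT_AS, (cap, cap)),
            capture_output=True,
            text=True,
        )

    # A 10 MiB allocation fits comfortably under the cap...
    ok = run_capped("print(len(bytearray(10 * 1024 * 1024)))")
    # ...but asking for 8 GiB makes the allocation fail inside the child,
    # and Python raises MemoryError instead of the process being killed.
    too_big = run_capped("bytearray(8 * 1024 ** 3)")
    ```

    For gunzip you would run the real command the same way; the one caveat is that RLIMIT_AS counts virtual address space, not resident RAM, so the cap needs headroom for the program's own mappings.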
