Manually allocating processor core usage?

Hi all,
So I'm not sure if I'm posting this in the right section, but here it goes. My logic board is broken, and Activity Monitor shows that kernel_task (the OS itself) is using between 500-600% CPU (I have 8 cores, so this is about 63-75% of total CPU capacity).
For reasons I won't get into, I can't fix my computer for about a month. However, there are some higher-performance things I would still like to do. For example, play Dwarf Fortress. Now, Dwarf Fortress is not multithreaded, so in theory, if the crazy-high kernel_task usage were restricted to 6 cores and I could somehow guarantee that 1 core is mostly dedicated to Dwarf Fortress, then I could still play it to a degree. (It's possible that slowdown due to thrashing and multi-core issues makes this impossible, but I'm not sure.)
Is there some way I can run the program giving it precedence to one of the cores? Or on the other side, restrict kernel_task to the first 6 cores? I'm pretty sure the program itself has no mechanism for this, so it would have to be something from the mac side. I thought maybe virtualizing a machine and dedicating the cores to that machine might work, but the overhead might make the whole thing slower. I'm a technical user, so any suggestions are welcome.

kernel_task does not randomly use 600% CPU.  Something is definitely wrong here.
I don't think you can assign a "nice" value to kernel_task to de-prioritize its CPU usage the way you can with some ordinary processes. Better to find out why it's burning CPU.
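For an ordinary user process like Dwarf Fortress you can at least raise its scheduling priority from Terminal (e.g. sudo renice -20 -p <pid>, where <pid> is a placeholder for whatever Activity Monitor shows for the game). There is no hard core pinning on Mac OS X, but a program whose code you control can give the Mach scheduler an affinity hint. None of this touches kernel_task; the C sketch below only shows what that hint API looks like:
#include <mach/mach.h>
#include <mach/thread_policy.h>
#include <stdio.h>
/* Hint the Mach scheduler that the calling thread belongs to affinity
   group "tag". Threads sharing a tag are kept together; threads with
   different tags tend to be spread across cores. Advisory only. */
static int set_affinity_tag(int tag)
{
    thread_affinity_policy_data_t policy = { tag };
    kern_return_t kr = thread_policy_set(mach_thread_self(),
                                         THREAD_AFFINITY_POLICY,
                                         (thread_policy_t)&policy,
                                         THREAD_AFFINITY_POLICY_COUNT);
    return kr == KERN_SUCCESS ? 0 : -1;
}
int main(void)
{
    if (set_affinity_tag(1) != 0)
        fprintf(stderr, "affinity hint was rejected\n");
    /* ... run the CPU-bound work on this thread ... */
    return 0;
}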

Similar Messages

  • Uneven Core usage on dual core processor

    Hi there
    I noticed today whilst doing some video encoding (.TS files to .MP4) that my Mac Mini is not spreading the load over both of the cores in my dual-core processor. Whilst Core 1 is showing nearly 100%, Core 2 is hardly being used. I also noticed that when the load on Core 1 dips, Core 2 will peak, but as soon as the load goes back to Core 1 it dips again on Core 2, so in Activity Monitor the two core-usage graphs look like negative images of each other. The total CPU usage never peaked above 54%.
    Could this be an indication of a failing CPU, or is it a case of poor core management? The software I've been using is VLC Player and also iMagiConverter.
    Cheers
    Jonty

    What I don't understand is why software developers make software (especially video encoding that is all about CPU number crunching) that only utilises 50% of the CPU resources!
    There are any number of possible reasons, from an application using a cross-platform code base that doesn't lend itself well to multiprocessor optimization on a particular platform, to inexperienced programmers who don't fully understand programming with parallel threads, to developers who just want to crank out an application as quickly as they can (iMagiConverter seems to sort of fall into that category, given the number of applications of a similar nature that the "company" has produced). You'd have to get involved in the project development to know for sure. For a sense of what the parallel version looks like in code, see the sketch below.
    Regards.
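    Here is a rough pthreads sketch of that pattern, splitting frames across worker threads. It is purely illustrative: encode_frame is a made-up stand-in, and real codecs have inter-frame dependencies that make this much harder than it looks.
    #include <pthread.h>
    #include <stdio.h>
    #define NUM_WORKERS  2                 /* one per core on a dual-core Mini */
    #define TOTAL_FRAMES 1000
    /* Stand-in for the real per-frame encode step. */
    static void encode_frame(int frame) { (void)frame; }
    typedef struct { int first, last; } slice_t;
    static void *worker(void *arg)
    {
        slice_t *s = arg;
        for (int i = s->first; i < s->last; i++)
            encode_frame(i);               /* each worker handles its own slice */
        return NULL;
    }
    int main(void)
    {
        pthread_t tid[NUM_WORKERS];
        slice_t   slice[NUM_WORKERS];
        int per = (TOTAL_FRAMES + NUM_WORKERS - 1) / NUM_WORKERS;
        for (int w = 0; w < NUM_WORKERS; w++) {
            slice[w].first = w * per;
            slice[w].last  = (w + 1) * per > TOTAL_FRAMES ? TOTAL_FRAMES
                                                          : (w + 1) * per;
            pthread_create(&tid[w], NULL, worker, &slice[w]);
        }
        for (int w = 0; w < NUM_WORKERS; w++)
            pthread_join(tid[w], NULL);
        printf("encoded %d frames on %d worker threads\n", TOTAL_FRAMES, NUM_WORKERS);
        return 0;
    }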

  • Secondary Costs Manual Allocation but with a negative debit for the sender

    Hi,
    We would like to do a manual allocation for secondary costs (the sender object is a cost center and the receiver object is another cost center). The only way we know to do this is using transaction KB15N, but with this transaction the system credits the sender object (for example, a cost center) and debits the receiver object (for example, a new cost center). Is it possible to do this allocation but have a debit (a negative debit) for the sender object and a debit (a positive debit) for the receiver objects? We do not want to have credits, because the credit does not work properly for the cost splitting.
    Thank you really very much,
    Ariana

    Hi,
    If I put the receiving cost center on the sender side and the sending cost center on the receiving side with a negative value, the problem is that the credit is posted to the receiving cost center. For KSS2 we have no problems, because all of the costs are on the debit side, but when we run KEU5 we have problems for the receiving cost center, because transaction KEU5 does not handle costs on the credit side properly (it does not treat costs on the credit side as negative costs on the debit side).
    Does anybody know how we can solve this problem?
    Thank you really very much,
    Ariana

  • Processor Core Cache Hierarchy Error in event viewer and BSOD

    Hello everyone. Lately I have had some problems with random freezing, and I can't figure out what's causing it, hoping to get some help here.
    I have an MSI X79A GD65 (8D) motherboard and a 4930K CPU, currently running at 4.4 GHz and 1.33 V, with all the power-saving features disabled.
    The PC had been stable for almost a year without issue, and just recently it has started randomly freezing. It can be up and running for a whole week without problems and then suddenly freeze, or sometimes just a few hours and then freeze. It seems like it always freezes when surfing the web, and it tends to happen while scrolling a website. It never happens during gaming under heavy load.
    The error message is following:
    WHEA Logger A fatal hardware error has occurred.
    Reported by component: Processor Core
    Error Source: Machine Check Exception
    Error Type: Cache Hierarchy Error
    Processor APIC ID: 8
    Things I have tried: reverted back to stock speeds in the BIOS, updated to the latest BIOS, tried another power supply, and reinstalled Windows with the newest drivers; no difference. Memtest does not show any errors, and the PC passes Prime95/Intel Burn Test. CPU temps are fine.
    What more can I do? Is there a chance that the CPU has taken damage from my "little" overclock?

    Hi,
    Can you tell me what the CPU temps were while overclocked and under full load (Intel Burn Test)?
    Also, did you try different RAM? And what frequency are you currently running the RAM at? (Try reducing it to 1600 MHz.)

  • Best Multiprocessing setup for 24 Processor Cores & 64gb of RAM using new MacPro OSX 10.9.5.

    What's the best multiprocessing setup for 24 processor cores and 64 GB of RAM using a new Mac Pro on OS X 10.9.5?
    I'm still getting an After Effects warning at the end of some renders saying "A frame failed to render using Render Multiple Frames Simultaneously."

    Hi Szalam,
    I'm using the latest After Effects (13.2.0.49)
    See attached some screen grabs of my setup.

  • Will SAP software support an Intel Core 2 Duo 6650 2.66 GHz processor?

    Hi Experts,
    I am new to SAP.
    I have to install SAP ECC 6.0 and NW-04s (PI, BI, MI, .......), so I need to know whether the SAP software will run on the hardware illustrated below:
    1. Intel Core 2 Duo 6650 processor, 2.66 GHz
    2. Intel DG33FBC motherboard
    3. 4 GB RAM
    I appreciate your answers, and points will be rewarded.
    regards
    Ken George

    Hi All,
    That is a very good configuration for installing SAP.
    You can install SAP ECC 6.0, and the performance will also be good.
    Reward if useful.
    Regards,
    Pherasath

  • How can I tell the VMware GdbServer stub to set a processor core during kernel debugging session?

    I'm using the VMware DbgServer stub for kernel-mode debugging of a Windows x86 VM, and I want to select a different thread (or processor) in order to get/set the processor context, but the DbgServer replies that the target only has one thread running.
    (gdb) info threads
    Sending packet:
    $T1#85...Ack
    Packet received: OK
    Sending packet: $qfThreadInfo#bb...Ack
    Packet received: m1
    Sending packet: $qsThreadInfo#c8...Ack
    Packet received: l
    Thread 1 ( Name: VCPU 0 ) 0x81626a08 in ?? ()
    Questions:
    1.  How can I tell the VMware DbgServer stub to select a processor core, or even another thread, during a kernel debugging session?
    2.  How can I get/set context or step from a different thread or processor core?
    Any help will be highly appreciated.
    Thanks,
    Alex.

    Nick-
    In my experience, the only tell is in the result with Lightroom unless your processor is dog slow. You can select PS as the active app. on my Mac and see it working.
    You could also modify the action to not save and close, and then check History in PS.

  • Do AS3 timers make use of multiple processor cores?

    Can anyone tell if a Timer in AS3 will force Flash player to create a new Thread internally and so make use of another processor core on a multicore processor?
    To Bernd: I am asking because on each frame I can just blit the frame that was prepared in a timer handler function. I mean to create a timer running every 1 ms, call the emulator code there, and only show the generated pixels in the ON_ENTER_FRAME handler function. That way, theoretically, the emulator would in the best case use a whole CPU core. This will most probably not reach the desired performance anyway, but it is still an improvement. Still, as I mentioned in my earlier posts, Adobe should think about speeding up the AVM. It is still generally 4-5 times slower than Java even when using Alchemy. Man, there must be a way to speed up the AVM, right?
    For those interested what I am implementing, look at:
    Sega emulated games in flash 
    If moderators think that this link is in some way an advertisement and harms the rules of this forum, please feel free to remove it. I will not be offended at all.

    Hello Boris,
    thanks for taking the time and explaining why your project needs 60 fps. If I understand you correctly those 60 fps are necessary to maintain full audio samples rate. You said your emulator collects sound samples at the frame rate and the reduced sampling rate of 24/60 results in "choppy sound". Are there any other reasons why 60 fps are necessary? The video seems smooth.
    That "choppy sound" was exactly what I was hearing when you sent me the source code of your project. But did you notice that I "solved" (read: "hacked around") the choppy sound problem even at those bad sampling rates? First off, I am not arguing with you about whether you need 60fps, or not. You convinced me that you do need 60fps. I still want to help you solve your problem (it might take a while until you get a FlashPlayer that delivers the performance you need).
    But maybe it is a good time to step back for a moment and share some of the results of your and my performance improvements to your project first. (Please correct me if my numbers are incorrect, or if you disagree with my statements):
    1) Embedding the resources instead of using the URLLoader.
    Your version uses URLLoader in order to load game resources. Embedding the resources instead does not increase the performance. But I find it more elegant and easier to use. Here is how I did it:
    [Embed(source="RESOURCES.BIN", mimeType="application/octet-stream")]
    private var EmbeddedBIN:Class;
    const rom : ByteArray = new EmbeddedBIN;
    2) Sharing ByteArrays between C code and AS code.
    I noticed that your code copied a lot of bytes between video and audio memory buffers on the C side into a ByteArray that you needed in order to call AS functions. I suggested using a technique for sharing ByteArrays between C code and AS code, which I will explain in a separate post.
    The results of this performance optimization were mildly disappointing: the frame rate only notched up by 1-2 fps.
    3) Optimized switch/case for op table calls
    Your C code used a big function table that maps op codes to functions. I wrote a script that converted that function table into a huge switch/case statement that is equivalent to your function table. This performance optimization was a winner: you got an improvement of 30% in performance. I believe the frame rate suddenly jumped to 25 fps, which means that you roughly gained 6 fps. I talked with Scott (Petersen, the inventor of Alchemy) and he said that function calls in general and function tables in particular are expensive. This may be a weakness in the Alchemy glue code, or in ActionScript. You can work around that weakness by replacing function calls and function tables with switch/case statements. (A simplified sketch of the two dispatch styles follows after point 4.)
    4) Using inline assembler.
    I replaced the MemUser class with an inline assembler version  as I proposed in this post:
    http://forums.adobe.com/thread/660099?tstart=0
    The results were disappointing, there was no noticeable performance gain.
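    Coming back to point 3) for a moment, here is roughly the shape of that rewrite as a simplified C sketch. The opcodes and handlers are made up for the illustration; it is not your actual emulator code:
    #include <stdint.h>
    #include <stdio.h>
    static int regs[8];
    /* Style A: a table of function pointers, one indirect call per opcode.
       Under Alchemy every such call goes through relatively expensive glue code. */
    static void op_nop(void)    { }
    static void op_inc_r0(void) { regs[0]++; }
    static void op_dec_r0(void) { regs[0]--; }
    typedef void (*op_fn)(void);
    static const op_fn op_table[3] = { op_nop, op_inc_r0, op_dec_r0 };
    static void step_with_table(uint8_t opcode)
    {
        op_table[opcode]();              /* indirect call per instruction */
    }
    /* Style B: the same dispatch as one big switch/case. The work is inlined
       into the interpreter loop, so there is no per-opcode function call. */
    static void step_with_switch(uint8_t opcode)
    {
        switch (opcode) {
        case 0:            break;        /* nop    */
        case 1: regs[0]++; break;        /* inc r0 */
        case 2: regs[0]--; break;        /* dec r0 */
        }
    }
    int main(void)
    {
        static const uint8_t program[] = { 1, 1, 2, 0, 1 };
        for (unsigned i = 0; i < sizeof program; ++i)
            step_with_table(program[i]);
        for (unsigned i = 0; i < sizeof program; ++i)
            step_with_switch(program[i]);
        printf("r0 = %d\n", regs[0]);    /* each pass adds 2, so this prints 4 */
        return 0;
    }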
    Now, let me return to the choppy sound hack I mentioned earlier. This is where we enter my "not so perfect world"...
    In order to play custom sound you usually create a sound object and add an EventListener for SampleDataEvent.SAMPLE_DATA:
    _sound = new Sound();
    _sound.addEventListener( SampleDataEvent.SAMPLE_DATA, sampleDataHandler );
    The Flash Player then calls your sampleDataHandler function to retrieve audio samples. The frequency of those requests does not necessarily match the frequency at which onFrameEnter is called. Unfortunately your architecture only gets "tickled" by onFrameEnter, which is currently only being called at 25 fps. This becomes your bottleneck: no matter how often the Flash Player asks for more samples, the amount will always be limited by the frame rate. In this architecture you always end up with the Flash Player asking for more samples than you have if the frame rate is too low.
    This is bad news. But can't we cheat a little bit and assume that the "sample holes" can be filled by using their sample neighbors on the timeline? In other words, can't we just stretch the samples? Well, this is what I came up with:
    private function sampleDataHandler( event : SampleDataEvent ) : void
    {
        if( audioBuffer.length > 0 )
        {
            var L : Number;
            var R : Number;
            // The sound channel is requesting more samples. If it ever runs out then a sound complete message will occur.
            const audioBufferSize : uint = _swc.sega_audioBufferSize();
            /*  minSamples, see http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/events/SampleDataEvent.html
                Provide between 2048 and 8192 samples to the data property of the SampleDataEvent object.
                For best performance, provide as many samples as possible. The fewer samples you provide,
                the more likely it is that clicks and pops will occur during playback. This behavior can
                differ on various platforms and can occur in various situations - for example, when
                resizing the browser. You might write code that works on one platform when you provide
                only 2048 samples, but that same code might not work as well when run on a different platform.
                If you require the lowest latency possible, consider making the amount of data user-selectable.
            */
            const minSamples : uint = 2048;
            /*  Even at the maximum sample rate of 44100 we still only get 735 stereo frames (1470 floats) per video frame:
                snd.buffer_size = (rate / vdp_rate) = 44100 / 60 = 735.
                samples = snd.buffer_size * channels = 735 * 2 = 1470.
                So we need to stretch the frames until we have at least 2048 of them:
                stretch = Math.ceil(minSamples / snd.buffer_size) = Math.ceil(2048 / 735) = 3.
                snd.buffer_size * stretch = 735 * 3 = 2205.
                Bingo: 2205 >= 2048 !
            */
            const stretch : uint = Math.ceil( minSamples / audioBufferSize );
            audioBuffer.position = 0;
            if( stretch == 1 )
            {
                event.data.writeBytes( audioBuffer );
            }
            else
            {
                for( var i : uint = 0; i < audioBufferSize; ++i )
                {
                    L = audioBuffer.readFloat();
                    R = audioBuffer.readFloat();
                    for( var k : uint = 0; k < stretch; ++k )
                    {
                        event.data.writeFloat( L );
                        event.data.writeFloat( R );
                    }
                }
            }
            audioBuffer.position = 0;
        }
    }
    After using that method the sound was not choppy anymore! Even though I did hear a few crackling bits here and there the sound quality improved significantly.
    Please consider this implementation as a workaround until Adobe delivers a FlashPlayer that is 3 times faster :-)
    Best wishes,
    - Bernd

  • Pages '09 only uses one processor core?

    Hello,
    Some days ago I had to open a document containing nearly 2,000 pages. Don't ask why.
    Working with this document was quite complicated because Pages was responding very slowly. I used Activity Monitor to see what was going on, and I noticed that Pages only uses one core of my eight-core Mac Pro.
    I tested iWork on my iMac and MacBook Pro (both dual-core) as well: same problem.
    Does iWork only use one processor core?
    Kind regards,
    roman

    How much memory do you have in your system? I'll bet that if you check Activity Monitor (Applications/Utilities) and go to the System Memory tab, you will see lots of page outs. This happens with huge documents and little memory, as the document is being swapped between active operating memory and the hard disk. The only way to solve this problem is to get more memory (not storage) in your system. For a 2,000-page document you're going to need lots and lots and lots of extra gigs of memory. By the way, how big is this too-big file of yours?
    Number of processor cores isn't going to solve the issue if you have pageouts. It is a system memory/size of document issue and completely unrelated to how many processor cores you are using.
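    If you want to watch those counters outside Activity Monitor, the vm_stat command in Terminal prints the same numbers, and a program can read them through the Mach host_statistics call. A small C sketch (the API is real; the program is just a demo):
    #include <mach/mach.h>
    #include <stdio.h>
    int main(void)
    {
        vm_statistics_data_t   vm;
        mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
        /* Ask the kernel for its virtual-memory counters. */
        if (host_statistics(mach_host_self(), HOST_VM_INFO,
                            (host_info_t)&vm, &count) != KERN_SUCCESS) {
            fprintf(stderr, "host_statistics failed\n");
            return 1;
        }
        /* Lots of pageouts means the system is swapping to disk. */
        printf("pageins:  %u\n", vm.pageins);
        printf("pageouts: %u\n", vm.pageouts);
        return 0;
    }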

  • Usage of more processor cores?

    Hi,
    my customer has a database running on a node with 4 processors of type EM64T. Now the question is whether the Oracle process uses all of the processors.
    Node:
    HW - HP ProLiant DL380 G5
    OS - MS Windows Server 2003 Standard x64 Edition Version 5.2.3790 SP 2 Build 3790
    Processors - 4 x EM64T Family 6 Model 15 Stepping 11 GenuineIntel 2GHz
    Memory - 4GB
    Database:
    Oracle 11g R1
    Regards,
    Martin

    Hi Martin,
    The number of processors used is more a function of the O/S than of Oracle. I'm sure Oracle "optimizes" itself to the number of processors, but ultimately the O/S is the main determining factor. Oracle can balance its threads among the available processors by setting thread affinity, but that's just a way of telling the O/S what is best for Oracle; the O/S could ignore it if it wanted to, for whatever reason. (A small illustration of the affinity call follows below.)
    HTH,
    John.
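    To make the affinity remark concrete, here is a minimal Windows C sketch of a process pinning one of its own threads to CPU 0 with SetThreadAffinityMask. It only illustrates the mechanism; it is not what Oracle actually does internally:
    #include <windows.h>
    #include <stdio.h>
    int main(void)
    {
        /* Restrict the current thread to CPU 0 (bit 0 of the mask).
           The O/S may still schedule other processes on that CPU. */
        DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 1);
        if (previous == 0) {
            printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
            return 1;
        }
        printf("previous affinity mask: 0x%llx\n", (unsigned long long)previous);
        /* ... CPU-bound work now runs on CPU 0 only ... */
        return 0;
    }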

  • Processor idle / usage

    Hi
    I have so much power in this MP with its 2.8 GHz 8 cores etc., but Activity Monitor always tells me there's loads of idle processor power.
    Today I was running Aperture (it wasn't doing much, but it was running), I was rendering some video in Final Cut Pro, and I was uploading a video to YouTube, all at the same time, and Activity Monitor told me the processor was 96% idle... surely it should be working its nuts off?
    Anyone know why?
    Cheers
    J

    ...just a thought... is there a setting I have to change to make the MP use more of its power (unlikely, I think), or are there some applications which just can't use the multiple processors?

  • Finding dynamic memory allocations in core file

    Hi,
    Is it possible to find out which data structures were allocated by analysing a core file?
    I want to find out which objects are causing the memory to increase, and I have the core file of the program.
    Thanks in advance,
    Paulo

    It's almost impossible. Anyway, it would be pure heuristics: looking at stack contents, finding familiar patterns in the heap, and so on.
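    If you can re-run the program, a more practical route than digging through the core is to record allocations up front, either with a tool (libumem plus mdb's ::findleaks on Solaris, or valgrind on Linux) or with a crude hand-rolled interposer. A rough LD_PRELOAD sketch, assuming a platform where that mechanism works; it tracks only malloc, not free, and makes no attempt to be production quality:
    /* malloc_log.c
       Build:  cc -shared -fPIC -o malloc_log.so malloc_log.c -ldl
       Run:    LD_PRELOAD=./malloc_log.so ./your_program 2> alloc.log  */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stddef.h>
    #include <unistd.h>
    static void *(*real_malloc)(size_t);
    void *malloc(size_t size)
    {
        char line[96];
        int  n;
        /* A robust version would also guard against re-entry while dlsym
           resolves the real malloc. */
        if (!real_malloc)
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        void *p = real_malloc(size);
        /* Format with snprintf and write(2) so we do not re-enter malloc
           through stdio buffering. The return address is a rough hint of
           which code asked for the memory. */
        n = snprintf(line, sizeof line, "malloc(%zu) = %p from %p\n",
                     size, p, __builtin_return_address(0));
        if (n > 0)
            write(STDERR_FILENO, line, (size_t)n);
        return p;
    }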

  • Core usage

    Hi all,
    I have a question about CPU usage on my PowerBook.
    It appears to be using only the odd-numbered cores, so 4 cores instead of 8. Is that normal?
    Thanks in advance.
    MM

    update:
    An Apple technician told me the way Logic uses cores changes depending on the application... ugh

  • Manually allocating RAM to applications, how?

    Hi there friends,
    I am looking to know how to manually change the default RAM allocated (automatically, right?) to a program.
    Basically, I'm using a memory-demanding application, and it could always do the job with about 512 MB of RAM. The thing is, I have another program which doesn't require a lot, but it uses up a big part of my RAM. (BTW, I only have 1 GB of RAM.)
    I could easily afford to have my other program cached on disk while the memory-demanding one has all the speed of the RAM.
    I've read a lot on other pages (a little "allocating RAM mac" googling), and it has all shown me how to change the default RAM allocated to applications, but only for Classic.
    Even if OS X has its own automated memory management, I'd like to give this a try, if it's possible, of course.
    Great thanks to anyone who could give me a hand!

    You cannot do this. And if you could, there is no way you could manually improve on the system's memory management. In fact, there isn't really a "default" allocation: memory is allocated dynamically, as needed. An active process always gets priority on RAM. If your "other" application is cached on disk, then it is not active; it must be swapped back into RAM before it can become active, and vice versa. If it is holding RAM while inactive, an active process that needs the RAM will get it. The system does what needs to be done automatically.
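    The one partial exception is in the hands of an application's developer rather than the user: a program can ask the OS to keep a specific buffer resident in physical RAM with mlock(), within per-process limits. A minimal sketch, just to show the mechanism (it is not something you can apply from the outside to someone else's application):
    #include <sys/mman.h>
    #include <stdio.h>
    #include <stdlib.h>
    int main(void)
    {
        size_t len = 64 * 1024 * 1024;      /* a 64 MB working buffer */
        void  *buf = malloc(len);
        if (buf == NULL)
            return 1;
        /* Ask the kernel to keep these pages in RAM rather than paging
           them out. Subject to limits; fails with ENOMEM if over the cap. */
        if (mlock(buf, len) != 0) {
            perror("mlock");
            free(buf);
            return 1;
        }
        /* ... latency-sensitive work on buf ... */
        munlock(buf, len);
        free(buf);
        return 0;
    }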

  • Uneven processor core distribution

    Why does Logic distribute plug-in processing unevenly?
    I am currently getting regular Core Audio system overloads, and my System Performance meter is showing cores 1 and 4 running at about 30%, core 2 unused, and core 3 running in the red (90%-100%). I am using all 3rd-party plug-ins with the exception of 2 instances of EXS24.
    I have tried re-instantiating plug-ins and setting up my local machine as a node. It makes no difference. It seems that all 3rd-party plug-ins want to use a single processor.
    Does anyone know a workaround?
    Logic 7.2.3
    Mac OS X 10.4.10
    Mac Pro 2x2.66 Dual Core
    5GB RAM
    1 x 250GB Internal System Drive
    1 x 500GB Internal Audio Drive
    2 x 500GB Internal Sample Drives
    RME FireFace 800
    Emagic AMT8
    Mac Pro 2 x 2.66GHz Dual   Mac OS X (10.4.8)   5GB RAM, Fireface 800, LP 7.2.3

    Welcome to the forum, bof.
    The Mac Pro quad runs its memory symmetrically, so it may be a case of the RAM being installed unevenly. The RAM has to run in matched pairs for you to get the full 256-bit memory path and full memory performance.
    Try taking out the odd 1 GB stick and running with four matched gigs of RAM, and see what happens.
    Secondly, I am guessing you used Migration Assistant to move over to the Mac Pro. If that is the case, the machine will run like molasses, so it's best to do a clean install.
    Before doing an install, back up your stuff, then erase the hard drive, re-format, and install.
    You only need to install OS X on the boot drive.
    There's no need for apps on one hard drive and samples on another: put apps and samples on the same boot disk in Bay 1, and files/audio on the other hard drive in Bay 2.
    The drives other than the boot drive can be set to RAID, giving twice the speed if two drives are set to RAID 0, and the fourth drive can be used for backup, which can also be set to mirror the RAID set.
    Also get some form of data-recovery software on a recovery drive; you never know when the sh£t hits the fan. Disk Warrior can be considered.
    Hope all this makes sense,
    Fr.BlayZay.
