Usage of more processor cores?

Hi,
my customer has a database running on a node with 4 processors of type EM64T. Now the question is whether the Oracle process uses all of the processors.
Node:
HW - HP ProLiant DL380 G5
OS - MS Windows Server 2003 Standard x64 Edition Version 5.2.3790 SP 2 Build 3790
Processors - 4 x EM64T Family 6 Model 15 Stepping 11 GenuineIntel 2GHz
Memory - 4GB
Database:
Oracle 11g R1
Regards,
Martin

Hi Martin,
The number of processors used is more a function of the O/S than of Oracle. I'm sure Oracle "optimizes" itself for the number of processors, but ultimately the O/S is the main determining factor. Oracle can balance its threads among the available processors by setting thread affinity, but that's just a way of telling the O/S what is best for Oracle; the O/S could ignore it for whatever reason.
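For example, a quick sanity check from the database side is to compare the CPU count the instance has detected against what it reads from the O/S. This is only a minimal sketch (the statistic names below are taken from 11g's v$osstat and can vary by platform and version):
-- how many CPUs the instance thinks it has
select value as cpu_count from v$parameter where name = 'cpu_count';
-- what Oracle has detected from the O/S (names assumed from 11g; some may be absent on certain platforms)
select stat_name, value
  from v$osstat
 where stat_name in ('NUM_CPUS', 'NUM_CPU_CORES', 'NUM_CPU_SOCKETS');
If cpu_count comes back as 4, the instance is aware of all four processors and will schedule its threads across them as the O/S allows.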
HTH,
John.

Similar Messages

  • Manually allocating processor core usage?

    Hi all,
    So I'm not sure if I'm posting this in the right section, but here it goes. My logic board is broken, and Activity Monitor shows that kernel_task (the OS itself) is using between 500-600% CPU (I have 8 cores, so this is about 63-75% of CPU capacity).
    For reasons I won't get into, I can't fix my computer for about a month. However, there are some higher-performance things I still would like to do. For example, play Dwarf Fortress. Dwarf Fortress is not multithreaded, so in theory, if the crazy-high kernel_task usage is restricted to 6 cores and I can somehow guarantee that 1 core is mostly dedicated to Dwarf Fortress, then I could still play it to a degree. (It's possible that slowdown due to thrashing and multi-core issues makes this impossible, but I'm not sure.)
    Is there some way I can run the program giving it precedence on one of the cores? Or, on the other side, restrict kernel_task to the first 6 cores? I'm pretty sure the program itself has no mechanism for this, so it would have to be something on the Mac side. I thought maybe virtualizing a machine and dedicating cores to that machine might work, but the overhead might make the whole thing slower. I'm a technical user, so any suggestions are welcome.

    kernel_task does not randomly use 600% CPU.  Something is definitely wrong here.
    I don't know that you can assign a "nice" value to kernel_task to de-prioritize its CPU usage.  Some processes you can, but I don't think you can with kernel_task.  Better to find out why it's burning CPU.

  • Processor Core Cache Hierarchy Error in event viewer and BSOD

    Hello everyone. Lately I have had some problems with random freezing, and I can't figure out what's causing it; I'm hoping to get some help here.
    I have an MSI X79A-GD65 (8D) motherboard and a 4930K CPU, currently running at 4.4GHz, 1.33V, with all the power-saving features disabled.
    The PC had been stable for almost a year without issue, and just recently it has started randomly freezing. It can be up and running for a whole week without problems and then suddenly freeze, or sometimes just a few hours and then freeze. It always seems to freeze when surfing the web, and it tends to happen while scrolling a website. It never happens during gaming under heavy load.
    The error message is following:
    WHEA Logger A fatal hardware error has occurred.
    Reported by component: Processor Core
    Error Source: Machine Check Exception
    Error Type: Cache Hierarchy Error
    Processor APIC ID: 8
    Things I have tried: reverted back to stock speeds in the BIOS, updated to the latest BIOS, tried another power supply, and formatted and reinstalled Windows with the newest drivers; no difference. Memtest does not show any errors, and the PC passes Prime95/Intel Burn Test. CPU temps are fine.
    What more can I do? Is there a chance that the CPU has taken damage from my "little" overclock?

    Hi,
    Can you tell me what the CPU temps were while overclocked and under full load (Intel Burn Test)?
    Also, did you try different RAM? And what frequency are you currently running the RAM at? (Reduce it to 1600MHz.)

  • Sar CPU usage for each processor

    In Linux, mpstat allows you to specify a specific processor or all processors when getting CPU % usage. In OS X, sar looks pretty limited, as I can only get CPU usage across all processors. I'm trying to determine how many cores are in use (basically for figuring out how many single cores are available for jobs if I'm submitting 1 HandBrake job per core).

    hello,
    I suggest creating a time series chart with the following SQL:
    with p as (
      select target_guid, property_value cpucount
        from mgmt$target_properties
       where property_name = 'CPUCount'
         and property_type = 'INSTANCE'
    )
    select h.target_name,
           h.rollup_timestamp,
           h.average / p.cpucount pct_cpu
      from mgmt$metric_hourly h, p
     where h.metric_name = 'instance_efficiency'
       and h.metric_column = 'cpuusage_ps'
       and h.target_guid in (select target_guid
                               from mgmt$db_dbninstanceinfo
                              where target_type = 'oracle_database'
                                and host = ??EMIP_BIND_TARGET_GUID??)
       and h.rollup_timestamp between ??EMIP_BIND_START_DATE?? and ??EMIP_BIND_END_DATE??
       and p.target_guid = h.target_guid
     order by 1, 2
    You have to select the host target in the report, and the reporting period is customizable.
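    If you just need a quick look at recent CPU utilization for a single instance, without building an EM report, something along these lines should also work. This is only a minimal sketch: it assumes the 'Host CPU Utilization (%)' metric name as exposed in v$sysmetric_history, so check the metric names available in your release first.
    -- recent host CPU utilization samples as seen by the instance
    -- (metric name assumed; check v$sysmetric for the names in your version)
    select begin_time,
           round(value, 2) pct_cpu
      from v$sysmetric_history
     where metric_name = 'Host CPU Utilization (%)'
     order by begin_time;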
    I hope this is correct and answers your question.
    Regards,
    Noel

  • Do AS3 Timers make use of multiple processor cores?

    Can anyone tell me if a Timer in AS3 will force the Flash Player to create a new thread internally and so make use of another processor core on a multicore processor?
    To Bernd: I am asking because on each frame I can just blit the frame prepared in a timer handler function. I mean to create a timer running every 1 ms, call the emulator code there, and only show the generated pixels in the ON_ENTER_FRAME handler function. That way, theoretically, the emulator will in the best case use a whole CPU core. This will most probably not reach the desired performance anyway, but it is still an improvement. Still, as I mentioned in my earlier posts, Adobe should think about speeding up the AVM. It is still generally 4-5 times slower than Java when using Alchemy. Man, there must be a way to speed up the AVM, right?
    For those interested what I am implementing, look at:
    Sega emulated games in flash 
    If moderators think that this link is in some way an advertisement and harms the rules of this forum, please feel free to remove it. I will not be offended at all.

    Hello Boris,
    thanks for taking the time to explain why your project needs 60 fps. If I understand you correctly, those 60 fps are necessary to maintain the full audio sample rate. You said your emulator collects sound samples at the frame rate, and the reduced sampling rate of 24/60 results in "choppy sound". Are there any other reasons why 60 fps is necessary? The video seems smooth.
    That "choppy sound" was exactly what I was hearing when you sent me the source code of your project. But did you notice that I "solved" (read: "hacked around") the choppy sound problem even at those bad sampling rates? First off, I am not arguing with you about whether you need 60fps, or not. You convinced me that you do need 60fps. I still want to help you solve your problem (it might take a while until you get a FlashPlayer that delivers the performance you need).
    But maybe it is a good time to step back for a moment and share some of the results of your and my performance improvements to your project first. (Please correct me if my numbers are incorrect, or if you disagree with my statements):
    1) Embedding the resources instead of using the URLLoader.
    Your version uses URLLoader in order to load game resources. Embedding the resources instead does not increase the performance. But I find it more elegant and easier to use. Here is how I did it:
    [Embed(source="RESOURCES.BIN", mimeType="application/octet-stream")]
    private var EmbeddedBIN:Class;
    const rom : ByteArray = new EmbeddedBIN;
    2) Sharing ByteArrays between C code and AS code.
    I noticed that your code copied a lot of bytes between video and audio memory buffers on the C side into a ByteArray that you needed in order to call AS functions. I suggested using a technique for sharing ByteArrays between C code and AS code, which I will explain in a separate post.
    The results of this performance optimization were mildly disappointing: the frame rate only notched up by 1-2 fps.
    3) Optimized switch/case for op table calls
    Your C code used a big function table that allows you to map op codes to functions. I wrote a script that converted that function table to a huge switch/case statement that is equivalent to your function table. This performance optimization was a winner. You got an improvement of 30% in performance. I believe the frame rate suddenly jumped to 25fps, which means that you roughly gained 6fps. I talked with Scott (Petersen, the inventor of Alchemy) and he said that function calls in general and function tables are expensive. This may be a weakness within the Alchemy glue code, or in ActionScript. You can work around that weakness by replacing function calls and function tables with switch/case statements.
    4) Using inline assembler.
    I replaced the MemUser class with an inline assembler version  as I proposed in this post:
    http://forums.adobe.com/thread/660099?tstart=0
    The results were disappointing, there was no noticeable performance gain.
    Now, let me return to the choppy sound hack I mentioned earlier. This is where we enter my "not so perfect world"...
    In order to play custom sound you usually create a sound object and add an EventListener for SampleDataEvent.SAMPLE_DATA:
    _sound = new Sound();
    _sound.addEventListener( SampleDataEvent.SAMPLE_DATA, sampleDataHandler );
    The Flash Player then calls your sampleDataHandler function to retrieve audio samples. The frequency of those requests does not necessarily match the frequency at which onEnterFrame is being called. Unfortunately, your architecture only gets "tickled" by onEnterFrame, which is currently only being called at 25 fps. This becomes your bottleneck, because no matter how often the Flash Player asks for more samples, the amount will always be limited by the frame rate. In this architecture you always end up with the Flash Player asking for more samples than you have if the frame rate is too low.
    This is bad news. But can't we cheat a little bit and assume that the "sample holes" can be filled by using sample neighbors on the timeline? In other words, can't we just stretch the samples? Well, this is what I came up with:
    private function sampleDataHandler(event:SampleDataEvent):void
    {
         if( audioBuffer.length > 0 )
         {
              var L : Number;
              var R : Number;
              // The sound channel is requesting more samples. If it ever runs out then a sound complete message will occur.
              const audioBufferSize : uint = _swc.sega_audioBufferSize();
              /*   minSamples, see http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/events/SampleDataEvent.html
                   Provide between 2048 and 8192 samples to the data property of the SampleDataEvent object.
                   For best performance, provide as many samples as possible. The fewer samples you provide,
                   the more likely it is that clicks and pops will occur during playback. This behavior can
                   differ on various platforms and can occur in various situations - for example, when
                   resizing the browser. You might write code that works on one platform when you provide
                   only 2048 samples, but that same code might not work as well when run on a different platform.
                   If you require the lowest latency possible, consider making the amount of data user-selectable. */
              const minSamples : uint = 2048;
              /*   For the maximum sample rate of 44100 we still only get 1470 samples:
                   snd.buffer_size = (rate / vdp_rate) = 44100 / 60 = 735.
                   samples = snd.buffer_size * channels = 735 * 2 = 1470.
                   So we need to stretch the samples until we have at least 2048 samples.
                   stretch = Math.ceil(2048 / (735*2)) = 2.
                   snd.buffer_size * channels * stretch = 735 * 2 * 2 = 2940.
                   Bingo: 2940 > 2048 ! */
              const stretch : uint = Math.ceil(minSamples / audioBufferSize);
              audioBuffer.position = 0;
              if( stretch == 1 )
              {
                   event.data.writeBytes( audioBuffer );
              }
              else
              {
                   for( var i : uint = 0; i < audioBufferSize; ++i )
                   {
                        L = audioBuffer.readFloat();
                        R = audioBuffer.readFloat();
                        for( var k : uint = 0; k < stretch; ++k )
                        {
                             event.data.writeFloat(L);
                             event.data.writeFloat(R);
                        }
                   }
              }
              audioBuffer.position = 0;
         }
    }
    After using that method the sound was not choppy anymore! Even though I did hear a few crackling bits here and there the sound quality improved significantly.
    Please consider this implementation as a workaround until Adobe delivers a FlashPlayer that is 3 times faster :-)
    Best wishes,
    - Bernd

  • Pages '09 only uses one processor core?

    Hello,
    Some days ago I had to open a document containing nearly 2,000 pages. Don't ask why.
    Working with this document was quite complicated because Pages was responding very slowly. I used Activity Monitor to see what was going on and noticed that Pages only uses one core of my eight-core Mac Pro.
    I tested iWork on my iMac and MacBook Pro (both dual-core) as well: same problem.
    Does iWork only use one processor core?
    Kind regards,
    roman

    How much memory do you have in your system? I'll bet if you check Activity Monitor (Applications/Utilities) and go to the System Memory tab there, you will see lots of page outs. This happens with huge docs and little memory, as the doc is being swapped between active operating memory and the hard disk. The only way to solve this problem is to get more memory (not storage) in your system. For a 2,000-page doc you're going to need lots and lots and lots of extra gigs of memory. By the way, how big is this too-big file of yours?
    The number of processor cores isn't going to solve the issue if you have page outs. It is a system memory / document size issue and completely unrelated to how many processor cores you are using.

  • Command used to get the usage of each CPU core on a multi-core CPU

    Hi,
    Using the sar command, how can I get the usage of each CPU core?
    -Thanks

    The best way to do this is to put the monitor name in a property bag in the script and pass that to your event details. Otherwise, we're looking at querying the database each time the monitor generates an event, and that is overhead that is really not necessary. The other option, which is even worse in terms of performance, is to use PowerShell to query the SDK for the monitor name. Neither of these options is going to be a good solution, because now you need to implement action accounts that can either query the database or the SDK.
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • More processor-power?

    Hi everyone!
    I'm in need of assistance. I'm currently working on a major production, an album. I'm using Logic Pro 7.1 on a 1.5 GHz PowerBook G4 with 1.5GB of DDR RAM.
    The project is lagging and I have a lot of restrictions, a lot of things that I can't do in my projects, due to the lagging and the program going down when I press play.
    I'm thinking of buying a desktop computer.
    Do I need more processor power (more GHz) or more RAM, or what do I need to speed up the processing of channel strips and audio recordings?
    Best Regards
    Anders Bach Pedersen, Denmark
    http://www.myspace.com/epicentremusic
    PowerBook G4   Mac OS X (10.4.7)   Using Logic Pro

    hi anders,
    yes, you need more of everything.
    more cpu - think MacPro
    more ram - think bank loan (re: Mac Pro)
    more speed with more HDDs - wider bus and more revs/minute.
    it's an investment in your career, after all.

  • Best multiprocessing setup for 24 processor cores & 64GB of RAM using a new Mac Pro on OS X 10.9.5

    What's the best multiprocessing setup for 24 processor cores & 64GB of RAM using a new Mac Pro on OS X 10.9.5?
    I'm still getting an After Effects warning at the end of some renders saying "A frame failed to render using Render Multiple Frames Simultaneously."

    Hi Szalam,
    I'm using the latest After Effects (13.2.0.49)
    See attached some screen grabs of my setup.

  • Will SAP software support an Intel Core 2 Duo 6650 2.66 GHz processor?

    Hi Experts,
    I am new to SAP.
    I have to install SAP ECC 6.0 and NW-04s (PI, BI, MI, .......), so I need to know whether the SAP software will support the hardware listed below:
    1. Intel Core 2 Duo 6650 processor, 2.66 GHz
    2. Intel DG33FBC motherboard
    3. 4 GB RAM
    I appreciate your answers, and points will also be rewarded.
    regards
    Ken George

    Hi All,
    You have the best system configuration for installing SAP.
    You can install SAP ECC 6.0. The performance will also be good.
    Reward if useful.
    Regards,
    Pherasath

  • How can I tell the VMware GdbServer stub to select a processor core during a kernel debugging session?

    I'm using the VMware DbgServer stub for kernel-mode debugging of a Windows x86 VM, and I want to select a different thread (or processor) in order to get/set the processor context, but the DbgServer replies that the target only has one thread running.
    (gdb) info threads
    Sending packet:
    $T1#85...Ack
    Packet received: OK
    Sending packet: $qfThreadInfo#bb...Ack
    Packet received: m1
    Sending packet: $qsThreadInfo#c8...Ack
    Packet received: l
    Thread 1 ( Name: VCPU 0 ) 0x81626a08 in ?? ()
    Questions:
    1.  How can I tell the VMware DbgServer stub to select a different processor core, or even another thread, during a kernel debugging session?
    2.  How can I set/get the context of, or step from, a different thread or processor core?
    Any help will be highly appreciated.
    Thanks,
    Alex.

    Nick-
    In my experience, the only tell is in the result with Lightroom unless your processor is dog slow. You can select PS as the active app. on my Mac and see it working.
    You could also modify the action to not save and close, and then check History in PS.

  • More Processor Questions

    Looking at what processors I can put into a mini.
    Intel has a "Core 2 Extreme" line. Will these work?
    Will the "Core 2 Duo" line work? I don't see much difference in the specs between the two lines...
    The quad-core proc in the "Extreme" line looks very interesting... if the mini would support it.
    Matt
    Mac Mini   Mac OS X (10.4.2)  

    You can't put a Core 2 Extreme on the mini's motherboard, as it requires a totally different socket (LGA775), as does the Core 2 Duo "E" series.
    The mini's motherboard only supports Socket 479 processors like the Core Duo and the "T" series Core 2 Duo.
    Also, the idle power consumption, and thus the heat dissipation, of the Core 2 Extreme is rather high; see this page from AnandTech's comparison between the Core 2 Duo and Core 2 Extreme:
    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=7

  • Extra 256MB of RAM or 133MHz more processor speed?

    Hello all, I am currently watching an eBay auction for a 667 MHz DA processor (I can just pop that into my DA and it would work, right?). I am debating whether or not to buy the faster processor or a 512MB stick to replace one of my 256MB sticks. What do you all feel would give me better performance: processor or RAM?

    What's a good price for a used 667MHz CPU?
    What can you find RAM for? 512MB of PC133 seems to be about $70 US... (anyone have a cheaper RAM source?).
    MacWorld (March 2006) has an article about the benefits of adding RAM. It suggested that you can gauge the need for more memory by launching Activity Monitor and checking the "System Memory" activity under normal use conditions: if the amount of "Free" memory is less than 10% under normal loads, more RAM is called for. The article suggests that more memory won't speed up the actual number crunching done by the CPU (it will still operate at the same clock speed), but seek time is reduced because frequently used code sits in fast memory (as opposed to on a hard drive, which is slower). More RAM means more fast memory. This is a good thing, as it will improve the speed of app switching and general system performance. If CPU-intensive work is to be done, you might want the new CPU though...
    Get both (but check the price of new CPUs from FastMac, macsales (OWC), Giga and Sonnet).
    D

  • No more quad core?!?!?

    OMG. I have been waiting 2 years for an update to the Mac mini, only to find that their highest end model will still likely be less powerful than the one I've got. The Mac mini is (or used to be) an awesome alternative to the all-in-one iMac or the super powerful (and expensive) Mac Pro. But come on — no quad core option? Why would you cripple the mini like that? I don't want an all-in-one. I like separate components (ever have to lug an entire iMac to an Apple Store when the problem is just the motherboard?). This is a major blow. I am severely disappointed, and definitely not buying a new Mac mini.

    I totally agree with the original poster.
    After 2 years with no update, these new Mac Minis are a major disappointment to anyone who has used the previous models. What is Apple thinking? Last week £649 would buy a Quad-Core i7, and this week £799 (£150 more) buys you a Dual-Core i7!?!?!
    Also, before anyone posts anything to do with speed tests, it's not just about how fast these newer Minis are (or aren't) compared with the 2012 models. It's about load distribution over multiple cores and the lack of Hyper-Threading, which (correct me if I'm wrong) requires the i7 chip to have four cores to get additional virtual cores on top.
    As a user of Apple's Logic Pro X, its performance is greatly enhanced when it can distribute its audio engine over multiple cores. Using the Quad-Core i7 2012 model, Logic shows eight cores, which is fantastic for a machine that (previously) cost £649.
    What would it cost me to get equal or better performance now?
    £1,359 for a build-to-order 21.5-inch iMac
    £1,599 for an entry-level 15-inch Retina MacBook Pro (or more on a build-to-order)
    £1,789 for a build-to-order 27-inch iMac
    £1,999 for a top range 15-inch Retina MacBook Pro (or more on a build-to-order)
    £2,499 for an entry-level Mac Pro (or more on a build-to-order)
    This is unacceptable. Apple aren't trying to offer Mac Mini users the ability to stay within the Mini range; they seem intent on forcing users to switch to a different product line, which (if they cannot afford it) could ultimately make them switch altogether to a PC.
    I truly hope that, if the Mac Mini line does continue into the future, Apple will regain the common sense to listen to their end users. I've been a Mac user since 1992 and would never think of switching, but this is truly an epic mistake in Apple's design decisions and one I hope they fix, sooner rather than later.

  • Uneven processor core distribution

    Why does Logic distribute plug-in processing unevenly?
    I am currently getting regular Core Audio system overloads, and my System Performance meter shows cores 1 and 4 running at about 30%, core 2 unused, and core 3 running in the red (90%-100%). I am using all 3rd-party plug-ins, with the exception of 2 instances of EXS24.
    I have tried re-instantiating plug-ins and setting up my local machine as a node. It makes no difference. It seems that all 3rd-party plug-ins want to use a single processor.
    Does anyone know a workaround?
    Logic 7.2.3
    Mac OS X 10.4.10
    Mac Pro 2x2.66 Dual Core
    5GB RAM
    1 x 250GB Internal System Drive
    1 x 500GB Internal Audio Drive
    2 x 500GB Internal Sample Drives
    RME FireFace 800
    Emagic AMT8
    Mac Pro 2 x 2.66GHz Dual   Mac OS X (10.4.8)   5GB RAM, Fireface 800, LP 7.2.3

    Welcome to the forum bof,
    The Mac Pro Quad runs its memory symmetrically, so it may be a case of the RAM being installed unevenly. The RAM has to run in matched pairs for you to get the full 256-bit memory performance. Try taking out the odd 1GB and running with four matched gigs of RAM, and see what happens.
    Secondly, I am guessing you used Migration Assistant to move over to the Mac Pro. If that is the case, the machine will run like molasses, so it is best to do a clean install. Before doing the install, back up your stuff, then erase and reformat the hard drives, then install.
    You only need to install OS X on the boot drive. There is no need for apps on one hard drive and samples on another: put the apps and samples on the same boot disk in Bay 1, and the files/audio on the other hard drive in Bay 2.
    The drives other than the boot drive can be set to RAID, giving twice the speed if two drives are set to RAID 0, and the fourth drive can be used for backup, which can also be set to mirror the RAID set.
    Also, get some form of data-recovery software on a recovery drive; you never know when the sh£t hits the fan. DiskWarrior can be considered.
    Hope all this makes sense,
    Fr.BlayZay.
