Dual processors & Logic: Any way to designate CPU usage?

I've been told that a single plug-in can only be assigned to a single CPU in Logic, and in my setup it seems CPU 1 is doing most of the work (75%) while CPU 2 only does about 15%. When I do heavy songs, CPU 1 starts peaking while CPU 2 never gets even close to 50%!! Is there a way for the user to designate which CPU is used for particular plug-ins/processing? I'd like to balance the load between my two processors if at all possible, allowing for more processing power on those heavy songs.
Thanks in advance!

In short, no. That is not a user option. However, I have found that running the song for a bit, then stopping and restarting from the beginning, often balances out the CPU load.
jord

Similar Messages

  • How to reduce CPU usage when working in Logic Pro and Kontakt5

    Hello All,
    I have a MacBook Pro 15" (early 2011, 2.0 GHz). My main activity is making music with it, using Logic Pro 9 in combination with Kontakt 5.
    I would like some suggestions on the setup so I can reduce CPU usage. By the way, I have upgraded my system with a 256 GB SSD and 16 GB of RAM, and have removed the optical drive and installed the previous factory HD in the optical bay. I have placed the Kontakt 5 samples on the HD so I have enough space on the SSD to run the programs fast and smooth!
    If possible, give your advice on an optimum setup, such as rechanneling sounds through busses to aux channels, and other methods to reduce CPU usage. You may even post some good links where I can find more info.
    Thanks in advance
    Wkrgds, Levon

    welshwiggle wrote:
    So Center of Pan,
    If you were to run something like Omnisphere on an active SI track, do you get anywhere near the "1 core CPU" spike I'm getting on my Late 2011 TOTR MBP?
    I still don't really get why L9 on a new MBP performs worse than e.g. your PC.
    Older versions of Logic (e.g. 5.5) don't see individual cores; there is one disk meter and one CPU meter. In some ways it's a more efficient system. This is running on WinXP, which happens to be a very good system for audio. Consider this: XP can be installed in only half a gig of disk space and do 90% of what most operating systems can do today.
    I do see single-core spikes on my single-processor quad-core Mac Pro.  There's no way to get around a processor-intensive plug-in on a single track: a channel strip is tied to a single process (core). I stopped upgrading at 10.6.8 and Logic 9.1.3, because it works. I still get the occasional single-core spike even with multiple plug-ins and tracks, but I can usually break the logjam by quickly starting and stopping Logic 2 or 3 times.
    I will not upgrade Mac hardware or software unless the next version of Logic is outstanding; I'm fine where I'm at for the next year or two. I enjoy recording performers more than anything else, so all I really need is a good-sounding, solid system.

  • Ableton Live 6 vs Logic Pro 8 CPU usage/audio quality.

    Hi,
    Do any users have feedback regarding CPU usage in Ableton Live 6 vs Logic Pro 8? I'm currently running Live 6 on an MBP 2.0 GHz (CD) and am constantly running into audio dropouts etc. I keep reading around the net that Logic Pro 8 is more CPU friendly than Live 6... is this true, overall? Also, did any users ever do a mixing test between the two apps to see if there was an audio difference when rendering the final project to AIFF or WAV? I am thinking of using Live 6 as a scratch pad to "create", as I enjoy its flexibility, and then transferring the project over to Logic; however, that depends on whether there really is a difference in CPU usage and audio quality between the two DAWs.
    Thanks.
    Guess I should also state that I will be running on the new iMacs come tax return time. lol

    Logic 8 is very CPU friendly, I can say that much. I never found Ableton Live 5 to be very taxing either, though. Pro Tools is.
    But like you I use Ableton as a scratchpad (I love how it works) and Logic for the finishing touches. But that's just because I'm not good at Logic yet. Not fast at it.
    Get them both! You'll be glad you did.

  • Logic Pro 9: CPU usage

    I could definitely use some help!
    I am an experienced Logic Pro 9 user, and I am having some serious issues with CPU usage.  I have a Mac Mini Server with a 2 GHz Core i7 and 8 GB of 1333 MHz DDR3 RAM.  With my current system, I run all of my applications and the operating system on a dedicated internal HD.  All my sessions run from a separate internal HD.
    Typically I don't experience many issues with CPU usage, with the exception of this project.  Every 5 seconds I get this error:
    "System Overload.
    The audio engine was not able to process all required data in time.
    (-10011)"
    I have tried everything under the sun and even the old "who's your father" trick to eliminate this issue.  NOTHING has worked yet.  Here's what I've done so far: 
    1. Increase my buffer size to its max of 1024 samples.
    2. Freeze all tracks on which I use heavy processing plug-ins.
    3. Flatten and merge all take folders.
    4. Merge all audio regions.
    5. Delete all unused audio regions in my bin.
    6. Set all channels to "No Input"
    7. Close out of all other programs.
    8. Turn off WiFi.
    9. Turn off Bluetooth.
    I'm currently running 35 audio tracks, 3 aux tracks and 0 MIDI tracks.  I have very little automation, and I'm not using Flex Time. I have never experienced this much CPU usage, and it's driving me INSANE.
    Any help is greatly appreciated.

    What does the Drive/CPU usage (Performance) metering look like in Logic? That overload error can mean a lot of different things. Typically... it's a system bus overload and can be caused by Logic's Space Designer and/or Delay Designer. What is the Process Buffer Range set to?
    Have you tried a project of this size since upgrading to 10.8.4? 
    Curious, what sample rate are you using?
    I like AOL's idea of importing each track; it might narrow things down.

  • MacPro 2008 Logic CPU Usage (EvanLogicBenchmark Test)

    Some days ago I ran the EvanLogicBenchmark test on my Mac Pro Quad 2x2.8 GHz just for fun, but the results were not exactly fun:
    Logic 64-bit: 58 tracks
    Logic 32-bit: only 16 tracks
    Since I use Logic in 32-bit mode, that performance is what counts for me. I know that normally 32-bit mode should show just about the same performance as 64-bit mode, but I have also seen reports from people with the same decrease on a Mac Pro 2008 in 32-bit mode.
    Looking at Activity Monitor I can see a total CPU usage of 35% with the 16 tracks playing in 32-bit mode.
    Maybe someone can confirm this or has found the reason for that lack of performance?
    12 GB RAM / 4x SATA HDs
    OS X 10.6.3 / Logic 9.1.1
    256 buffer / Fireface 800
    2xATI Radeon HD 2600 XT
    MOTU PCI 424

    That cannot be right. When you go to 32-bit, performance is cut to just over a QUARTER of 64-bit performance?? I believe the 'normal' difference should be around 10%, so I'd expect that if you get 58 tracks at 64-bit, you should be able to get around 50+ tracks at 32-bit. In which mode (32- or 64-bit) do you run SL?

  • Sar CPU usage for each processor

    In Linux, mpstat allows you to specify a specific processor, or all processors, when getting CPU % usage. In OS X, sar looks pretty limited, as I can only get CPU usage across all processors. I'm trying to determine how many cores are in use (basically to figure out how many cores are free for jobs if I'm submitting one HandBrake job per core).

    hello,
    I suggest creating a time series chart with the following SQL:
    with p as (select target_guid,property_value cpucount from mgmt$target_properties where property_name='CPUCount' and property_type='INSTANCE')
    select h.target_name,
    h.rollup_timestamp,
    h.average/p.cpucount pct_cpu
    from mgmt$metric_hourly h,p
    where h.metric_name='instance_efficiency'
    and h.metric_column='cpuusage_ps' and h.target_guid in (select target_guid from mgmt$db_dbninstanceinfo where target_type='oracle_database' and host=??EMIP_BIND_TARGET_GUID?? )
    and h.rollup_timestamp between ??EMIP_BIND_START_DATE?? and ??EMIP_BIND_END_DATE??
    and p.target_guid=h.target_guid
    order by 1,2
    You have to select the host target in the report and period is customizable.
    I hope this is correct and answers your question.
    Regards,
    Noel

  • Dual Processors -- How Exactly Does Logic Use Them?

    I've searched this forum but haven't found any definitive, exacting information that would answer this question: How does Logic utilize both processors of a Dual Processor G5?
    Real information -- not guesses -- would be really appreciated. Thanks.

    Rohan, thanks for your detailed reply.
    Actually, I hadn't surmised anything about how the dual-processor architecture was implemented, except for the one idea that, perhaps, the second processor would be used to accelerate screen drawing while the first one ("primary processor"?) dealt with audio matters. But I have the feeling this ain't happenin'.
    I haven't yet put Logic through any serious paces with my new G5/Logic system, so I've yet to see the System Performance meter read anything but the left side of the Audio meter.
    But back to the graphics drawing situation: compared to my G4 dual 450 (with a dual-head Radeon graphics accelerator card) running an older version of Logic, the response of Logic 7 running on my new "whoop-tee-doo" G5 dual 2.7 is deathly slow in certain respects...
    For example, I recorded 8 simultaneous audio tracks. When I hit stop, the spinning pizza wheeled about for at least 30 seconds before rendering the waveform display for each track. And while the pizza is spinning, you're locked out of performing other operations. But on my G4, I could record audio and immediately start doing other things while the waveforms rendered.
    Enough whining for now...
    Tommy, exactly. It's nice to know how your tools work. When buying a computer or software, I don't think that anything concerning their operation should be left unexplained. But that's a lamentation for another day. In the meantime, it's great that this forum exists so Logic users can help each other out.

  • Cannot upgrade my Power Mac G5 Dual Processor (CPU type: PowerPC G5 (3.0)) to Mac OS X 10.4 - does anybody know how to do this?

    Trying desperately to upgrade my Power Mac G5 Dual Processor to Mac OS X 10.4 - what kind of upgrade is needed? How should I proceed?

    Hello Jens,
    Might be easier to get Leo/10.5.x
    Tiger Requirements...
    To use Mac OS X 10.4 Tiger, your Macintosh needs:
        * A PowerPC G3, G4, or G5 processor
        * Built-in FireWire
        * At least 256 MB of RAM (I recommend 1GB minimum)
        * DVD drive (DVD-ROM), Combo (CD-RW/DVD-ROM) or SuperDrive (DVD-R) for installation
        * At least 3 GB of free disk space; 4 GB if you install the XCode 2 Developer Tools  (I recommend 20GB minimum)
    http://support.apple.com/kb/HT1514
    See Tom's, (Texas Mac Man), great info on where/how to find/get Tiger...
    https://discussions.apple.com/message/15305521#15305521
    old: http://discussions.apple.com/thread.jspa?messageID=9755670&#9755670
    Or Ali Brown's great info on where/how to find/get Tiger...
    http://discussions.apple.com/thread.jspa?messageID=10381710#10381710
    Leopard requirements/10.5.x...
        *  Mac computer with an Intel, PowerPC G5, or PowerPC G4 (867MHz or faster) processor
    minimum system requirements
        * 512MB of memory (I say 1.5GB for PPC at least, 2-3GB minimum for IntelMacs)
        * DVD drive for installation
        * 9GB of available disk space (I say 30GB at least)
    You have to call Apple & likely ask for a Product Specialist to get it, if they still have it! It helps to tell them you have an iPad/iPhone & that you can't run 10.6.

  • Why does my animation keep stuttering even though CPU usage oscillates between 60-90%?

    I made a simple animation.
    In the standalone player there is no stuttering, but in the browser every few seconds it stops and jumps, even though CPU usage never exceeds 100% and oscillates between 60 and 90%.
    I tested it on two machines, both dual-core above 4 GHz, and the result is the same.
    Any ideas why, and what to do to fix it?
    Link: lucidwork.com/emma2/index2.php
    Regards
    S.J.

    This is an excerpt from one chapter (optimizing game performance) of a book I wrote (Flash Game Development: In a Social, Mobile and 3D World), and is intended to show that this is not necessarily a simple topic amenable to a forum fix.
    Optimization Techniques
    Unfortunately, I know of no completely satisfactory way to organize this information. In what follows, I discuss memory management first with sub-topics listed in alphabetical order. Then I discuss CPU/GPU management with sub-topics listed in alphabetical order.
    That may seem logical, but there are at least two problems with that organization:
    I do not believe it is the most helpful way to organize this information.
    Memory management affects CPU/GPU usage, so everything in the Memory Management section could also be listed in the CPU/GPU section.
    Anyway, I am also going to list the information two other ways: from easiest to hardest to implement, and from greatest to least benefit.
    Both of those later listings are subjective and depend on developer experience and capabilities, as well as the test situation and test environment. I very much doubt there would be a consensus on the ordering of these lists. Nevertheless, I think they are still worthwhile.
    Easiest to Hardest to Implement
    Do not use Filters.
    Always use reverse for-loops, and avoid do-loops and while-loops.
    Explicitly stop Timers to ready them for gc (garbage collection).
    Use weak event listeners and remove listeners.
    Strictly type variables whenever possible.
    Explicitly disable mouse interactivity when mouse interactivity not needed.
    Replace dispatchEvents with callback functions whenever possible.
    Stop Sounds to enable Sounds and SoundChannels to be gc'd.
    Use the most basic DisplayObject needed.
    Always use cacheAsBitmap and cacheAsBitmapMatrix with AIR apps (i.e., mobile devices).
    Reuse Objects whenever possible.
    Event.ENTER_FRAME loops: Use different listeners and different listener functions applied to as few DisplayObjects as possible.
    Pool Objects instead of creating and gc'ing Objects.
    Use partial blitting.
    Use stage blitting.
    Use Stage3D.
    Greatest to Least Benefit
    Use stage blitting (if there is enough system memory).
    Use Stage3D.
    Use partial blitting.
    Use cacheAsBitmap and cacheAsBitmapMatrix with mobile devices.
    Explicitly disable mouse interactivity when mouse interactivity not needed.
    Do not use Filters.
    Use the most basic DisplayObject needed.
    Reuse Objects whenever possible.
    Event.ENTER_FRAME loops: Use different listeners and different listener functions applied to as few DisplayObjects as possible.
    Use reverse for-loops and avoid do-loops and while-loops.
    Pool Objects instead of creating and gc'ing Objects.
    Strictly type variables whenever possible.
    Use weak event listeners and remove listeners.
    Replace dispatchEvents with callback functions whenever possible.
    Explicitly stop Timers to ready for gc.
    Stop Sounds to enable Sounds and SoundChannels to be gc'd.

  • System Performance window and CPU usage

    The System Performance window shows two bars - what do these mean? My guess would be one per CPU on a dual; on my quad only two show up, but I assume that's because Apple needs to update it to show all four?
    Related question - when doing a torture test on my quad, Logic seems to show almost full CPU usage and choke, while Activity Monitor only has Logic using about 130% of the available CPU (out of 400%). Processor load seems to be split equally between all four (none of which seems to be averaging over 50%). On a dual, how close does Logic get to 200%? Is there anything I can do to get Logic to use more of the available CPU power?
    I'm running an assortment of softsynths, most with Space Designer (do longer verbs in SD use more CPU?).
    Along these same lines, are there any standard Logic benchmarks?

    They indicate the processors the audio work is subdivided across. Most likely Logic just hasn't had a CPU meter update - don't the quads simply operate as pairs per core? Functionally that's similar to a dual: when it comes to the processors the duties are split, so rather than 4 independent procs it works more like 2 coupled pairs...
    Longer IRs require more calculations, and they add up really fast, especially at higher sampling rates, because the IRs are converted to the host (session) rate rather than staying at the rate at which they were recorded.
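    As a rough back-of-the-envelope (assuming plain direct convolution; real convolution reverbs approximate this with partitioned FFTs): a 5-second IR at 44.1 kHz is 220,500 taps, so each output sample needs about 220,500 multiply-adds - roughly 9.7 billion operations per second for a single mono channel. Double the session rate to 88.2 kHz and the cost roughly quadruples, since there are twice as many taps computed twice as often.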

  • CPU usage high when USB enabled

    I have a PC running Windows XP with a MS6340 (V5) motherboard and Athlon XP 1800 processor.
    When getting a poor response I noticed that the Performance tab in Task Manager showed CPU usage wildly fluctuating between 50-60%. This was fairly soon after booting up, and nothing else was running. The Processes tab in Task Manager showed no processes running that came anywhere near accounting for this.
    I downloaded a program called Iarsn TaskInfo2002, which shows that the CPU usage was due to something called "Interrupts Time placeholder" and (to a lesser extent) "DPC Time Placeholder".
    Anyway, the interesting thing is that by disabling my USB ports (via Device Manager) and then rebooting, the problem was gone. (The Interrupts Time and DPC placeholders don't clock anything.)
    Now, however, I have a USB camera and printer, so I MUST get my USB ports enabled again.
    I can enable them after booting, and then the problem is not as bad, but I nearly always forget to disable them again before shutting down, and then next time it's a pain.
    Is this likely to be:
    a motherboard problem,
    a BIOS problem,
    a Windows XP problem,
    or me being stupid?
    Any (polite) advice would be gratefully received.
    SIDDY

    Hi,
    It's not a motherboard problem, as the board only provides the ports, nothing more.
    The program that queries the ports for data is most likely the problem.
    Try uninstalling the printer or camera software and see what happens.

  • High CPU usage for garbage collection (uptime vs total GC time)

    Hi Team,
    We have a very high CPU usage issue in production.
    When we restart the server, the CPU idle time is around 95%, and it comes down as the days go by. Today idle CPU is 30%, and it is just the 6th day after the server restart.
    Environment details:
    Jrockit version:
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_05-b04)
    BEA WebLogic JRockit(TM) 1.4.2_05 JVM R24.4.0-1 (build ari-38120-20041118-1131-linux-ia32, Native Threads, GC strategy: parallel)
    Gc Algorithm: JRockit Garbage Collection System currently running strategy: Single generational, parallel mark, parallel sweep
    Number Of Processors: 4
    Max Heap Size: 1073741824
    Total Garbage Collection Time: 21:43:56.5
    Uptime: 114:33:4.1
    Total Garbage Collection Count: 420872
    Total Number Of Threads: 198
    Number Of Daemon Threads: 191
    Can you guys please tell me what the problem in the server might be that is causing the high CPU usage?
    One more thing I would like to know: why is the total number of threads 198 when we specified the Executor pool size as 25? I agree that WebLogic would create some threads for its own maintenance, but around 160 threads!!! Something is wrong, I guess.
    Santhosh.
    [email protected]

    Hi,
    I'm having a similar problem, but haven't been able to resolve it yet. Troubleshooting is made even harder by the fact that this is only happening on our production server, and I've been unable to reproduce it in the lab.
    I'll post whatever findings I have and hopefully we'll be able to find a solution with the help of BEA engineers.
    In my case, I have a stand-alone Tomcat server that runs fine for about 1-2 days, and then the JVM suddenly starts using more CPU, and as a result, the server load shoots up (normal CPU utilization is ~5% but eventually goes up to ~95%; load goes from 0.1 to 4+).
    What I have found so far is that this corresponds to increased GC activity.
    Let me list my environment specs before I proceed, though:
    CPU: Dual Xeon 3.06GHz
    RAM: 2GB
    OS: RHEL4.4 (2.6.9-42.0.2.ELsmp)
    JVM build 1.5.0_03-b07 (BEA JRockit(R) (build dra-45238-20050523-2008-linux-ia32, R25.2.0-28))
    Tomcat version 5.5.12
    JAVA_OPTS="-Xms768m -Xmx768m -XXtlasize16k -XXlargeobjectlimit16k -Xverbose:memory,cpuinfo -Xverboselog:/var/log/tomcat5/jvm.log -Xverbosetimestamp"
    Here are excerpts from my verbose log (I'm getting some HT warning, not sure if that's a problem):
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Detected SMP with 2 CPUs that support HT.
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Trying to determine if HT is enabled.
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Trying to read from /dev/cpu/0/cpuid
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Warning: Failed to read from /dev/cpu/0/cpuid
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Trying to read from /dev/cpu/1/cpuid
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Warning: Failed to read from /dev/cpu/1/cpuid
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] HT is: supported by the CPU, not enabled by the OS, enabled in JRockit.
    [Fri Oct 20 15:54:18 2006][22855][cpuinfo] Warning: HT enabled even though OS does not seem to support it.
    [Fri Oct 20 15:54:55 2006][22855][memory ] GC strategy: System optimized over throughput (initial strategy singleparpar)
    [Fri Oct 20 15:54:55 2006][22855][memory ] heap size: 786432K, maximal heap size: 786432K
    [Fri Oct 20 16:07:30 2006][22855][memory ] Changing GC strategy to generational, parallel mark and parallel sweep
    [Fri Oct 20 16:07:30 2006][22855][memory ] 791.642-791.874: GC 786432K->266892K (786432K), 232.000 ms
    [Fri Oct 20 16:08:02 2006][22855][memory ] 824.122: nursery GC 291998K->274164K (786432K), 175.873 ms
    [Fri Oct 20 16:09:51 2006][22855][memory ] 932.526: nursery GC 299321K->281775K (786432K), 110.879 ms
    [Fri Oct 20 16:10:24 2006][22855][memory ] 965.844: nursery GC 308151K->292222K (786432K), 174.609 ms
    [Fri Oct 20 16:11:54 2006][22855][memory ] 1056.368: nursery GC 314718K->300068K (786432K), 66.032 ms
    [Sat Oct 21 23:21:09 2006][22855][memory ] 113210.427: nursery GC 734274K->676137K (786432K), 188.985 ms
    [Sat Oct 21 23:30:41 2006][22855][memory ] 113783.140: nursery GC 766601K->708592K (786432K), 96.007 ms
    [Sat Oct 21 23:36:15 2006][22855][memory ] 114116.332-114116.576: GC 756832K->86835K (786432K), 243.333 ms
    [Sat Oct 21 23:48:20 2006][22855][memory ] 114841.653: nursery GC 182299K->122396K (786432K), 175.252 ms
    [Sat Oct 21 23:48:52 2006][22855][memory ] 114873.851: nursery GC 195060K->130483K (786432K), 142.122 ms
    [Sun Oct 22 00:01:31 2006][22855][memory ] 115632.706: nursery GC 224096K->166618K (786432K), 327.264 ms
    [Sun Oct 22 00:16:37 2006][22855][memory ] 116539.368: nursery GC 246564K->186328K (786432K), 173.888 ms
    [Sun Oct 22 00:26:21 2006][22855][memory ] 117122.577: nursery GC 279056K->221543K (786432K), 170.367 ms
    [Sun Oct 22 00:26:21 2006][22855][memory ] 117123.041: nursery GC 290439K->225833K (786432K), 69.170 ms
    [Sun Oct 22 00:29:10 2006][22855][memory ] 117291.795: nursery GC 298947K->238083K (786432K), 207.200 ms
    [Sun Oct 22 00:39:05 2006][22855][memory ] 117886.478: nursery GC 326956K->263441K (786432K), 87.009 ms
    [Sun Oct 22 00:55:22 2006][22855][memory ] 118863.947: nursery GC 357229K->298971K (786432K), 246.643 ms
    [Sun Oct 22 01:08:17 2006][22855][memory ] 119638.750: nursery GC 381744K->322332K (786432K), 147.996 ms
    [Sun Oct 22 01:11:22 2006][22855][memory ] 119824.249: nursery GC 398678K->336478K (786432K), 93.046 ms
    [Sun Oct 22 01:21:35 2006][22855][memory ] 120436.740: nursery GC 409150K->345186K (786432K), 81.304 ms
    [Sun Oct 22 01:21:38 2006][22855][memory ] 120439.582: nursery GC 409986K->345832K (786432K), 153.534 ms
    [Sun Oct 22 01:21:42 2006][22855][memory ] 120443.544: nursery GC 410632K->346473K (786432K), 121.371 ms
    [Sun Oct 22 01:21:44 2006][22855][memory ] 120445.508: nursery GC 411273K->347591K (786432K), 60.688 ms
    [Sun Oct 22 01:21:44 2006][22855][memory ] 120445.623: nursery GC 412391K->347785K (786432K), 68.935 ms
    [Sun Oct 22 01:21:45 2006][22855][memory ] 120446.576: nursery GC 412585K->348897K (786432K), 152.333 ms
    [Sun Oct 22 01:21:45 2006][22855][memory ] 120446.783: nursery GC 413697K->349080K (786432K), 70.456 ms
    [Sun Oct 22 01:34:16 2006][22855][memory ] 121197.612: nursery GC 437378K->383392K (786432K), 165.771 ms
    [Sun Oct 22 01:37:37 2006][22855][memory ] 121398.496: nursery GC 469709K->409076K (786432K), 78.257 ms
    [Sun Oct 22 01:37:37 2006][22855][memory ] 121398.730: nursery GC 502490K->437713K (786432K), 65.747 ms
    [Sun Oct 22 01:44:03 2006][22855][memory ] 121785.259: nursery GC 536605K->478156K (786432K), 132.293 ms
    [Sun Oct 22 01:44:04 2006][22855][memory ] 121785.603: nursery GC 568408K->503635K (786432K), 71.751 ms
    [Sun Oct 22 01:50:39 2006][22855][memory ] 122180.985: nursery GC 591332K->530811K (786432K), 131.831 ms
    [Sun Oct 22 02:13:52 2006][22855][memory ] 123573.719: nursery GC 655566K->595257K (786432K), 117.311 ms
    [Sun Oct 22 02:36:04 2006][22855][memory ] 124905.507: nursery GC 688896K->632129K (786432K), 346.990 ms
    [Sun Oct 22 02:50:24 2006][22855][memory ] 125765.715-125765.904: GC 786032K->143954K (786432K), 189.000 ms
    [Sun Oct 22 02:50:26 2006][22855][memory ] 125767.535-125767.761: GC 723232K->70948K (786432K), 225.000 ms
    vvvvv
    [Sun Oct 22 02:50:27 2006][22855][memory ] 125768.751-125768.817: GC 712032K->71390K (786432K), 64.919 ms
    [Sun Oct 22 02:50:28 2006][22855][memory ] 125769.516-125769.698: GC 711632K->61175K (786432K), 182.000 ms
    [Sun Oct 22 02:50:29 2006][22855][memory ] 125770.753-125770.880: GC 709632K->81558K (786432K), 126.000 ms
    [Sun Oct 22 02:50:30 2006][22855][memory ] 125771.699-125771.878: GC 708432K->61368K (786432K), 179.000 ms
    So, I'm running with the default GC strategy, which lets the GC pick the most suitable approach (single space or generational). It seems to switch to generational almost immediately and runs well - most GC runs are in the nursery, and only once in a while does it go through the older space.
    Now, if you look at [Sun Oct 22 02:50:27 2006], that's when everything changes. GC starts running every second (later on, 3 times a second), doing huge sweeps. It never goes through the nursery again, although the strategy is still generational.
    It's all downhill from this point on, and it's a matter of hours (maybe a day) before we restart the server.
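    Putting rough numbers on that: after the switch the log shows full GCs of roughly 65-243 ms each, and at the three-collections-a-second pace described above that is on the order of half of all wall-clock time spent inside stop-the-world GC - which lines up with the load climbing from 0.1 to 4+.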
    I guess my only question is: What would cause such GC behavior?
    I would appreciate your ideas/comments!
    Thanks,
    Tenyo

  • Kernel Task over 400% CPU usage

    Hello, I have some questions about the Kernel Task in my activity monitor.
    Recently, after a hard disk failure, I had my MacBook Pro wiped and started afresh with Mavericks. All was well, but it has slowed down tremendously in the last couple of days. I checked Activity Monitor and kernel_task is always around 400-470%; sometimes it even peaks above 1000%! I closed all my applications but it is still taking up all my RAM. I cannot work when it is freezing up every 2 seconds. What is the problem?
    Here is the report, if it helps:
    EtreCheck version: 1.9.12 (48)
    Report generated 3 August 2014 12:38:14 BST
    Hardware Information:
      MacBook Pro (13-inch, Early 2011) (Verified)
      MacBook Pro - model: MacBookPro8,1
      1 2.7 GHz Intel Core i7 CPU: 2 cores
      4 GB RAM
    Video Information:
      Intel HD Graphics 3000 - VRAM: 384 MB
      Color LCD 1280 x 800
    System Software:
      OS X 10.9.4 (13E28) - Uptime: 1 day 8:2:57
    Disk Information:
      WDC WD5000LPVX-00V0TT0 disk0 : (500.11 GB)
      EFI (disk0s1) <not mounted>: 209.7 MB
      Macintosh HD (disk0s2) / [Startup]: 499.25 GB (121.06 GB free)
      Recovery HD (disk0s3) <not mounted>: 650 MB
      HL-DT-ST DVDRW  GS31N 
    USB Information:
      Apple Computer, Inc. IR Receiver
      Apple Inc. FaceTime HD Camera (Built-in)
      Apple Inc. Apple Internal Keyboard / Trackpad
      Apple Inc. BRCM2070 Hub
      Apple Inc. Bluetooth USB Host Controller
    Thunderbolt Information:
      Apple Inc. thunderbolt_bus
    Gatekeeper:
      Anywhere
    Kernel Extensions:
      [not loaded] com.m-audio.driver.firewire.dice (2.4.2 - SDK 10.6) Support
    Launch Daemons:
      [loaded] com.adobe.fpsaud.plist Support
    Launch Agents:
      [running] com.maudio.profire.helper.plist Support
    User Launch Agents:
      [loaded] com.google.keystone.agent.plist Support
    User Login Items:
      iTunesHelper
      Google Drive
    Internet Plug-ins:
      FlashPlayer-10.6: Version: 14.0.0.145 - SDK 10.6 Support
      Flash Player: Version: 14.0.0.145 - SDK 10.6 Support
      QuickTime Plugin: Version: 7.7.3
      JavaAppletPlugin: Version: 14.9.0 - SDK 10.7 Check version
      Default Browser: Version: 537 - SDK 10.9
    Safari Extensions:
      Searchme: Version: 1.3
    Audio Plug-ins:
      BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
      AirPlay: Version: 2.0 - SDK 10.9
      AppleAVBAudio: Version: 203.2 - SDK 10.9
      iSightAudio: Version: 7.7.3 - SDK 10.9
    iTunes Plug-ins:
      Quartz Composer Visualizer: Version: 1.4 - SDK 10.9
    3rd Party Preference Panes:
      Flash Player  Support
      PreferencesPane  Support
    Time Machine:
      Time Machine not configured!
    Top Processes by CPU:
          11% Disk Utility
          11% repair_packages
          7% helpd
          5% WindowServer
          3% hidd
    Top Processes by Memory:
      233 MB Finder
      66 MB mds_stores
      49 MB WindowServer
      37 MB mds
      34 MB Disk Utility
    Virtual Memory Information:
      1.22 GB Free RAM
      1.35 GB Active RAM
      394 MB Inactive RAM
      1017 MB Wired RAM
      1.50 GB Page-ins
      100 MB Page-outs

    The kernel is using excessive processor cycles. Below are some possible causes for the condition.
    Throttling
    When it gets high temperature readings from the hardware, or a low-voltage reading from the battery, the kernel may try to compensate by interrupting the processor(s) to slow them down and reduce power consumption. This condition can be due to:
    ☞ a buildup of dust on the logic board
    ☞ high ambient temperature
    ☞ a worn-out or faulty battery in a portable
    ☞ the malfunction of a cooling fan, a temperature sensor, a voltage sensor, or some other internal component
    Note that if the problem is caused by a sensor, there may be no actual overheating or undervoltage.
    If the computer is portable, test with and without the AC adapter connected. If kernel_task hogs the processor only on battery power, the fault is in the battery or the logic board. If it happens only on AC power, charging is causing the machine to heat up. That may be normal on some models. CPU usage should drop when charging is complete.
    Apple Diagnostics or the Apple Hardware Test, though not very reliable, is sometimes able to detect a fault. For more thorough hardware testing, make a "Genius" appointment at an Apple Store, or go to another authorized service provider.
    If nothing is wrong with the hardware, then whatever you can do to improve cooling may help.
    Installed software
    User-installed software that includes a device driver or other kernel code may thrash the kernel. That category includes virtualization software, such as Parallels and VMware, as well as most commercial "anti-virus" products. Some system-monitoring applications, such as "iStat," can also contribute to the problem. You can test for this possibility by completely disabling or removing the software according to the developer's instructions, or starting in safe mode. Note, however, that disabling a system modification without removing it or testing in safe mode may not be as easy as you think.
    Corrupt NVRAM or SMC data
    Sometimes the problem is cleared up by resetting the NVRAM or the SMC.
    External display
    Connecting an external LCD display to some MacBook Pro models while the lid is open may cause this issue. If applicable, test by closing the lid or disconnecting the display. You might get better results with a newer LED display.

  • Lightroom Mobile Sync - Extremely High CPU Usage/Sync Process Causes LR To Lag

    Since my other thread doesn't seem to be getting any responses, I'm pasting what I've found here. Please keep in mind I am not a beginner with Lightroom and consider myself very familiar with Lightroom's features excluding the new mobile sync.
    1st message:
    I'm on Lr 5.5 and using the 30-day trial of Adobe CC to try syncing one collection of slightly more than 1000 images. Despite already having generated the Smart Previews, I can see my CPU crunching through image after image (the rolling-hills pattern in Task Manager) while doing the sync. I was assuming, since I had already created the Smart Previews, that the sync of this collection would begin immediately and be done by simply uploading all of the existing Smart Previews. The Smart Previews folder of the catalog is 871 MB and has stayed the same size despite the CPU obviously doing *something*. As it is now, the sync progress is incredibly slow, almost at a pace as if it were actually exporting full-res JPEGs from the RAW images (as a comparison only; I know this should not be what it's actually doing).
    Another side effect of this is that I'm basically unable to use my computer for other tasks due to the high CPU utilization.
    Win 7 x64 / Lightroom 5.5
    Intel i5 2500k OC'd 4.5GHz
    16GB RAM
    SSD for OS, separate SSD for working catalog and files
    2nd message:
    As a follow-up, Lightroom now thinks all 1026 photos are synced (as shown in the "All Synced Photographs" portion of the Catalog panel), though all images after the 832nd show the per-image sync icon stuck at "Building Previews for Lightroom Mobile", and the status at the top left corner has been stuck at "Syncing 194 photos" for over 12 hours. Is there no option to force another sync via Lightroom Desktop, and also to force the iOS app to manually refresh (perhaps by pulling down on the collections view, like refreshing in the Mail app)?
    3rd message:
    One more update: I went into Preferences and deleted all mobile data, which automatically signed me out of Adobe CC, and then I signed back in. Please keep in mind the Smart Previews were generated long before even starting the trial, and I also manually generated them again many times (it ran through quickly since it found they already existed). Now that I'm re-syncing my collection of 1026 images, I can clearly see Lightroom using the CPU to regenerate Smart Previews which already exist. I have no idea why it's doing this, except that it's making the process of uploading the Smart Previews extremely slow. I hope this time around it will at least sync all 1026 images to the cloud.
    4th message:
    All 1026 images synced just fine, and I could run through my culling workflow on the iPad/iPhone perfectly. Now I'm on a new catalog (my current workflow unfortunately uses one catalog per event) and I see the same problem: Smart Previews already generated, but when syncing, Lightroom seems to regenerate them anyway (or take up a lot of CPU simply to upload the existing Smart Previews). Can anyone else chime in on what their CPU utilization is like during the sync process when Smart Previews are already created?
    New information:
    Now I'm editing a catalog of images that is synced to Lightroom Mobile and notice that my workflow has gotten even slower between photos (relative to what it was before; this is not a discussion about how fast/slow LR should perform). Obviously Lightroom is syncing the edited settings to the cloud, but I can see my CPU running intensively (all 4 cores) on every image I edit, and the CPU utilization graph looks different than before I started using LR mobile sync. It still feels like every change isn't simply syncing an SQLite database change but regenerating a Smart Preview to go with it (I'm not saying this is definitely what's happening, but something is intensively using the CPU that wasn't prior to using LR Mobile).
    For example: I update only the tint, +5, on an image. I see the CPU spike up to around 30-40%, fall back down, go back up to 100%, then back down to another smaller spike while Lightroom says "Syncing 1 photo".  I've attached a screenshot of my CPU graph when doing this edit on just one image. During this entire time, if I try to move on to edit another image, the program is noticeably slower to respond than it was prior to using LR mobile, because there appear to be many more CPU-intensive tasks running to sync the previous edit. This is borne out by un-syncing the collection: the lag immediately goes away.
    I'd be happy to test/try anything you have in mind, because it's my understanding that re-syncing photos that are already in the cloud after an edit should simply update the database file rather than require regenerating any Smart Previews or other image data. If that is indeed what it should be doing, then some other portion of LR is causing the massive CPU usage. If this continues, I will probably not proceed with a subscription, despite the fact that I think LR mobile adds a lot of value and would boost my workflow significantly if it weren't causing the program to lag so badly in the process.
    I know this message was incredibly long and probably tedious to read through, so thanks in advance to anyone who gets through it.
    -Jeff

    Thanks for reporting. I just passed along your info to some of our devs. One of the things that needs to be created (besides Smart Previews) during an initial sync is thumbnails + previews for the LrM app - Guido
    Hi Guido,
    Thanks for pointing this out. I realized the same thing when I tried syncing a collection for offline mode and found that the required space sounded more like Previews + Smart Previews rather than just the Smart Previews.
    greule wrote:
    Hi Jeff, are your images particularly large or do you make a lot of changes which you save to the original file as part of your workflow?
    The CPU usage is almost certainly from us uploading JPEG previews, not the Smart Previews - particularly during develop edits, as these force new JPEG previews to be sent from Lightroom desktop but would not force new Smart Previews to be sent (unless the develop edits modify the original file, making us think the Smart Preview is out of date).
    Guido
    My images are full-resolution ~22 MP Canon 5D Mark III RAW files, so they're fairly large. Even if I make only one basic change, such as an exposure change, I saw the issue. By "save to the original file" I assume you mean metadata changes such as timestamps; otherwise, edits to the images aren't actually written to the original file. I'm only doing Develop module edits, so I shouldn't be touching the original file at all at this point in my workflow.
    I think it makes sense now that you mention that new JPEG previews need to be generated and sent to the cloud due to updated develop edits. My concern is that this seems to be done in real-time as opposed to how Lightroom Desktop works (which is to render a new Standard Preview or 1:1 Preview on demand, which means only one is being rendered at any given time while viewing it in Loupe View or possibly 2 in Compare View). If I edit, for example, 10 images quickly in a row, once the sync kicks in a few seconds later, editing the 11th image is severely hindered due to the previous 10 images' JPEG previews being rendered and sync'd to the cloud (I'm assuming the upload portion doesn't take much CPU, but the JPEG render will utilize CPU resources to the fullest if it can). Rendering Standard/1:1 Previews locally and being able to walk away while the process finishes works because it is at the start of my workflow, but having to deal with on-the-fly preview rendering while I'm editing greatly impacts my ability to edit. Perhaps there can be a way to limit max CPU utilization for background sync tasks?
    It may help to know that I'm running a dual-monitor setup, with Lightroom on a 27" 2560x1440 display maximized to fit the display (the 2nd display is not running LR's second-monitor view). Since I'm using a Retina iPad, the optimal Standard Preview resolution should be the same, at 2880 pixels.
    Thanks again for the help!

  • High CPU usage on select

    This is a spin-off from another thread which started off as slow inserts. What actually happens is that every insert is preceded by a select, and it turned out that the select was the slow part.
    We have a multi-tier web application connecting to the DB using a connection pool and inserting about 100,000 records into a table. To isolate the issue I wrote a PL/SQL block which does the same thing.
    This problem happens every time the schema is recreated, or the table is dropped and created again, and we start inserting. When the table is empty the selects choose a full table scan, and they keep using the same plan even though I gather stats after a few thousand rows. But if, while it's running, I gather stats and flush the shared pool, it picks up a new plan using the indexes and immediately gets faster.
    In either case - the full table scan being slow after a few thousand rows, or just running the same select with no inserts on a table with 100,000 rows - the CPU seems to be pegged to the core.
    The code snippet, repeated again:
    DECLARE
       uname    NVARCHAR2 (60);
       primid   NVARCHAR2 (60);
       num      NUMBER;
    BEGIN
       FOR i IN 1 .. 100000
       LOOP
          uname := DBMS_RANDOM.STRING ('x', 20);
          primid := DBMS_RANDOM.STRING ('x', 30);
          DBMS_OUTPUT.put_line (uname || ' ==> ' || primid);
          -- NOTE: the identifier "primid" is ambiguous here. If TEST has a
          -- PRIMID column, the column name takes precedence over the PL/SQL
          -- variable inside the SQL statement, so "primid = primid" is always
          -- true for non-NULL rows. Renaming or prefixing the variables
          -- avoids that trap.
          SELECT COUNT (*)
            INTO num
            FROM TEST
           WHERE ID = 0
             AND (primid = primid OR UPPER (username) = uname OR uiname = uname)
             AND deldate IS NULL;
          INSERT INTO TEST
               VALUES (0, uname, uname, 1, uname, primid);
          IF MOD (i, 200) = 0
          THEN
             COMMIT;
             DBMS_OUTPUT.put_line ('Committed');
          END IF;
       END LOOP;
    END;
    This is the original thread:
    Re: Slow inserts

    Maybe if you post the actual code, or code as similar as possible to the actual code, the users of this forum may provide you with more appropriate suggestions.
    Personally, I would like to understand the logic behind it, so I can provide you with better advice.
    Anyway, let's focus on the code that we currently have on the table.
    Why does your CPU go high?
    - The huge number of LIOs produced by the SELECT statement, which is executed 100,000 times.
    - Usage of single-row / aggregate functions.
    - You mentioned you have some indexes created on table TEST. Index maintenance consumes some CPU as well.
    Let's focus on the SELECT statement, since it is the most important reason for the high number of LIOs and thus the high CPU usage.
    I built a test case using the query you provided, with one difference: I named the TEST table columns COL1, COL2, etc. And instead of 100,000 cycles, I did 10,000:
    declare
       uname varchar2(60);
       pname varchar2(60);
       num number(5);
       begin
       for i in 1..10000 loop
          uname:=dbms_random.string('x',30);
          pname:=dbms_random.string('x',20);
          select count(1)
          into num
          from test
          where col1=0
          and (col2=uname or upper(col5)=pname or col3=uname)
          and col4 is not null;
          insert into test
          values (0,uname,pname,1,uname,uname);
          if mod(i,200)=0 then
                  commit;
          end if;
       end loop;
      end;
    When I ran a 10046 trace and made a tkprof report, I got the following for the SELECT part:
    SELECT COUNT(1)
    FROM
    TEST WHERE COL1=0 AND (COL2=:B1 OR UPPER(COL5)=:B2 OR COL3=:B1 ) AND COL4 IS
      NOT NULL
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute  10050      0.48       0.43          0          0         50           0
    Fetch    10000     94.07      94.37          0    2910664          2       10000
    total    20052     94.56      94.80          0    2910664         52       10000
    As you can see, the tkprof report indicated high CPU usage and 2,910,664 LIO calls.
    The execution plan (I didn't include that part) indicated FULL TABLE scan on table TEST was used.
    At this point the goal should be to reduce the number of LIO calls.
    For this purpose I created the following indexes:
    TEST_IDX1 on TEST(col2)
    TEST_IDX2 on TEST(col3)
    TEST_IDX3 on TEST(upper(col5)) - a Function Based Index
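    In DDL terms that is something like the following (a sketch using the renamed test-case columns; storage clauses omitted):
    create index test_idx1 on test (col2);
    create index test_idx2 on test (col3);
    create index test_idx3 on test (upper(col5));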
    Let's forget about the statistics at this moment.
    I will use the index_combine hint in the SELECT statement to make the CBO try every combination of the listed (B-tree) indexes and do a bitmap conversion.
    The new code looks like this:
    declare
       uname varchar2(60);
       pname varchar2(60);
       num number(5);
       begin
       for i in 1..10000 loop
          uname:=dbms_random.string('x',30);
          pname:=dbms_random.string('x',20);
          select /*+ index_combine(test test_idx1 test_idx2 test_idx3) */ count(1)
          into num
          from test
          where col1=0
          and (col2=uname or upper(col5)=pname or col3=uname)
          and col4 is not null;
          insert into test
          values (0,uname,pname,1,uname,uname);
          if mod(i,200)=0 then
                  commit;
          end if;
       end loop;
      end;
    After running the 10046 trace and creating the tkprof report again, I got the following result:
    SELECT /*+ index_combine(test test_idx1 test_idx2 test_idx3) */ COUNT(1)
    FROM
    TEST WHERE COL1=0 AND (COL2=:B1 OR UPPER(COL5)=:B2 OR COL3=:B1 ) AND COL4 IS
      NOT NULL
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute  10000      0.79       0.70          0          0          0           0
    Fetch    10000      0.68       0.71          3      59884          0       10000
    total    20001      1.47       1.42          3      59884          0       10000
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 54     (recursive depth: 1)
    Rows     Row Source Operation
      10000  SORT AGGREGATE (cr=59884 pr=3 pw=0 time=1188641 us)
          0   TABLE ACCESS BY INDEX ROWID TEST (cr=59884 pr=3 pw=0 time=1012723 us)
          0    BITMAP CONVERSION TO ROWIDS (cr=59884 pr=3 pw=0 time=915796 us)
          0     BITMAP OR  (cr=59884 pr=3 pw=0 time=820728 us)
          0      BITMAP CONVERSION FROM ROWIDS (cr=20039 pr=1 pw=0 time=258455 us)
          0       INDEX RANGE SCAN TEST_IDX1 (cr=20039 pr=1 pw=0 time=157107 us)(object id 52988)
          0      BITMAP CONVERSION FROM ROWIDS (cr=19902 pr=1 pw=0 time=198466 us)
          0       INDEX RANGE SCAN TEST_IDX2 (cr=19902 pr=1 pw=0 time=109999 us)(object id 52989)
          0      BITMAP CONVERSION FROM ROWIDS (cr=19943 pr=1 pw=0 time=198730 us)
          0       INDEX RANGE SCAN TEST_IDX3 (cr=19943 pr=1 pw=0 time=107200 us)(object id 52990)
    As you can see, the number of LIO calls has fallen dramatically, and the CPU time is significantly lower.
    The second run completed in a few seconds, compared to the previous one, which needed about 100 seconds to complete.
    Please be aware that this is just an example of tuning the code that you provided.
    This solution might not be suitable for your actual code, since we don't have any information about it. That's why it is important to give us as much information as you can, so you can get the most appropriate answer.
    If the test code is similar to the actual one, you should focus on reducing LIO calls.
    To achieve that, you may want to use hints to force an index to be used.
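    Or, before resorting to hints, make sure the optimizer is working with fresh statistics - something along these lines (a sketch, assuming TEST lives in your own schema and that flushing the shared pool is acceptable on a test system):
    -- gather up-to-date statistics on the table and its indexes
    exec dbms_stats.gather_table_stats(ownname => user, tabname => 'TEST', cascade => true);
    -- invalidate cached cursors so the next hard parse can pick up the new plan
    alter system flush shared_pool;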
    Cheers,
    Mihajlo
