Multi-core processing?

Hi, I've heard that multi-core processors, such as those in the new iMac and the Mac Pro, are only any good if the software knows how to use them properly. I want to run Apple's Logic Studio software. Would I be better off with, say, a 3.06 GHz Core 2 Duo or a 2.66 GHz i5? This is for Logic only; I couldn't care less about games or anything else.

Think of it this way: Intel has already announced that it is sunsetting the Core 2 Duo line of processors, so purchasing something that is about to be phased out doesn't make much sense to me, as it will quickly become outdated (some could argue it already is, given the number of quad-core and higher processors coming out).
If anything, purchase one of the quad-core models to get more out of it down the line. More and more software is becoming multi-core aware, so even if an app you use now doesn't support it, it may eventually.

Similar Messages

  • Premiere Pro Queue - Multi-Core Processing

    I am sure you have all thought about this, but it would be nice if Premiere Pro took advantage of the multi-core processing of the Mac in order to run more than one queue item at the same time.
    Currently, each item in the queue has to wait, but now that we have multi-core systems like my Mac Pro with dual quad-core CPUs, it would be nice to be encoding multiple projects at the same time. Or is this a software-encoding limitation rather than a hardware restriction?
    I did notice there is an Enable Parallel Encoding option in Preferences. Is this not multi-core processing?
    Thanks,
    Kenneth


  • HOST Failed errors with multi-core processing in Compressor

    I used Digital Rebellion's Preference Manager to trash all prefs, then Compressor Repair to fix them. Rebooted. Stopped Qmaster. Reset services. Established 5 instances (I have an 8-core 2.8 GHz Xeon Mac Pro). Restarted Qmaster sharing. Opened the FCP file and sent it to Compressor. Selected DVD settings. Chose the cluster I set up in Qmaster. Submitted. Instant "HOST Failed" error messages on the five instances that appeared in Batch Monitor.
    Help.  Tired.  Frustrated.  ******.

    If you really are running 10.6.2, I'd start by running all software updates.
    See what happens if you export a 5-minute section of your movie as a master file and bring that into a new Compressor job.
    Post a screenshot of the Computer Sharing settings window to see whether anyone can spot something amiss.
    Russ

  • Limitations of LabVIEW in Multi-Core Processing

    What are the limitations of LabVIEW when running on a dual or quad processor system? 
    Recent literature shows that LabVIEW has a distinct advantage in running on a multicore system.  That is, assuming that the VI is written such that the threads can run concurrently.  So, what if two VIs share a functional global or a standard global?  And how about shared variables?  Do these have any effect on the ability of LabVIEW to run the threads on separate processors?
    Any other limitations?
    Thanks.

    The main limitation (the one that has bitten me most often) is that functions that use the UI thread (property nodes and many Call Library nodes being the two most common) can block each other, slowing down the system. Since LabVIEW is multithreaded, this problem exists on a single-core system too, but it is less pronounced there because it can't leave a core with no work to do. If you avoid property nodes where possible (especially in DAQ loops) and use Defer Panel Updates when you're changing a large amount, you should be fine. You can set Call Library nodes to be reentrant to avoid running them in the UI thread, but be extremely careful to either add your own locking mechanism or be absolutely sure that the call really is reentrant; very bad things can happen if you're not careful with reentrant DLL calls.
    Note: I don't think DAQmx property nodes are run in the UI Thread but I'm not certain of that.
    The one other "limitation" that comes to mind is that some race conditions (caused by improper code) may show up, or have an increased chance of showing up, on a multi-core system.
    A functional global can only be used by one thread at a time. So if two threads try to use the same functional global at the same time, the first will run and the second will wait (perhaps running other sections of code while waiting; note that this is what makes functional globals safe) until the first is finished. Global variable reads will only ever see completely written values, if that's your concern. But if you have more than one writer you could well have race conditions (this will affect a single-core system as well, and you should try to replace the global with a functional global in that case). Shared variables are about the same as globals but with some extra logic (and overhead) that can be used for handling various race conditions (guaranteeing one writer or adding buffering for readers), and they support communication between separate systems (the main reason to use them). Remember that, in general, if pieces of code don't share a data dependency then they can run concurrently.
    Note: If you have a lot of cores (4+) you may want to adjust the thread configuration in threadconfig.vi.
    Matt W
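The functional-global behavior described above (one caller at a time; a second caller blocks until the first finishes) maps onto a synchronized accessor in a text language. A rough sketch in Java, purely illustrative of the locking semantics, not of LabVIEW internals:

```java
public class FunctionalGlobal {
    private int value;

    // Only one thread at a time may run these, like a non-reentrant VI;
    // a second caller waits until the first is finished.
    public synchronized void write(int v) { value = v; }
    public synchronized int read()        { return value; }

    public static void main(String[] args) throws InterruptedException {
        FunctionalGlobal g = new FunctionalGlobal();
        // Two writers racing: each write is atomic, so a reader sees
        // 1 or 2 afterwards, but never a torn/partial value.
        Thread a = new Thread(() -> g.write(1));
        Thread b = new Thread(() -> g.write(2));
        a.start(); b.start();
        a.join(); b.join();
        int v = g.read();
        System.out.println(v == 1 || v == 2); // true
    }
}
```

With more than one writer the final value is still a race (1 or 2 here), which is exactly the multiple-writer caveat above: mutual exclusion prevents torn values, not logical race conditions.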

  • Hung Threads with multi-threaded processing

    We use TopLink v10.1.3.5
    One of our applications uses multi-core processing and when multiple threads try to access lazily loaded 1-m relationships at the same time, the threads just hang forever. There are no errors but the thread dump says it’s waiting on a condition.
    We do not use cache Synchronization.
    The 1-m relationship is privately owned, uses Batch reading and Indirection.
    The problem occurs intermittently and we cannot reproduce it at will.
    Found this discussion, but since we are on 10.1.3.5, I think we should already have the patch?
    Hung Threads (Toplink 10.1.3)
    Can anyone please provide any help on this.
    Thanks for the help.
    Thread dump -
    at java/lang/Object.wait(Native Method)
    at java/lang/Object.wait(Object.java:167(Compiled Code))
    at oracle/toplink/internal/helper/ConcurrencyManager.acquire(ConcurrencyManager.java:76(Compiled Code))
    at oracle/toplink/internal/identitymaps/CacheKey.acquire(CacheKey.java:85(Compiled Code))
    at oracle/toplink/internal/identitymaps/IdentityMap.acquireLock(IdentityMap.java:85(Compiled Code))
    at oracle/toplink/internal/identitymaps/IdentityMapManager.acquireLock(IdentityMapManager.java:101(Compiled Code))
    at oracle/toplink/internal/sessions/IdentityMapAccessor.acquireLock(IdentityMapAccessor.java:68(Compiled Code))
    at oracle/toplink/internal/sessions/IdentityMapAccessor.acquireLock(IdentityMapAccessor.java:58(Compiled Code))
    at oracle/toplink/internal/descriptors/ObjectBuilder.buildObject(ObjectBuilder.java:502(Compiled Code))
    at oracle/toplink/internal/descriptors/ObjectBuilder.buildObject(ObjectBuilder.java:382(Compiled Code))
    at oracle/toplink/mappings/OneToOneMapping.valueFromRow(OneToOneMapping.java:1020(Compiled Code))
    at oracle/toplink/mappings/DatabaseMapping.readFromRowIntoObject(DatabaseMapping.java:1045(Compiled Code))
    at oracle/toplink/internal/descriptors/ObjectBuilder.buildAttributesIntoObject(ObjectBuilder.java:245(Compiled Code))
    at oracle/toplink/internal/descriptors/ObjectBuilder.buildObject(ObjectBuilder.java:564(Compiled Code))
    at oracle/toplink/internal/descriptors/ObjectBuilder.buildObject(ObjectBuilder.java:382(Compiled Code))
    at oracle/toplink/internal/descriptors/ObjectBuilder.buildObjectsInto(ObjectBuilder.java:678(Compiled Code))
    at oracle/toplink/internal/queryframework/DatabaseQueryMechanism.buildObjectsFromRows(DatabaseQueryMechanism.java:142(Compiled Code))
    at oracle/toplink/queryframework/ReadAllQuery.executeObjectLevelReadQuery(ReadAllQuery.java:483(Compiled Code))
    at oracle/toplink/queryframework/ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:813(Compiled Code))
    at oracle/toplink/queryframework/DatabaseQuery.execute(DatabaseQuery.java:620(Compiled Code))
    at oracle/toplink/queryframework/ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:781(Compiled Code))
    at oracle/toplink/queryframework/ReadAllQuery.execute(ReadAllQuery.java:451(Compiled Code))
    at oracle/toplink/publicinterface/Session.internalExecuteQuery(Session.java:2089(Compiled Code))
    at oracle/toplink/publicinterface/Session.executeQuery(Session.java:993(Compiled Code))
    at oracle/toplink/internal/indirection/QueryBasedValueHolder.instantiate(QueryBasedValueHolder.java:62(Compiled Code))
    at oracle/toplink/internal/indirection/QueryBasedValueHolder.instantiate(QueryBasedValueHolder.java:55(Compiled Code))
    at oracle/toplink/internal/indirection/DatabaseValueHolder.getValue(DatabaseValueHolder.java:61(Compiled Code))
    at oracle/toplink/indirection/IndirectList.buildDelegate(IndirectList.java:202(Compiled Code))
    at oracle/toplink/indirection/IndirectList.getDelegate(IndirectList.java:359(Compiled Code))
    at oracle/toplink/indirection/IndirectList.size(IndirectList.java:703(Compiled Code))

    All I can tell from the stack shown is that you are triggering indirection on a collection to an object that has an eagerly fetched 1:1 mapping to an object that is locked in the cache.  The thread is waiting on the lock owner to finish building the object before it can be returned.
    To tell what is going wrong, you will need to look at the complete thread dump to see what other threads might be building the object whose cache key is locked. For instance, a long-running query might make it appear the system is hung when it is just a bottleneck in the application. If you can, you should also try to call session.getIdentityMapAccessor().printIdentityMapLocks(); at the time of the problem, as it will print information on the locks, such as the object involved and which threads hold them. As for patches: 10.1.3 is an older stream, but I am unaware of fixes not included in 10.1.3.5 that might resolve this; you should go through support to be sure.
    Best Regards,
    Chris

  • Dual Processor Multi-Core Parallel Processing Question

    Hey Guys
    I'm looking for a little clarification on an issue with parallel processing in LabVIEW. If I have a dual-processor machine with two 4-core CPUs, will I be able to access all 8 cores in the LabVIEW environment? I'm presuming it can use any cores the operating system can see?
    Thanks for the help,
    Tom

    Norbert B wrote:
    it is the job of the OS to let applications use all cores if necessary. So for the application itself, it should make no difference whether the system (in hardware) is multi-CPU, multi-core, or even simply Hyper-Threaded.
    Norbert 
    That's true, but I would like to add my two cents here.
    Let's say you have a single loop like

    while (true) {
        // do something
    }

    Then the OS gets no chance to run it in multiple threads, so you will see at most 12.5% CPU load on an 8-core PC, or 50% max on a dual-core PC.
    I have a dual-core PC right now, so let's check it:
    So, as we can see, 50% CPU load is reached (one core is loaded more than the other, but that's another story).
    Well, if we use two while loops, then we get 100% load:
    Of course, if you need to load all 8 cores, then you should have 8 parallel loops.
    Compare the block diagram above with the following:
    We have two Array Min/Max functions, and they are independent, but we still get only 50%.
    Well, you can also get 100% CPU utilization within a single while loop. In the example below you have two subVIs called in the same loop:
    We get 100% here. Importantly, these VIs must be reentrant!
    See what happens if they are not reentrant:
    Now a little bit about Vision. Behind most of the Vision subVIs are DLL calls. Some Vision functions are already optimized for multicore execution. For example, convolution:
    On the block diagram above we have a single loop with one subVI, but both cores are used (because the convolution itself is already optimized for multi-core).
    Remember that not all Vision functions are optimized yet. For example, LowPass is still single-threaded (compare this block diagram with the one above):
    Sure, we can still utilize multiple cores; just perform parallel execution (you have to split the image into two parts, then join them together, and so on):
    Remember that the subVIs should be reentrant, and all DLL calls should be thread safe (not in the UI thread). It's also a good idea to turn off debugging in such experiments to eliminate additional CPU load.
    Another point about 8 cores. As far as I know, LabVIEW (and LabVIEW-based applications) will support only 4 cores within one execution system by default (at least prior to LabVIEW 2009). If you need to utilize all 8 cores, then you should add some lines to LabVIEW.ini. Refer to the following thread, where you can find more details:
    Interpolate 1d slow on 8 core machine
    I hope everything written above is correct.
    Thanks for reading, and best regards,
    Andrey.
    Message Edited by Andrey Dmitriev on 11-27-2009 02:50 PM
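The split-the-work-into-parallel-loops idea above translates directly to text-based languages. A sketch in Java (the array contents and worker count are illustrative): one big loop becomes one independent loop per worker, so the OS can schedule them on different cores.

```java
import java.util.ArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.LongStream;

public class ParallelLoops {
    // Split one big loop into `cores` independent loops, one per worker:
    // the text-language analogue of placing N parallel while loops
    // on the block diagram and joining the results afterwards.
    static long parallelSum(long[] data, int cores) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        int chunk = data.length / cores;
        ArrayList<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final int lo = i * chunk;
            final int hi = (i == cores - 1) ? data.length : lo + chunk;
            parts.add(pool.submit(() -> {
                long s = 0;
                for (int j = lo; j < hi; j++) s += data[j]; // each worker's own loop
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> p : parts) total += p.get(); // join the pieces
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] data = LongStream.rangeClosed(1, 1_000_000).toArray();
        System.out.println(parallelSum(data, 4)); // 500000500000
    }
}
```

The chunks share no data dependency, so (as the post says) they can run concurrently; with one worker you are back to the single-loop case that can load only one core.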

  • Multi Core Mac Users: Dynamic Link, AE, and Warp Stabilization.

    Now that Final Cut Pro X has lost its edge, and more Mac users may be migrating, here is some input:
    Stabilization was the most important 'effect' that prompted me to buy a 12-core Mac, CoreMelt's Lock & Load stabilization AE plug-in, and the Adobe Master Suite. Second was multi-camera editing.
    Additional purchases (to solve a sluggish frame rate) were an NVIDIA GTX 285 video card, more RAM, and a 4 TB RAID 0.
    Now, with my Mac Pro running (after several issues), I would like to point out that:
    1.) The "Adobe Stabilization Server" seems to use only one of the cores. It also seems to be faster without AE multicore processing enabled. Other effects use half the cores, and some seem to use all of them. With stabilization being very time consuming, I would hope that Adobe addresses this like the multiprocessing 'aerendercore' processes seen in Activity Monitor. I find that Activity Monitor is one of the most important applications to be aware of: for developers interested in improving their applications, a godsend; for others who would like to bury their performance issues, a nightmare. So, a multi-core Mac may be redundant if you...
    2.) If you are having any Dynamic Link crashes or lost linked compositions, read my other posts.
    3.) to be cont.

    Ashe?
    If yes, your problem is the result of bad karma.

  • PSE 10 both Organizer and Editor crash on multi-core hardware; set affinity to single cpu mode fixes

    As I have noted in thread http://forums.adobe.com/thread/941128, I have found that both the Editor and the Organizer for PSE 10 crash in a multi-core environment. I am running Windows 64-bit with the latest updates (including SP1); I had the same problems with Windows 7 32-bit.
    The solution is to set the affinity to a single CPU. From the Task Manager, select the 'Processes' tab, find the exe (e.g. PhotoshopElementsOrganizer), right-click, select 'Set affinity...' and reduce the number of CPUs selected (checked) to exactly one. (Under different versions of Windows the exact means of doing this varies, but the essential steps are the same.) It doesn't matter which CPU you select, but if you do this for both the Editor and the Organizer, put them on different CPUs.
    Given that pretty well all hardware these days is multi-core, my assumption is that Adobe tests in a virtualized environment and so does not discover this type of unreliability. Adobe really does need to test its software more thoroughly, as the exact same problem existed in PSE 6, so it is hardly new to them.

    The solution is in the above.
    Should Adobe read this, they should take note that a virtualized environment does not provide a decent fidelity emulation of multi-core CPU systems.  The ONLY way to test the reliability of an application is on REAL hardware.

  • Aggregate Storage And Multi-Threading/Multi-Core Systems

    Please pardon if this question has been asked before, but the Forum search is not returning any relevant results.
    We are in the process of purchasing hardware for an 11.1.2 Essbase environment. We are going 64-bit, on Windows 2008, with either 32 GB or 64 GB of system RAM. The debate we are having is the number of CPUs and cores per CPU. We have not built any ASO databases as of yet, but we plan to launch a major BSO to ASO conversion project once 11.1.2 is off the ground here.
    Historically, with BSO, we did not see performance improvements significant enough to justify the cost of additional CPUs when we ran calcs on multi-CPU systems vs. single or dual CPU systems, even when the settings and design should have taken the most advantage of BSO's multi-threading capabilities. However, it would seem that ASO's design may be able to make better use of multi-core systems.
    I know that there are a lot of factors behind any system's performance, but in general, is ASO in 11.1.2 written well enough to make it worthwhile to consider, say, a four-CPU, 16-core system vs. a two-CPU, four-core system?

    Grand Central Dispatch is in its infancy and not really doing its job yet. I don't think apps have to be written specifically for Hyper-Threading, but they do have to stop doing things they used to do, such as preventing threads from going to sleep or being parked.
    High usage is not necessarily high efficiency; often it's the opposite.
    Windows 7 seems to be optimized for multi-core thanks to a lot of reworking. Intel knows it isn't possible to hand-code everything and that the hardware has to be smarter, too. But the OS has a job, and right now I don't think it does it, or handles memory, properly.
    Gulftown's 12 MB cache will help, and overall it should be about 20% more efficient at doing its work.
    With dual processors (and it doesn't look like there are two QuickPath bridges), data shuffling has led to memory thrashing. It used to be page thrashing when there wasn't enough memory, then core thrashing from having the cores but not having them integrated (the 2008 model is often touted as the greatest design so far, but it was FOUR dual-cores; 2009 was the first with a processor that really was a new, native 4-core design).
    One core should be owned by the OS so it is always available for its own work and housekeeping.
    The iTunes audio bug last year showed how damaging poorly implemented code can be, and how a thread can usurp processing and raise CPU temperature while basically doing nothing, sort of a denial-of-service attack on the processor: those 80°C temps people had.
    All those new technology features are still under development; OpenCL, GCD, and even OpenGL are not tested and mature, but rather a 1.0 foundation for the future, a year ahead of readiness.

  • 64-bit and Support for Multi-Core Computers

    I recently bought one of the new 12-core Mac Pros from Apple, but was disappointed to see the lack of 64-bit support and multi-core optimization in InDesign. When producing magazines with hundreds of pages and hundreds of fonts, multi-core support and 64-bit processing are vital.
    Can anyone at Adobe confirm that they will implement this in CS6? It's long overdue.

    InDesign is a really complicated program. 64-bit support is of marginal use to InDesign. It would be faster (as is the case using InDesign Server 64-bit on Windows), but I'm not sure how much of a difference that would make for the average desktop user.
    Adobe has done a lot of multi-threading work under the hood for CS5 (and PDF export is one of the first fruits of that). What further support they might add later is anyone's guess, but it'll be easier now that a lot of the preliminary work is done. Each feature would be a separate effort, so if there's something SPECIFIC you'd like to see multi-threaded, I suggest you write that up in a feature request. The more detailed you can be about it, the better!
    Harbs

  • SAP Performance on Sun T2000 multi-core servers.

    Hi guys,
    On some of the newer Sun servers, the performance isn't quite as good as you would expect.
    When you are running a specific job, say patching using SAINT, the process works as expected, but the disp+work process seems to be allocated to just one of the server's CPUs rather than being distributed across the server's cores, and doesn't seem to be much, if any, quicker.
    I'm sure some of our zone settings in Solaris 10 must be wrong, but we have followed the documentation from SAP precisely.
    Am I missing some Solaris functionality, or do we have to tell SAP to use multiple cores?
    Just interested in other people's experiences with the newer Sun servers.
    Regards
    James

    An ABAP work process is single-threaded. Basically that means that, CPU-wise, the speed of any running ABAP program depends only on the speed of a single CPU.
    An ABAP system can't leverage the multi-core, multi-thread architecture of the new processors within a single process. You will see a significant performance increase if you install a Java engine, for example, since those engines have multiple concurrent threads running and can thus be processed in parallel, as opposed to the ABAP part.
    What you can do to speed up an import is to set the parameter
    PARALLEL
    in the STMS configuration. Set the number to the number of cores you have available. This will increase the import speed, since multiple R3trans processes are forked. However, during the XPRA phase still only one work process will be used.
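A hypothetical sketch of what this could look like in a tp transport profile. The parameter name comes from the post above, but the file name, the SID prefix, and the value are illustrative assumptions; check your own STMS/tp configuration for the exact location and syntax:

```
# TP_DOMAIN_<SID>.PFL - illustrative fragment only
<SID>/PARALLEL = 8    # fork up to 8 R3trans processes per import
```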
    Markus

  • 1 core vs. multi-core in a web application: performance issue

    Hi,
    I'm having trouble with a web application in a multi cpu server (w2ksp4, iis+wl9.2)
    I have prepared a set of JMeter stress tests, and the application is only capable of finishing 5 transactions on the multi-CPU machine (2 CPUs with 2 cores each), but if I bind the JVM of the WebLogic process to only 1 core, then the application can handle more than 60 transactions without errors.
    I'm on the production side; the developers tell me "hardware problem", but it seems more likely a poorly designed application (as per my previous experience with them).
    The symptoms are lots of NullPointerExceptions and stuck threads in the multi-core scenario.
    Although I have not given a lot of details, have any of you ever seen something similar?
    If anybody needs further information please feel free to ask
    Thanks,
    Antonio

    What operating system are you using?
    Make sure you are trying a certified JDK and OS configuration; see:
    Oracle Fusion Middleware Supported System Configurations
    If you are on a Unix/Linux-based OS, you might be hitting the low-entropy issue; you can add
    -Djava.security.egd=file:/dev/./urandom to JAVA_OPTIONS and retest the issue.
    Best Regards
    Luz

  • Lightroom multi-core: now or next version?

    I am going to purchase a new Mac Pro, and they come in either a single quad-core configuration with a max of 8 GB RAM or a dual quad-core configuration with a max of 64 GB RAM. I am trying to decide whether it is worth spending the extra money on the dual quad-core model, or whether that is just money spent on features I can't enjoy.
    Do the current versions of Lightroom and Photoshop support multi-core processors, or will the next versions for the Mac do that?
    How much RAM does one need with Photoshop, Lightroom, Mail, and Safari open at the same time? Will Lightroom and Photoshop be able to use more RAM in their next versions?

    It's not so much whether an application "recognizes" multiple cores but rather how multi-threaded the application is. For example, if I import (or export) 500 images, can I continue reviewing in Lr's Library module? The answer, of course, is yes.
    Whether a single quad or dual quads make a difference is entirely dependent on your processing.
    Here's some information on hardware requirements for Photoshop: http://www.adobepress.com/articles/article.asp?p=1247538&seqNum=2

  • Clarification of Max 2xCPUs Caveat for OBISE1 for Multi-Core Processors

    Hi all,
    I hope this post finds you all well.
    I'm just hoping for clarification on the Oracle Business Intelligence Standard Edition One caveat that it can use a maximum of 2 CPUs. Can someone please confirm how this affects multi-core processors?
    For example, if I have a machine with 2 x quad-core processors, is this classified as 2 CPUs?
    Kind Regards,
    Gary.


  • Advantages of Multi-Core Processors

    I'm trying to better understand the benefits of a computer with a multi-core processor. I notice that the Mac Pro is available with up to 12 cores! But I'm told that a lot of applications don't fully support multi-threading yet, so surely more cores won't make individual apps that much faster? So is it more to do with multitasking? For instance, if I have a video being encoded at the same time as playing a game and rendering a 3D graphic, will each app be assigned its own core? Or something to that effect?
    Just trying to get my head around it, any help appreciated.
    Thanks,
    Adam

    Think of an application as someone performing a task, say painting a house.
    A team of people can get the house painted faster than one person working alone can, but only if the other members of the team have brushes, paint, ladders, etc. to be able to work in parallel with the first person.
    If the application's authors wrote the application such that it processes a single task from beginning to end, other processors and cores will sit idle while only one core on one processor churns away at that task.
    If the application's authors wrote it with the ability to split up the task into multiple individual chunks, the other cores and other processor will be able to accomplish the task much more quickly.
    At present, most applications written for Mac OS X can only use one or a few cores, so processor speed has the most effect on execution speed. So, at present, the 3.33 GHz six-core Mac Pro is the best system to purchase for the fastest execution of legacy apps like Final Cut Pro and Adobe's CS5 apps like Photoshop.
    On the other hand, newer apps, like Adobe's Premiere Pro, are written to leverage the additional cores, so the 12-core machine would process tasks faster than the six-core machine even though the six-core's processors run at a faster clock speed.
    Another way to look at this is to think of a machine that makes a product.
    To a certain extent you can increase production by speeding up the machine, but eventually you hit a limit of how fast that one machine can run, and the only way to increase production capacity/speed is to add more machines to the factory floor.
    But if say the machines put bottle caps on filled bottles, it doesn't matter how many bottle capping machines you have if you only have one machine that can fill the bottles with product.
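The bottle-capping analogy is essentially Amdahl's law: the serial stage caps the overall speedup no matter how many cores you add. A quick worked illustration in Java (the 10% serial fraction is a made-up example, not a measurement of any particular app):

```java
public class Amdahl {
    // Amdahl's law: speedup = 1 / (s + (1 - s) / n),
    // where s is the serial fraction of the work (the lone "bottle filler")
    // and n is the number of cores (the capping machines).
    static double speedup(double serialFraction, int cores) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / cores);
    }

    public static void main(String[] args) {
        double s = 0.10; // hypothetical: 10% of the work cannot be parallelized
        for (int n : new int[] {1, 2, 6, 12}) {
            System.out.printf("%2d cores -> %.2fx speedup%n", n, speedup(s, n));
        }
        // Even with infinitely many cores, speedup can never exceed 1/s = 10x.
    }
}
```

With s = 0.10, twelve cores yield only about a 5.7x speedup, which is why an app with a large serial portion gains little from the 12-core machine.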
