Turn on Grand Central Dispatch?

I recently upgraded to snow leopard. I have read on the Apple website Grand Central Dispatch will take advantage of both cores to make my computer more efficient at running tasks. Do I have to set up Grand Central Dispatch to do this, or does it just automatically do what it needs to in the background?
I have a Late 2006 iMac, 20", Intel Core 2 Duo, 2.16 GHz, 3 GB RAM, 64-bit.
Thanks, Adam

Hi PT1;
I think if you check into Grand Central Dispatch a bit more, you will find that it is an API that allows programmers to take advantage of multiple cores. It is not something you can turn on. It is more a matter of finding applications that have been coded using GCD so that they are multi-core aware.
Allan
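Allan's point can be illustrated with a sketch. GCD itself is a C API (developers submit blocks to dispatch queues with calls like dispatch_async), so there is nothing for a user to switch on. As a rough analogy for the model, not actual GCD code, here is the same submit-tasks-and-let-the-runtime-spread-them idea using Python's concurrent.futures:

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # one small unit of work; GCD would call this a "block"
    return n * n

# The pool plays the role of a dispatch queue: the runtime, not the
# programmer, decides how tasks are spread across threads/cores.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(work, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The key point is that application code has to be written this way in the first place; an app that does all its work in one big loop gains nothing from extra cores, with or without GCD.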

Similar Messages

  • How to tell if Grand Central Dispatch is running

    Hi, I just installed Snow Leopard and it is fantastic; it runs much faster. But I was wondering if there is any way to tell if Grand Central Dispatch is running. I have been reading about it and was wondering if it was installed when SL was installed. Can anyone shed light on this?

    It's built into Snow Leopard.
    It's a tool developers use to write code to take advantage of multicore processors.
    It is not obvious to the user, nor can you "see" it working, but it is there.

  • Hi! My phone speaker is not working, so I can't hear people when they call me. I bought my iPhone 4s in February 2013 at the Grand Central Apple Store. I would like to know if I can take it to an Apple Store in Spain so they can fix it or replace it. Thanks

    Hi! My phone speaker is not working, so I can't hear people when they call me. I bought my iPhone 4s in February 2013 at the Grand Central (NYC, USA) Apple Store. I would like to know if I can take it to an Apple Store in Spain so they can fix it or replace it, or whether I need to find a way to take it back to the US, where I bought it.
    Thanks!!!

    You will need to return the phone to the US. iPhone warranty is only good in the country of purchase, with the exception of the EU.

  • Bypassing Grand Central Press 1 prompt with PAP2-NA

    I am using PAP2-NA (Firmware Version: 3.1.9(LSc)) with the Gizmo Project and Grand Central. I am trying to bypass the Grand Central press "1" prompt on incoming calls. With Sunrocket Gizmo, the following is apparently the way to do this. What is the equivalent way of doing this with PAP2-NA? I see only 2 DTMF related options in PAP2-NA administration.
    ---DTMF Tx Method (InBand/AVT/INFO/Auto)
    ---Hook Flash Tx Method (None/AVT/INFO)
    I've tried all combinations of the above options but am still prompted.
    Approach with Sunrocket Gizmo from http://www.kalyansuman.com/2007/08/free-voip-with-gizmo-project-grand.html
    4) Change Gizmo settings using Telnet (to avoid the press 1, 2, 3 and 4 instructions on the phone)
    Telnet into your Gizmo - Go to Command Prompt and type Telnet.
    At the Telnet prompt, type 'open 192.168.251.1'
    This will prompt you for a username and password; give 'admin' for the username and the password that you set in step 3.
    1) Press C2 and make sure RFC2833 (SDP and 2833 packets) is ALWAYS OFF
    2) Press Cs
    3) Look for Use SIP INFO for DTMF = Yes
    4) If it says no, then press c to change settings.
    4.1) Look for 18. SIP INFO for DTMF
    4.2) Press 18 to set it to yes
    4.3) Press y at the following prompt Use SIP INFO for transmitting DTMF digits?[y/n]
    4.4) Press w to write setting to Flash
    You should now be accepting inbound calls without needing to press 1, 2 and 3.

    Bump... I have the same problems (even worse) with my PAP2T-NA and GizmoProject/Grand Central. When I Click2Call (on GrandCentral) to my phone and some 1-800 numbers, DTMF tones didn't work (such as a bank's menu) unless I used "DTMF Tx Method: InBand", and that solved the problem, but it created a new one: DTMF didn't work on Grand Central's menu (press 1 to accept, 2 to reject, and so on). So when somebody called me via GrandCentral, I couldn't answer the phone. I haven't found any solutions yet, and I can't believe I'm the only one having this issue. If SunRocket Gizmo is fixable with that approach, I think it should be addressable in the PAP2T. Any suggestions? My current PAP2T-NA firmware version is 5.1.6(LS) (just updated). How should I fix it with these options? I need to get both functions working. (Bypassing the GrandCentral "press 1" would be the ultimate goal, but only if DTMF is working correctly.)
    ---DTMF Process INFO: Yes/No
    ---DTMF Process AVT: Yes/No
    ---DTMF Tx Method: InBand/AVT/INFO/Auto
    ---DTMF Tx Mode: Normal/Strict
    ---Hook Flash Tx Method: None/AVT/INFO
    Thanks! Message Edited by neokao on 02-10-2008 10:13 PM

  • Open CL, Grand Central and multiple gpu's

    The information I find on the topic is a bit hazy. I was wondering if these Snow Leopard technologies would benefit from the use of multiple graphic cards, as this is not currently the case in Leopard. That would greatly help in exporting files in Aperture (raw decoding, adjustment applications), and I'm sure it would help with many other cpu intensive tasks.

    At this point they are just ideas on the drawing board, not here yet, and the groundwork and foundations probably aren't in place either, so it is pie-in-the-sky prototypes and pure speculation.
    Bottom line: no one can say what, when, if.
    I think Adobe and Nvidia did a demo of CS4, and Nvidia has a GPU ($2000) aimed just at the GPGPU area.
    Also, Leopard isn't fully 64-bit; Snow Leopard will be, and it will require 64-bit drivers. CS5 will bring a 64-bit code base to OS X.
    http://techreport.com/discussions.x/15571
    http://www.appleinsider.com/articles/08/04/03/adobe64_bit_mac_creative_suite_apps_wont_happen_till_v50.html
    http://www.tgdaily.com/content/view/37643/128/

  • Hyperthreading all "Hype"

    I've been reading a lot about how "hyper threading", because it's virtual, is really all hype... and in some cases it actually slows down your computer. Many people are suggesting turning it OFF instead of having it interfere with your productivity.
    The more I read about it, the more I think I'm going with the quad-core i5 27" iMac, which doesn't have hyperthreading but DOES have Turbo Boost, which from what I've also read is much more useful.
    The i3 iMacs are considered the low end and they DON'T have Turbo Boost, but DO have hyperthreading, just as the dual-core i5 iMacs do too. I know, I know, the top-of-the-line i7 iMac has hyperthreading too.
    But most programs don't even take advantage of it, really. And for the few that do get a bump from it, it's only negligible... while the hyperthreading will actually SLOW DOWN other functions.
    Basically, what I've read says that hyperthreading was created to give the illusion of more cores, but that once programs start taking advantage of multiple cores, it'll be pretty much useless.
    I do a lot of video editing in iMovie and animation in light programs like Anime Studio, as well as Photoshop and web design.
    I know hyperthreading is supposed to help in certain tasks like rendering video, but not enough for me to have it gumming up other things... so I've decided to get the quad-core i5, which doesn't have this feature... but does have Turbo Boost.

    A lot of people like to think that they're some kind of computer scientist, but really they're just idiots who don't fully understand what it is they're opining about. Just look at all the bad advice that still persists about computers.
    There's all the tips about prolonging laptop battery life, and proper care for laptop batteries. Most of this information pertains to NiCD batteries, which haven't been used in laptops for probably 15 years. Aside from cordless phones and power tools, you'd be hard pressed to find a NiCD battery anymore.
    A number of years back, people were convinced that you should never have more than a certain amount of RAM because it would slow your system down, this being due to a lack of L2 cache RAM. Systems only came with 512K at most, and this was enough to cache X amount of memory. I forget the exact number, but it really doesn't matter. What these people failed to comprehend was that while it is true that uncached RAM would be slower, it would still be many times faster than the alternative of using swap space on the HDD. So it would NOT slow your system down; it would only be slower in relation to having fully cached RAM.
    There are all kinds of myths about the Windows registry, and the need for removing temp files to maintain system stability, how defragmenting both speeds up your system and prolongs the life of your HDD. Like with just about every other example here, all it really takes is a few seconds of thought to see how these ideas all fall apart very quickly when you start really analyzing them. The problem is, very few people do.
    So, we get to hyperthreading, which is a means by which some of the instruction logic of each core is duplicated, allowing each core to process two threads at a time. I can see that a lot of the bad information from the Pentium 4 days, when hyperthreading made its debut, is still making the rounds.
    The real problem here isn't so much hyperthreading as it is the fundamental problem with multi-core CPUs in general. That problem is that there is only one path in and out of the CPU. One set of interrupts, that has to be shared by all the cores. Combine that with the many challenges associated with multi-threaded program creation, which applies to the OS process scheduler as well as user space apps, and it's easy to come to an incorrect conclusion.
    The problem is really more how operating systems treat multi-core CPUs like separate processors because it was easier than reworking a lot of the internal logic to make a distinction between the two. Problem is, there is a pretty big fundamental difference. Each CPU in older SMP systems, had their own dedicated socket and data pathways, and operated completely independently of one another. Multi-core CPUs are not the same. They're more of a time share. If you have a quad core system, and each core finishes processing data at the exact same time, only one core can actually send out its data at a time. And only one core can get fresh data to start processing. The cores all have to share a single pathway in and out of the CPU. So the more cores you start cramming into a single CPU, the more you're inclined to start piling on the workloads, and the more you're likely to run into the situation where at least one core is waiting to send out data. If you've ever been to a larger city, think of it like metering lights to get on the freeway. You may have 2-4 lanes where cars can queue up, but only one car can enter the freeway at a time.
    The operating system and its process scheduler -- which decides which core processes go to, and in what order -- do not make the distinction between cores and CPUs. It would take a huge reworking of the internal logic to do this, and it would be a very expensive undertaking. You'd have to create several different test schedulers and then see which ones work better in the widest number of situations. You'll never have one that's better in every situation; that's just a pipe dream only people with no understanding of this sort of thing would believe. Then once you have the winner, you have to test the crap out of that thing, because a bug in the process scheduler will have very far-reaching implications.
    The other part of this problem, is very few programs are multi-threaded. In part, because multi-threading very quickly gets to be unwieldy. Computer scientists all over the world are still trying to figure out a good model for how to tackle this particular problem, but it's not an easy one to solve. Just as a very simple example, let's take a video player, like say VLC, and say we want to make it multi-threaded. Well, the obvious solution is to just break off the individual tasks into different threads. So one thread for video decoding, another thread for audio decoding. Simple, right? Well, not so much. There is considerably less data to process with audio decoding compared to video decoding, so if you simply have one thread decoding audio and another doing video, the audio will drift further and further out of sync with the video until it likely finishes a couple minutes into an hour long video. So, you need a third thread which coordinates the actions between the other two. This is assuming a very simple player with no UI, no ability to pause or stop the video until it reaches the end of the file, because all of that would require at least a fourth thread, maybe even a fifth. This is an extremely simplistic overview of the situation. There's also the issue of what if two threads need to be able to share data. How does one thread communicate to the others that it encountered some kind of error?
    It's very easy for the number of threads in a program to explode, and keeping track of all of that is very difficult for a programmer. So most programs are simply one giant process. It's very easy for the programmer, but very inefficient for multi-core CPUs. All that data will go to a single core and tie it up for a long time. Meanwhile the process scheduler is sending data to other cores -- if it's smart... Sometimes the process scheduler will not even take into account whether or not a core is busy and just start queuing up processes behind some big process, even though other cores are sitting idle -- which then have to wait as soon as that big chunk of data is done and needs to go out. Using the metering light example, think of it like a semi pulling two trailers. It can't exactly accelerate very quickly, so cars who get sent in after it, will simply have to wait.
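    The audio/video coordination problem described above can be sketched in miniature. This is a hypothetical toy, not how a real player like VLC works: two "decoder" threads produce frames at their own pace, and small bounded queues act as the coordinating third party that keeps a fast producer from racing ahead of the consumer.

```python
import threading
import queue

# Small buffers: a producer that gets too far ahead blocks on put().
audio_q = queue.Queue(maxsize=4)
video_q = queue.Queue(maxsize=4)

def decode(name, out_q, frames):
    # Stand-in for a decoder thread producing frames.
    for i in range(frames):
        out_q.put((name, i))   # blocks when the buffer is full
    out_q.put(None)            # end-of-stream marker

threads = [
    threading.Thread(target=decode, args=("audio", audio_q, 10)),
    threading.Thread(target=decode, args=("video", video_q, 10)),
]
for t in threads:
    t.start()

# The "presentation" loop consumes one frame from each stream in
# lockstep, so audio can never drift ahead of video here.
presented = []
while True:
    a, v = audio_q.get(), video_q.get()
    if a is None or v is None:
        break
    presented.append((a, v))

for t in threads:
    t.join()

print(len(presented))  # 10 synchronized audio/video pairs
```

    Even this toy needs three cooperating parties and careful end-of-stream handling; adding pause, seek, or error propagation multiplies the coordination work, which is exactly the thread explosion the paragraph above describes.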
    Grand Central Dispatch (GCD), a new scheduling technology introduced in 10.6, should at least solve part of this problem. It constantly monitors load levels on the different cores and attempts to spread work out as evenly as possible among them. It's a rather elegant partial solution to the larger problem. It's also the easy part of the problem to solve, but it's at least something.
    You also have a lot of really bad information floating around from the Pentium 4 days. Back then, the FSB speed was MAYBE 533 MHz, and really that was a bit of a trick, because that number was arrived at by doubling the actual rate. It was justified by saying that the CPU could both send and receive data on the same cycle, but the outbound rate was half of that. Combine that with the design decisions Intel made with the Pentium 4 that sacrificed performance for clock speed, and it's easy to see how hyperthreading was a good idea simply introduced before the infrastructure was there to support it. Intel paid for it, and so did the idea of hyperthreading, which now has people convinced that, despite major architectural changes in the Core i3/5/7 lines compared to the Pentium 4, everything is still the same. Another case of people who don't know what they're talking about opining on a subject anyway. These days, the memory controller has been integrated into the CPU, and the FSB speeds are 3-4X (minimum) what they were back in the Pentium 4 days.
    So, to wrap this up a bit, let's just say that this is a very complex situation. There's plenty of blame to go around, so laying it all at the feet of hyperthreading is a bit much. Program vendors, including the OS makers, really deserve most of it. They've been lazy and complacent. The OS makers knew AMD and Intel were going to be pushing multi-core systems for years, but never did anything about it until well AFTER they arrived. Same with application vendors; they had a couple years' heads-up, but they just sat around twiddling their thumbs.
    If you want to get an i5 system, that's your decision. If you want to believe a lot of outdated information, or info that ignores the larger picture, that's also your choice. But do please refrain from spreading this misinformation. I'm sure you meant well, and no offense, but clearly you are about as qualified to speak on this subject as the people you're talking about. That is to say, not in the least. I only consider myself qualified enough to talk about the broad overview of the subject. There are a great many details which just make my head hurt if I try and think about them.
    A couple of parting things. The Core i3 should have neither Turbo Boost nor hyperthreading. CPU companies do this sort of thing all the time. Every CPU starts out as a top-of-the-line model, but only a certain number of those will pass muster, so they come up with lesser models with less stringent requirements to pass testing. So, a CPU may start out as a Core i7, but maybe the hyperthreading doesn't work; rather than scrap it, they sell it as a Core i5. Another model may not be stable at speeds over, say, 2.6 GHz, so it becomes a lower-model i7. This is actually a good thing: it reduces the amount of waste going into landfills, and it makes CPUs cheaper for us. Win-win. Also, it's two words: a lot.

  • How do I speed up my MacBook Pro 3,1 (2007)

    I have a MacBook Pro 3,1 (year: 2007) running OS X 10.6.8 (Snow Leopard) with a 2.2 GHz Intel Core 2 Duo processor. It has 1.5 GB of 667 MHz DDR2 SDRAM with a bus speed of 800 MHz. It is still in good running condition and I am reluctant to let go of it. However, given the pace of technology, it is slowing down rapidly and I would very much like to revive its operating capabilities and give it a bit of a new life.
    Does anyone have any advice as to what I could do to make it run faster? Given its age, I am not looking to spend massive amounts of money on it, perhaps changing its RAM capacity to increase it, or anything I could use to speed it up.

    So would this be the 15":
    http://lowendmac.com/2007/15-in-macbook-pro-mid-2007/
    or the 17"?:
    http://lowendmac.com/2007/17-in-macbook-pro-mid-2007/
    "requires Mac OS X 10.4.9 Tiger or later
    Mac OS X 10.6 Snow Leopard compatibility
    Grand Central Dispatch is supported.
    64-bit operation is supported.
    OpenCL is supported.
    OS X 10.8 Mountain Lion compatibility
    AirPlay Mirroring is not supported.
    AirDrop is not supported.
    Power Nap is not supported."
    In both instances Low End Mac says:
    "RAM: 2 GB, expandable to 6 GB using PC2-5300 DDR2 RAM"
    You could also do this:
    http://eshop.macsales.com/shop/internal_storage/SSD/Mercury_Electra_3G_Solid_State
    That would speed it up considerably, though only at the 3 Gb/s rate.
    You should also upgrade to Lion or Mountain Lion, though they might be hard to find.

  • I have bought a second-hand Late 2008 MacBook Air and can't install iPhoto, as it requires 10.9 or higher; I can't seem to update any further than 10.7.5. Help???

    I have bought a second-hand Late 2008 MacBook Air and can't install iPhoto, as it says it requires 10.9 or higher. I can't seem to update any further than 10.7.5. Can anyone help, please???

    Thank you for your response... Being new to this, I have had a look and it says the following:
    This system can run the last version of OS X 10.8 "Mountain Lion" as well as the current version of OS X 10.9 "Mavericks," but does not support the AirDrop, AirPlay Mirroring, or Power Nap features. It is not supported booting into 64-bit mode when running Mac OS X 10.6 "Snow Leopard." It does support "OpenCL" and Grand Central Dispatch introduced with Mac OS X 10.6 "Snow Leopard."
    Please note that OS X "Lion" 10.7 and subsequent versions of OS X, like Mountain Lion and Mavericks, are not capable of running Mac OS X apps originally written for the PowerPC processor, as these operating systems do not support the "Rosetta" environment. To run PowerPC applications on this Mac, it will be necessary to use Mac OS X 10.6 "Snow Leopard" or earlier.
    What do I do next?
    Thank you once again for your reply.
    Gaz

  • Weblogic.kernel.Default not using all threads

    We are running weblogic 8.1 sp4 on HP Tru64 5.1.
    We have an admin server and 3 managed servers in a cluster.
    When monitoring the weblogic.kernel.Default execute threads through the WebLogic console, threads 0 to 15 have zero total requests, and the user is marked '<WLS Kernel>'. Threads 16 to 24 share the entire workload, with thousands of requests shared between them, and have the user marked as 'n/a' (when idle).
    I have done a thread dump, and threads 0 to 15 just seem to be idle.
    This is the case for the admin server, server 1 and server 3. But server 2 actually shows all threads 0 to 24 as sharing the workload, and the user is 'n/a' for all threads.
    Is there a reason for this?
    Below is a snippet of the thread dump showing a thread that has user '<WLS Kernel>':
    **** ExecuteThread: '3' for queue: 'weblogic.kernel.Default' thread 0x207d8e80 Stack trace:
    pc 0x03ff805c2cc8 sp 0x200026ecc00 __hstTransferRegisters (line -1)
    pc 0x03ff805b2384 sp 0x200026ecc00 __osTransferContext (line -1)
    pc 0x03ff805a3d20 sp 0x200026ecd90 __dspDispatch (line -1)
    pc 0x03ff805a309c sp 0x200026ecde0 __cvWaitPrim (line -1)
    pc 0x03ff805a0518 sp 0x200026ece80 __pthread_cond_wait (line -1)
    pc 0x03ffbff31558 sp 0x200026ecea0 monitor_wait (line 625)
    pc 0x03ffbff5ea48 sp 0x200026ecf10 JVM_MonitorWait (line 1037)
    pc 0x00003001304c sp 0x200026ecf20 -1: java/lang/Object.wait(J)V
    pc 0x0000302047dc sp 0x200026ecfd0 2: java/lang/Object.wait()V
    pc 0x000030204784 sp 0x200026ecfe0 8: weblogic/kernel/ExecuteThread.waitForRequest()V
    pc 0x0000302044ec sp 0x200026ed000 43: weblogic/kernel/ExecuteThread.run()V
    pc 0x03ffbff91b28 sp 0x200026ed030 unpack_and_call (line 100)
    pc 0x03ffbff89ca4 sp 0x200026ed040 make_native_call (line 331)
    pc 0x03ffbff27bb0 sp 0x200026ed0f0 interpret (line 368)
    pc 0x03ffbff36bf0 sp 0x200026ed9d0 jni_call (line 583)
    pc 0x03ffbff378d8 sp 0x200026eda60 jni_CallVoidMethodA (line 70)
    pc 0x03ffbff37980 sp 0x200026eda90 jni_CallVoidMethod (line 88)
    pc 0x03ffbff2ecf4 sp 0x200026edb10 java_thread_start (line 592)
    pc 0x03ffbff2e720 sp 0x200026edb30 thread_body (line 460)
    pc 0x03ff805cf278 sp 0x200026edc00 __thdBase (line -1)

    Grand Central Dispatch is in its infancy and not really doing its job yet. I don't think apps have to be specifically written for HT, but they do have to stop doing things they used to do, like preventing threads from going to sleep or being parked.
    High usage is not necessarily high efficiency; often it's the opposite.
    Windows 7 seems to be optimized for multi-core thanks to a lot of reworking. Intel knows it isn't possible to hand-code for this, and that the hardware has to be smarter, too. But the OS has a job, and right now I don't think it does it properly, or handles memory properly.
    Gulftown's 12 MB cache will help, and overall it should be about 20% more efficient at doing its work.
    With dual processors, and it doesn't look like there are two QuickPath bridges, data shuffling has led to memory thrashing. It used to be page thrashing from not having enough memory, then core thrashing from having the cores but not having them integrated (the 2008 model is often touted as the greatest design so far, but it was FOUR dual-cores; 2009 was the first with a processor that really was a new design and natively 4-core).
    One core should be owned by the OS so it is always available for its own work and housekeeping.
    The iTunes audio bug last year showed how damaging badly implemented code can be, and how a thread can usurp processing and raise CPU temperatures while basically doing nothing, sort of a denial-of-service attack on the processor; those were the 80°C temps people had.
    All those new technology features are still under development; it's not as if OpenCL, GCD and even OpenGL are tested and mature. Rather, they are a 1.0 foundation for the future, a year ahead of readiness.

  • Aggregate Storage And Multi-Threading/Multi-Core Systems

    Please pardon if this question has been asked before, but the Forum search is not returning any relevant results.
    We are in the process of purchasing hardware for an 11.1.2 Essbase environment. We are going 64-bit, on Windows 2008, with either 32 GB or 64 GB of system RAM. The debate we are having is the number of CPUs and cores per CPU. We have not built any ASO databases as of yet, but we plan to launch a major BSO to ASO conversion project once 11.1.2 is off the ground here.
    Historically, with BSO, we did not see performance improvements significant enough to justify the cost of additional CPUs when we ran calcs on multi-CPU systems vs. single or dual CPU systems, even when the settings and design should have taken the most advantage of BSO's multi-threading capabilities. However, it would seem that ASO's design may be able to make better use of multi-core systems.
    I know that there are a lot of factors behind any system's performance, but in general, is ASO in 11.1.2 written well enough to make it worthwhile to consider, say, a four CPU, total 16 core system vs. a 2 CPU, total four core system?


  • New iMac faster than my MacPro?

    I've recently purchased a new 24" iMac for a 2nd home I have out west. After a few days tinkering with it, I'm pretty positive that this new machine is quicker than my 2-year-old Mac Pro that I have at home. I was hoping, after looking at the specs below, that people could confirm whether this should be the case.
    The reason I'm wondering is that even though the iMac is brand new, the Mac Pro was and still is far more expensive than the iMac. The main reason I would like to know for sure is that since I work from home and have fairly advanced needs (two VMware Fusion VMs running on top of OS X 60+ hours a week, working with important financial software), if the iMac is indeed faster I may be looking for an upgrade. Before I essentially toss my $2700 Mac Pro to the side, though, I want to make sure the lag that I notice, and don't yet see on the iMac, couldn't simply be cured with an OS reinstall, which hasn't been done in over 2 years.
    I'm also a little unsure of how to compare the Xeon vs the current Pentium processors, as well as how important the 1067 MHz vs the 667 MHz RAM is to my needs. I basically run two Fusion VMs with 1 GB dedicated to each one in Unity, plus Safari, iTunes, Mail, Adium, and occasionally Skype.
    Specs for each machine..
    24" iMac - 2.93 GHz Intel Core 2 Duo, 4 GB 1067 MHz RAM, 600 GB ATA HD, NVIDIA GeForce GT 120 256 MB
    Mac Pro w/ 30" Cinema - 2 x 2.66 GHz Dual-Core Intel Xeon, 5 GB 667 MHz RAM, 250 GB ATA HD, NVIDIA GeForce 7300 GT
    All advice greatly appreciated.

    I'm also a little unsure of how to compare the Xeon vs the current Pentium processors
    No Intel Mac has ever had a "Pentium" inside.
    The Mac Pro would be faster for applications that are designed to use multiple processors. It has 4 cores versus 2 in the iMac.
    VMware Fusion has an option (in the virtual machine's settings) to use more than one +virtual processor+, but it is not as efficient as booting the OS directly. Also, it is probable that things like the financial software you are running on the virtual machine are themselves not designed to take advantage of multiple cores. Therefore, CPU clock speed becomes the overriding factor for performance in your case. Since the new iMac runs at 2.93 GHz versus 2.66 GHz for the Mac Pro, it is certainly possible that your iMac is faster than the Mac Pro in your situation. If you were running Final Cut Studio or Logic Studio (or another app that takes advantage of all the cores), the Mac Pro would be faster.
    Also, Snow Leopard has a new technology called Grand Central Dispatch
    http://www.apple.com/macosx/technology/#grandcentral
    which is supposed to make use of multiple cores more efficient under Mac OS X. I don't think it will have too much impact on existing third-party software, but it will be interesting to see what the developers at VMware and other third-party software firms can do with it. So your Mac Pro with four cores may become more efficient (faster) under Snow Leopard.

  • OpenCL / GCD and new Mac Pro's HD 5770 and HD 5870

    hey!
    I'm considering buying a new Mac Pro, but I can't find any information on whether OpenCL and Grand Central Dispatch are supported by those new cards (HD 5770 and HD 5870). It seems that ATI Stream isn't that Mac friendly?!
    Thanks for your comments!

    A couple things -
    1. I know you said you don't have any money for more drives, but I would not bother using your 160 as a boot drive - why cripple the Mac Pro by putting the OS on a slow drive? The new Mac Pro comes with a 1 TB WD Black by default... The current generation of 1-2 TB drives is probably 3x faster than those old 160s. Even a green drive will be faster.
    2. If you make money using Aperture, an SSD to store the OS and library on will pay for itself in one job. The OWC 120 SSD has had the most impact of any Aperture upgrade I've ever made - it is a substantial difference. In fact, when I got my new Mac Pro, it only took me 3 days to pull the SSD out of my MacBook Pro and put it in the Mac Pro - after using an SSD, any traditional media-based HD feels like going back to dial-up internet - it was that noticeable and I couldn't take it... Opening a 30k-image library (all referenced) took 18 seconds on the laptop drive; on the SSD in the laptop it took 1.5. I put the SSD in the optical bay and use the stock 1 TB drive it came with for storing the referenced masters.
    3. You'll gain some speed benefit (and maintainability benefit) from using a referenced-masters setup and storing your library on a separate drive from the raw images. Since you have room for 4 internal drives (or more if you use 3rd-party brackets or mount a drive in the optical bay), I'd definitely consider it. It makes a RAID setup sort of not make a lot of sense... since ideally you'd want off-site backup anyway.
    4. I was in your same boat when I got my first Mac Pro; I had to decide what to do with all the legacy drives... For me, in the end, I decided that time was money, and the time it took me to move stuff around across my collection of 5 hard drives was not worth the cost of just buying a few new 2 TB drives (for around $120 US, and they are fast). I understand this may not be everyone's case.

  • Mac Pro 8-Core AC Power Requirements

    I am in the US (120 VAC) and am wondering if anyone has more details concerning the amperage usage on these units. Apple tech specs show only a MAXIMUM of 12 amps when running in the Low range (120 VAC). I am assuming that the maximum of 12A is during startup draw and possibly under heavy load conditions. But, is there a "normal" draw?
    The reason I am asking is that in the next 60 days, I will be replacing 18 PPC G5 units with the MP 8-Core units, and there will be a period of time that I will be required to run both units (due to software and workflow issues).
    In most newer wiring scenarios, the breakers are rated for either 20 or 25 amps, and a continuous load should be kept below 80% of that rating.
    I've got LCD's (minimal amp requirements), so I'm trying to determine how many units per circuit I can have running without tripping breakers.
    Any advice would be greatly appreciated.
    Thanks
    Chris

    I have an 8-core (Nehalem) Mac Pro with 14 GB of RAM and two internal drives. The measurements cited here are observed on the information display of an APC Back-UPS XS 1500 that has an additional 20" Apple Cinema Display plugged into the (measured) backed-up outlets of the UPS.
    This Mac has a second display but it is not connected to the backed-up outlet of the UPS and therefore not measured, but there could be extra power draw by the GPU because of it.
    The measurements are taken with the line voltage at about 119 volts RMS.
    When booting, there is a large instantaneous in-rush that showed over 400 watts, but it lasts just the blink of an eye, so I wouldn't trust what the XS-1500 tells me. The in-rush current is non-trivial, though. I had to change to the XS-1500 UPS because an RS-900 UPS that I had earlier used with a dual-core G5 Power Mac, which this Mac Pro replaced, did not have enough capacity to allow this Mac Pro to boot.
    During booting, the peak power draw is about 205 watts. The XS-1500 may not see spikes in the power draw, so the actual peaks may be a bit higher than 205 watts.
    After booting and when idle (no active apps), the power draw is about 170 watts.
    When put to sleep, the Mac Pro and the display together draw about 14 watts.
    I have written an antenna-design program that uses Grand Central Dispatch, and it can push Activity Monitor past 800% (I have seen it go up to 1200% on this mere 8-core machine, so I am not sure how much you can trust Activity Monitor either). In that condition, the power draw is 270 watts.
    All in all, the Mac Pro is quite power efficient, but you may want to stagger the booting of your 18 machines just in case the in-rush is too great.
    That being said, there is an unresolved bug as of 10.6.2 (you can find it in a different Mac Pro thread here) that causes power consumption to bump up to 240 watts (on my machine) whenever Core Audio is actively running. This jump in power occurs even when playing music in iTunes, and even when I use a couple of external USB or FireWire sound cards that I have (including the Griffin RadioShark and an external USB speaker). A couple of apps that I have written use Core Audio -- and I can tell that the jump in power happens the instant you start causing Core Audio to issue callbacks.
    Kok Chen
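
    A quick back-of-the-envelope check for Chris's question, using the measurements above (270 W worst case at 119 V) and assuming a hypothetical 20 A breaker with the 80% continuous-load rule. This ignores in-rush, displays, and any Core Audio bump, so treat it as a rough upper bound:

    ```c
    #include <stdio.h>

    int main(void) {
        const double volts = 119.0;       /* measured line voltage from the post above */
        const double watts_peak = 270.0;  /* worst-case sustained draw measured above */
        const double breaker_amps = 20.0; /* assumed 20 A circuit */
        const double usable_amps = breaker_amps * 0.80; /* 80% continuous-load rule */

        double amps_per_mac = watts_peak / volts;        /* roughly 2.27 A per machine */
        int macs_per_circuit = (int)(usable_amps / amps_per_mac);

        printf("amps per Mac Pro: %.2f\n", amps_per_mac);
        printf("Mac Pros per 20 A circuit: %d\n", macs_per_circuit);
        return 0;
    }
    ```

    So on these assumptions, roughly 7 machines per 20 A circuit - fewer in practice once you add the old G5s and displays during the transition period.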

  • Does iMovie09 use multiple cores??

    Does iMovie '09 take advantage of multiple cores, such as those in the Core i5 or Core i7? This is really the deal breaker for me in buying the new 27-inch iMac.

    Not as far as I know.
    One thing to realize is that it is very complicated for an application programmer to write code that takes advantage of multiple cores. Apple has addressed this in Snow Leopard by providing a system framework called Grand Central Dispatch that makes it simpler for application developers to use multiple cores. It will still take one or more development cycles before developers really start taking advantage of this.
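
    To give a feel for what GCD offers developers (this is not anything iMovie itself does), here is a minimal C sketch using dispatch_apply_f, the plain-function variant of GCD's parallel loop. It fans the iterations out across whatever cores are available and waits for them all to finish. It needs libdispatch, which ships with Snow Leopard:

    ```c
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    #define N 8

    /* Work function: each iteration may run concurrently on a different core. */
    static void square(void *ctx, size_t i) {
        long *results = ctx;
        results[i] = (long)(i * i);
    }

    int main(void) {
        long results[N] = {0};
        /* A concurrent global queue; GCD decides how many cores to use. */
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        /* Runs N iterations of square() in parallel, then returns. */
        dispatch_apply_f(N, q, results, square);

        long sum = 0;
        for (int i = 0; i < N; i++)
            sum += results[i];
        printf("sum of squares 0..7 = %ld\n", sum); /* 0+1+4+...+49 = 140 */
        return 0;
    }
    ```

    The point is that the developer describes independent units of work and GCD handles the thread management - which is why apps have to be written (or rewritten) against it before you see any multi-core benefit.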

  • Quicktime X only uses one core on a C2D

    On my Mac mini C2D, Activity Monitor shows that QuickTime X refuses to use more than one core, which makes HD movies look very laggy and bad. What can I do to make it use both cores?

    QuickTime itself is limited to a single processor/core in most cases.
    Worse, QuickTime is slow to render movies.
    I hope Apple will rewrite QuickTime so it supports OpenCL and Grand Central Dispatch.
    Kind Regards
    Henrik
