Multi-core Compiling

Even though ibberger touched on the concept in his idea, I think most people use LabVIEW in a Windows environment. Compiling an FPGA VI happens entirely on the PC under Windows. I noticed that during this process the compiler uses only one core. Since I'm using a machine with a 4-core processor, CPU usage rarely goes above 25%. My idea is to update the compiler to make it multi-core. The user should have the option to limit the maximum number of cores available to the compiler, since they may want to keep working while the compile runs in the background.

Here are shortcuts for speeding up single FPGA compiles on both Windows 7 and XP for i7 and i5 systems with Turbo Boost. This only applies to a workstation, not a compile farm. In my case I was able to reduce the compile time by 40-50%; it should at least shave several minutes off a compile.
Make sure Turbo Boost is enabled in the BIOS if the option is there.
Windows 7:
Find the Compile Worker shortcut in the start menu and copy it to the desktop.
Modify the compile worker target command to add CMD and START commands (spaces and quotes are important):
C:\Windows\System32\cmd.exe /C START " " /HIGH /AFFINITY 0X01   "C:\Program Files (x86)\National Instruments\FPGA\CompileWorker\CompileWorker.exe"
Windows XP:
Download PsExec from the Microsoft website. It will be part of a suite of commands. I used version 1.98. Unzip the directory, then copy PsExec to C:\Windows\System32.
As above, find the Compile Worker shortcut in the start menu and copy it to the desktop.
Modify the compile worker target to add the PsExec command:
C:\WINDOWS\System32\PsExec -high -a 0 -d  "C:\Program Files\National Instruments\FPGA\CompileWorker\CompileWorker.exe"
If your original Compile Worker command is different, use that. Make sure it is enclosed in quotes.
To use: double click on the modified shortcut to start the Compile Worker. It will start and remain dormant in the background. Open a project and perform an FPGA build. The compile will be queued to the Compile Worker as normal. CPU usage can be monitored in Task Manager. As the compile progresses, the first core usage should eventually peg at 100%. The synthesis phase should be noticeably faster.
For Windows 7, Intel has a Turbo Boost monitor tool.
Why it works: The compiler seems to benefit from higher CPU clock speeds rather than more cores. The scheduler tends to move the process among the different CPU cores, and so the CPU isn't loaded down enough for Turbo Boost to stay on. The above commands force the compile to stay on one core, encouraging Turbo Boost to stay on for the duration of the compile.
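For reference, the `/AFFINITY` value in the Windows 7 shortcut above is a hexadecimal bitmask: bit N set means the process may run on logical core N. A small sketch for computing masks for other cores (the helper name is mine):

```python
def affinity_mask(*cores):
    """Build the hex bitmask that START /AFFINITY expects from core indices."""
    mask = 0
    for core in cores:
        mask |= 1 << core  # set bit N to allow logical core N
    return format(mask, "#04X")

print(affinity_mask(0))     # core 0 -> 0X01 (the value used in the shortcut above)
print(affinity_mask(3))     # core 3 -> 0X08
print(affinity_mask(0, 1))  # cores 0 and 1 -> 0X03
```

Note that PsExec's `-a` flag (used in the XP variant below) takes core numbers directly rather than a bitmask.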

Similar Messages

  • Multi-Core support

    Dear users,
    When i'm exporting my Flash project (under Adobe Flash
    Professional 8)
    Flash only use 25% of my quadcore CPU. So it only used one
    core.
    Before i had a Dual-core CPU and Flash used only 50% of my
    CPU (so also just 1 core).
    I'm using a AMD platform.
    Motherboard: Asus M2N32-SLI Deluxe
    CPU: AMD Phenom X4 9850
    Is there any way to enable multi-core support under Flash?
    Thanks in advance,
    Remi

    I don't see any change in compile time from our single-core to
    our quad-core machines. Having said that, it's not as if the single
    core maxes out, so...

  • Performance comparison using LabVIEW between embedded and general-purpose Intel single- vs. multi-core CPUs

    hi guys;
    Kindly, can anyone tell me: is it possible to use LabVIEW installed on a desktop PC to show, statistically, the main performance differences between multi-core and single-core systems, in two versions, embedded and general-purpose CPUs? In other words, I'm trying to use a general-purpose CPU as a stand-in for an embedded CPU, so that I can work only on a desktop PC and then say the obtained results are the same for an embedded multi-core CPU.
    To make things clearer: embedded multi-core processors are now hitting embedded market segments such as small, portable devices with internet, multimedia, and WiMAX enabled that take advantage of recent multi-core technology.
    General-purpose multi-core processors are desktop- and server-based processors.
    According to what I've read, Intel produces the same processor model for different applications (embedded and general-purpose).

    Hello,
    Please look at this page, which shows a new feature in LabVIEW 2009; particularly interesting is the video showing the performance benefits of single-core vs. multi-core on a PC platform.
    LabVIEW doesn't have any ability to emulate a multi-core embedded processor (unless it's an Intel x86 processor that LabVIEW supports!), so to discover the performance benefits of embedded multi-core processors you would need an external hardware board and a test in which LabVIEW measures timing via an external pin toggled by a program running on the embedded processor that can utilise multiple cores (there may be other ways, but this is the one that comes to mind).
    I hope this helps you!
    Mark B
    ===If this fixes your problem, mark as solution!===

  • Multi Core Mac Users: Dynamic Link, AE, and Warp Stabilization.

    Now that Final Cut X has lost its edge, and more Mac users may be migrating, here is some input:
    Stabilization was the most important 'effect' that prompted me to buy a 12-core Mac, CoreMelt's Lock & Load stabilization AE plug-in, and the Adobe Master Suite. Second was multi-camera editing.
    Additional purchases (to solve a sluggish frame rate) were an NVIDIA GTX 285 video card, more RAM, and a 4 TB RAID 0.
    Now with my Mac Pro running (after several issues), I would like to point out that:
    1.) The "Adobe Stabilization Server" seems to use only one of the cores. It also seems to be faster without AE multi-core processing enabled. Other effects use half the cores, and some seem to use all of them. With stabilization being very time-consuming, I would hope that Adobe addresses this like the multiprocessing 'aerendercore' processes seen in Activity Monitor. I find Activity Monitor one of the most important applications to keep an eye on: for developers interested in improving their applications, a godsend; for those who would like to bury their performance issues, a nightmare. So a multi-core Mac may be redundant if you...
    2.) If you are having any Dynamic Link crashes or lost linked compositions, read my other posts.
    3.) to be cont.

    Ashe?
    If yes, your problem is the result of bad karma.

  • PSE 10 both Organizer and Editor crash on multi-core hardware; set affinity to single cpu mode fixes

    As I have noted in thread http://forums.adobe.com/thread/941128, I have found that both the Editor and the Organizer for PSE 10 crash in a multi-core environment. I am running Windows 64-bit with the latest updates (including SP1); I had the same problems with Windows 7 32-bit.
    The solution is to set the affinity to a single CPU. From Task Manager, select the 'Processes' tab, find the exe (e.g. PhotoshopElementsOrganizer), right-click, select 'Set affinity...', and reduce the number of CPUs selected (checked) to exactly one. (Under different versions of Windows the exact means of doing this varies, but the essential steps are the same.) It doesn't matter which CPU you select, but if you do both the Editor and the Organizer, put them on different CPUs.
    Given that pretty well all hardware these days is multi-core, my assumption is that Adobe tests in a virtualized environment and so does not discover this type of unreliability. Adobe really does need to test its software more thoroughly, as the exact same problem existed in PSE 6, so it is hardly new to them.

    The solution is in the above.
    Should Adobe read this, they should take note that a virtualized environment does not provide a decent fidelity emulation of multi-core CPU systems.  The ONLY way to test the reliability of an application is on REAL hardware.

  • Aggregate Storage And Multi-Threading/Multi-Core Systems

    Please pardon me if this question has been asked before, but the forum search is not returning any relevant results.
    We are in the process of purchasing hardware for an 11.1.2 Essbase environment. We are going 64-bit, on Windows 2008, with either 32 GB or 64 GB of system RAM. The debate we are having is the number of CPUs and cores per CPU. We have not built any ASO databases as of yet, but we plan to launch a major BSO to ASO conversion project once 11.1.2 is off the ground here.
    Historically, with BSO, we did not see performance improvements significant enough to justify the cost of additional CPUs when we ran calcs on multi-CPU systems vs. single or dual CPU systems, even when the settings and design should have taken the most advantage of BSO's multi-threading capabilities. However, it would seem that ASO's design may be able to make better use of multi-core systems.
    I know that there are a lot of factors behind any system's performance, but in general, is ASO in 11.1.2 written well enough to make it worthwhile to consider, say, a four CPU, total 16 core system vs. a 2 CPU, total four core system?

    Grand Central Dispatch is in its infancy and not really doing its job yet. I don't think apps have to be specifically written for Hyper-Threading, but they do have to stop doing things they used to, like preventing threads from going to sleep or being parked.
    High usage is not necessarily high efficiency; often the opposite.
    Windows 7 seems to be optimized for multi-core thanks to a lot of reworking. Intel knows it isn't possible to hand-code for this; the hardware has to be smarter, too. But the OS has a job, and right now I don't think it does it, or handles memory, properly.
    Gulftown's 12 MB cache will help, and overall it should be about 20% more efficient at its work.
    With dual processors (and it doesn't look like there are two QuickPath bridges), data shuffling has led to memory thrashing. It used to be page thrashing from not enough memory, then core thrashing from having cores that weren't integrated (the 2008 model is often touted as the greatest design so far, but it was four dual-cores; 2009 was the first with a processor that really was a new, native 4-core design).
    One core should be owned by the OS so it is always available for its own work and housekeeping.
    Last year's iTunes audio bug showed how damaging badly implemented code can be, and how a thread could usurp processing and drive up CPU temperature while basically doing nothing, like a denial-of-service attack on the processor: those 80 °C temperatures people saw.
    All those new technologies under development (OpenCL, GCD, even OpenGL) are not tested and mature, but rather a 1.0 foundation for the future, a year ahead of readiness.

  • What are the speeds of the single/multi-core processor for the iPhone 6?

    I have an iPhone 6 and have just run the Geekbench 3 test for the processors and got:
    Single Core: 1164
    Multi Core:  2049
    Geekbench gives an average score of:
    Single Core: 1607
    Multi Core:  2831
    The scores for my phone seem quite low in comparison to the Geekbench results.
    Is this something I should be concerned about?

    No, you don't have to be concerned. Every benchmark test will give you different results, and in the end it's a phone, not a high-end computer like the Mac Pro for doing your video editing.

  • Dual-Processor Multi-Core Parallel Processing Question

    Hey Guys
    I'm looking for a little clarification on an issue with parallel
    processing in LabVIEW. If I have a dual-processor machine with two 4-core CPUs,
    will I be able to access all 8 cores in the LabVIEW environment? I'm presuming it
    can use any cores the operating system can see?
    Thanks for the help,
    Tom

    Norbert B wrote:
    It is the job of the OS to let applications use all cores if necessary. So for the application itself, it should make no difference whether the system (in hardware) is multi-CPU, multi-core, or even simply Hyper-Threaded.
    Norbert
    That's true, but I would like to add my 5 cents here.
    Let's say you have a single loop like
    while (true) {
        // do something
    }
    Then the OS gets no chance to run it in multiple threads. So you will get at most 12.5% CPU load on an 8-core PC, or 50% max on a dual-core PC.
    I have a dual-core PC right now, so let's check it:
    So, as we can see, 50% CPU load is reached (one core is loaded more than the other, but that's another story).
    Well, if we use two while loops, then we get 100% load:
    Of course, if you need to load all 8 cores, then you should have 8 parallel loops.
    Compare the BD above with the following:
    We have two Array Min/Max functions, and they are independent, but we still get only 50%.
    Well, you can also get 100% CPU utilization within a single while loop. In the example below you have two subVIs called in the same loop:
    We have 100% here. Importantly, these VIs must be reentrant!
    See what happens if they are not reentrant:
    Now a little bit about Vision. Behind most of the Vision subVIs are DLL calls. Some Vision functions are already optimized for multi-core execution. For example, convolution:
    On the BD above we have a single loop with one subVI, but both cores are used (because the convolution itself is already optimized for multi-core).
    Remember that not all Vision functions are optimized yet. For example, LowPass is still single-threaded (compare this BD with the BD above):
    Sure, we can still utilize multiple cores; just perform parallel execution (you have to split the image into two parts, then join them together, and so on):
    Remember that the subVIs should be reentrant, and all DLL calls should be thread-safe (not in the UI thread). It's also a good idea to turn off debugging in such experiments to eliminate additional CPU load.
    Another point about 8 cores: as far as I know, LabVIEW (and LabVIEW-based applications) will support only 4 cores within one execution system by default (at least prior to LabVIEW 2009). If you need to utilize all 8 cores, then you should add some lines to LabVIEW.ini. Refer to the following thread, where you can find more details:
    Interpolate 1d slow on 8 core machine
    Hope all written above was correct.
    Thanks for reading and best regards,
    Andrey.
    Message Edited by Andrey Dmitriev on 11-27-2009 02:50 PM
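Andrey's split/process/join advice (split the work, run each part in its own parallel loop, then merge the partial results) can be sketched outside LabVIEW too. A rough Python analogue (names are mine; note that in CPython, threads only speed up work that releases the GIL, such as I/O or C-extension calls, whereas LabVIEW schedules parallel loops onto OS threads natively):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_minmax(chunk):
    # One independent unit of work, like one parallel loop on the block diagram.
    return min(chunk), max(chunk)

def parallel_minmax(data, workers=2):
    # Split the data, hand each half to its own worker, then join the partial
    # results -- the same split/process/join pattern as splitting an image in
    # two for a parallel convolution.
    half = len(data) // 2
    with ThreadPoolExecutor(max_workers=workers) as pool:
        lo, hi = pool.map(chunk_minmax, [data[:half], data[half:]])
    return min(lo[0], hi[0]), max(lo[1], hi[1])

print(parallel_minmax([5, 1, 9, 3, 7, 2]))  # (1, 9)
```

The join step (merging the two partial min/max pairs) is the part that must stay cheap for the split to pay off, just as joining the two image halves must be cheap relative to the convolution itself.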

  • 64-bit and Support for Multi-Core Computers

    I recently bought one of the new 12-core Mac Pros from Apple, but was disappointed to see the lack of 64-bit support and optimization for multi-core computers in InDesign. When producing magazines with hundreds of pages and hundreds of fonts, multi-core support and 64-bit processing are vital.
    Can anyone at Adobe confirm that they will implement this in CS6? It's long overdue.

    InDesign is a really complicated program. 64-bit support is of marginal use to InDesign. It would be faster (as is the case using InDesign Server 64-bit on Windows), but I'm not sure how much of a difference that would make for the average desktop user.
    Adobe has done a lot of multi-threading work under the hood for CS5 (and PDF export is one of the first fruits of that). What further support they might add in CS+ is anyone's guess, but it will be easier now that a lot of the preliminary work is done. Each feature would be a separate effort, so if there's something SPECIFIC you'd like to see multi-threaded, I suggest you write that in a feature request. The more detailed you can be about it, the better!
    Harbs

  • SAP Performance on Sun T2000 multi-core servers

    Hi guys,
    On some of the newer Sun servers, the performance isn't quite as good as you would expect.
    When you are running a specific job, let's say patching using SAINT for instance, the process works as expected, but the disp+work process seems to be allocated to just one of the server's CPUs rather than being distributed across the server's multiple cores, and it doesn't seem to be much, if any, quicker.
    I'm sure some of our zone settings in Solaris 10 must be wrong, but we have followed the documentation from SAP precisely.
    Am I missing some Solaris functionality, or do we have to tell SAP to use multiple cores?
    Just interested in other people's experiences on the newer Sun servers.
    Regards
    James

    An ABAP work process is single-threaded. Basically that means that the speed of any running ABAP program is, CPU-wise, dependent only on the speed of a single CPU.
    An ABAP system can't leverage the multi-core, multi-thread architecture of the new processors within a single process. You will see a significant performance increase if you install a Java engine, for example, since those engines run multiple concurrent threads and can thus be processed in parallel, as opposed to the ABAP part.
    What you can do to speed up an import is set the parameter
    PARALLEL
    in the STMS configuration. Set the number to the number of cores you have available. This will increase the import speed, since multiple R3trans processes are forked. However, during the XPRA phase, still only one work process will be used.
    Markus
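As a rough sketch of what Markus describes (the exact parameter name and profile syntax vary by release; treat this as illustrative, not authoritative), a transport profile entry enabling parallel R3trans imports might look like:

```
# TP_DOMAIN_<SID>.PFL -- illustrative transport profile entry
<SID>/parallel = 4    # fork up to 4 R3trans processes during import
```

Check the transport tool parameters in STMS against your system's documentation before changing anything.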

  • SAP XI on a multi-core, multi-thread server?

    Hi Everyone!
    Can anyone tell me whether SAP XI can run on a server whose processor is multi-core and multi-threaded (4 cores, 8 threads per core, 32 threads)?
    Thanks in advance!
    Warm regards,
    Glenn

    Hi Ravi,
    Thanks for your reply! The server in question is Sun Solaris. What sort of configuration needs to be done, on both the OS side and on XI? Is there an SAP note I can follow?
    Warm regards,
    Glenn

  • Premiere Pro Queue - Multi-Core Processing

    I am sure you have all thought about this, but it would be nice if you took advantage of the multi-core processing of the Mac to run more than one queue item at the same time.
    Currently, each item in the queue has to wait, but since we have multi-core systems like my Mac Pro with dual quad-cores, it would be nice to be encoding multiple projects at the same time. Or is this because it's software encoding, not a hardware restriction?
    I did notice there is an Enable Parallel Encoding option in Preferences. Is this not multi-core processing?
    Thanks,
    Kenneth

    Think of it this way: Intel has already announced that they are sunsetting the C2D line of processors, so purchasing something that is about to be phased out doesn't make much sense to me, as it will quickly become outdated (some could argue that it already is, given the number of quad-core and higher processors coming out).
    If anything, purchase one of the quad-core models to get more out of it down the line. More and more software is becoming multi-core aware, so even if an app you use now doesn't support it, it may eventually.

  • Multi-core processor

    How do you take advantage of a multi-core processor in Java?

    cotton.m wrote:
    Try buying it dinner? Always remember though, no means no.
    What's the difference between a lady and a diplomat?
    When a diplomat says yes, he means maybe. When he says maybe, he means no. And if he were to say no, he wouldn't be a diplomat.
    When a lady says no, she means maybe. When she says maybe, she means yes. And if she were to say yes, she wouldn't be a lady.

  • Songs from multi-disk compilations not staying together

    I have been trying to consolidate songs from multi-disc compilations so that one album designation contains all the songs from, say, a two-disc collection. I have gone into the info for each song and changed the track number and disc number to correspond to their place in line. The problem is that on occasion one or more songs will not go with the compilation, but instead make another album by the same name containing these stray songs. I have tried but cannot get the songs back into the main album with the rest of the tracks. Anyone have a suggestion?

    See my previous post on Grouping Tracks Into Albums, in particular the topics One album, still too many covers and One cover for multi-disc album.
    tt2

  • What types of tasks take advantage of multi-core?

    What types of tasks take advantage of multi-core?
    Currently on my PowerMac I... Photoshop large files and do some video conversion (AVI -> DVD via VisualHub), and the reason I'm considering upgrading is playing 1080p movies in VLC; currently choppy as all ****. Also, when I figure it out, I'll be converting these 1080p movies to a format watchable on my TV screen; probably going to have to wait till I get a Blu-ray burner.
    Trying to figure out if I should go all out on my next purchase with multiple cores.

    http://www.barefeats.com/harper.html
    http://www.barefeats.com/octopro3.html
