Is LabWindows/CVI faster than LabVIEW?

A colleague told me that LabWindows/CVI is much faster than LabVIEW, and that I would be making a mistake if I kept using LabVIEW. He tried to convince me to switch to LabWindows/CVI. What are your experiences? Which program is faster: LabWindows/CVI or LabVIEW?
mara

It all depends on what you're doing. I've used both (LabWindows since v1 and LabVIEW since v3), and in my instrument control and DAQ applications I don't see any difference in speed. I think LabVIEW is faster to develop in, and that's the main advantage for me. NI has a benchmark here if that helps at all. Either LabVIEW or LabWindows is a good choice. If you're an experienced C programmer, you might be better off sticking with the more familiar CVI; if you're not, LabVIEW should work just fine.

Similar Messages

  • Can anyone confirm Safari-4 runs "faster" than S4BETA?

    Just like to know if my holding out (I want the blue progress bar) is costing me speed and features. That is: is Safari 4 better/faster than the Safari 4 Beta?
    Thanks!

    Hi again Steve
    I have the beta running on my MacBook Pro and the final on my iMac. I don't see a difference, and I do like the blue progress bar.
    I left the Beta on the MBP due to adverse Network issues specific to the Unibody machine when 10.5.7 was released.

  • Are there "waveform measurements" components in LabWindows/CVI as in LabVIEW?

    I want to use some "waveform measurements" functions, but I'm not familiar with LabVIEW.
    Is there a corresponding component in LabWindows/CVI?

    This depends on your CVI version... the full development version includes the advanced math library, which provides functions such as PulseMeas to evaluate waveforms.

  • Can this class run faster than on HotSpot?

    In my case Sun HotSpot is almost twice as fast as JRockit. It's very strange.
    package com.telegram;

    public class byteutils {
         // ASCII codes for '0'-'9' and 'A'-'F' (the post as pasted was missing 55, i.e. '7')
         public final static byte[] bytea = { 48, 49, 50, 51, 52, 53, 54, 55, 56,
                   57, 65, 66, 67, 68, 69, 70 };

         public byteutils() {
              super();
         }

         /*
          * convert length = 2L letters Hexadecimal String to length = L bytes
          * Examples: [01][23][45][67][89][AB][CD][EF]
          */
         public static byte[] convertBytes(String hexStr) {
              byte[] a;
              try {
                   a = hexStr.getBytes("ASCII");
              } catch (java.io.UnsupportedEncodingException e) {
                   e.printStackTrace();
                   return null;
              }
              final int len = a.length / 2;
              byte[] b = new byte[len];
              int idx = 0;
              int h = 0;
              int l = 0;
              for (int i = 0; i < len; i++) {
                   h = a[idx++];
                   l = a[idx++];
                   h = (h < 65) ? (h - 48) : (h - 55); // '0'-'9' or 'A'-'F'
                   l = (l < 65) ? (l - 48) : (l - 55);
                   // if ((h < 0) || (l < 0)) return null;
                   b[i] = (byte) ((h << 4) | l);
              }
              return b;
         }

         public static String convertHex(byte[] arr_b) {
              if (arr_b == null)
                   return null;
              final int len = arr_b.length;
              byte[] byteArray = new byte[len * 2];
              int idx = 0;
              int v = 0;
              for (int i = 0; i < len; i++) {
                   v = arr_b[i] & 0xff;
                   byteArray[idx++] = bytea[v >> 4]; // high nibble
                   byteArray[idx++] = bytea[v & 0xf]; // low nibble
              }
              try {
                   return new String(byteArray, "ASCII");
              } catch (java.io.UnsupportedEncodingException e) {
                   e.printStackTrace();
                   return null;
              }
         }

         public static void main(String[] argv) {
              byte[] a = new byte[0x10000];
              for (int c = 0; c < 0x10000; c++) {
                   a[c] = (byte) (c % 256);
              }
              String s = "";
              final int LOOP = 10000;
              long l = System.currentTimeMillis();
              for (int i = 0; i < LOOP; i++) {
                   s = convertHex(a);
                   a = convertBytes(s);
              }
              l = System.currentTimeMillis() - l;
              double d = l / (double) LOOP;
              System.out.println(d + " ms.");
         }
    }

    Thanks! Your code is essentially a microbenchmark testing the performance of sun.nio.cs.US_ASCII.Decoder.decodeLoop() and encodeLoop(), with ~35% and ~30% spent in those two methods respectively. I have verified the behavior (i.e. Sun is faster than JRockit). Due to the microbenchmark nature, it may not affect a larger running program, but it may merit a closer look regardless. I have forwarded to the JRockit perf team for analysis.
    -- Henrik
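    For anyone re-running numbers like these: a timed loop of this shape also measures JIT compilation itself unless the code is warmed up first. A minimal sketch of the usual pattern (the class and method names here are ours, not from the thread), assuming you only want steady-state timings:

    ```java
    // Minimal microbenchmark harness illustrating JIT warm-up.
    // Bench.work() is a cheap stand-in for convertHex()/convertBytes().
    public class Bench {
        static String work(int n) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < n; i++) {
                sb.append(Integer.toHexString(i & 0xF)); // hex-ish busywork
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            // Warm-up: run the workload until the JIT has compiled it.
            for (int i = 0; i < 10_000; i++) {
                work(256);
            }
            // Timed section: only steady-state iterations are measured.
            final int LOOP = 10_000;
            long t0 = System.nanoTime();
            for (int i = 0; i < LOOP; i++) {
                work(256);
            }
            double msPerCall = (System.nanoTime() - t0) / 1e6 / LOOP;
            System.out.println(msPerCall + " ms/call");
        }
    }
    ```

    Without the warm-up loop, the first timed iterations run interpreted, and the average penalises whichever VM compiles later; that alone can swamp a real difference between HotSpot and JRockit.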

  • Does the .rep report run faster than the same program in .rdf format?

    Hello,
    We have some reports timing out because they take too
    long to bring back results. If we use the .rep file instead of
    the .rdf file, will the reports run faster? Does it take very
    long to compile, or is it more likely that the SQL needs to be
    tuned for better performance?
    Thanks in Advance..
    Pmoore31

    No, the report will not run faster. It is most likely that you
    need to fine tune the SQL.

  • Why does audio run faster than video after burning my iMovie project in iDVD?

    I created a project in iMovie HD and shared it to iDVD 6. When I played the finished disc, I noticed that the audio ran much faster than the video. I rechecked my movie and audio clips in iMovie to make sure they were matched correctly, and I saw no problem there. I burned the disc again through iDVD and got the same problem when I played it in a DVD player.
    I have never had this problem before and I've been working on dvd projects all week. I think the only difference with this movie project is that I had movie clip audio and background audio playing at the same time at some parts of the movie, but I don't know if that's the issue.
    Any suggestions?

    Hi
    And just to add to OT's brilliant suggestions: really, do a Save As Disk Image. IMPORTANT!
    • When free space on the start-up hard disk drops to 5 GB or less, strange things occur, from audio out of sync all the way to DVDs that don't work at all.
    I keep a minimum of 25 GB free when going from SD video to iDVD; with HD material I would budget 4-5 times more, since it has to be re-encoded to SD and that needs space.
    DVD is, by standard, SD video (as on old CRT TVs). Whether you use DVD Studio Pro, iDVD, or Roxio Toast™, that's what it is, and using HD material doesn't improve things a bit (it may even give a lesser result).
    Yours, Bengt W

  • LabWindows/CVI timer runs faster than system time in multithreaded application on Win 2000

    The same application runs well on Win NT

    This is a rarely seen and unconfirmed problem with the Timer function. I have posted some information, along with a workaround from another Developer Exchange user, at the following link:
    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HExpertOnly=&HFORCEKWTID=11348:5&HOID=506500000008000000AE1F0000

  • Calling a library function node much faster than LabVIEW code?

    Hi, I wrote a LabVIEW routine to perform a multiple-tau autocorrelation on a large array of integers. A multi-tau autocorrelation reduces the computation time of the correlation at the expense of resolution, and you can tailor it to give you good resolution where you need it. For instance, I require good resolution near the middle (the peak) of the correlation, so I do a linear autocorrelation for the first 64 channels from the peak, then skip every second channel for the next 32, then every 4th channel for 32 more, then every 8th for 32 channels, etc.
    Originally I wrote my own multi-tau calculation, but it took several hours for just 1024 channels of the correlation of around 2 million points of data. I actually need to do the correlation on probably 2 billion or more points, which would take days. So I tried LabVIEW's AutoCorrelation.vi, which calls a library function. It could do a linear autocorrelation of 4 million points in less than a minute. I figured that writing my code in C and calling it using a Call Library Function Node would be faster, but that much faster?
    Finally, I wrote some code that extracts, from the linear correlation produced by AutoCorrelation.vi, the data points I would have got from my multi-tau code. Clearly this is not optimal, since I spend time calculating all those channels just to throw them away, but I need to do it because the final step of my procedure is to fit the correlation function to a theoretical one, and with, say, 2 million points the fit would take too long. The interesting thing is that simply extracting the 1024 points from the linear autocorrelation takes a significant amount of time. Is LabVIEW really that slow?
    So, my questions are: if I rewrite my multi-tau autocorrelation in C and call it using a Call Library Function Node, will it run that much faster? Can I achieve the same efficiency with a Formula Node structure? And why does it take so long just to extract 1024 points from an array?
    I've tried hiding indicators and this speeds things up a little, but not much.
    I'll attach my code if you're interested in taking a look. There is a switch on the front panel called 'MultiTau': in the off position the code performs the linear autocorrelation with AutoCorrelation.vi; in the on position it performs a multi-tau autocorrelation using the code I wrote. Thanks for any help.
    Attachments:
    MultiTauAutocorrelate.vi ‏627 KB

    Hi,
    The C routine that AutoCorrelation.vi uses is probably a highly optimised one. If you write a routine in LabVIEW, it should be less than 15% slower, but you'd have to know all the ins and outs of LabVIEW: how data is handled, when memory is allocated, etc. Also note that AutoCorrelation.vi has years of engineering behind it, and probably multiple programmers.
    It might even be that the C code uses an algorithmic improvement, the way the Fast Fourier Transform improves on the plain Fourier Transform. I think the autocorrelation can be done using an FFT, but that isn't my thing, so I'm not sure.
    For a fair comparison, posting the code in this forum was a good idea. I'm sure together we can get it to 115% or less of the C variant. (15/115 is just a guess, by the way.)
    I'm still using LV7.1 for client compatibility, so I'll look at the code later.
    Regards,
    Wiebe.
    "dakeddie" <[email protected]> wrote in message news:[email protected]...
    Hi,&nbsp; I wrote a labview routine to perform a multiple tau autocorrelation on a large array of integers.&nbsp; A multi tau autocorrelation is a way to reduce the computation time of the correlation but at the expense of resolution.&nbsp; You can taylor the multitau correlation to give you good resolution where you need it.&nbsp; For instance, I require good resolution near the middle (the peak) of the correlation, so I do a linear autocorrelation for the first 64 channels from the peak, then I skip every second channel for the next 32, then skip every 4th channel for 32 more, then skip every 8th for 32 channels... etc. Originally, I wrote my own multitau calculation, but it took several hours to perform for just 1024 channels of the correlation of around 2million points of data.&nbsp; I need to actually do the the correlation on probably 2 billion or more points of data, which would take days.&nbsp; So then I tried using labview's AutoCorrelation.vi which calls a library function.&nbsp; It could do a linear autocorrelation with 4 million points in less than a minute.&nbsp; I figured that writing my code in C and calling it using a call library function node would be faster, but that much faster?Finally, I wrote some code that extracts the correlation data points that I would've got from my multitau code from the linear correlation function that I get from the AutoCorrelation.vi.&nbsp; Clearly this is not optimal, since I spend time calculating all those channels of the correlation function just to throw them away in the end, but I need to do this because the final step of my procedure is to fit the correlation function to a theoretical one.&nbsp; With say 2million points, the fit would take too long.&nbsp; The interesting thing here is that simply extracting the 1024 point from the linear autocorrelation function takes a significant amount of time.&nbsp; Is labview really that slow?So, my questions are...&nbsp; if I rewrite my multitau autocorrelation 
function in C and call it using a call library function node, will it run that much faster?&nbsp; Can I achieve the same efficiency if I use a formula node structure?&nbsp; Why does it take so long just to extract 1024 points from an array?I've tried hiding indicators and this speeds things up a little bit, but not very much.I'll attach my code if you're interested in taking a look.&nbsp; There is a switch on the front panel called 'MultiTau'... if in the off position, the code performs the linear autocorrelation with the AutoCorrelation.vi, if in the on position, it performs a multitau autocorrelation using the code I wrote.&nbsp; Thanks for any help.
    MultiTauAutocorrelate.vi:
    http://forums.ni.com/attachments/ni/170/185730/1/M​ultiTauAutocorrelate.vi
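    On Wiebe's FFT remark above: the autocorrelation can indeed be computed through the FFT (the Wiener-Khinchin theorem: the autocorrelation is the inverse FFT of the power spectrum), which drops the cost from O(N²) to O(N log N) and is very likely why the library call is so fast. A rough sketch of the idea in Java (our own illustration, not NI's implementation; the class and method names are made up):

    ```java
    // Autocorrelation via FFT: autocorr(x) = IFFT(|FFT(x)|^2).
    public class AutoCorr {
        // In-place iterative radix-2 FFT; n must be a power of two.
        static void fft(double[] re, double[] im, boolean inverse) {
            int n = re.length;
            for (int i = 1, j = 0; i < n; i++) {      // bit-reversal permutation
                int bit = n >> 1;
                for (; (j & bit) != 0; bit >>= 1) j ^= bit;
                j ^= bit;
                if (i < j) {
                    double t = re[i]; re[i] = re[j]; re[j] = t;
                    t = im[i]; im[i] = im[j]; im[j] = t;
                }
            }
            for (int len = 2; len <= n; len <<= 1) {  // butterfly passes
                double ang = 2 * Math.PI / len * (inverse ? 1 : -1);
                double wr = Math.cos(ang), wi = Math.sin(ang);
                for (int i = 0; i < n; i += len) {
                    double cr = 1, ci = 0;
                    for (int k = 0; k < len / 2; k++) {
                        int a = i + k, b = i + k + len / 2;
                        double tr = re[b] * cr - im[b] * ci;
                        double ti = re[b] * ci + im[b] * cr;
                        re[b] = re[a] - tr; im[b] = im[a] - ti;
                        re[a] += tr;        im[a] += ti;
                        double ncr = cr * wr - ci * wi;
                        ci = cr * wi + ci * wr; cr = ncr;
                    }
                }
            }
            if (inverse) {
                for (int i = 0; i < n; i++) { re[i] /= n; im[i] /= n; }
            }
        }

        // Linear autocorrelation of x for lags 0 .. x.length-1.
        static double[] autocorrelate(double[] x) {
            int n = 1;
            while (n < 2 * x.length) n <<= 1;         // pad to avoid wrap-around
            double[] re = new double[n], im = new double[n];
            System.arraycopy(x, 0, re, 0, x.length);
            fft(re, im, false);
            for (int i = 0; i < n; i++) {             // power spectrum |X|^2
                re[i] = re[i] * re[i] + im[i] * im[i];
                im[i] = 0;
            }
            fft(re, im, true);
            double[] r = new double[x.length];
            System.arraycopy(re, 0, r, 0, x.length);
            return r;
        }
    }
    ```

    Zero-padding the input to at least twice its length keeps the circular correlation from wrapping around, so lags 0 to N-1 come out as a true linear autocorrelation; the multi-tau channels can then be extracted from that result.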

  • What can I do if script runs faster than network?

    I've written an inter-application script that moves from InDesign, where it starts in AppleScript, to Photoshop, where the AppleScript runs a JavaScript to perform various tasks.
    It runs beautifully on my laptop at home where I do my development. Yesterday, using myself as guinea pig, I tried it in the office.
    On about the third run, I was horrified to see the ExtendScript Toolkit pop up with an error message (about as welcome as seeing an AppleScript inviting the user to open the Script Editor and fix a script).
    The error message was that app.bringToFront(); was not a valid function.
    This was true in InDesign, which has a different activation function, and I realised that even though my AppleScript had called for Photoshop to activate I was still in InDesign.
    The JavaScript app.bringToFront had been called as well because I had enclosed my code in the Tranberry template.
    So I pressed the stop button on ExtendScript, went back to InDesign and ran the script again. This time it worked as usual.
    Occasionally on our network we spend some time beachball-watching as some communication goes on in the background. So I imagine that the time the error was thrown was on one of those network slowdowns.
    The switch from InDesign to Photoshop did not happen fast enough, but the script ran on and issued a Photoshop JavaScript command while I was still in InDesign.
    In AppleScript such unfortunate communication with users can be avoided using "try... on error" blocks.
    Would there be an error-handling equivalent in JavaScript which would enable me to avoid users being thrown into the ExtendScript Toolkit, and instead give them a friendly message apologising, explaining what had happened and inviting them to try again?

    Also, AppleScript has a default timeout of 60 seconds before it gives up waiting to execute its next command. If in your case the opening and processing of the image in JavaScript takes any longer than this, wrap your call out to Photoshop in a timeout block, extending the allotted time to whatever you think may be suitable. Like so:
    tell application "Adobe InDesign CS2"
        activate
        tell active document
            set This_Image to image 1 of item 1 of rectangle 1
            set Image_Path to file path of item link of (item 1 of This_Image) as alias
            my Call_Photoshop(Image_Path)
            delay 1 -- same as sleep(1000);
            update item link of (item 1 of This_Image)
        end tell
    end tell

    on Call_Photoshop(Image_Path)
        with timeout of 180 seconds
            tell application "Adobe Photoshop CS2"
                activate
                set display dialogs to never
                open Image_Path
                set ID_Image to the current document
                tell ID_Image
                    -- do my stuff
                    close with saving
                end tell
            end tell
        end timeout
    end Call_Photoshop

  • MBP 1,1 Running faster than MBP 2,2?

    We have two older MacBook Pros that the kids use.
    One is a first generation MacBook Pro 1,1 (January 2006)
    - Core Duo 2GHz
    - 667 bus
    - 2G RAM
    - ATY Radeon X1600
    - 2MB L2 Cache
    The other is a MacBook Pro 2,2 (late Oct 2006)
    - Core 2 Duo 2.33GHz
    - 667 bus
    - 2G RAM
    - ATY Radeon X1600
    - 4MB L2 Cache
    Both are currently running Snow Leopard 10.6.8
    The MBP 2,2 had been upgraded to Lion, but it was so sluggish it has been wiped and set back to Snow Leopard.
    The kids often use these two computers to play Minecraft. The issue is that the 2,2 model, which should easily outperform the 1,1 at this game, lags badly when playing Minecraft. This makes no sense: it has a better processor and twice the L2 cache, and everything else is essentially the same.
    I have run every diagnostic, DiskWarrior, etc. to try to determine what is going on, but the 2,2 continues to underperform the lesser model.
    What can I do to make the world right here? Why would the superior system with twice the L2 cache lag on graphics?

    squidz wrote:
    Yep. I killed the partitions. I did not set it to 35 passes, just erased. If I restart and hold "alt" it no longer shows the Recovery partition, or anything other than Mac HD. Is there a way to test the HD spin rate, or another diagnostic, to see why it might be underperforming its specs?
    Did you erase the Macintosh HD partition, or the top-level item with the drive maker's name? Just erasing the Macintosh HD partition does not remove the Recovery HD.
    As for such a diagnostic: none that I know of for Mac.

  • SCTL running faster than 40MHz for FPGA

    It is possible to run an SCTL on an FPGA with a derived clock rate such as 80 MHz, but I found no detailed information about the limitations and precision of the FPGA at such resolutions. Does anybody have experience with this?

    What do you mean by timing resolution? 
    If you create a derived clock at 80 MHz and do all your design in that clock domain, you either WILL or WON'T successfully compile the design when it goes through the Xilinx toolchain. The limiting factor is how much work is being done on each iteration of the SCTL. If you put a wildly complex algorithm in an SCTL and expect it to run at 80 MHz, it won't do that (see the previous post about pipelining a complex design within an SCTL). However, if a design compiles at 80 MHz, it will ALWAYS run at 80 MHz, and will always provide you sampling at 80 MHz, as long as your inputs are in that 80 MHz domain. One of the primary draws of FPGA design is that it is deterministic. If 80 MHz is sufficient resolution for your application and you design an FPGA that compiles at 80 MHz, then you should have high confidence that your application will work.
    Please clarify if I'm not answering the right question.

  • Announcement: Security Update 5Q5FJ4QW for multiple versions of LabWindows/CVI and LabVIEW

    An update for LabWindows/CVI and LabVIEW users is now available for download. This update resolves security vulnerabilities in components installed with LabWindows/CVI 2010 SP1 and earlier and LabVIEW 2011 and earlier. Further details can be found at KnowledgeBase Article 5Q5FJ4QW: How Does National Instruments Security Update 5Q5FJ4QW Affect Me?
    The update can be downloaded with NI Update Service 2.0 (which installs with LabWindows/CVI 2010 SP1 and LabVIEW 2011) or from the Drivers and Updates page. Information about the update is also available in other languages through links in the Drivers and Updates page.
    This is a free update for all LabWindows/CVI and LabVIEW users.
    National Instruments
    Product Support Engineer

    The correct link should be this one
    Proud to use LW/CVI from 3.1 on.
    My contributions to the Developer Zone Community
    If I have helped you, why not giving me a kudos?

  • How to prevent prompt to install CVI Run-Time Engine?

    The LabVIEW laptop for my client got messed up, so I spent several hours making it forget everything it ever knew about NI software.  I started by uninstalling all NI applications, then manually deleted all the folders that the uninstaller leaves behind, then ran a couple of registry cleaners to sweep out as much NI as possible, and finally ran regedit to see what was left.  In the end there were only some legacy drivers that regedit would not let me delete.
    Then I installed 8.6.1 from DVD, carefully selecting only the options we needed; LabVIEW core, cRIO, FPGA, PID, and the minimum set of drivers the installer would let me get away with.  Note that no Labwindows/CVI option boxes were checked.  When the installation was complete, I rebooted the machine and launched LabVIEW, which immediately prompted me to install the LabWindows/CVI 8.5.1 (not 8.6.1) Run-Time Engine.  I dutifully fed it my 8.6.1 DVD, which caused LabVIEW to crash.  After 3 reboot/retry cycles with the same result, I decided to appeal to the forum for help.
    What is causing LabVIEW to think I need the CVI Run-Time Engine (and a down-rev version, at that), and how do I convince LabVIEW that I do not?
    Jeff

    Thanks for the reply!  OK, so LabVIEW needs the CVI RTE even though I'm not using any CVI features.  I can live with that.  I downloaded LabWindows/CVI Run-Time Engine 8.5 from the NI web site and tried to install it.  After I confirmed that I accepted whatever that license agreement says, the installer told me that "No software will be installed or removed."
    Then I opened LabVIEW, and it went through the same "The feature you are trying to use..." popup and tried to install the CVI RTE.  The installation failed as usual, and LabVIEW crashed.
    A few minutes ago I found and ran CVTRTE.msi on the 8.6.1 distribution DVD.  I selected the "repair" option, which completed successfully.  After rebooting, I launched LabVIEW only to be greeted with the now-hated NI LabWindows/CVI 8.5.1 Run-Time Engine installer.
    The part of this that is so infuriating is that there appears to be no way for anyone to make a computer forget everything NI so you can start with a clean slate.
    Jeff
    Attachments:
    NoCanDo.jpg ‏28 KB

  • Why is JVM faster than CLR?

    hi
    I wrote an N-body algorithm in both Java and C# (shown below) and executed it on the .NET CLR and on JDK 1.4.1. On the JDK it is twice as fast as on .NET (on Windows 2000), and now I am trying to find out why.
    The interesting thing is that I ran some other algorithms, like FFT and graph algorithms, and they are faster on .NET. So I want to find out whether some operation in the algorithm below makes it run faster on the JDK.
    In general, what are the possible reasons for the JVM to run faster than the CLR?
    thanks
    double G = 6.6726E-11;
    double difference = 0.0;
    for (int i = 0; i < numBodies; i++) {
         accelerations[i] = 0.0;
         for (int j = 0; j < numBodies; j++) {
              if (i != j) {
                   difference = radii[i] - radii[j];
                   if (difference != 0) {
                        // masses[j]: the array index was lost in the forum post
                        accelerations[i] += masses[j] / Math.pow(difference, 2);
                   }
              }
         }
         accelerations[i] *= G;
    }

    Interesting N-Body problem that treats accelerations as scalars.
    Anyway, if there is no optimisation for small integer powers in the Math.pow() method, then I'd expect almost all the time is used there or in its equivalent in .NET. Hardly a meaningful test of relative performance.
    Try using (difference * difference) instead.
    Sylvia.
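    To make Sylvia's suggestion concrete, here is a minimal sketch (the class and method names are ours) showing that `m / (d * d)` computes the same acceleration term as `m / Math.pow(d, 2)` while avoiding the general-purpose pow() path that may dominate this benchmark:

    ```java
    public class PowDemo {
        // Acceleration term as written in the original post: uses Math.pow.
        static double accelPow(double mass, double difference) {
            return mass / Math.pow(difference, 2);
        }

        // Sylvia's suggestion: a plain multiply instead of pow().
        static double accelMul(double mass, double difference) {
            return mass / (difference * difference);
        }

        public static void main(String[] args) {
            double mass = 5.97e24, difference = 6.37e6;
            System.out.println(accelPow(mass, difference));
            System.out.println(accelMul(mass, difference));
            // Both compute the same value; only the cost differs.
        }
    }
    ```

    If one VM special-cases small integer exponents in Math.pow() and the other doesn't, that alone could explain a 2x gap on this loop, which is why it is hardly a meaningful test of overall VM performance.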

  • I read on the website that Firefox 4 was supposed to be faster than the last version, but since I updated (2 days ago) my work in Firefox has become quite sluggish. Any suggestions?

    Hi there. I upgraded to Firefox 4 on Monday (two days ago), since I saw that the new version was supposed to run faster than the Firefox 3 I was using. However, since I started using it, I wait longer for pages to load than I did before, and the whole browser is sluggish to respond. Any suggestions?

    Sounds like it either believes that there are headphones plugged in, or the switch is broken.
    Why in the world would you take it to Best Buy for service? Take it to an Apple Store. They can replace it on the spot if it's defective.
