CIN node executes slower in LabVIEW 7.0 than in 6.0.2

I have a VI that uses a CIN node to access a digital camera, receive viewfinder data, and return the data. The data is in turn displayed in a picture control, creating a real-time display on the front panel that shows what the camera is seeing. The whole process worked flawlessly in 6.0.2, and the frame rate was fine. I recently upgraded to 7.0, and now the same VI runs much slower: the frame rate has dropped drastically, to about one frame per 5-10 seconds. I have watched the execution and narrowed the problem down to the CIN node. I am wondering if anyone else has a solution or is experiencing the same problem. Any help will be greatly appreciated, and the giver of such worshipped for generations to come.

Never mind, found the answer. The CIN node wasn't declared to be "thread-safe", which caused the poor performance. All I had to do was add a snippet of code to my source file, and the VI works like new. The code is available in the "Using External Code in LabVIEW" manual.
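For reference, the fix that manual describes is implementing the CINProperties routine, which LabVIEW calls to ask whether the CIN is reentrant (thread-safe). A minimal sketch following the documented pattern (the types and constants come from extcode.h):

    /* CINProperties is queried by LabVIEW to learn about this CIN.
       Reporting kCINIsReentrant as TRUE marks the CIN as thread-safe, so
       it can run in the multithreaded execution system instead of being
       serialized into a single thread. Only do this if the CIN code
       really is reentrant (no unprotected globals). */
    #include "extcode.h"

    CIN MgErr CINProperties(int32 prop, void *data)
    {
        switch (prop) {
            case kCINIsReentrant:
                *(Bool32 *)data = TRUE;
                return noErr;
        }
        return mgNotSupported;
    }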

Similar Messages

  • Error when using a CIN node in LabVIEW

    My name is Juan Chen. I wish to use a CIN node to calculate a CRC check code in LabVIEW. I use an array as input and an unsigned word as output. When I use a while loop, I get an error: "not enough memory", or "the memory could not be read". What is the reason, please?

    How do I use the Call Library Function node? In the Visual C++ 6 environment, if I only do a Rebuild All to generate a DLL, and don't use the command to convert it to an .lsb file, do I still need to add the files cin.obj, labview.lib, etc. to the project?
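    For the CRC part, a plain DLL called through the Call Library Function Node sidesteps the cin.obj/.lsb machinery entirely. A minimal sketch, assuming a CRC-16/MODBUS variant (the function name, polynomial, and seed are illustrative; use whatever CRC your protocol specifies):

        /* crc16.c -- build as a DLL; configure the Call Library Function
           Node with return type U16 and parameters (U8 array pointer,
           I32 length). */
        #include <stdint.h>

        #ifdef _WIN32
        #define EXPORT __declspec(dllexport)
        #else
        #define EXPORT
        #endif

        EXPORT uint16_t crc16(const uint8_t *data, int32_t len)
        {
            uint16_t crc = 0xFFFF;          /* CRC-16/MODBUS seed */
            int32_t i;
            int bit;
            for (i = 0; i < len; i++) {
                crc ^= data[i];
                for (bit = 0; bit < 8; bit++)
                    crc = (crc & 1) ? (uint16_t)((crc >> 1) ^ 0xA001)
                                    : (uint16_t)(crc >> 1);
            }
            return crc;
        }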

  • CIN node and C++ code

    Hi.
    I need to make a LabVIEW program that calls and uses C++ code. Beyond just calculating numbers with the C++ code, the LabVIEW program needs to send Boolean signals and be able to talk to the C++ code. Are such things possible? Is there a good article or other information that teaches how to do it?
    Thank you.
    Airo

    I'm not sure why it shouldn't be possible, at least as long as you are somewhat C-savvy and know how to configure your C compiler and linker to create specific output. However, to be honest, I think CIN programming has had its day. It does not really have that many advantages anymore, but it definitely has some disadvantages compared to calling an external shared library through the Call Library Node.
    As a reference about CIN or DLL usage in LabVIEW, I recommend the online help document Using External Code in LabVIEW, accessible through Help->Search the LabVIEW Bookshelf.
    Rolf K
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
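    As a concrete illustration of the Call Library Node route: expose plainly exported functions from the shared library (for C++ sources, wrap them in extern "C" so the names are not mangled). A minimal sketch in C with illustrative names; a LabVIEW Boolean can be converted to an I32 on the diagram and passed like any other integer:

        /* example.c -- build as a DLL; call via the Call Library Function
           Node with return type I32 and parameters (I32 enable, DBL value). */
        #ifdef _WIN32
        #define EXPORT __declspec(dllexport)
        #else
        #define EXPORT
        #endif

        EXPORT int process(int enable, double value)
        {
            /* act on the Boolean "signal" sent from the diagram */
            return enable ? (int)(value * 2.0) : 0;
        }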

  • Can a CIN node raise an event?

    Just curious. I am able to pass the event structure to a CIN node, where it appears as "LVUserEventRef *userEvent" (and I found 'typedef MagicCookie LVUserEventRef;' in extcode.h). But I am unable to find the "cookie.cpp" mentioned in extcode.h. So is information about this available somewhere?

    Unfortunately, the information you are looking for is part of the source code for LabVIEW and is National Instruments confidential. It cannot be released to the public. I'm sorry for any inconvenience this creates.
    Robert Mortensen
    Software Engineer
    National Instruments
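    For what it's worth, the internals of cookie.cpp are not needed just to fire an event from external code: PostLVUserEvent is a public entry point declared in extcode.h. A minimal sketch, assuming the user event was created for an I32 (the wrapper name is illustrative):

        /* Fire a LabVIEW user event from a CIN or DLL. PostLVUserEvent
           copies the data it is given, so a stack variable is fine; the
           data type must match the user event's type on the diagram. */
        #include "extcode.h"

        MgErr FireEvent(LVUserEventRef *userEvent, int32 value)
        {
            return PostLVUserEvent(*userEvent, &value);
        }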

  • Which tablet/s can run executables created in LabVIEW 8.5?

    Background: I have created a VI that acquires data over bluetooth from a specific DAQ device. The VI is created in Windows 7 and LV 8.5. The VI is tested to work on the development machine. I'd like end users to be able to install and run an executable version of this VI from a tablet that has bluetooth capability.
    Question: Which tablets can run executables created in LabVIEW 8.5? I think tablets running Windows 8.1, such as the Surface Pro and Dell Venue Pro, should be okay, but I need confirmation before I purchase one. Also, what about tablets that run other operating systems, such as Windows RT or Android?
    Extended question: What tablets on the market can run LabVIEW 8.5 VIs (or, in other words, allow LabVIEW 8.5 to be installed)?
    Gurdas Singh
    PhD. Candidate | Civil Engineering | NCSU.edu

    Well, since LabVIEW 8.5 isn't officially supported on Windows 7 or 8, you may have issues.
    http://digital.ni.com/public.nsf/allkb/B972242574D4BB99862575A7007520CB
    Of course, many on the forums have stated they are able to run older versions of LabVIEW on newer operating systems without issue, but know that from NI's perspective you should not be running LabVIEW 8.5 on Windows 7. Which means no modern tablet.
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

  • I get an error 1 when I'm building an executable file in Labview 6.0

    I'm trying to create an executable file in LabVIEW 6.0, but I get an error 1 (see attachment) with a GSim file (Control and Simulation toolkit). I didn't install the toolkit on the computer where I'm trying to create the executable, because we lost the CD and we had the toolkit installed on another computer, so I copied the files I needed.
    Attachments:
    error1labview.doc ‏56 KB

    You probably have a linkage issue.
    Open the top-level VI on the machine that has the toolkit installed, and make sure that it opens without an error (the run arrow should be white). Press Ctrl + Shift + run arrow. This forces a recompile of the code and will update the internal links to all the sub-VIs. Then save the top-level VI and close it. Open a new VI, so that there is only one VI in memory and it is a new (blank) VI, and try to build the application again.
    J

  • SQL Azure - query with row_number() executes slow if columns with nvarchar of big size are included

    I am linking my question from Stack Overflow here. The link: http://stackoverflow.com/questions/27943913/sql-azure-query-with-row-number-executes-slow-if-columns-with-nvarchar-of-bi
    Appreciate your help!
    Gorgi

    Hi,
    Thanks for posting here.
    I suggest you check these links and optimize your query on SQL Azure.
    http://www.sqlusa.com/articles/query-optimization/
    http://sqlblog.com/blogs/paul_white/archive/2011/02/23/Advanced-TSQL-Tuning-Why-Internals-Knowledge-Matters.aspx
    Also check this thread, which describes a similar issue.
    https://social.msdn.microsoft.com/Forums/en-US/c1da08b4-265d-4ec8-a252-8d7090234e3e/simple-select-query-takes-long-time-to-execute-with-nvarchar-columns?forum=transactsql
    Girish Prajwal

  • How can an external drive cause my Internet to slow to a crawl and then fail?

    I have two WD My Passport Ultra 1 TB USB 3 new drives. One I'm using for Time Machine and one for CCC (Carbon Copy Cloner). I have been trying to use them with my new MacBook Pro retina 13-inch.
    For the last few days I've had weird problems trying to connect them via a USB 3 hub. I thought the hub was the problem (because things seemed to be fine until I introduced a hub into the equation) and already returned one.
    But now I see the same thing happening even without the hub.
    It appears that one drive, when connected to a USB 3 port (it doesn't matter which port), even when it's doing nothing (I erased it and removed the CCC task), causes the Internet to slow to a crawl and then stop. All connectivity ceases.
    The instant I unmount and remove the drive the Internet springs back to normal speeds - over wifi about 110 Mbps.
    Disk Utility says there are no problems with the disk. So does WD's own diagnostic utility. And the firmware is up to date.
    The other disk, running Time Machine, seems to have no issues at all.
    I guess I will return this drive. It could be subtly defective in some way. But what way? Why would an external USB 3 drive affect the Internet connectivity?
    doug

    I came across this post while searching for the answer to the very same question, only I am experiencing this problem when connecting to a USB 3.0 hub and nothing else. The resulting slowdown in my Internet speed leaves me hanging intermittently between screens in Safari, sometimes indefinitely. When I remove the USB cable, normal speeds resume. Activity Monitor shows no undue load on the CPU. I did not notice this problem until just recently. Could this have something to do with the recent OS X patch?
    BTW: I am running a 2012 15" MacBook Pro with Retina display. 
    Anybody have a clue as to what might be going on here?
    Gerry

  • I updated my 4s, and now the text messaging is so slow it's almost unusable (type, then wait 20 seconds to see characters)

    I updated my 4s, and now the text messaging is so slow it's almost unusable (type, then wait 20 seconds or more to see characters on the screen). Help, please!

    It is possible that there is a problem with the files sessionstore.js and sessionstore.bak in the Firefox Profile Folder.
    Delete the sessionstore.js [2] file, any numbered sessionstore-##.js [3] files, and sessionstore.bak in the Firefox profile folder.
    * Help > Troubleshooting Information > Profile Directory: Open Containing Folder
    * http://kb.mozillazine.org/Profile_folder_-_Firefox
    Deleting sessionstore.js will cause App Tabs, Tab Groups, and open and closed (undo) tabs to be lost, so you will have to create them again (make a note of them or bookmark them).
    See also:
    * [1] http://kb.mozillazine.org/Session_Restore
    * [2] http://kb.mozillazine.org/sessionstore.js
    * [3] http://kb.mozillazine.org/Multiple_profile_files_created

  • Error 15 at "Run VI" Invoke Node When Building a LabVIEW Executable

    I am trying to run a VI by using the "Run VI" method of the Invoke Node. The source code works perfectly. However, I cannot run the VI when I use the executable version. Specifically, I get error code 15 at the "error out" output of the Invoke Node in the executable version. Why is this occurring?
    Thanks

    Hello ameng,
    This behavior is typically the result of trying to call a VI that has had its front panel removed. The KnowledgeBase article linked below has more information on this issue.
    Error 1 or 15 Opening Dynamic VI from LabVIEW Application
    Please let me know if you have any other questions about this issue.
    Regards,
    Matt F
    Keep up to date on the latest PXI news at twitter.com/pxi

  • Matlab syntax errors are not caught and reported when code is executed in a LabVIEW Matlab script node

    I want to be able to catch errors that occur during the execution of MATLAB code in a LabVIEW MATLAB script node. According to my understanding of the documentation, errors of this type should be available at the "error out" terminal of the MATLAB script node. I have noticed that even deliberately generating MATLAB syntax errors does not produce an error output. See the attached VI for an example.
    Attachments:
    matlab.vi ‏13 KB

    I ran your example VI, and this is what I got in the Error Out cluster:
    "Code 1050
    Error occured while executing script. Error message from server: ??? This is an error
    . in matlab[1].vi"
    If you are generating custom error messages in MATLAB, I would suggest passing them back to LabVIEW through output variables in the script node.
    Chris_Mitchell
    Product Development Engineer
    Certified LabVIEW Architect

  • Can a C++ sub-program invoked as a cin node call a LabView sub-vi as a "function"

    The subject says it all: we have a mostly LabVIEW system that controls a numerical simulation program. We would like to add a new numerical analysis package written in C. To be used as written, this C package must call one of our LabVIEW sub-VIs as a function numerous times. Can this be done?
    -thanks

    CINs are one way of doing this, but they are C code written specifically for LV.
    A better solution is to use the Call Library Function node, or the DLL node. Your C code may already be built into a DLL. If not, it is easier and more familiar to the people doing it.
    Another option is to use a built EXE and the System Exec node. Note that a DLL has multiple entry points that can be called at different times to provide lots of flexibility. An EXE has few, typically one, entry points and is sort of difficult to control once it is up.
    Another option, if the C code is written this way already, is to use an automation interface -- an ActiveX interface. This is really about the same as a DLL, but it follows certain standards so that ActiveX client programs can access things in a standard way.
    Be sure to look at the Code Interface or external code manual. It should be a PDF manual in the help directory.
    Greg McKaskle
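    One way to sidestep the callback problem entirely is to invert the control flow: export the C package as a DLL with init/step entry points and let the LabVIEW diagram drive the iteration, running the sub-VI between calls. A rough sketch under that assumption (analysis_init and analysis_step are illustrative names, not part of the original package):

        /* Sketch: the diagram calls analysis_step N times and runs the
           sub-VI between calls, instead of C calling the sub-VI N times. */
        #ifdef _WIN32
        #define EXPORT __declspec(dllexport)
        #else
        #define EXPORT
        #endif

        static double state;      /* package state carried between steps */

        EXPORT void analysis_init(double x0) { state = x0; }

        /* One iteration: consume the sub-VI's latest result and return
           the next value for the sub-VI to process. */
        EXPORT double analysis_step(double vi_result)
        {
            state = 0.5 * (state + vi_result);   /* illustrative update */
            return state;
        }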

  • Executable created in LabVIEW 7.1 containing Call Library Function Node runs but creates error when Call Library Function Node is executed

    I have created a simple application that controls a piece of equipment, with all control via a supplied DLL. Hence, there are a number of Call Library Function Nodes within the code and a modicum of other LabVIEW code. Everything works fine as a LabVIEW application, but when it is converted to an executable, although the application runs and functions, as soon as any Call Library Function Node is executed, calling into the DLL, I get the C++ debug error in the attachment.
    Is this something that I can solve from within LabVIEW, or is the problem likely buried in the dll?
    Damian
    Attachments:
    CLFN error.JPG ‏22 KB

    Hi Wise,
    Try building an executable from a very simple VI that makes one call to the DLL. Have you also tried using the Call Library Function node on other simple DLLs that you know work?
    Regards,
    Stanley Hu
    National Instruments
    Applications Engineering
    http://www.ni.com/support
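    For that isolation test, a one-function DLL like the sketch below is enough. One classic cause of a C++ debug error that only appears in some contexts is a calling-convention mismatch between the DLL and the node's configuration, so that setting is worth double-checking too (the function name here is illustrative):

        /* Minimal test DLL: call add_one from a one-node VI, then build
           that VI into an executable. The function uses the C calling
           convention (cdecl); the Call Library Function Node must be
           configured to match. */
        #include <stdint.h>

        __declspec(dllexport) int32_t add_one(int32_t x)
        {
            return x + 1;
        }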

  • Mathscript griddata function is very slow in Labview 8.6

    I'm using the griddata function in Mathscript as a direct replacement of a Matlab function we were using in the past.
    I discovered a bug with griddata, which was fixed in LabVIEW 8.5. The function executed very well in that version.
    I've upgraded the code to LV8.6, and now when executing griddata on the same data set in either 'linear' or 'cubic' mode, the function is MUCH slower than when using 'v4' or 'nearest'. All four methods had the same execution time in LV8.5.1
    I have searched many topics covering issues with griddata and 'cubic', but I'm pretty sure this is a new bug. 
    I've attached a VI containing my data set. Running this VI in 8.5.1 and using 'linear' mode is much faster than in 8.6.
    I'll be going back to Matlab, since the rest of the code has already been upgraded to 8.6 and this regression makes it unusable for my purposes.
    Any ideas?
    Thanks.
    Attachments:
    no_Matlab_griddata_example.vi ‏94 KB

    I should correct one detail in my message. The griddata bug was originally fixed in 8.5.1, not 8.5, per the following CAR: 41OHD4VQ.
    The attached VI should be run in 8.5.1 and then in 8.6 ('linear' method) to see the performance decrease.

  • Calling a library function node much faster than labview code?

    Hi, I wrote a LabVIEW routine to perform a multiple-tau autocorrelation on a large array of integers. A multi-tau autocorrelation is a way to reduce the computation time of the correlation, but at the expense of resolution. You can tailor the multi-tau correlation to give you good resolution where you need it. For instance, I require good resolution near the middle (the peak) of the correlation, so I do a linear autocorrelation for the first 64 channels from the peak, then I skip every second channel for the next 32, then skip every 4th channel for 32 more, then skip every 8th for 32 channels... etc.
    Originally, I wrote my own multi-tau calculation, but it took several hours to run for just 1024 channels of the correlation of around 2 million points of data. I actually need to do the correlation on probably 2 billion or more points of data, which would take days. So then I tried using LabVIEW's AutoCorrelation.vi, which calls a library function. It could do a linear autocorrelation with 4 million points in less than a minute. I figured that writing my code in C and calling it using a Call Library Function Node would be faster, but that much faster?
    Finally, I wrote some code that extracts the correlation data points I would have got from my multi-tau code from the linear correlation function produced by AutoCorrelation.vi. Clearly this is not optimal, since I spend time calculating all those channels of the correlation function just to throw them away in the end, but I need to do this because the final step of my procedure is to fit the correlation function to a theoretical one. With, say, 2 million points, the fit would take too long. The interesting thing here is that simply extracting the 1024 points from the linear autocorrelation function takes a significant amount of time. Is LabVIEW really that slow?
    So, my questions are: if I rewrite my multi-tau autocorrelation function in C and call it using a Call Library Function Node, will it run that much faster? Can I achieve the same efficiency if I use a Formula Node structure? Why does it take so long just to extract 1024 points from an array?
    I've tried hiding indicators, and this speeds things up a little bit, but not very much.
    I'll attach my code if you're interested in taking a look. There is a switch on the front panel called 'MultiTau': if it is in the off position, the code performs the linear autocorrelation with AutoCorrelation.vi; if in the on position, it performs a multi-tau autocorrelation using the code I wrote. Thanks for any help.
    Attachments:
    MultiTauAutocorrelate.vi ‏627 KB

    Hi,
    The C routine that AutoCorrelation.vi uses is probably a highly optimised routine. If you write a routine in LabVIEW, it should be less than 15% slower, but you'd have to know all the ins and outs of LabVIEW: how data is handled, when memory is allocated, etc. Also note that AutoCorrelation.vi has years of engineering behind it, and probably multiple programmers.
    It might even be possible that the C code uses an algorithmic improvement, like the Fast Fourier Transform improves speed over the direct Fourier transform. I think the autocorrelation can be done using an FFT, but that isn't my thing, so I'm not sure.
    For a fair comparison, posting the code in this forum was a good idea. I'm sure together we can get it to 115% or less of the C variant. (15/115 is just a guess, btw.)
    I'm still using LV7.1 for client compatibility, so I'll look at the code later.
    Regards,
    Wiebe.
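    If the C route is taken, one option is to evaluate the correlation only at the multi-tau lags instead of computing the full linear correlation and discarding channels. A minimal sketch for a DLL called through the Call Library Function Node (names and conventions are illustrative; for records this long, an FFT-based approach via the Wiener-Khinchin theorem would likely be faster still):

        /* Direct autocorrelation evaluated only at the requested lags.
           x: input samples; n: sample count; lags/out: nlags requested
           lag values and the corresponding results. */
        #include <stdint.h>

        #ifdef _WIN32
        #define EXPORT __declspec(dllexport)
        #else
        #define EXPORT
        #endif

        EXPORT void autocorr_at_lags(const double *x, int32_t n,
                                     const int32_t *lags, double *out,
                                     int32_t nlags)
        {
            int32_t k, i;
            for (k = 0; k < nlags; k++) {
                int32_t lag = lags[k];
                double sum = 0.0;
                for (i = 0; i + lag < n; i++)
                    sum += x[i] * x[i + lag];
                out[k] = sum;
            }
        }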
    "dakeddie" <[email protected]> wrote in message news:[email protected]...
    Hi,&nbsp; I wrote a labview routine to perform a multiple tau autocorrelation on a large array of integers.&nbsp; A multi tau autocorrelation is a way to reduce the computation time of the correlation but at the expense of resolution.&nbsp; You can taylor the multitau correlation to give you good resolution where you need it.&nbsp; For instance, I require good resolution near the middle (the peak) of the correlation, so I do a linear autocorrelation for the first 64 channels from the peak, then I skip every second channel for the next 32, then skip every 4th channel for 32 more, then skip every 8th for 32 channels... etc. Originally, I wrote my own multitau calculation, but it took several hours to perform for just 1024 channels of the correlation of around 2million points of data.&nbsp; I need to actually do the the correlation on probably 2 billion or more points of data, which would take days.&nbsp; So then I tried using labview's AutoCorrelation.vi which calls a library function.&nbsp; It could do a linear autocorrelation with 4 million points in less than a minute.&nbsp; I figured that writing my code in C and calling it using a call library function node would be faster, but that much faster?Finally, I wrote some code that extracts the correlation data points that I would've got from my multitau code from the linear correlation function that I get from the AutoCorrelation.vi.&nbsp; Clearly this is not optimal, since I spend time calculating all those channels of the correlation function just to throw them away in the end, but I need to do this because the final step of my procedure is to fit the correlation function to a theoretical one.&nbsp; With say 2million points, the fit would take too long.&nbsp; The interesting thing here is that simply extracting the 1024 point from the linear autocorrelation function takes a significant amount of time.&nbsp; Is labview really that slow?So, my questions are...&nbsp; if I rewrite my multitau autocorrelation function in C and call it using a call library function node, will it run that much faster?&nbsp; Can I achieve the same efficiency if I use a formula node structure?&nbsp; Why does it take so long just to extract 1024 points from an array?I've tried hiding indicators and this speeds things up a little bit, but not very much.I'll attach my code if you're interested in taking a look.&nbsp; There is a switch on the front panel called 'MultiTau'... if in the off position, the code performs the linear autocorrelation with the AutoCorrelation.vi, if in the on position, it performs a multitau autocorrelation using the code I wrote.&nbsp; Thanks for any help.
    MultiTauAutocorrelate.vi:
    http://forums.ni.com/attachments/ni/170/185730/1/M​ultiTauAutocorrelate.vi
