LabVIEW Threads?

I am trying to get an understanding of what is a LabVIEW thread when using
LV6 on Win32.
1. If a block diagram has two parallel while loops, is each loop a thread?
2. Is each sub-VI a separate thread? (I don't think so, but I thought I
should ask)
3. If a sub-VI is a loop, and does not have a return terminal, is it a
thread?
4. If a call to a sub-VI starts a thread, how do you kill it?
Thanks. Any answers or references to material on this topic (other than NI
App Note 114) would be appreciated...Ed

Before I get to the direct questions, a bit of overview. LV execution
is controlled by dataflow and the loop/case/sequence structures. LV can
execute the same diagram in many ways: the same VI, without being
recompiled, can run on a single-threaded OS such as the classic Mac OS,
on a single-CPU multithreaded machine, or on a multi-CPU multithreaded
OS. It does this much like the OS itself does, by scheduling tasks
using a priority-based round-robin scheduler. There can be any number
of these scheduling objects, which we call execution systems. Each
execution system has a queue used to execute items marked to run there.
On a single-threaded OS, every execution system is run by the one and
only thread. That thread cooperatively multitasks between the different
execution systems and the window-handling UI tasks: it picks an
execution system, dequeues a task, and when the task releases control,
it repeats the cycle, determining the next execution system and task to
execute.
On a single-CPU multithreaded OS, each execution system defaults to
having its own thread. The OS schedules threads preemptively based upon
priorities and heuristics. I say this is the default because it is
possible to customize this to have more than one thread per execution
system. Each thread sleeps until a task is placed on its queue. At that
point the OS may choose to swap threads based upon priorities.
Additionally, the OS can swap out threads, and thus swap out execution
systems, at any point.
The default for a multi-CPU multithreaded OS is to allocate M threads
per execution system, where M is the number of CPUs. Each of the
execution queues now has multiple threads sleeping, waiting for a task.
The OS chooses a thread based upon priorities for each of the CPUs and
determines how long to execute and how often to swap. Again, this is
the default and can be modified, though this is rarely necessary.
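The queue-per-execution-system model described above can be sketched in ordinary textual threading terms. This is a minimal Python analogy, not the LabVIEW implementation; the class and method names are hypothetical:

```python
import queue
import threading

class ExecutionSystem:
    """Toy model of an execution system: a task queue serviced
    by a fixed pool of worker threads (M threads per system)."""
    def __init__(self, name, num_threads=1):
        self.name = name
        self.tasks = queue.Queue()
        self.workers = [threading.Thread(target=self._run, daemon=True)
                        for _ in range(num_threads)]
        for w in self.workers:
            w.start()

    def _run(self):
        while True:
            task = self.tasks.get()   # sleep until a task is queued
            task()
            self.tasks.task_done()

    def schedule(self, task):
        """Mark an item to run in this execution system."""
        self.tasks.put(task)

results = []
es = ExecutionSystem("standard", num_threads=2)
es.schedule(lambda: results.append("task A"))
es.schedule(lambda: results.append("task B"))
es.tasks.join()   # wait until both queued tasks have executed
```

With two worker threads, both tasks may run on different OS threads; with one worker the same queue degenerates to the cooperative single-threaded case.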
> 1. If a block diagram has two parallel while loops, is each loop a thread?
Normally, no. Each thread runs one or more tasks, and since both loops
are in the same VI, they will be executing in the same execution
system. The generated code for both while loops contains cooperative
scheduling opportunities. The loops will multitask based upon delays
and other asynchronous nodes within them. Provided the loops do not
contain synchronization functions, they will execute relatively
independently of one another, sharing the CPU resources. If one of the
loops is executing more frequently than it needs to, the best way to
control this is to place a Wait node or some other synchronization
function in it to limit its execution speed. You can place the loops in
separate threads by moving them into different VIs and setting those
VIs to execute in different execution systems; you can also customize
the execution system they run in to have more than one thread. If you
can be more specific about why you want to run them in different
threads, I can help determine whether this would help and the best way
to do it.
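The cooperative sharing described above can be imitated with two rate-limited loops on ordinary threads. In this hedged Python sketch (all names hypothetical), the sleep call plays the role of the Wait node: the loop with the shorter wait iterates more often, and both run independently:

```python
import threading
import time

def polling_loop(stop, log, period_s):
    """One 'while loop' rate-limited by a wait: each pass records a
    timestamp, then yields the CPU for period_s seconds."""
    while not stop.is_set():
        log.append(time.monotonic())
        time.sleep(period_s)   # the Wait node: limits loop rate

stop = threading.Event()
fast_log, slow_log = [], []
t1 = threading.Thread(target=polling_loop, args=(stop, fast_log, 0.01))
t2 = threading.Thread(target=polling_loop, args=(stop, slow_log, 0.05))
t1.start(); t2.start()
time.sleep(0.5)           # let both loops run for half a second
stop.set()
t1.join(); t2.join()
# fast_log accumulates more iterations than slow_log, yet neither
# loop blocks the other.
```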
> 2. Is each sub-VI a separate thread? (I don't think so, but I thought I
> should ask)
No. Each VI specifies which execution system it wishes to run in. By
default, it is set to Same as Caller so that it adapts to its caller and
avoids thread swaps. You can set the execution system on the Execution
page of the VI Properties dialog. By setting this, you can essentially
place VIs in their own thread, but this isn't the default since it would
affect performance.
> 3. If a sub-VI is a loop, and does not have a return terminal, is it a
> thread?
Whether or not a subVI has a loop doesn't affect how it is scheduled.
> 4. If a call to a sub-VI starts a thread, how do you kill it?
You use one of the synchronization primitives or some other
communication mechanism, such as a global variable, to tell the loop to
stop executing.
When the subVI stops executing, the execution system either picks
another task to execute or goes to sleep. It is possible to use the VI
Server to abort VIs that have been run using the Run method, but this
isn't necessarily a good thing to do. It is far better to have loops
and subVIs terminate normally.
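The stop-via-global pattern can be sketched in textual form. In this hypothetical Python analogy, a module-level flag plays the role of the LabVIEW global variable, and joining the thread is the graceful alternative to aborting the VI:

```python
import threading
import time

stop_requested = False   # plays the role of a LabVIEW global variable

def subvi_loop(iterations):
    """A subVI containing a loop: it polls the 'global' each pass
    and returns normally when asked to stop."""
    count = 0
    while not stop_requested:
        count += 1
        time.sleep(0.01)
    iterations.append(count)

iterations = []
worker = threading.Thread(target=subvi_loop, args=(iterations,))
worker.start()
time.sleep(0.1)
stop_requested = True    # tell the loop to stop executing
worker.join()            # the subVI terminates normally; no abort needed
```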
>
> Thanks. Any answers or references to material on this topic (other than NI
> App Note 114) would be appreciated...Ed
Over the last three years there have been presentations on the
execution system of LV. You can find most of these on DevZone by going
to zone.ni.com and searching for "Inside LabVIEW".
Also feel free to ask more questions or give more background.
Greg McKaskle

Similar Messages

  • Can LabVIEW threads sleep in increments less than a millisecond?

    I am aware of two LabVIEW sleep functions:
    1) All Functions | Time & Dialog | "Wait (ms)"
    2) All Functions | Time & Dialog | "Wait Until Next ms Multiple"
    In this day and age, when 3GHz processors sell for less than $200, it seems to me that a millisecond is an eternity. Is there any way to tell your LabVIEW threads to sleep for something less than a millisecond?
    In Java, the standard Thread.sleep() method is written in milliseconds [sorry, the bulletin board software won't let me link directly]:
    http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Thread.html#sleep(long)
    but there is a second version of the method that allows for the possibility of nanoseconds:
    http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Thread.html#sleep(long, int)
    So there does seem to be some consensus that millisecond sleep times are getting a little long in the tooth...

    Hi Tarheel!
    Maybe you should get some idea of the kind of timing accuracy that you can reach when using a loop.
    Use the attached VI, which repeatedly runs a For loop (10 iterations) reading the time, then calculates the average and standard deviation of the time difference between loop iterations.
    On my PC (P4, 2.6 GHz, W2K), I get a standard deviation of about 8 ms, which appears to be independent of the sleep duration I asked for.
    Same thing with a timed loop.
    Under MacOS X (PowerBook, 1.5 GHz), the SD falls to 0.4 ms.
    I tried disabling most of the background processes running on my PC, but I could not get a better resolution.
    It seems the issue is not in LV but in the way the OS manages its internal reference clock.
    Since you are a Java aficionado, maybe you could produce something equivalent?
    A proof that nanosecond resolution is available on a PC would be of great help to NI. Why bother with costly timers on DAQ cards?
    By the way, it took me about one minute to create the attached VI. I would like to have an idea of the time required to do the same thing in Java.
    Tempus fugit...
    CC
    Chilly Charly    (aka CC)
             E-List Master - Kudos glutton - Press the yellow button on the left...        
    Attachments:
    Timing precision.zip ‏11 KB
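The jitter measurement described above translates directly to any language with a sleep call. This hedged Python equivalent (the function name is hypothetical) reports the mean and standard deviation of the iteration-to-iteration gaps, which exposes the OS scheduler's granularity the same way the attached VI does:

```python
import statistics
import time

def sleep_jitter(requested_s, samples=50):
    """Run a loop that sleeps `requested_s` per iteration, timestamp
    each pass, and return (mean, stdev) of the gaps between passes."""
    stamps = []
    for _ in range(samples):
        stamps.append(time.perf_counter())
        time.sleep(requested_s)
    deltas = [b - a for a, b in zip(stamps, stamps[1:])]
    return statistics.mean(deltas), statistics.stdev(deltas)

mean_s, sd_s = sleep_jitter(0.001)
# mean_s typically exceeds the 1 ms requested; sd_s is the jitter
# imposed by the OS, not by the language runtime.
```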

  • Forcing context switching between Labview threads

    Hello,
    I have 2 threads interfacing with 2 serial ports in LabVIEW (let's say Thread1 is responsible for polling COM1 and Thread2 for polling COM2).
    By using occurrences, Thread1 executes before Thread2, and context switching between the threads is handled by LabVIEW. Now, I'm looking to do the following:
    As soon as a specific block of code is executed in Thread1, I need LabVIEW to force a context switch to execute Thread2. I want to do this in order to synchronize the data received from the 2 threads.
    1- So, can this be done, or are there other ways to synchronize?
    2- What if I want LabVIEW to run as a real-time process or a higher-priority thread on Windows XP in order to emulate a real-time effect and not get any delays? (I already changed the process priority from Process Explorer, but there seems to be no effect. I also changed the LabVIEW threads' priority to real-time.)
    3- Are there better approaches for (1) & (2)?
    Thank you,
      Walid F. Abdelfatah

    wfarid wrote:
    I already used occurrences for Thread1 to execute before Thread2. Do occurrences guarantee that LabVIEW will switch the context to Thread2 as soon as the occurrence is fired?
    -- Walid
    NO!
    LV depends on the OS to schedule threads. If you are sticking with non-RT, then you would be better off having Thread1 do the work while it has the CPU.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
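Ben's point, that a signal guarantees ordering but not an instant context switch, can be seen with an Event standing in for the occurrence. This is a hypothetical Python sketch, not LabVIEW's mechanism:

```python
import threading

block_done = threading.Event()   # plays the role of the occurrence
log = []

def thread1():
    log.append("T1: specific block of code")
    block_done.set()             # fire the occurrence

def thread2():
    block_done.wait()            # sleep until Thread1 signals
    log.append("T2: runs after the signal")

t2 = threading.Thread(target=thread2); t2.start()
t1 = threading.Thread(target=thread1); t1.start()
t1.join(); t2.join()
# Signalling guarantees the ORDER of the two log entries, but the OS
# alone decides WHEN thread2 actually gets the CPU after the signal.
```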

  • User interface threads

    Ok let's say you're writing a user interface and nothing involves
    special-resources like a daqcard so there's no constraints on using things
    at once. These are some kind of stupid questions and ideas but I'm really
    not sure how to write a quality app with user interface at the moment.
    My current design is bad, it is a mainVI which checks the menu and
    frontpanel buttons, decides which operation or calculation (if any) is
    selected, and sends that to a case structure to perform the operation.
    This is sloooow but the most common way in examples that I've seen. The
    major problem is that while processing, the interface isn't checked and
    therefore mouseclicks might be lost. Let's say the processing takes like
    500ms on average; that is kind of slow between user-interface queries, so it
    would feel unresponsive in the meantime.
    So here's my three ideas so far, I'd love some comments on what will be FAST
    and what would be worthwhile to spend my time on:
    1. Leave the app as-is, latch the booleans. However, you can't latch
    mouse-clicks on a picture indicator that I know of. I emailed NI before
    about this and they just suggested adjusting execution priorities. Anybody
    else messed around with this? Probably the worst option, but would take zero
    programming.
    2. Divorce the user interface and the processing. Create two parallel
    while-loops in the same VI, one checks the buttons/menu on the front panel
    the other processes the request or calculation. Let's say on average UI
    checking takes 1.2ms and processing takes about 500ms. Also in this case is
    the watch icon still really my best option (since checking buttons/menu only
    takes like 1.5ms on average) for not wasting time repeatedly checking when
    there's no input and dividing up cputime accordingly? Seems like there'd be
    some overhead in switching back and forth repeatedly.
    3. Going all-out and changing to somewhat of an object structure. That way
    the UI could create a new "execution" refnum, maintain some list of created
    objects, process, return values, destroy any object, so everything could be
    going on in parallel. That way one slow calculation won't bog down the rest
    of the things the UI requests. The idea is far too abstract to me at this
    point, but on a single CPU w98 system is it even worth my thinking about
    such a structure? I get the feeling I'd see zero performance change between
    the two, in fact maybe worse from any labview thread overhead!
    Thanks for any comments. I have seen DAQ intensive apps discussed often, but
    don't usually catch much on large user interface apps.
    -joey

    Hi Joey
    First, check that you are not recalculating the values on every iteration of the user interface loop, but only recalculating
    on a change in the user interface values.
    Otherwise I would use idea No. 2, but with these changes:
    1. Only check the whole user interface every 200-300ms; a 1.5ms loop time will unnecessarily load the CPU.
    2. Each user interaction could be given a string representation and then placed in a queue to wait for the calculation loop to
    have time to process it (so the user instructions are not lost).
    3. Have separate loops for faster and slower calculations (or more), each having its own queue.
    4. The extension to the idea of having separate loops is to have each loop in a separate, independently running VI (see
    VI Server). Still use the queues to pass the data. This method would allow the calculations and the user interface to run on
    separate threads and also lets you alter the execution priority of each VI to fine-tune the execution times.
    Following these instructions, you would produce a basic client-server architecture for your user interface; as long as the UI
    doesn't require too many slow calculation results before continuing, this should work well.
    If this is still not fast enough then, if you have used suggestion No. 4, the calculations can be moved off the user's computer to a
    faster server (using VI Server), assuming the machines are networked.
    Hope this gives you some ideas.
    Tim S
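Suggestion 2 above, serializing user interactions into a queue so none are lost while a slow calculation runs, looks like this in a textual sketch (Python; the event strings and names are hypothetical):

```python
import queue
import threading

ui_events = queue.Queue()    # user interactions, serialized as strings

def calculation_loop(results):
    """Consumer loop: dequeues UI requests one at a time, so clicks
    made during a slow calculation wait in the queue instead of
    being dropped."""
    while True:
        request = ui_events.get()
        if request == "quit":
            break
        results.append(f"processed {request}")

results = []
worker = threading.Thread(target=calculation_loop, args=(results,))
worker.start()
# The UI loop only enqueues and returns, so it stays responsive:
for click in ("button A", "menu Save", "button B"):
    ui_events.put(click)
ui_events.put("quit")
worker.join()
```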

  • Get version of EXE built outside of LabVIEW

    Hi,
    I was looking for a way to read the "Versions" File property of an EXE that was built in C++  (or any other EXE).  I would like my application to read it programmatically instead of having the information in some other file that my application reads.
    Thanks,
    Gary
    Solved!
    Go to Solution.

    Hello glstill,
    There is a pretty good discussion of that problem as well as a few possible solutions using .NET or the WinAPI offered in this thread from 2006:
    LabVIEW thread: File version info
    http://forums.ni.com/t5/LabVIEW/File-version-info/m-p/361945?jump=true
    Some of the .NET information may be out of date, but the discussion is relevant and a few people have provided code to do more or less what I think you're looking for.
    Another pretty clean looking and more recent example is available here on the NI Developer Community:
    Example: Get File Version Information for EXE’s and DLL’s
    https://decibel.ni.com/content/docs/DOC-13866
    Best Regards,
    Tom L.

  • Error -200220 occurred at DAQmx Create Channel (AI-Voltage-Basic).vi

    I am trying to set up my PMT using NI-DAQ 7.3 along with Labview 7.1.  I keep getting an error message described above.  I am new to this program, so I am not sure what is wrong.  I have also received an error message stating the device could not be routed.  Any ideas?  Thanks

    Hello WayneState,
    Welcome to the NI Forums. The error message should say something like this: "Possible reason(s): Measurements: Device identifier is invalid." This means that the name of your device was incorrect. You should use the drop-down box to select the one that is configured in Measurement and Automation Explorer (MAX).
    There is very valuable information on this site, since it is designed for people starting with data acquisition using our products.
    Finally, I see you have other problems in this LabVIEW thread that suggest you have already overcome the error in this thread. If so, please post back your description of the resolution; if not, please feel free to post back some screenshots of MAX and LabVIEW.
    Gerardo O.
    RF Systems Engineering
    National Instruments

  • Step into step over step out disabled while running

    Hello all,
    Step Into, Step Over, and Step Out are disabled whenever I run the sequence. The Run button is also disabled after execution breaks at a breakpoint.
    It's very difficult to debug the code because of these issues.
    Does this have anything to do with a custom process model? I use a third-party process model, but other PCs that use the same process model don't have these issues.

    Hey sridar,
    Does this happen with a simple sequence file? For example, if you create a new sequence file with only a Message Popup step, set a breakpoint, and then try to debug, does it work properly or do you have the debugging buttons grayed out in this case too?
    This is most likely happening because your sequence has multiple threads and not all of the threads are being suspended when you hit the breakpoint. For example, if you have a LabVIEW VI executing in a different thread when the breakpoint is hit, you will not be able to debug unless the LabVIEW thread suspends. Here is a KnowledgeBase article describing this case: http://digital.ni.com/public.nsf/allkb/46D1E157756C37E686256CED0075E156?OpenDocument
    It is possible that it is being caused by the process model, but I'd start by trying a very simple sequence file. If that works properly, we know it's something in your larger sequence file. If the simple file also does not allow debugging, we might look into the process model for the issue.
    Let us know if you have any further questions!
    Daniel E.
    TestStand Product Support Engineer
    National Instruments

  • Binary write to file slowing down my program

    I have noticed this more on some types of computers, but I have two threads going: one that captures 10MB images and queues them, and another that dequeues them and appends them to a binary file.
    If I disable the write to file, the program chugs along well. Once enabled, I notice that the image capturing begins to act sluggish, almost in sync with the writing to the hard drive. I have tried changing the write loop's priority to the lowest as well.
    It isn't holding up at the queue/dequeue step either. Any pointers on what to try? I'm almost certain there is a memory sharing issue where one process is writing a 10MB image to memory while the other is reading a 10MB image from memory, and this is tying things up.

    dre99gsx wrote:
    The main VI which hosts both of these loops is setup as User Interface
    There's the problem, I think. The user interface subsystem is special: it's a single thread. This is required for certain things (interactions with the operating system and user interface, some external DLLs and ActiveX components) that need a single thread. In general, don't run your code in the user interface system unless you have a specific reason to run it there. As an example, property nodes that refer to front-panel items cause a switch to the user interface thread, so if you have a VI that uses a lot of references to update the front panel, that VI should run in the UI system.
    In your case, if you switch your main VI to Standard and leave the subVIs set to Same as Caller, I think you'll see a similar speed improvement. There's more information about LabVIEW threading here, although that information is slightly out of date; if I remember correctly (I couldn't find the reference in a quick search), LabVIEW now allocates as many threads per execution system as there are processor cores in your CPU, but not fewer than 4 per execution system, with the exception of the user interface.
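The capture-and-write split in the question is a classic bounded producer/consumer. This minimal Python sketch (buffer sizes and names are hypothetical) keeps the slow disk append out of the capture path while bounding memory use:

```python
import os
import queue
import tempfile
import threading

frames = queue.Queue(maxsize=4)  # bounded: capture blocks only if disk lags badly

def writer(path):
    """Dequeue image buffers and append each to one binary file,
    keeping disk I/O in its own thread."""
    with open(path, "ab") as f:
        while True:
            buf = frames.get()
            if buf is None:      # sentinel: no more frames
                break
            f.write(buf)

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
t = threading.Thread(target=writer, args=(path,))
t.start()
for i in range(3):
    frames.put(bytes([i]) * 1024)  # stand-in for a captured image
frames.put(None)
t.join()
size = os.path.getsize(path)       # 3 frames of 1024 bytes each
os.remove(path)
```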

  • Generally, when is mutex required?

    I am using the VI Server to launch multiple, identical .vit's in top level subpanels. It seems to be working fine.
    However, I have been reading up on re-entrancy, semaphores, LabVIEW threading, etc., and am now adequately petrified that there is a potential for disaster here if I am not careful.
    Are there some simple guidelines for when to protect shared objects from concurrent access and when not to worry about it? It gets confusing when multiple identical VIs are running. I have not made them re-entrant, but they are obviously running concurrently. I am guessing that since they were created from a .vit they are technically differently named VIs, right? What surprises me is that a common subVI called by the .vits also runs concurrently... and this is a normal VI (not re-entrant).
    I gather from reading that in some cases (such as when a DLL call is involved) some inherent protection is provided and I do not have to worry about it. Do I need to open the diagrams of all subVIs called in order to assess this threat or not?
    Other cases (such as a VI that Reads-Modifies-Writes a global variable) concern me. In this specific case, the ReadModWrite.vi is called as a subVI from the .vits -- do I need to be concerned with protecting the global with a mutex (semaphore)? How about if the read/mod/write were separated into 3 different subVIs (hypothetically)?
    If I make sure that anything "critical" (global R/Mod/W, GPIB Write/Query, etc.) is accessed from a VI that is non-reentrant, then does that solve the problem?
    I apologize for the shotgun question, but this issue really does affect many aspects of LabView.

    Great question Mike!
    A mutex is required whenever you have a data structure that can be modified by more than one entity.
    That is the short answer.
    These mutexes do not have to be explicit! LV uses mutexes "behind the scenes" to ensure non-re-entrant VIs are not executing at the same time.
    So if I understand your "such as a VI that Reads-Modifies-Writes a global variable" as a VI that is shared by all of your VITs and is the one and only method used to modify the global, then your global data is being protected by the mutex that protects the VI!
    This is one of the benefits that come along with the "functional global".
    RE: the DLL calls.
    The Call Library Function node defaults to the user interface thread. The user interface thread is single-threaded, so this ensures there is only one call to the DLL at a time. If you re-configure the nodes, then this does not apply.
    You can combine the two thoughts above to run a DLL in a thread other than the user interface and still ensure only one call at a time: put the DLL call in a non-re-entrant VI. This harnesses the same feature to ensure there is only one caller of the DLL while allowing it to run in a thread other than the user interface.
    So...
    Most of the time I can get away without using mutexes (mutexi?) because I make extensive use of functional globals.
    Now let me share how I architect LV apps and convince myself that I don't run into the issues you have raised.
    LV uses the dataflow paradigm, so I design my apps around the data rather than around the process.
    I start by defining my data objects based on the app's requirements.
    These data objects can be globals, files, registers, GUI objects, etc. When I have the ability to define these objects (i.e., I cannot define device status and control register formats), I use techniques similar to those used in DB design, keeping data grouped logically while keeping LV performance in mind as I go (i.e., if I know I will be accessing an array of clusters based on one of the elements, I will copy that element off to a separate array to facilitate quick searches, like a "key").
    Once I have the data structures defined, I go through and design the code that will provide the functionality I need, taking careful note of who writes the data objects and when.
    It is during this phase of the design that alarms go off when I see that data needs to be modified from more than one place.
    Each of the alarms is dealt with in turn, and the conflicts are handled using the tools that come with LV (LV2 globals, queues, semaphores, etc.).
    So by the time I start coding I already know where my conflicts are and have plans in place to handle each.
    I would be interested in hearing how others approach design.
    LV is a unique dev environment that requires design techniques that differ from other environments.
    I hope this answers your question.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
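The "functional global" protection Ben describes, one non-reentrant access point wrapping the read-modify-write, maps onto an explicit mutex in textual form. A hypothetical Python sketch:

```python
import threading

class FunctionalGlobal:
    """Textual analogue of a LabVIEW 'functional global' / LV2 global:
    every read-modify-write goes through one mutex-protected call."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def add(self, amount):
        with self._lock:   # the non-reentrant VI's implicit mutex
            current = self._value            # read
            self._value = current + amount   # modify + write
            return self._value

counter = FunctionalGlobal()
threads = [threading.Thread(target=lambda: [counter.add(1) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, none of the 4 x 1000 increments are lost; without it,
# concurrent read-modify-write cycles could overwrite one another.
```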

  • Variant data

    Hello,
    I am using an ActiveX DLL in my current LabVIEW project, something
    that I have never done before.
    I use one of the DLL's methods to receive Variant data and then use
    "Variant to Data" to try to receive an array of floats that I
    then want to plot on a chart. At the moment I am only getting 0.00 for
    all the values, and I know that this should not be the case. I created
    an indicator for the Variant, and this is the information I can get:
    OLE Variant
    Variant Type -> VT_ARRAY|VT_R4
    Value -> Array(Non Displayable)
    I have also tried the "Get Variant Attribute" VI but don't get any
    values for either 'name(s)' or 'value(s)'.
    Using "Variant To Flattened String" gives me an array of
    integers for
    the type string: 6,132,1 and nothing in the data string.
    Any ideas?
    Regards,
    Adrian.

    Nghtcrwlr wrote:
    Should I not post this question in LabVIEW thread?
    You originally posted this in the Breakpoint board.  I had the moderator move it to the LabVIEW board.
    Use the context help (Ctrl+H to toggle it on/off).  You will notice if you put your cursor over the Variant to Data node that it explains that you have to supply a data type to convert to.  You can wire anything into the data type, such as an array of doubles.  If you don't supply a type, it defaults to a variant.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • Help with folder change library call

    I'm attempting to create a LabVIEW function that waits on notification of a change in a folder, specifically a new file being created. I'm trying to use FindFirstChangeNotification, referenced here: http://msdn.microsoft.com/en-us/library/windows/desktop/aa364417(v=vs.85).aspx 
    I've attached the .vi I'm having trouble with, where I get the notification handle. It returns error 1097 when run. Any ideas?
    -Ian
    Solved!
    Go to Solution.
    Attachments:
    folderChangeNotify_TEST.vi ‏62 KB

    iyeager2012 wrote:
    I'm attempting to create a labview function that waits on notification of a change in a folder, specifically a new file to be created. I'm trying to use the FindFirstChangeNotification, referenced here; http://msdn.microsoft.com/en-us/library/windows/desktop/aa364417(v=vs.85).aspx 
    I've attached the .vi I'm having trouble with where I get the notification handle. It returns error 1097 when run. Any ideas?
    -Ian
    The WinAPI solution you provided has just about everything configured wrongly in the Call Library Node. This version should be correct for both LabVIEW 32-bit and 64-bit and also provides the additional calls needed to be functional. It is not a perfect implementation if you want to wait repeatedly on the same event, as then one should not call FindFirstChangeNotification() each time but instead use FindNextChangeNotification() on subsequent calls. Also, for a truly reusable library, the creation of the handle, the subsequent reinitialization, and the waiting on the handle should probably each be put in their own VI. But it at least gives a correct Call Library Node configuration for the function calls involved.
    One caveat with this VI: it will block the LabVIEW thread in which it is called for the duration of the timeout (indefinitely with the default value in the VI), and LabVIEW cannot be stopped in this situation, not even with the Abort button, since LabVIEW does not allow resetting a thread's state while it is in external code. The only way to get out of this state is killing the LabVIEW process (usually with the Task Manager) or forcing a change on the directory according to the configured change filter.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
    Attachments:
    WIN Folder Change Notify.vi ‏21 KB

  • Execution system differs between VI properties and Execution Trace Toolkit

    I have a time-critical acquisition VI whose execution system is set to "E/S d'instruments" (instrument I/O). When I dump the execution trace in the Execution Trace Toolkit, it appears as "LabVIEW Thread [Standard]". Moreover, there is a second line with the same thread name. Is this just a confusing bug, or does it reflect a possible conflict in my program?
    Thanks in advance.
    Gael.
    Additional information: I am working on an FP-2015.

    Gael,
    I made my top-level VI run at normal priority in the "same as caller" execution system, and my subVI run at time-critical priority in the "instr I/O" execution system. The trace (attached) showed correct behavior.
    There are a few things to try. First, make a backup of your application. Try removing the storage management subVI assigned to "other 1". You should now have a top-level VI running at normal priority in the same-as-caller exec system and a subVI running at time-critical priority in the instr I/O exec system (same as my setup).
    If you still get the weird behavior, we might be able to isolate it to a subVI you're calling inside the time-critical VI. Try removing one subVI at a time and collect a trace each time.
    Alternatively, you can start with my simple VIs and slowly add components from your application until you see the weird behavior.
    I should remark that when you assign a VI to run at time-critical priority, that VI gets its own thread, even if the VI is set to run in the "same as caller" execution system. So even if your time-critical VI is not adhering to its assigned execution system, it's still running in its own time-critical priority thread (shown as red in the trace tool).
    Attachments:
    topVI_sameascaller_subVI_intrIO_timecritical.bmp 2089 KB
    instr_io.vi 19 KB
    sub_instr_io.vi 20 KB
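
    The head of this thread describes an execution system as a queue served by one or more threads, with tasks routed to whichever system a VI is assigned to. A rough Python model of that routing may make the trace above easier to read (the `ExecSystem` class and names are invented for illustration; real LabVIEW scheduling is far more involved):

```python
import queue
import threading

class ExecSystem:
    """Toy model of a LabVIEW execution system: a queue plus a worker thread."""
    def __init__(self, name):
        self.name = name
        self.tasks = queue.Queue()
        self.done = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            task = self.tasks.get()
            if task is None:          # sentinel: shut the system down
                break
            self.done.append(task())  # run the task on this system's thread

    def submit(self, task):
        self.tasks.put(task)

    def shutdown(self):
        self.tasks.put(None)
        self.thread.join()

# Route "VIs" to different execution systems, as the VI Properties dialog does.
instr_io = ExecSystem("instrument I/O")
other1 = ExecSystem("other 1")
instr_io.submit(lambda: "acquire")
other1.submit(lambda: "store")
instr_io.shutdown()
other1.shutdown()
print(instr_io.done, other1.done)   # each task ran on its own system's thread
```

    In this model, a VI marked for "instr I/O" should show up on that system's thread in a trace; seeing it on a "[Standard]" thread instead is the mismatch Gael is asking about.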

  • Failed to run parallel path when using call library function node in LV2009

    I have a problem with two parallel paths not being run in parallel on LabVIEW 2009.
    See the image below.
    This code initializes an external device, the upper part contains a call library function node to download my code into the device. When done, the function returns and the "Done" flag is set to True.
    The lower part contains a call library function node to check the download status, reporting the percentage of downloaded code.
    This updates a progress bar that is referenced by the calling VI module and this works fine in LV 7, 8 and 2011. The lower path of the VI updates the progress bar while the CLFN in the upper part is still downloading code to my device.
    Somehow, in LV2009 this does not work when running this inside my application. When running just the calling VI (the window showing the progress bar) it works but not when that VI is called by my application.
    Is there some limit on parallel threads that is different in 2009 than in other versions?
    Or is there some other problem in 2009 that might cause this behavior?
    My LabVIEW version is 9.0f3 (32-bit).
    Regards,
    Rob

    I've just installed the DETT tool and checked what the different versions of LabVIEW do.
    In LV2009 the application instance runs in a thread (5) but the modal dialog (the VI above) then drops to thread 0 and stays in thread 0.
    In LV2011, it stays in the same thread as the application instance, and only a trigger event (possibly the progress bar reference?) is executed in thread 0.
    So it seems there is a 'feature' in LV2009 where modal dialogs by default do not follow the preferred execution system set in the VI properties...
    When I change this from "same as caller" into another thread (I used "other 1") then my progress bar works as expected.
    I'm not a LabVIEW thread expert (not even a novice), so I'm just guessing that "other 1" is an acceptable choice of thread. This VI only runs during startup of the application, to download the code to my device.
    To answer Ben's question: "What thread does a modal VI run in?":
    In LV2009, the modal VI runs in thread 0 (UI thread ?). When the preferred execution system is set to another thread, the modal dialog still starts in thread 0 but then switches over to the other thread.
    In LV2011, the modal VI runs in the caller's thread (preferred execution system set to "same as caller") from the start.
    Thanks,
    Rob
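
    Rob's diagram amounts to one long blocking foreign call (the download) plus a poller that must run alongside it; it only works when the two paths land on different threads, which is exactly what broke when the modal VI fell into thread 0. A Python sketch of the same shape (the `download` and `poll_status` functions are stand-ins, not the real DLL entry points):

```python
import threading
import time

progress = {"pct": 0, "done": False}

def download():                      # stands in for the blocking CLFN call
    for pct in range(0, 101, 25):
        progress["pct"] = pct
        time.sleep(0.02)             # pretend the DLL is busy downloading
    progress["done"] = True

def poll_status(samples):            # stands in for the status-check CLFN
    while not progress["done"]:
        samples.append(progress["pct"])
        time.sleep(0.01)

seen = []
t = threading.Thread(target=download)
t.start()
poll_status(seen)                    # runs concurrently with the download
t.join()
print(progress["done"], len(seen) > 0)
```

    If both functions were forced onto a single thread, the poller would not run until the download returned, and the progress bar would sit frozen, which matches the LV2009 symptom.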

  • Start button xcontrol

    Hello!
    I have a project where I needed a start button. Turned out it was quite easy to make one in an xcontrol:
    I did a search for existing solutions but couldn't find any, and since many ask for a start button I'll just leave this here.
    To use it, drag and drop the xcontrol in your vi.
    Lars Melander
    Uppsala Database Laboratory, Uppsala University
    Attachments:
    start button.zip 38 KB

    O.P. wrote:
    altenbach wrote:
    I have absolutely no idea what that means. Can you enlighten me?
    LabVIEW belongs to a group of programs called Data Flow Visual Programming Languages (DFVPL). It's a coarse-grained DFVPL, because it offers some execution control (most notably, branching a wire starts a new thread). In order to make it fine-grained, wires need to represent a flow of values, instead of a single value as it is now. Because of that, LV while and for loops become a big no-no.
    I'm pretty sure branching a wire does not start a new thread... If it did, I'd have about 1000 threads running. It only creates a copy of the data (when necessary). The compiler decides. Furthermore, LabVIEW threading is implicit. The programmer doesn't really have control over any of the threading; the compiler is smart, let it do its work. And I have a lot of "no-no's" in every single VI I have ever written if this is the case with loops. <sarcasm> I should really stop using LabVIEW </sarcasm>. I'm sure others will come with more in-depth information, but I believe your statements don't hold much water.
    CLA, LabVIEW Versions 2010-2013
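
    Altenbach's point, restated in Python terms: branching a wire is closer to giving two readers access to the same value, with a copy made only when one branch will modify it, than to spawning a thread. A loose analogy (hand-written copy-before-modify; not how LabVIEW's compiler actually decides):

```python
import copy

data = [1, 2, 3]

# "Branch" the wire: both consumers refer to the same value; no thread
# starts, and no copy is needed while both branches only read.
reader_a = data
reader_b = data
assert reader_a is reader_b

# Only when one branch modifies the value does a copy become necessary.
modified = copy.copy(data)
modified.append(4)
print(data, modified)    # the original branch is unaffected
```

    The decision of when the copy is actually required is the compiler's, which is why branching thousands of wires does not produce thousands of threads (or even thousands of copies).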

  • I am trying to have some LabVIEW code called in a New thread exit when the testStand sequence terminates

    I have a Sequence that launches a sequence in a New Thread that happens to launch some LabVIEW code.  The problem is when the LabVIEW code finishes, it will not close even when the TestStand sequence terminates. Is there a way to tell this LabVIEW code to Exit, I've tried the Quit LabVIEW function, but that causes a C++ RunTime Error.  The LabVIEW code does end though, and it is set in the VI properties to:
    Checked - Show Front Panel When Called
    Checked - Close Afterwards if Originally Closed
    The sequence call that the LabVIEW code is launched from has the following options:
    - New Thread
    Unchecked - Automatically wait for the thread to complete at the end of the current sequence
    Unchecked - Initially Suspended
    Unchecked - Use single threaded apartment
    Any clues on this would be appreciated.

    Hi ADL,
    Everything should close correctly if you check the checkbox "Automatically wait for the thread to complete at the end of the current sequence" in the thread settings.
    With it unchecked, I am seeing the behavior you are. 
    Gavin Fox
    Systems Software
    National Instruments
