Execution priority

Hello,
Reference: http://cnx.org/content/m12198/latest/
The Simulate Signal VI sends a sine wave to both Tone Measurements.vi and Filter.vi. Why does Filter.vi run first, and then the Tone Measurements above it?
How can I change the priority of execution? For example, once the signal is received by Tone Measurements, have it first display the amplitude and frequency of the unfiltered signal, and then move down to Filter.vi and the Tone Measurements for the filtered signal.
In other words, how can I change the priority of execution?
Cheers,
Abhi

That's how dataflow works: a function executes as soon as all of its inputs have received data. The Tone and Filter Express VIs are wired in parallel, so they can both execute at the same time. To force a data dependency you need to wire an output of one VI into an input of the other; the second VI then waits until the first has finished, because that is when the first VI puts data on the connecting wire. In the example you linked to, this can be accomplished by wiring the error in/out clusters from Tone to Filter.
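The error-cluster trick can be sketched in a textual dataflow analogy. This is Python, not LabVIEW; the function names are illustrative stand-ins for the Express VIs, not real LabVIEW calls:

```python
# Minimal sketch of LabVIEW-style dataflow: a node may run as soon as
# all of its inputs are available, so two nodes fed by the same source
# have no defined order. Running both "Express VIs" on threads mimics
# the parallel wiring; passing one VI's output into the other forces
# the sequence, exactly like wiring error out into error in.
import threading

order = []
lock = threading.Lock()

def tone_measurements(signal, error_in=None):
    with lock:
        order.append("tone")
    return {"amplitude": max(signal), "error": error_in}

def filter_vi(signal, error_in=None):
    with lock:
        order.append("filter")
    return {"filtered": [s * 0.5 for s in signal], "error": error_in}

signal = [0.0, 1.0, 0.0, -1.0]

# Parallel wiring: no dependency, so execution order is unspecified.
t1 = threading.Thread(target=tone_measurements, args=(signal,))
t2 = threading.Thread(target=filter_vi, args=(signal,))
t1.start(); t2.start(); t1.join(); t2.join()

# Sequenced wiring: filter_vi cannot start until tone_measurements
# has produced the value it depends on.
order.clear()
result = tone_measurements(signal)
filter_vi(signal, error_in=result["error"])
assert order == ["tone", "filter"]   # the dependency fixes the order
```

The same principle holds for any pair of nodes: only a wire between them defines their relative order.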

Similar Messages

  • OBI report execution priority.

    Hi,
    Is there any way to set report execution priority based on user groups? For example, if a group with higher priority needs to execute a report, the server would free up resources or limit execution for groups with lower priority.
    Thanks,
    Adam

    No. If your query comes back in less than 1 second, you have nothing to worry about. I haven't seen a single DWH where users worried about queries returning in milliseconds.

  • (How) Can I change vi execution priority at runtime

    Hi all,
    I am using daemons (free-running VIs) and I communicate with them through queues.
    They are part of my device driver architecture and use a producer architecture (for acquisition) or a consumer architecture (for control).
    I have a single daemon VI to which I deploy a "Device Object" using a polymorphic class implementation.
    This implementation has one subtle shortfall:
    I am not able to change the execution priority at launch.
    There is a property node that taunts that it is possible, but the help (and the run-time error message) says it is not available at run time.
    Does anyone know of an alternative method?
    Here are what I have thought of to-date:
    1. Have 5 different daemons, each with a different priority [distasteful for code maintenance]
    2. Make the priority low, and ensure that at least 1 VI in the driver has a high priority [not sure if it works; obscure implementation]
    Kind Regards,
    Tim L.
    iTm - Senior Systems Engineer
    uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT

    Ben,
    I debated whether or not to put more information in my post; I didn't want to bore my potential support senseis.
    Here goes:
    As I hinted in my initial post, I am developing a set of drivers for my large application.
    I am using a Compact FieldPoint and am under some fairly aggressive resource pressures:
    heavy RS422/485 serial comms @ 115200, digital acquisition and some heavy data processing.
    My understanding of this type of system (Jump in if you have any improvement suggestions):
    Priority #1 [High Priority] (Producer): get data out of the acquisition buffers and, in my case, perform writes/outputs/control activities as demanded.
    So any "hardware" device driver daemons that I launch need to run at high priority. The drivers should do as little as necessary so as not to hog this thread. Event-based architecture is preferred over polled.
    These drivers tend to be launched as daemons so that they can run at a different priority and are not affected by other activities; they are inherently asynchronous.
    Priority #2 [Above Normal Priority] (Consumer->Producer): protocol/state interpretation/data filtering.
    Now that the data is out of the buffers, what does it mean? Is it a valid communications message, was a button pushed, is there an object in the transducer field?
    These functions may take a bit longer (but not too much) to execute, but as they are at a lower priority, buffers can continue to be emptied.
    These "filters" also tend to be launched as daemons so that they can run at a different priority and are not affected by other activities; they are inherently asynchronous.
    Priority #3 [Normal Priority] (Consumer): number crunching, heavy lifting, control determination.
    An event has occurred and some calculation, interpretation and potentially control needs to be performed. Do it.
    These tend to be event based, and as there are multiple stimuli they are best managed by an event structure. This allows for interaction with the front panel as well (should the need arise).
    Priority #4 [Low Priority] (Slip-Scheduled): user interface/user data/report generation.
    Who cares if it is a bit late? Work away in the background. In the case of user updates, they can slip later and later; no need to catch up.
    For others reading this thread, I found This Module Help invaluable in understanding how LabVIEW manages execution priority.
    Theory done. I am essentially a lazy programmer and don't like to write and maintain too many different VIs if I can help it, <Rant> ESPECIALLY IF THEY ARE THE SAME VI WITH A DIFFERENT PRIORITY! </Rant>.
    So I have written one daemon and one daemon manager (to rule them all). For each driver I require, I launch a daemon, passing the "Base" class object to the daemon prior to run; I rely on the override capability of polymorphism to choose the correct methods.
    My "Base" driver class contains all of the functions required for operation. I use it as an enforceable template for future driver development (it also auto-fills my icons, which is great for a lazy programmer like me).
    I have established that my protocol driver (Priority #2) is compatible with the same daemon architecture from above; instead of looking at hardware, it monitors a queue/notifier/user events from the lower-level hardware driver.
    I was very smug when I figured that one out, until... I realised that they would need to be launched at different priorities to respect the RTOS, and thus my question.
    I am not trying to change the priority per se, but to choose one prior to launch and then leave it.
    Cat Killed? (Curiosity Cured?)
    iTm - Senior Systems Engineer
    uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
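The four tiers above can be sketched with a priority queue, a Python stand-in for the daemon/queue architecture (all names here are hypothetical, not part of any LabVIEW API):

```python
# Rough sketch of the four-tier scheme: a priority queue where a lower
# number means higher priority, so acquisition work always drains
# before slip-scheduled UI updates, mirroring the RTOS ordering the
# post describes.
import queue

work = queue.PriorityQueue()
work.put((1, "empty acquisition buffer"))   # Priority #1: producer
work.put((4, "update user interface"))      # Priority #4: slip-scheduled
work.put((2, "parse protocol frame"))       # Priority #2: filter/producer
work.put((3, "run control calculation"))    # Priority #3: consumer

drained = []
while not work.empty():
    priority, task = work.get()             # always the lowest number first
    drained.append(task)

assert drained[0] == "empty acquisition buffer"
assert drained[-1] == "update user interface"
```

In the real architecture each tier is a separate daemon VI rather than an item in one queue, but the ordering guarantee being sought is the same.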

  • Subroutine execution priority with Call By Reference nodes

    Hi -
    I have a VI which contains a Call By Reference node. I would like this VI to run at subroutine execution priority (the VI is small, is called frequently, and must run quickly). The VI being called by reference is itself set to run at subroutine execution priority. However, I get an error which reads: "subroutine priority VI cannot contain an asynchronous node". What is asynchronous about a call by reference to another subroutine VI? Is there any way to get around this? Thanks.
    Jason
    Jason Rolfe

    This is what the help files mention:
    Subroutine priority VI cannot contain an asynchronous node
    This VI has subroutine priority selected in the Execution page of the VI Properties dialog box. It cannot use an asynchronous node on its block diagram. Asynchronous nodes, such as dialog boxes, are supposed to allow other VIs on the same thread to continue to execute while they wait to complete their own execution. However, subroutine priority VIs block the execution of other VIs on the same thread until the subroutine priority VIs finish execution.
    You can correct this error in the following ways:
    Change the execution priority of this VI. To change the priority, right-click the VI icon in the upper-right corner of the front panel or block diagram window, and select VI Properties from the shortcut menu to display the VI Properties dialog box. Select Execution from the top pull-down menu of the VI Properties dialog box, and change the priority in the Priority pull-down menu.
    Remove the asynchronous node.
    I am afraid that the Call By Reference node is asynchronous.
    aartjan

  • LabVIEW execution priority/Yielding to the operating system

    Hello, I have an interesting problem. I am doing some image acquisition with a third-party board. I do not have any "onboard" memory to buffer images, so they are DMA'd into host memory (a ping-pong scheme is used). Things work well except in the following situation: while my LabVIEW app is running, if I launch another program (say, Windows Explorer) the LabVIEW program gets "behind"; I can tell that it is trying to read from buffers that are currently being filled. It is almost as if LabVIEW says "OK, have the CPU" to the OS. Is there any way I can set the application priority to keep this from happening? The subVI that does the data acquisition is already in its own execution system with a "time critical" priority.
    Thanks!
    OK, now the obvious... just don't launch Windows Explorer while the application is running. That sounds logical, but this application will be installed at a plant site, and often the illogical happens...

    I cannot believe that I figured this one out! Here is an example program in LabVIEW 6.1 that will allow you to change the Windows priority of LabVIEW itself on NT/2000/XP.
    I would not recommend this VI to the regular LabVIEW programmer. Essentially the argument for not changing VI priorities applies here, but with greater magnitude. Look in the LabVIEW shipping docs for "Using LabVIEW to Create Multithreaded VIs for Maximum Performance and Reliability" for a discussion of these settings. You can find this by opening LabVIEW 6.1 >> Help >> Search the LabVIEW Bookshelf. This launches a PDF of all the shipping docs; it is at the end. The next best place to look is here on the forum, where there have been detailed discussions of these settings in LabVIEW.
    Attachments:
    lv61_set_LabVIEW_priority.vi ‏16 KB

  • Execution priority for application

    The executable of my application doesn't reflect the priority I set in VI Properties (Win XP). Is there an .ini setting I can make to have the exe run at background priority (lowest)?

    You may want to take a look at this. Setting the priority within the options of LabVIEW will not affect its priority within XP. LabVIEW is given priority when it starts running on XP. You can programmatically call an XP DLL and make it give you higher priority. This KnowledgeBase on NI's website describes how to do that.
    J.R. Allen
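The KnowledgeBase approach, calling into the OS to change the whole process's priority, can be sketched as follows. The Win32 constant and calls are the documented `SetPriorityClass` API from kernel32; the POSIX fallback uses niceness. Only *lowering* priority is shown, since raising it usually requires elevated rights:

```python
# Hedged sketch of changing the current process's scheduling priority
# from code, which is what the linked KB describes doing via a DLL call.
import os
import sys

def lower_process_priority():
    if sys.platform == "win32":
        import ctypes
        BELOW_NORMAL_PRIORITY_CLASS = 0x00004000  # documented Win32 value
        handle = ctypes.windll.kernel32.GetCurrentProcess()
        ctypes.windll.kernel32.SetPriorityClass(
            handle, BELOW_NORMAL_PRIORITY_CLASS)
    else:
        # POSIX: raise the niceness (i.e., lower the priority) by 5.
        current = os.getpriority(os.PRIO_PROCESS, 0)
        os.setpriority(os.PRIO_PROCESS, 0, current + 5)

lower_process_priority()
```

For an executable built from LabVIEW, the equivalent is calling these same OS functions through a Call Library Function Node, as the KB describes.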

  • Forms Triggers execution priority

    Hi Friends
    Can anyone answer the following?
    1. When a form loads, which triggers fire, and in what sequence?
    2. Can anyone give an example of statement triggers and row-level triggers?
    thanks
    Reddy

    Statement triggers and row-level triggers are database server triggers that run in response to updates and inserts on a specific table. They are not related to Forms triggers.

  • Is this roughly how the labVIEW Execution Systems work?

    I've not taken a class in OS design, so I don't know the strategies used to implement multitasking, preemptive or cooperative. The description below is a rough guess.
    LabVIEW compiles VIs to execute within its own multitasking execution environment. This execution environment is composed of six execution systems. Each execution system has five priority queues (say, priorities 0-4). VIs are compiled to one or more tasks which are posted for execution in these queues.
    An execution system may either have multiple threads assigned to execute tasks from each priority queue, or may have a single thread executing all tasks from all priority queues. The thread priorities associated with a multithreaded execution system are assigned according to the queue that they service. There are therefore five available thread priority levels, one for each of the five priority-level queues.
    In addition to the execution queues, there are additional queues that are associated with tasks suspended in various wait states. (I don't know whether there are also threads associated with these queues. It seems there is.)
    According to app. note 114, the default execution environment provides 1 execution system with 1 thread having a priority level of 1, and 5 execution systems with 10 prioritized threads, 2 threads per priority queue. NI has titled the single threaded execution system "user interface" and also given names to the other 5. Here they will be called either "user interface" or "other".
    The "user interface" system is responsible for all GUI actions. It monitors the keyboard and mouse, as well as drawing the controls. It is also used to execute non-thread-safe tasks; tasks whose shared objects are not thread mutex protected.
    VIs are composed of a front panel and a diagram. The front panel provides an interface between the VI diagram, the user, and the calling VI. The diagram provides programmatic data flow between various nodes and is compiled into one or more machine-coded tasks. In addition to its own tasks, a diagram may also call other VIs. A VI that calls another VI does not actually programmatically branch to that VI. Rather, in most cases the call to another VI posts the tasks associated with the subVI to the back of one of the LabVIEW execution system's queues.
    If a VI is non-reentrant, its tasks cannot run simultaneously on multiple threads. This implies a mutex-like construction around the VI call to ensure only one execution system is executing the VI. It doesn't really matter where or how this happens, but somehow LabVIEW has to protect an asynchronous VI from simultaneous execution, somehow that has to be performed without blocking an execution queue, and somehow a mutex-suspended VI has to be returned to the execution queue when the mutex is freed. I assume this to be a strictly LabVIEW mutex that does not involve the OS. If a VI is reentrant, it can be posted/run multiple times simultaneously. If a VI is a subroutine, its task (I think there is always only one) will be posted to the front of the caller's queue rather than at the back (it probably never actually gets posted but is simply mutex-tested at the call). A reentrant subroutine VI may be directly linked to its caller since it has no restrictions. (Whether in fact LabVIEW does this, I don't know. In any event, it would seem that in general VIs that can be identified as reentrant should be specified as such to avoid the overhead of mutexing. This would include VIs that wrap reentrant DLL calls.)
    The execution queue to which a VI's tasks are posted depends upon the VI's execution settings and the caller's execution priority. If the caller's execution priority is less than or equal to the callee's execution setting, then the callee's tasks are posted to the back of the callee's specified execution queue. If the caller's execution priority is greater than the callee's, then the callee's tasks are posted to the back of the caller's queue. Under most conditions the VI execution setting is "same as caller", in which case the callee's tasks are always posted to the back of the caller's execution queue. This applies where two VIs are set to run in the other execution systems, or two VIs are set to run in the user interface execution system. (It's not clear what happens when one VI is in the "user interface" system and the other is not. If the rule follows thread priority, any background tasks in the "other" systems will be moved to the user interface system. Normal tasks in the "other" systems called by a VI in the "user interface" system will execute in their own systems and vice versa. And "user interface" VIs will execute in the caller's "other" system if the caller has a priority greater than normal.)
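The posting rule just described can be condensed into a small sketch. This is my reading of the description above, not NI-documented pseudocode; priorities are numeric, higher meaning more urgent:

```python
# Which priority queue does a subVI's task land in? Per the rule above:
# if the caller's priority is <= the callee's configured priority, the
# callee keeps its own queue; otherwise the callee inherits the
# caller's queue (so a high-priority caller is never slowed down by a
# lower-priority callee setting).
def target_queue(caller_priority, callee_setting):
    """Return the priority queue a subVI's tasks are posted to."""
    if caller_priority <= callee_setting:
        return callee_setting      # callee's own specified queue
    return caller_priority         # inherit the caller's queue

assert target_queue(1, 3) == 3   # low-priority caller: callee keeps its setting
assert target_queue(4, 2) == 4   # high-priority caller: callee runs in caller's queue
assert target_queue(2, 2) == 2   # equal ("same as caller") collapses to one queue
```

Equivalently, the effective queue is simply the maximum of the two priorities.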
    Additionally, certain nodes must execute in the "user interface" execution system because their operations are not thread-safe. While the above generally specifies where a task will begin and end execution, a non-thread-safe node can move a task to the "user interface" system. The task will continue to execute there until some unspecified event moves it back to its original execution system. Note that other tasks associated with the VI will be unaffected and will continue to execute in the original system.
    Normally, tasks associated with a diagram run in one of the "other" execution systems. The tasks associated with drawing the front panel and monitoring user input always execute in the user interface execution system. Changes made by a diagram to its own front panel are buffered (the diagram has its own copy of the data, the front panel has its own copy, and there seems to be some kind of exchange buffer that is mutexed), and the front panel update is posted as a task to the user interface execution system. Front panel objects also have the advanced option of being updated sequentially; presumably this means the diagram task that modifies the front panel will be moved to the user interface execution system as well. What this does to the data exchange configuration between the front panel and diagram is unclear, as presumably both the front panel and diagram are then executing in the same thread, and the mutex and buffer would seem to be redundant. While the above is true for a control value, it is not clear whether this is also true for the control properties. Since a referenced property node can only occur on the local diagram, it is not clear whether it forces the local diagram to execute in the user interface system or whether properties too are buffered and mutexed with the front panel.
    If I were to hazard a guess, I would say that only the control values are buffered and mutexed. The control properties belong exclusively to the front panel, and any changes made to them require execution in the "user interface" system. If the diagram merely reads them, it probably doesn't suffer a context switch.
    Other VIs can also modify the data structure defining a control's appearance and values remotely using un-referenced property nodes. These nodes are required to run in the user interface system because the operation is not thread-safe, and apparently the diagram-front-panel mutex is specifically between the user interface execution system and the local diagram thread. Relative to the local diagram, remote changes by other VIs appear to be user entries.
    It is not clear how front panels work with reentrant VIs. Apparently every instance gets its own copy of the front panel values. If all front panel data structures were unique to an instance, and if I could get a VI reference to an instance of a reentrant VI, I could open multiple front panels, each displaying its own unique data. It might be handy, sort of like opening multiple Word documents, but I don't think it's available.
    A note: it is said that the front panel data is not loaded unless the front panel is opened. Obviously the attributes required to draw an object are not required, nor is the buffer that interfaces with the user. This rule doesn't apply, though, if property references are made to front panel objects and/or local variables are used. In those cases at least part of the front panel data has to be present. Furthermore, since all data is available via a control reference, if one is used, the control's entire data structure must be present.
    I use the VI Server but haven't really explored it yet, nor VI reference nodes, but obviously they too make modifications to unique data structures and hence are not thread-safe. In general, any node that accesses a shared object is required to run in the user interface thread to protect the data associated with the object. LabVIEW does not generally create OS-level thread mutexes to protect objects, probably because it becomes too cumbersome... only a guess...
    Considering the extra overhead of dealing with preemptive threading, I'm wondering if my well-tuned single-threaded application in LV4.1 won't outperform my well-tuned multithreaded application in LV6.0, given a single-processor environment...
    Please modify those parts that require it.
    Thanks...
    Kind Regards,
    Eric

    Ben,
    There are two types of memory which would be of concern: temporary and persistent. Generally, if a reentrant VI has persistent memory requirements, then it is being used specifically to retain those values at every instance. More generally, reentrant code requires no persistent memory: it is passed all the information it needs to perform its function, and nothing is retained. For this type of reentrant VI, the memory concern to which you refer could become important if the VIs are using several MBytes of temporary storage for intermediate results. In that case, as you could have several copies executing at once, your temporary storage requirements have multiplied by the number of simultaneous copies executing. Your max memory use is going to rise, and as LabVIEW allocates memory rather independently and freely, the memory use of making them reentrant might be a bit of a surprise.
    On the other hand, the whole idea of preemptive threading is to give those tasks which require execution in a timely fashion the ability to do so regardless of what other tasks might be doing. We are, after all, suffering the computational overhead of multithreading to accomplish this. If memory requirements are going to defeat the original objective, then we really are traversing a circle.
    Anyway, as Greg has advised, threads are supposed to be used judiciously. It isn't as though you're going to have all 51 threads up at the same time. In general, I think the overall coding strategy should be to minimize the number of threads while protecting those tasks that absolutely require timely execution.
    In that sense, it would have been nice if NI had retained two single threaded systems, one for the GUI and one for the GUI interface diagrams. I've noticed that control drawing is somewhat slower under LV6.0 than LV4.1. I cannot, for example, make a spreadsheet scroll smoothly anymore, even using buffered graphics. This makes me wonder how many of my open front panel diagrams are actually running on the GUI thread.
    And, I wonder if threads go to sleep when not in use, for example, on a wait, or wait multiple node. My high priority thread doesn't do a lot of work, but the work that it does is critical. I don't know what it's doing the rest of the time. From some of Greg's comments, my impression is it in some kind of idle mode: waking up and sleeping, waking up and sleeping,..., waking up, doing something and sleeping... etc. I suppose I should try to test this.
    Anyway that's all an aside...
    With regard to memory, you're right, there are no free lunches... Thanks for reminding me. If I try this, I might be dismayed by the additional memory use, but I won't be shocked.
    Kind Regards,
    Eric

  • Change user-interface priority

    Hello LabVIEWERS ,
    Does anyone know how to set/adjust the user-interface priority of a LabVIEW program?
    In a certain application I noticed a poor response from LabVIEW on the user interface (it reacts only very slowly to pushing a button or, for instance, dragging the LabVIEW front panel), although the processor utilisation is only a few percent. I don't want to put delays in the code unnecessarily.
    I tried to lower the execution priority of LabVIEW but I don't see any difference.
    I use Windows 2000 as the platform and LabVIEW 7.1, if that matters.
    René Ramekers

    Thanks for your suggestions, but I already found it myself:
    it turned out to be a DLL which was called very often,
    and this DLL was running in the UI thread. Changing it to "reentrant" did the trick.
    René Ramekers
    "chrisger" <[email protected]> wrote in message
    news:[email protected]..
    > hi there
    > in almost all cases the reason for slow GUI response is NOT the priority of the VI. there are a lot of reasons:
    > - (hidden) delays (e.g. reading from an interface using a timeout) - lots of memory/disk access - lots of data to show on the GUI - unnecessary repetitions of function/VI calls - (very) poor graphics hardware
    > do you use event-based programming and a high-level/low-level architecture in your application?
    > i think best would be to post some code here, so we can have a look at it. like i said, there are a lot of possible reasons...

  • Execution time optimizing labview code

    Hi,
    I would like to optimize my LabVIEW program. Is there a way I can plot the execution time of my program as a function of some parameter I am changing in the program? I was thinking of placing the entire program in a while loop, changing the parameter during each loop iteration, and then outputting the time that the program takes to execute as a function of the parameter. The problem is how I would obtain the execution time.
    Thank you,
    -Tim

    Here are a few more tips:
    Running on a typical multipurpose OS will make things a bit unpredictable because you never know what else is running, so you should always repeat each run many times (then take the fastest, not the average, of all corresponding times).
    Make sure that each run takes a few hundred milliseconds to get accurate values.
    For timing purposes, it might be worth temporarily raising the execution priority. The results will be much more reproducible (at the expense of usability).
    LabVIEW Champion. Do more with less code and in less time.
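Those tips (sweep the parameter, repeat each run, keep the fastest time) can be sketched textually; `work` here is a hypothetical stand-in for the program being tuned:

```python
# Time the code under test many times per parameter value and keep the
# *fastest* run, as suggested above: the minimum is the least-disturbed
# measurement on a multipurpose OS.
import timeit

def work(n):
    # placeholder workload: replace with the program being profiled
    return sum(i * i for i in range(n))

results = {}
for n in (1_000, 10_000, 100_000):              # the parameter being swept
    times = timeit.repeat(lambda: work(n), number=20, repeat=5)
    results[n] = min(times) / 20                # best-case seconds per call

# `results` maps parameter -> execution time, ready to plot.
for n, t in sorted(results.items()):
    print(n, t)
```

In LabVIEW the same pattern is two Tick Count (ms) nodes around the code under test, inside the parameter-sweeping loop, keeping the minimum per parameter value.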

  • Does a "subroutine priority" mean anything in an FPGA?

    Since a programmed FPGA is a collection of hardware logic strung together with potentially a great deal of it in parallel, does setting the execution priority properties of FPGA subVIs to "subroutine" really do anything? If you read the description of what a subroutine priority does, it says it devotes all the resources of an execution thread to the subroutine code. This to me implies time sharing of resources. But I don't think time-shared execution threads exist in an FPGA, or do they? Do any of the other possible execution priority levels mean anything to FPGA code?

    Execution priorities have no effect in FPGA. Every subVI that does not require arbitration (i.e., is either reentrant or used only once) is inlined into the main VI at compile time. As far as I know, execution priority also has no effect on arbitration for those subVIs that do require it.

  • List execution priorities of all VIs in project

    Hello all,
    I'm using LabVIEW 2012.  Is there any way to list the execution priorities of all the VIs in a project please?
    Thank you!
    Zola

    I don't see "real-time" on the list of execution priorities, but I do see "high", "time critical", and "subroutine".  I don't know of any easy way to find out that information.  I doubt it exists because I think it would be very rare for anyone to do what you are asking.
    That said, I don't think it would be difficult to write a small VI that loops through your .vi files and reads the execution priority through a property node. I did the following in about 3 minutes. You just need to create an array of file names, perhaps using directory information to build the list.
    Attachments:
    Example_VI.png ‏14 KB
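The "build the list" step can be sketched as a directory walk (the property-node read itself has to happen in LabVIEW; this only gathers the paths, and the root directory is whatever your project uses):

```python
# Collect every .vi file under a project directory so each path can be
# fed to the property-node loop described above.
import os

def find_vis(root):
    vi_paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".vi"):     # case-insensitive match
                vi_paths.append(os.path.join(dirpath, name))
    return sorted(vi_paths)
```

In LabVIEW the equivalent is a Recursive File List (or List Folder in a loop) filtered on the .vi extension.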

  • Problem with combination of LabVIEW classes (dynamic dispatch), statechart module and FPGA module

    SITUATION:
    - I am developing a plug-in-based software with plug-ins based on LabVIEW classes which are instanced at run-time. The actual plug-in classes are derived from generic plug-in classes that define the interfaces to the instancing VI and may provide basic functionality. This means that many of the classes' methods are dynamic dispatch and methods of child classes may even call the parent method.
    - Top-level plug-ins (those directly accessed by the main VI) each have a run method that drives a plug-in-specific statechart.
    - The statechart of the data acquisition plug-in class (DAQ class) calls a method of the DAQ class that reads in data from a NI FPGA card and passes it on to another component via a queue.
    PROBLEM:
    - At higher sampling rates, an FPGA-to-host FIFO overflow occurs after some time. When I "burden" the system just by moving a Firefox browser window over the screen, the overflow is immediately triggered. I did not have this kind of problem in an older software, where I was also reading from an FPGA FIFO, but did not make use of LabVIEW classes or statecharts.
    TRIED SOLUTIONS (WITHOUT SUCCESS):
    - I put the statechart into a timed loop (instead of a simple while loop), which I assigned specifically to its own core (I have a quad-core processor), while I left all the other loops of my application (there are many) as simple while loops. The FIFO overflow still occurs, however.
    QUESTION:
    - Does anybody have a hint how I could tackle this problem? What could be the cause: the dynamic dispatch methods, the DAQ statechart or just the fact that I have a large number of loops? However, I can hardly change the fact that I have dynamic dispatch methods because that's the very core of my architecture... 
    Any hints are greatly appreciated!
    Message Edited by dlanger on 06-25-2009 04:18 AM
    Message Edited by dlanger on 06-25-2009 04:19 AM

    I now changed the execution priority of all the VIs involved in reading from the FPGA FIFO to "time critical priority (highest)". This seems to improve the situation very much: so far I have not gotten a FIFO overflow anymore, even when I move windows around on the screen. I hope it stays like this...

  • How to prevent/Minimize WindowsXP interference

    I have an application that is troubled by Windows occasionally interrupting a two-step process, and I need to eliminate or minimize this interruption. More specifically, I have an instrument that I need to obtain two pieces of information from: counts and livetime. This must occur by first requesting the livetime and then the counts, in separate calls to the instrument. Unfortunately, Windows often interrupts the process with its own calls (which can last for a second or more) and completely messes things up.
    My question: is there any way to (even briefly) lock the system resources so that for my two critical calls no other process can interrupt my code? These calls are quick and painless (less than 50 ms total time consumed) and occur only once every half second (or more).
    Any suggestions or help would be greatly appreciated.
    John

    The instrument in question is the DigiBase from Ortec. It does not support buffering (unfortunately).
    I have tried optimizing some of the Windows eye-candy settings (no shadows, classic style, etc.), though I have not yet optimized or removed services. The computer I am using for the data collection is a Fujitsu Lifebook (one of the P1500-series ultra-small computers). This variant has 512 MB RAM.
    In addition to my data acquisition program I have a database (MySQL) running, which I write the data into.
    I have tried setting the execution priority to "time critical" for the portion of the code that reads the various pieces of data from the DigiBase. However, it seems that Windows/other processes still manage to get in between my calls to the DigiBase. To recap: I need to read the livetime and the total counts (or spectra). This requires that I send a command to the DigiBase to stop the data acquisition, then send a command to read the livetime and read the return value, then send a command to read the spectra (or total counts) and read the return, and finally send a command to restart the data collection. However, to aid in diagnosing the Windows interruption problem, I do not stop and restart data acquisition, so any delay caused by an excessive gap between the two reads becomes readily apparent. (The DigiBase is set to count at a high, nominally constant rate, so the calculated count rate (counts/livetime) should be constant, minus statistical fluctuations.) The results indicate that Windows or other processes manage to sneak in between my calls to read the data about once every 50 to 200 tries, no matter what I do. The bottom line is that there does not seem to be a surefire way to prevent other processes from interrupting my data acquisition, even for a very brief time. Or is there?? I sure long for the days of cooperative (or uncooperative) multitasking.
    I am still pursuing the optimization of Windows services and background tasks. If anyone has a good reference I would love to hear it. Presently I have explored a number of the online references, such as the ExtremeTech tune-up hints (www.extremetech.com/article2/0,1558,5155,00.asp).
    In the end I am afraid I may be asking the impossible of Windows, and since this equipment does not have drivers truly compatible with QNX, I may simply have to find a plan B. But I can't help thinking that this should be possible, since the actual computation and bandwidth requirements of this application are very modest.
    Thanks again for all the help and suggestions
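One way to quantify how often the OS "sneaks in" is to timestamp a tight loop and count unusually long gaps. A rough sketch (the outlier threshold is an arbitrary choice, not a standard diagnostic):

```python
# Timestamp successive iterations of a fast loop; gaps far above the
# median indicate moments when another process preempted us, which is
# roughly the interruption rate the post is trying to measure.
import time

def measure_gaps(iterations=2000):
    stamps = []
    for _ in range(iterations):
        stamps.append(time.perf_counter())
    gaps = [b - a for a, b in zip(stamps, stamps[1:])]
    typical = sorted(gaps)[len(gaps) // 2]          # median gap
    outliers = [g for g in gaps if g > 50 * typical]
    return typical, outliers

typical, outliers = measure_gaps()
print(f"median gap {typical * 1e6:.1f} us, {len(outliers)} long stalls")
```

Instrumenting the two instrument reads the same way (timestamp before and after, log the oversized gaps) would show exactly how often and how badly the pair is being split.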

  • How do I determine expected output size for System Exec vi

    I am running a DOS exe from a batch file using the System Exec VI. My exe is programming software for a microcontroller and passes data to the programmer via the serial port. When I run the batch file outside LabVIEW it works fine. When I run the same batch file using the System Exec VI I get an error from the programmer app (DOS app); not every time, but usually within 20 attempts. I reduced the execution priority of my VIs to give more time to the DOS window. This seems to have helped but hasn't solved the problem completely. I read in the help file that LabVIEW will run more efficiently if you specify the expected output size in the System Exec VI, but how do I determine it? Is the expected output size the size of the .bat file I call?

    > I am running a DOS exe from a Batch file using the System Exec vi. My
    > exe is programming software for a microcontroller and passes data to
    > the programmer via the serial port. When I run the batch file
    > externaly to LabView it works fine. When I run the same batch file
    > using the System Exec vi I get an error from the programmer app (DOS
    > app). Not every time but usually within 20 attempts. I reduced the
    > execution priority of my vi's to give more time to the DOS window.
    > This seems to have helped but hasn't solved the problem completely. I
    > read in the help file that LabView will run more efficiently if you
    > specifiy the expected output size in the System Exec vi, but how do I
    > determine it? Is the expected output size the size of the bat file I
    > call?
    If you think it is due to timing and you wish to yield to the DOS app, don't just lower the priority of LV; use Wait ms to put overeager diagram elements to sleep and limit their loop rate. Use the task manager or performance monitor of the computer to determine if this is the issue.
    Greg McKaskle
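Greg's advice translated into a textual sketch: invoke the external command repeatedly, but sleep between iterations so the child process gets CPU time. The command shown is a harmless placeholder, not the poster's actual programmer tool:

```python
# Run an external command in a loop, pacing the loop with a sleep
# (the equivalent of dropping a Wait ms into the LabVIEW loop) and
# checking the exit code instead of racing ahead.
import subprocess
import time

for attempt in range(3):
    result = subprocess.run(
        ["echo", "program-micro.bat"],    # placeholder for the real batch file
        capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"attempt {attempt} failed: {result.stderr}")
    time.sleep(0.1)                       # the "Wait ms" equivalent
```

The key point is the explicit wait: without it, a free-running loop starves the spawned process exactly as Greg describes.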
