Peculiar behavior of Shared Variable RT FIFO

I'm trying to "leverage" the enhanced TCP/IP and Shared Variable properties of LabView 8.5.  My application involves (among other things) doing continuous sampling (16 channels, 1KHz/channel) using 6-year-old PXIs (Pentium III) and streaming data to the host.  I developed a small test routine that was more than capable of handling this data rate, even when I had the host put a 20msec wait between attending to the PXI (to simulate other processing on the host).  To do this, I enabled the "RT FIFO" property of the Shared Variable (which was an array of 16 I16 integers) and specified a buffer size of 50 (that's 50 arrays).  Key to making this work was figuring out the "error codes" associated with the SV RT FIFO, particularly the one that says the FIFO is empty (so don't save the "non-data" that is present).
Flushed with success, I started developing a more realistic routine that involves rather more traffic between Host and Remote, including the passing back and forth of "event" data.  These include, among other things, "state variables" to enable both host and remote to run state machines that stay "in sync"; in addition, the PXI also acquires digital data (button pushes, etc.) which are other "events" to be sent to the Host and streamed to disk.  I developed the dual state-machine model without including the "analog data" machine, just to get the design of the Host/Remote system down and deal with exchanging digital data through other Shared Variables.  Along the way, I decided to make these also use an RT FIFO, as I didn't want to "miss" any data.  One problem I had noticed when using Shared Variables is the difficulty of telling "is this new?", i.e. is the value currently in the variable one that has already been read (and processed), or something that still needs processing.  I ended up adopting something of a kludge for the events by including an incrementing "event ID" that could be tested to see if it was "new".
Today, I put the two routines together by adding the "generate 16-channels of integer data at 1 KHz and send it to the Host via the Shared Variable" code to my existing Host/Remote state machine.  I used exactly the same logic I'd previously employed to monitor the RT FIFO associated with this Shared Variable (basically, the Host reads the SV, then looks at the error code -- a value of -2220 means "Shared Variable FIFO Read Buffer Empty", so the value you just read is an "old" value, so throw it away).  Very sad -- my code threw EVERYTHING away!  No matter how slowly the Host ran, the indicator always said that the Shared Variable FIFO Read Buffer was empty!  This wasn't true -- if I ignored the flag, and saved anyway, I saw reasonable-looking data (I was generating a sinusoid, and I saw numbers going up and down).  The trouble was that I read many more points than were actually generated, since I read the same values multiple times!
Looking at the code, the error line coming into the Shared Variable (before it was read) was -2220, and it remained so after it was read.  How could this be?  One possibility is that my other Shared Variables were mucking up the error line, but I would have thought that the SV Engine handling the reading of my "analog data" SV would have set the error line appropriately for my variable.  On a hunch, I turned off the RT FIFO on the two Event shared variables, and wouldn't you know, this more-or-less fixed it!
But why?  What is the point of having a shared variable "attached" to an error line and having it return "Shared Variable FIFO Read Buffer Empty" if it doesn't apply to its own Read Buffer?  This seems to me to be a very serious bug that renders this extremely useful feature almost worthless (certainly mega-frustrating).  The beauty of the new Shared Variable structure and the new code in Version 8.5 is that it does seem to allow better and faster communication in real-time using TCP/IP, so we can devote the PXI to "real-time" chores (data acquisition, perhaps stimulus generation) and let the PC handle data streaming, displays, controls, etc.
Has anyone been successful in developing a data-streaming application using shared variables between a PXI and a PC, particularly one with multiple real-time streams (such as mine, where I have an analog stream from the PXI at 16 * 1 KHz, a digital stream from the PXI at irregular intervals, but possibly up to 300 Hz, and "control" information going between PC and PXI to keep them in step)?  Note that I'm attempting to "modernize" some Version 7 code that (in the absence of a good communication mechanism) is something of a nightmare, with data being kept in PXI memory, written on occasion to the PXI hard drive (!), and then eventually being written up to the PC; in addition, because the data "stayed" on the PXI, we split the signal and ran a second A/D board in the PC just so we could "see" the signal and create a display.  How much better to get the PXI to send the data to the PC, which can sock it away and take samples from the data stream to display as they fly by on their way to the hard drive!
But I need to get Shared Variables (or something similar) working more "understandably" first ...
Bob Schor

Bob,
The error lines passed into and out of functions are just clusters with a status boolean, an error code, and an error string; they are not "attached" to a particular function as you describe in your post.  Most functions have an error in input and an error out output, and most functions will simply do nothing except pass through the error cluster if the error in status is True.  (To verify this for yourself, double-click a function such as a DAQmx Read or Write and look at the block diagram: if there is an error passed in, no read/write occurs.)  This helps prevent unwanted code from executing when an error arises in your program.  By wiring the error cluster from your other shared variables into your analog data variable, you're essentially telling LabVIEW that these functions are related and that your analog data variable requires that the other shared variables are functioning properly.  The error wire is a great way to enforce the flow of your program, but you must always consider how it will affect other functions if an error does arise.
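To make the pass-through convention concrete, here is a rough text-language analog in Python (purely illustrative -- LabVIEW wires this graphically, and the cluster fields and the stand-in read function below are not NI APIs):

    # Conceptual Python analog of LabVIEW's error-cluster pass-through convention.
    from dataclasses import dataclass

    @dataclass
    class ErrorCluster:
        status: bool = False   # True means an upstream error has occurred
        code: int = 0
        source: str = ""

    def read_shared_variable(error_in: ErrorCluster):
        """Mimics a node with error in / error out: if an error is already
        present, do no work and simply pass the cluster through."""
        if error_in.status:
            return None, error_in          # skip the read, propagate the error
        value = 0                          # placeholder for the value actually read
        error_out = ErrorCluster()         # e.g. code -2220 if the FIFO were empty
        return value, error_out

Chaining one node's error out into the next node's error in is what makes the downstream node a no-op once anything upstream has failed, which is exactly what happened to the analog-data read in your program.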
Anyways, it's great that you have things more or less working at the moment.  Keep us all updated!

Similar Messages

  • Shared variable RT FIFO (multi element) - Read all at once

    Hi all!
    I'm trying to collect data from analog inputs on my cRIO with a 1 sec time step. For that purpose I use a Timed Loop and a shared variable with RT FIFO enabled (a multi-element FIFO of arrays of doubles). In parallel with the timed loop, I also use a while loop that is significantly slower (let's say a 10 sec period) that should read all arrays stored in the FIFO and write them to a .txt file (everything is executed locally on the RT target). Is there a way to empty the RT FIFO shared variable at once, and if not, how can I get the number of arrays that are currently stored in the shared variable?
    What is the difference between an RT FIFO created by a shared variable and one created using the Real-Time/RT FIFO palette? I prefer using a shared variable instead of the Real-Time/RT FIFO functions since it also allows timestamping.
    Best regards,
    Marko.
    Solved!
    Go to Solution.

    Marko,
    We recommend that the non-deterministic loop run faster than your deterministic loop and pull off samples when they become available. If you don't want to do this, you can simply put your RT FIFO read in a For Loop set to execute 10 iterations and then place that in your while loop. For more information on the differences between the RT FIFO and Shared Variables with RT FIFO enabled, please see pages 32-35 of the CompactRIO Developers Guide linked below.
    http://www.ni.com/pdf/products/us/fullcriodevguide.pdf
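    If it helps to see the pattern in text form, here is a loose Python sketch of that suggestion (a queue.Queue stands in for the RT-FIFO-enabled shared variable; the names and the ten-reads-per-pass count are just illustrative):

        # Conceptual sketch (Python, not LabVIEW): the slow logging loop reads up
        # to 10 buffered arrays per pass, matching a 1 s producer / 10 s consumer.
        import queue

        rt_fifo = queue.Queue(maxsize=50)      # stands in for the RT FIFO shared variable

        def logging_pass(log_file):
            for _ in range(10):                # "For Loop set to execute 10 iterations"
                try:
                    sample_array = rt_fifo.get_nowait()
                except queue.Empty:            # nothing left to read this pass
                    break
                log_file.write(" ".join(f"{x:.6f}" for x in sample_array) + "\n")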
    Hope this helps!
    Rob B
    FlexRIO Product Manager

  • Network Datastreams and FIFO Shared Variables

    Does someone know why one would use a Network Datastream as opposed to a network published shared variable with FIFO enabled? Seems like they would be identical except that the Shared variable could potentially have multiple listeners.
    Thanks,
    Craig

    Hey Craig,
    You are correct that a major difference between Network Streams and Network Published Shared Variables with RT FIFO enabled is that the Shared Variable can publish to many computers at once.  There are a few other differences that are outlined in the comparison table in this article:
    http://zone.ni.com/reference/en-XX/help/370622J-01/lvrtbestpractices/rt_gui_bp/
    There's another good comparison with some more information in the last paragraph of this article. 
    http://zone.ni.com/reference/en-XX/help/371361H-01/lvconcepts/networkstreams/ 
    Regards,
    Eric L.
    Applications Engineer
    National Instruments

  • Shared Variables for Real-TIme Robot Control

    I'm really stuck in my efforts to use LV Real-Time in my hardware control application. I have a 6-axis industrial robot arm that I must control programmatically from my PC. To do this I've developed a dynamic link library of functions for various robot control commands that I can call using Code Interface Nodes in LV (using 8.5). This has worked great, that is, until I tried to port parts of the application to a real-time controller. As it turns out, because the robot control DLL is linked with and relies so heavily upon several Windows libraries, it is not compatible with use on an RT target, as verified by the "DLL Checker" application I downloaded from the NI site. When the robot is not actually executing movements, I am constantly reading/writing analog and digital I/O from various sensors, etc.
    This seemed to suggest that I should simply segregate my robot commands from the I/O activities, using my host PC for the former, and my deterministic RT loop on the target machine for the latter. I set up a Robot Controller Server (RCS) vi running on my host PC that is continuously looking for (in a timed loop) a flag (a boolean) to initiate a robot movement command. Because several parameters are used to specify the robot movement, I created a custom control cluster (which includes the boolean variable) that I then used to make a Network Shared Variable that can be updated by either the RT target or the host PC running the RCS. I chose NOT to use buffering, and FIFO is not available with shared variables based on custom controls.
    Here's sequence of events I'd like to accomplish:
    1) on my host PC I deploy the RCS, which continuously polls a boolean variable in the control cluster that would indicate the robot should move. The shared variable cluster is initialized in the RCS and the timed loop begins.
    2) I deploy the RT vi, which should set the boolean flag in the control cluster, then update the shared variable cluster.
    3) an instance of the control cluster node in the RCS should update, thereby initiating a sequence of events in a case structure (this happens on some occasions, but very few).
    4) robot movement commands are executed, after which the boolean in the control cluster is set back to its original value.
    5) the RT vi (which is polling in a loop) should see this latest change in the boolean as a loop stop condition and continue with the RT vi execution.
    With the robot controller running in a timed loop, it occasionally "sees" and responds to a change of value in members of the shared variable cluster, but most times it does not. Furthermore, when the robot controller VI tries to signal that the movement has completed by changing a boolean in the control cluster, the RT VI never sees it and does not respond.
    1) Bad or inappropriate use of network shared variables?
    2) a racing issue?
    3) slow network?
    4) should I buffer the control cluster?
    5) a limitation of a custom control?
    6) too many readers/writers?
    7) should I change some control cluster nodes to relative, rather than absolute?
    8) why can't I "compile" my RT vi into an executable?
    Any help would be greatly appreciated. Unfortunately, I'm writing this from home and cannot attach vi files or pictures, but would be happy to do so at work tomorrow. I'm counting on the collective genius in the universe of LV users and veterans to save my bacon.....
    David

    Hi David,
    I'm curious why you decided to build a CIN instead of developing the code in LabVIEW.  Is there some functionality that LabVIEW couldn't provide?  Can you provide some more information about the LabVIEW Real-Time target you're using?  What type of IO are you using?
    It is impossible to get LabVIEW Real-Time performance on a desktop PC running an OS other than LabVIEW Real-Time.  Even running a timed loop in LabVIEW for Windows won't guarantee a jitter-free application.  Also, no TCP-based network communication can be deterministic.  This means Network Shared Variables are not deterministic either (they use TCP for data transport), and I advise against using them as a means to send time-critical control data between a Windows host and a LabVIEW Real-Time application.
    In general, I would architect most LabVIEW-based control applications as follows:
    - Write all control logic and IO operations in LabVIEW Real-Time.  The LabVIEW Real-Time application would accept set points and/or commands from the 'host' (desktop PC).  The Real-Time controller should be capable of running independently or of automatically shutting down safely if communication to the PC is lost.
    - Write a front-end user interface in LabVIEW that runs on the desktop PC.  Use Shared Variables with the RT-FIFO option enabled to send new set points and/or commands to the LabVIEW Real-Time target.
    Shared variable buffering and RT-FIFOs can be a little confusing.  Granted, not all control applications are the same, but I generally recommend against using buffering in control applications, and in LabVIEW Real-Time applications I recommend using the RT-FIFO option.  Here's why: imagine you have a Real-Time application with two timed loops.  Timed loop 'A' calculates the time-critical control parameters that get written to hardware output in timed loop 'B'.  Loop 'A' writes the outputs to an RT-FIFO-enabled variable with an RT-FIFO length of 50.  Loop 'B' reads the outputs from the shared variable, but if for some reason loop 'B' gets behind, the shared variable RT-FIFO will contain several extra elements.  Unless loop 'B' runs extra fast to empty the RT-FIFO, it will start outputting values that it should have output on previous cycles.  The desired behavior is that loop 'B' should output the most recent control settings, which means you should turn off buffering and set the RT-FIFO length to 1.
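    If a text-language analogy helps, here is a small Python sketch of that scenario (deques stand in for the RT FIFOs; none of this is NI API, it just shows why a length-1, overwrite-style element suits control outputs better than a deep buffer):

        # Illustrative only: a deep FIFO replays stale set points, while a
        # single-element register always hands the reader the most recent value.
        from collections import deque

        deep_fifo = deque(maxlen=50)   # like an RT FIFO of length 50
        latest = deque(maxlen=1)       # like a single-element RT FIFO (overwrite)

        for setpoint in range(100):    # loop 'A' writes faster than loop 'B' reads
            deep_fifo.append(setpoint)
            latest.append(setpoint)

        # Loop 'B' finally gets a chance to read once:
        print(deep_fifo.popleft())     # 50 -> a stale value from many cycles ago
        print(latest.popleft())        # 99 -> the most recent control setting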
    There is also a clear distinction between buffering and the RT-FIFO option.  The RT-FIFO option is used to add a non-blocking layer between network communication and time-critical code in LabVIEW Real-Time applications.  It also provides a safe mechanism to share data between two loops running in a Real-Time application without introducing unnecessary jitter.  Network buffering is a feature that allows a client to receive data change updates from the server even if the client is reading the variable more slowly than the server is writing to it.  In the example I presented above you don't need to enable networking because the shared variable is used entirely within the Real-Time application.  However, it would be appropriate to send control set points from a Windows PC to the Real-Time application using network-published shared variables with the RT-FIFO option enabled.  If it is critical that the Real-Time application execute all commands in the sequence they were sent, then you could enable an appropriate buffer.  If the control application only needs the latest set point from the Windows host, then you can safely disable network buffering (but you should still enable the RT-FIFO option with a length of 1 element).
    Network buffering is especially good if the writer is 'bursty' and the reading rate is relatively constant.  In the robot application I can imagine buffering would be useful if you wanted to send a sequence of timed movements to the Real-Time controller using a cluster of timestamp and set point.  In this case, you might write the sequence values to the variable very quickly, but the Real-Time controller would read the set points out as it proceeded through the movements.
    The following document presents a good overview of shared variable options:  http://zone.ni.com/devzone/cda/tut/p/id/4679
    -Nick
    LabVIEW R&D
    ~~

  • LabVIEW could not generate code for the shared variable.You must open the VI in the project that contains the library where the shared variable resides

    HI
    When I put a network shared variable with RT FIFO enabled on my block diagram, the run arrow is broken and I get this message:
    ""LabVIEW could not generate code for the shared variable.You must open the VI in the project that contains the library where the shared variable resides""
    If I uncheck the RT FIFO option for this variable, the arrow isn't broken anymore.
    I've no idea how to correct this weird error.
    Autodeploy is on, and I've checked the copy/delete-in-diagram option under Tools/Options/Diagram.
    regards,
    james

    Hello,
    I can't reproduce this error.
    Could you send your VI?
    Regards
    VéroniqueD
    NI France

  • Using TCP or shared variable for data transfer

    I am trying to send a large amount of numbers from a real-time module to a host computer.  These numbers have been arranged into a large array, such as an array with tens of thousands of points.  The time-critical portion of getting the information has already been done, so the data transfer back to the host VI is not time critical.  I know I will need to break the large array down into smaller arrays and then reform the large array after all the information has been sent.  I know how to use both TCP and shared variables with FIFO.  What I am unsure of is which one is better to use for this application.  I do not know the maximum array size I can send through either.
    Also, from what I have gathered from using LabVIEW, the server has to be listening for a connection before the client opens a connection, or else it will throw an error.  When I tried breaking it down into 50 points, if I did not wait long enough in the host VI or if I did not put a long enough wait function in the RT loop, an error would be thrown, so it would take a long time to transfer the data when it worked properly.
    Any help or suggestions are appreciated, thanks.

    Regarding the array size question, there is no real limit (other than the amount of memory in your system) to the size of data that you can transfer in a single block using either TCP or the Shared Variable. In your case you can easily transfer an array with tens of thousands of data points in a single write operation. Both TCP and the Shared Variable will automatically handle breaking up the data for the maximum packet size on Ethernet and then reconstitute the array on the receiving end. In LabVIEW you will simply get back the array as a whole without needing to worry about how the data is broken into smaller packets on the Ethernet.
    I tested the attached example which transfers 400kB per block (50000 Doubles) without any problems. You do need to have the Server (in this case RT) running first before the client (Windows) can connect.
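    For anyone curious what that boils down to in text form, here is a rough Python sketch of the same length-prefixed TCP idea (the function names and the 4-byte header are illustrative, not taken from the attached LabVIEW example):

        # Length-prefixed transfer of a large array over TCP (conceptual sketch).
        import socket, struct, array

        def send_block(sock: socket.socket, values: array.array) -> None:
            payload = values.tobytes()                     # e.g. 50000 doubles ~ 400 kB
            sock.sendall(struct.pack(">I", len(payload)))  # 4-byte length prefix
            sock.sendall(payload)                          # TCP handles packetization

        def recv_exact(sock: socket.socket, n: int) -> bytes:
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("socket closed mid-message")
                buf += chunk
            return buf

        def recv_block(sock: socket.socket) -> array.array:
            (length,) = struct.unpack(">I", recv_exact(sock, 4))
            values = array.array("d")
            values.frombytes(recv_exact(sock, length))     # rebuild the whole array
            return values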
    Message Edited by Christian L on 02-09-2007 11:34 AM
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments
    Please tip your answer providers with kudos.
    Any attached Code is provided As Is. It has not been tested or validated as a product, for use in a deployed application or system,
    or for use in hazardous environments. You assume all risks for use of the Code and use of the Code is subject
    to the Sample Code License Terms which can be found at: http://ni.com/samplecodelicense
    Attachments:
    TCP.JPG ‏44 KB

  • Error codes for shared variables in LabVIEW 8.5?

    I am trying to use Shared Variables in LabVIEW 8.5 to enable real-time loops (similar to some of the examples in "Using the LabVIEW Shared Variable", published Aug 28, 2007).  I created one to hold the result of a 16-channel A/D converter (so a 16-element I16 array).  To avoid losing samples, I used buffering, with a buffer of 5.  To test this, I made a pair of VIs: a producer that stuffs a 16-element I16 array into the shared variable "every so often" (controlled by a timed loop), and a consumer loop that reads the shared variable and does something with the data.
    If I think of the buffered shared variable as a Real Time FIFO (as the article suggests it is), I was curious how I would know (a) when the queue was empty, and (b) if the queue had overflowed.  Both are necessary if this is to be a practical means of exchanging data -- you want the producer and consumer to run more-or-less at the same rate, but only the producer is deterministic.  The consumer needs to be able to run "faster" if it falls behind (for example, because it is writing data to disk), but you don't want it to read data from the shared variable if there's nothing there.  [One can always read a shared variable, after all -- as the article states, it simply "holds" the last value written to it].
    Snooping around, I discovered that there are "error codes" associated with the shared variable.  In particular, a code of -2220 (FFFFF754) seems to signify an empty queue (or a shared variable that has not yet been written to), while a code of -1950678981 (8BBB003B) appears to be "buffer overflow".
    Is this documented anywhere?  Are there other "error codes" that would be helpful to know?  Is there some rationale to these seemingly-random numbers?  [It would help to develop code to utilize shared variables if there was a bit less "magic" and "mystery" involved].
    For what it is worth, with a buffer of size 40, I could generate 16 I16 values at 1 KHz (simulating sampling from a 16-channel A/D at 1 KHz) and pass it to a consumer node that (a) read from the shared variable until it was empty, then (b) "went to sleep" for 20 msec (simulating "doing something else non-deterministically"), and not miss any data (because I could then empty the Shared Variable RT-FIFO, which should have been half full, before it overflowed on me).  Not bad throughput -- I bet I could push it even higher.
    Bob Schor

    Hey Bob,
    The errors are documented in the LabVIEW help:
    Shared Variable Error Codes
    Real-Time Shared Variable Error Codes
    There are several error messages for buffer underflow/overflow depending on the settings of the network or RT FIFO buffers. In particular the -2220 and -2221 are useful for the producer/consumer use case. For example (as you probably know) the consumer can flush a variable using the error code (see the attached image).
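    In text form, that flush-on-error consumer pattern amounts to roughly the following (Python sketch only; read_sv below is a stand-in for whatever performs the shared variable read, not an actual NI call):

        SV_FIFO_EMPTY = -2220            # "Shared Variable FIFO Read Buffer Empty"
        SV_FIFO_OVERFLOW = -1950678981   # reported when the RT FIFO overflowed

        def drain_shared_variable(read_sv, sink):
            """Read until the FIFO reports empty; keep only genuinely new samples."""
            while True:
                value, err_code = read_sv()
                if err_code == SV_FIFO_EMPTY:
                    break                      # the value just read is stale, discard it
                if err_code == SV_FIFO_OVERFLOW:
                    print("warning: samples were lost before this read")
                sink.append(value)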
    Gerardo
    Attachments:
    variable1.png ‏3 KB

  • Using a 2-D array Single Process Shared Variable w/ RT FIFO for comm between a Deterministic and non-deterministic loop on an RT Target

    Our problem is that we currently use a 2D array to store CAN data on a Real-Time target. The array is 20 elements of 3 bytes each, laid out like this:
                 col 0    col 1    col 2
    row 0       [byte]   [byte]   [byte]
    ...
    row 19      [byte]   [byte]   [byte]
    These values are passed between a Deterministic Timed (DT) loop where they are set and a Non-Deterministic Timed (NDT) loop where they are read and passed into a Network Published Shared Variable (NPSV) for communication across the network to a Host PC. I have inserted an image for illustration; pardon the size.
    Currently to pass the data between the DT and NDT loop we are using a Global Variable (GV). To improve the system we have attempted to replace these GVs with Single Process Shared Variables (SPSV) with an RT FIFO enabled.
    To create the shared variable I simply right-clicked the GV of interest and selected Create Shared Variable Node from the drop-downs. At this point LabVIEW presented me with a 2D NPSV within a new library hosted on the RT target. I then selected this new NPSV from the project, changed it to an SPSV, and enabled a single-element FIFO. This variable was initialized with a default value of the size described above and then used in our code for the DT-to-NDT communication, and converted to a corresponding NPSV for sending to the Host.
    When I went to run the code I noticed that the variable was in fact 2D; however, its size was only 2 elements of three bytes each. In other words, only two of the row indices were populated and the others appeared uninitialized. In addition, this data bore no resemblance to the initialization value I had set. This was also how the variable was presented on the host side of the network after transfer into an NPSV.
    The peculiar part is that if I change this SPSV to an NPSV and then try to change it back, I receive an error saying the type is not supported for an SPSV with an RT FIFO enabled. I have to disable the FIFO (which defeats the entire purpose) in order to compile successfully! I am unclear as to what the bug is in this case. Should I not be allowed to create the original 2D SPSV with a single-element RT FIFO enabled without receiving an error? Or, if this is okay, how do I fix the problems associated with the variable after being allowed to create it?
    I have found the following discussion in which a user states "The only limitations for custom controls is the ability to use it with RT FIFO enabled on a network-published shared variable". Is this also true for SPSVs? I have not found any documentation explicitly stating this for SPSVs, though it is stated for NPSVs.

    Martin,
    RT FIFOs don't support Multi-Dimensional Arrays, which would corroborate the issues you're seeing.  You can break up the 2D array into 1D arrays by reshaping the array, then you'll be able to use the RT FIFO enabled variable, just set the array size to the total number of elements (20*3 = 60).
    You can also pass the 2D array via pre-allocated queue, or using a Functional Global.  We have a reference example for a circular buffer using Functional Globals here.
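    A quick numpy sketch of the reshaping workaround, for illustration only (in LabVIEW you would do the equivalent reshape of the array on each side):

        # Flatten 20 x 3 bytes to 60 elements for the RT-FIFO-enabled variable,
        # then restore the 2D layout on the reader side.
        import numpy as np

        can_data = np.zeros((20, 3), dtype=np.uint8)   # 20 messages x 3 bytes
        flat = can_data.reshape(60)                    # 1D array the RT FIFO can carry
        restored = flat.reshape(20, 3)                 # reader rebuilds the 2D layout
        assert restored.shape == (20, 3)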

  • Do I need to set host side shared variables to RT FIFO?

    Hello there,
    In my application a LV host application communicates with a CompactRIO through shared variables. On the cRIO I have some shared vars with RT FIFO enabled. On the host side, for those vars which are bound to the cRIO, I did the same. Is that necessary?
    In general I am confused about shared variable settings on both sides, for example when I want network buffering. Do I set buffering on the RT side, on the PC side, or on both, and what is the difference?
    regards
    Thomas

    Dear Thomas,
    thank you so much for your post. The shared variable with RT FIFO enabled works the same as the RT FIFO functions on the RT functions palette. The RT FIFO is for communication between the higher-priority loop and the normal-priority loop. You can see it like this: if the shared variable with RT FIFO enabled is network-published, there is a hidden communication loop which publishes the shared variable to the network port of the cRIO system. The RT FIFO option is only for the communication between your high-priority loop and this hidden loop. The hidden loop is executed when the higher-priority loop is idle. Again, I speak of a hidden loop just to create a clear picture.
    So on your PC side no RT FIFO is needed. Please note that by using shared variables you can lose data; for example, if the cRIO publishes data faster than your PC can read it, you will lose data points. A published shared variable behaves like a radio station that keeps transmitting whether or not anyone is listening. Use the TCP functions (for example the STM library) or the Network Streams API if you don't want to lose data.
    Please let me know if you have any further questions,
    Best regards,
    Martijn S
    Applications Engineer
    NI Netherlands

  • Correct Shared Variable behavior

    I have a program, written in version 8.5, that has a shared variable used in a number of locations to shut down the program's major loops when the test system is being "shut down". I used an SV specifically to use its error in/out to force the execution order in these various loops. In 8.5 it has worked correctly, but yesterday I was helping my customer commission a new test system using this program, and they only have 8.6, so we decided to migrate it all to this version. While still debugging the test system hardware we got an error which caused me to attempt to terminate the program. At this point I noticed that all but one loop had terminated correctly, and when I probed the non-terminating loop I discovered that it had had a hardware error, putting an error on the loop's error cluster. This caused the boolean Shared Variable to output an F rather than the T that had been written to it when I attempted the shutdown.
    This wasn't the behavior I remembered in my 8.5 version, so I created a quick little test: two parallel loops, one with the boolean SV in write mode with a control to toggle it F/T, the other loop having the SV in read mode, with its output set to stop the loop if T. I had an error cluster shift register in the second loop, with the SV's error in/out connected to it, and a control to introduce an error into the error cluster. On the system running 8.5, writing a T to the SV stops the loop regardless of the error cluster state, but on 8.6 it does not, with the SV returning an F if there is an error present.
    Has this been seen by others? Which is the correct behavior? I ended up doing a really quick change, making a "real global" and putting it into a case statement within a VI so that I could retain the ability to have the error in/out chain. I should have used a functional global, which is my normal mode rather than a "real global", but I was tired and had just had an educational discussion with my customer about SVs vs. globals, and for the few minutes it took I was in the "global mindset". I think it might be the first global I've used in a couple of years or more; I was just so flabbergasted about finding the apparent change in behavior.
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion

    IMHO if a function's behavior hasn't been specifically defined for 3 major release cycles, and it performs one way, it shouldn't be changed, regardless of some philosophical "standard". Pre-8.6 it behaved by passing the error through but still performing its own task, outputting the value written to it elsewhere, so why would someone think that making this behavior change to "if an error is present, output the default" would not have major implications? We don't all have the luxury of rewriting our projects from scratch. Worse yet, nowhere in the "release notes", "known issues", or "changes since 8.6" do I see this mentioned. I wouldn't have known about it, possibly for some time, if in debugging the test system hardware we hadn't had a hardware error (which hadn't occurred in the prior three months of "ringing out" the hardware) that generated an error on the error cluster. The shared variable was in the various loops in my program to allow them to be stopped if an error was detected or the operator was shutting down.
    I would suggest that you use some type of "functional global", also called a LabVIEW 2 style global, which uses a shift register on a while loop as the memory storage element, with the loop wired to execute only once, rather than "real globals", unless you encapsulate the global in a sub-VI. You can wire an error cluster straight through to allow the resulting VI to use the error chain for execution order control, as I had planned with the Shared Variable. You can also have a write case, read case, etc. inside the loop. Search this board for "functional global" or "LabVIEW 2 style" (there were no globals back then).
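    If it helps, here is a loose Python analogy of the functional-global idea (nothing LabVIEW-specific here; the closure's captured state plays the role of the uninitialized shift register, and every caller goes through the same single access point):

        def make_functional_global(initial=False):
            state = {"value": initial}        # plays the role of the shift register

            def access(action, value=None):
                if action == "write":
                    state["value"] = value
                return state["value"]         # both "read" and "write" return the value

            return access

        shutdown_flag = make_functional_global(False)
        shutdown_flag("write", True)          # one loop signals shutdown
        print(shutdown_flag("read"))          # other loops poll the flag -> True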
     My fix at the moment is that I drove 250 miles (400 km) through a snowstorm last night, and I am going in this morning to scrub the 8.6-based machine and install 8.5.1 to make both machines identical. I would have done this originally (and not found out about this issue) but last week I forgot to pack my 8.5 DVDs and the customer only has the 8.6 ones at this time. They will be calling the NI office that handles them for their own, I suspect.
    Putnam
    Certified LabVIEW Developer
    Senior Test Engineer
    Currently using LV 6.1-LabVIEW 2012, RT8.5
    LabVIEW Champion

  • Difference between single-element and multi-element RT FIFO shared variables

    If, in the properties of a Shared Variable, I choose Boolean as the data type and enable RT FIFO, what is the buffer size for the Single element option? If I choose Multi-element, I understand that I set the size under "Number of Booleans".
    Thanks

    Hi Ainhoa!
    I did some reading on the difference between the two options. I couldn't really find the buffer size; from what I could tell, the main difference is access to the variable by more than one reader or writer.
    With a single element, only one element stays in the buffer, so if you have two readers, both will get the same value. Likewise, the single-element FIFO does not report warnings when the buffer underflows or overflows. With multi-element, a new independent buffer is created for each reader or writer that accesses the variable, and the values they read are likewise independent. Finally, in LV 8.6 you can define the buffer size under the Networking category. Here is a link to a bit more information that I hope you find useful:
    http://zone.ni.com/devzone/cda/tut/p/id/4679
    Have an excellent day!
    Oswald Branford

  • Shared network variables RT FIFO versus buffering

    Hello,
    can anybody tell me why I should use network shared variables with buffering and RT FIFO (http://www.ni.com/white-paper/4679/en/)?
    Only buffering: ok (the reader can catch up + I can use fancy non-RT data types).
    Only RT FIFO: ok (the high-prio reader/writer is time deterministic).
    FIFO and buffering: why? Isn't a FIFO a buffer already?
    Thanks!
    Peter

    DMA: great, I'll have a closer look.
    Buffer: What do you mean by "buffer another part", and "buffering on the TCP IP side"?
    Conceptually, the problem is that a writer writes, say, at rate 10 pieces of new data / second, and a reader reads 5 pieces of written data / second. If this discrepancy of rates is temporary, a buffer can be used to "remember" the written data so that the reader doesn't miss data from the writer.
    Now, the problem can be tackled by a FIFO the same way, a FIFO being a special buffer with a constraint on the orders of written/read data.
    So, based on the example above: what is the sense in using both a FIFO and a buffer? And what does it have to do with the network/TCP/IP?
    Thanks!
    Peter

  • Weird delay reading shared variables

    Hello,
    I'm working on a project where I'm monitoring some production lines. I'm using the DSC module's OPC server to connect to PLCs on the production line, and I've created bound variables in my LabVIEW project for the PLC's tags.
    In my project I have one main VI where I show information about the production lines and from which I can access several subVIs where I show other information about those lines. Then I have a VI running in the background where I'm reading about 50 shared variables from the PLC and where I'm registering some data in a MySQL database, datalogging data to the Citadel database, and registering alarms.
    The problem I'm having is that in the VI that runs in the background I noticed a delay reading the variables that are reading container weights from the production line. It seems that all the other shared variables don't suffer any delay; only the weight variables start having some delay when the values are increasing. It also seems that, when I'm running that VI alone, without running the main VI, there isn't any kind of delay. I'm reading the shared variables as shared variable nodes.
    Can anybody help me understand what's happening and how I can fix this delay? The VI that runs in the background is time critical, and a weird delay like that messes up my data.
    Solved!
    Go to Solution.

    Hi Mateus23,
    The shared variable has various buffering capabilities, including integration with the Real-Time FIFO feature in LabVIEW Real-Time.
    I guess that the buffering settings are causing the unexpected behavior.
    Check these resources:
    Buffered Network-Published Shared Variables (whitepaper)
    Shared Variables Properties Dialog
       - Network Page
       - Real-Time FIFO Page
    ~~

  • Shared Variable Engine Buffering Enable/Disable

    Hello -
    I am running into a problem where I am seeing a read of data that seems to be lagging the writing of the data. The reading and writing functions are utilizing the same shared variable - a control to write to it and an indicator to display it somewhere else. The indicator is lagging by one value, i.e. scrolling the value up through 1, 2, 3, 4... will yield a display of 0, 1, 2, 3, lagging by one. I am writing/reading to/from a value in a PLC using an OPC server, binding the variables to the control/indicator.
    I am assuming it is the buffering which is causing this, but I can not seem to find where the buffering is enabled/disabled.
    Has anyone seen this behaviour before? Also, where do you configure the Shared Variable Engine to disable buffering?
    Thank you in advance for your help -
    John
    PS> One other note, Datasocket binding of the control/indicator does not yield any problems.

    John,
    Buffering is configured in the main window of the shared variable (double click on the shared variable in the project).  Also, you will see this behavior if you have the RT FIFO enabled and you're using the variable on a Real Time target. 
    I would also recommend taking a look at this white paper which covers the workings of the shared variable:
    http://zone.ni.com/devzone/conceptd.nsf/webmain/5b4c3cc1b2ad10ba862570f2007569ef
    --Paul Mandeltort
    Automotive and Industrial Communications Product Marketing

  • Front Panel binding of shared variables very slow initialization / start

    Hello @ all,
    I am using a server running Windows2000 and LV 8 DSC RTS for datalogging. All shared variables are deployed on that server.
    I am now facing the problem that all front panels running on the clients using the network shared variables on the server take very long to sync on startup. First the flags on the controls bound to the shared variables turn red; after up to ten minutes they start to turn green. The panels use up to 40 controls bound to the shared variables.
    All firewalls are turned off. I tried to connect the client to the same switch the server is connected to. Same problem. Does anybody have a clue?
    Thx for your quick answers.
    Carsten 

    While I can't offer any solution to your problem, I am having a similar issue running LV8.0 and shared variables on my block diagram (no DSC installed).
    When using network published shared variables, it takes anywhere from 30 sec to 4 min from the vi start for any updates to be seen. Given enough time, they will all update normally, however this 4 minute time lag is somewhat troublesome.
    I have confirmed the issue to be present when running the shared variable engine on windows and RT platforms, with exactly the same results.
    In my case, the worst offenders are a couple of double precision arrays (4 elements each). They will normally exhibit similar "spurty" behavior on startup, and eventually work their way up to continuous and normal update rates. Interestingly enough there are no errors generated by the shared variables on the block diagram.
