Shared Variable Polling Rate

Hi all,
I use the NI Distributed System Manager (NI-DSM) on a Windows machine running a LabVIEW application that publishes shared variables on the network.
In the NI-DSM you can set the update rate, and I can see that the rate changes as expected.
BUT on my iPad I still get only a 200 ms rate, i.e. 5 values per second.
Bastian

NI_MikeB wrote:
Hi
You cannot change the update rate for shared variables in Data Dashboard. You can alter it if you use Poll Web Service, but it will still only go down to 300 ms. Please bear in mind that, as a remote device, you typically want to limit the update rate. And, as of version 2, you can send arrays of values and supply multiple values at once.
Mike
As a remote device you typically want to limit the update rate? Why?
Here is my idea. The company I work for produces position-sensor integrated circuits. I build LabVIEW executables that access the IC's RAM and read the sensor data via a serial interface. This should run at a data rate of at least 25 values per second: if you want to show someone how our sensor works, the display should update faster than the eye can detect value changes (smooth updates).
Don't get me wrong, the Data Dashboard app is great. But I want more.
I like the idea of having just one PC running, measuring the data via a USB device and sending it over WLAN to several iPads. Thumbs up!
Bastian

Similar Messages

  • Shared variable sampling rate

    Hi, I want to read some shared variables on the network at 1 kHz in SignalExpress, but I am currently limited to 200 Hz.
    Is this possible?
    Thanks.

    Hi pattegain,
    I'm not sure reading shared variables published on the network at 1 kHz is possible.
    I suggest you read this:
    Buffered Network-Published Shared Variables: Components and Architecture
    http://zone.ni.com/reference/en-XX/help/371268P-01/expresswb/read_shared_variable_se/
    However, I tried the Shared Variable examples (Reader and Writer) and was able to run the Reader with 0.002 s (500 Hz) as the Sample Period parameter. Did you try those examples?
    Regards,
    Jérémy C.
    National Instruments France

  • Polling variables using Modbus IP and LabVIEW 8.2.0 shared variables

    I'm using shared variables to read/write registers on a Watlow PM controller over the Modbus IP standard. Once I make a change to a front panel (FP) control, the shared variable polling starts and I no longer get updates of any controls or indicators on the FP.
    Just wondering if this is an LV 8.2.0 issue and if any of this is addressed in LV 8.5?
    Thx ahead of time
    richjoh

    Hi richjoh,
    If I understand correctly, there are two issues to address: the status of the UpdateNow shared variable and the fact that your controls and indicators are not updating. 
    When you right-click on UpdateNow in your project and select Properties, what is the data type listed there?  Is it bound to one of the other shared variables that has a value in Variable Manager?
    After changing a control on the front panel, do you continue to see the values changing in Variable Manager even though the controls and indicators do not update on the front panel?  Do you see the same behavior regardless of which control you change? 
    Thanks for the additional information. 
    Jennifer R.
    National Instruments
    Applications Engineer

  • Slow update of shared variables on RT (cRIO) after building exe

    Hi,
    I've been struggling with this for the past few days.  I am having a problem with slow updating of shared variables in my RT project... but only after building the application into executables.
    The application consists of an RT target (cRIO 9073) sampling inputs once per second.  I have a host PC running the front panel that updates with the newly acquired values from the cRIO.  These values are communicated via shared variables.
    Once the cRIO samples the inputs, it writes the values to the shared variables, and then flags the data as 'ready to be read' using a boolean shared variable flag.  The host PC polls this boolean shared variable and updates the indicators on the front panel accordingly.
    Now, this worked fine during development, but as soon as I built the RT exe and host exe, it stopped working properly and the shared variables ended up being updated very slowly, with roughly a 2-3 s update time.
    To give you some more background:
    I am running LabVIEW 2010 (v10.0).
    I am deploying the shared variable library on the RT device (as the system must work even without the host PC).  I have checked that it deploys using the Distributed System Manager, and that it is deployed into the support directory on the cRIO and not into the exe itself.
    I have also disabled all firewalls and my antivirus, plus made sure that the IPs and subnets are correct and the DNS server address is set to 0.0.0.0.
    There are 25 shared variables all in all, but over half of those are config values only used once or twice at startup.  Some are arrays; I don't have any buffering, and none of them are configured as RT FIFOs either.
    The available memory on the cRIO is about 15 MB minimum.
    What strikes me is that it works fine before building the executables, and it's not as if the cRIO code is processor-intensive; it's idling 95% of the time.

    That's exactly what I'm saying: it takes 2-3 seconds to update the values.
    I have tried taking out the polling on the PC side and registering an event on the changing of that shared variable, and that does nothing to change the slow update time.  Even if I stop the PC and just monitor the shared variables in DSM, they update slowly.
    I also tried the "flush shared variables" VI to try to force the update... that does nothing.
    I wire all the error nodes religiously. Still no luck.
    It's very strange; I'm not sure what's happening here.  These things should be able to update in 10 ms.

  • Shared Variables for Real-Time Robot Control

    I'm really stuck in my efforts to use LV Real-Time in my hardware control application. I have a 6-axis industrial robot arm that I must control programmatically from my PC. To do this I've developed a dynamic link library of functions for various robot control commands that I can call using Code Interface Nodes in LV (using 8.5). This has worked great, that is, until I tried to port parts of the application to a real-time controller. As it turns out, because the robot control DLL is linked with and relies so heavily upon several Windows libraries, it is not compatible with use on an RT target, as verified by the "DLL Checker" application I downloaded from the NI site. When the robot is not actually executing movements, I am constantly reading/writing analog and digital I/O from various sensors, etc.
    This seemed to suggest that I should simply segregate my robot commands from the I/O activities, using my host PC for the former, and my deterministic RT loop on the target machine for the latter. I set up a Robot Controller Server (RCS) vi running on my host PC that is continuously looking for (in a timed loop) a flag (a boolean) to initiate a robot movement command. Because several parameters are used to specify the robot movement, I created a custom control cluster (which includes the boolean variable) that I then used to make a Network Shared Variable that can be updated by either the RT target or the host PC running the RCS. I chose NOT to use buffering, and FIFO is not available with shared variables based on custom controls.
    Here's the sequence of events I'd like to accomplish:
    1) on my host PC I deploy the RCS, which continuously polls a boolean variable in the control cluster that would indicate the robot should move. The shared variable cluster is initialized in the RCS and the timed loop begins.
    2) I deploy the RT vi, which should set the boolean flag in the control cluster, then update the shared variable cluster.
    3) an instance of the control cluster node in the RCS should update, thereby initiating a sequence of events in a case structure. (this happens on some occasions, but very few)
    4) robot movement commands are executed, after which the boolean in the control cluster is set back to its original value.
    5) the RT vi (which is polling in a loop) should see this latest change in the boolean as a loop stop condition and continue with the RT vi execution.
    With the robot controller running in a timed loop, it occasionally "sees" and responds to a change of value in members of the shared variable cluster, but most times it does not. Furthermore, when the robot controller vi tries to signal that the movement has completed by changing a boolean in the control cluster, the RT vi never sees it and does not respond.
    1) Bad or inappropriate use of network shared variables?
    2) a racing issue?
    3) slow network?
    4) should I buffer the control cluster?
    5) a limitation of a custom control?
    6) too many readers/writers?
    7) should I change some control cluster nodes to relative, rather than absolute?
    8) why can't I "compile" my RT vi into an executable?
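    (Regarding question 2, a race is plausible: with both the RT target and the host rewriting the whole cluster, a read-modify-write can silently drop the other side's change. A hypothetical Python sketch of the effect, not LabVIEW code; all names are illustrative:)

        import threading, time

        shared_cluster = {"move": False, "x": 0.0}  # stand-in for the SV cluster

        def rt_side():
            # RT target requests a move by rewriting the WHOLE cluster
            c = dict(shared_cluster)      # read
            time.sleep(0.01)              # network/scheduling delay
            c["move"] = True              # modify
            shared_cluster.update(c)      # write back (may clobber the host's write)

        def host_side():
            # host clears the flag after a move, using the same pattern
            c = dict(shared_cluster)
            time.sleep(0.01)
            c["move"] = False
            shared_cluster.update(c)

        a = threading.Thread(target=rt_side); b = threading.Thread(target=host_side)
        a.start(); b.start(); a.join(); b.join()
        print(shared_cluster["move"])  # True or False, depending on who wrote last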
    Any help would be greatly appreciated. Unfortunately, I'm writing this from home and cannot attach vi files or pictures, but would be happy to do so at work tomorrow. I'm counting on the collective genius in the universe of LV users and veterans to save my bacon.....
    David

    Hi David,
    I'm curious why you decided to build a CIN instead of developing the code in LabVIEW.  Is there some functionality that LabVIEW couldn't provide?  Can you provide some more information about the LabVIEW Real-Time target you're using?  What type of IO are you using?
    It is impossible to get LabVIEW Real-Time performance on a desktop PC running an OS other than LabVIEW Real-Time.  Even running a timed loop in LabVIEW for Windows won't guarantee a jitter-free application.  Also, no TCP-based network communication can be deterministic.  This means Network Shared Variables are also not deterministic (they use TCP for data transport), and I advise against using them as a means to send time-critical control data between a Windows host and a LabVIEW Real-Time application.
    In general, I would architect most LabVIEW-based control applications as follows:
    - Write all control logic and IO operations in LabVIEW Real-Time.  The LabVIEW Real-Time application would accept set points and/or commands from the 'host' (desktop PC).  The Real-Time controller should be capable of running independently, or of automatically shutting down safely if communication to the PC is lost.
    - Write a front-end user interface in LabVIEW that runs on the desktop PC.  Use Shared Variables with the RT-FIFO option enabled to send new set points and/or commands to the LabVIEW Real-Time target.
    Shared variable buffering and RT-FIFOs can be a little confusing.  Granted, not all control applications are the same, but I generally recommend against using buffering in control applications, and in LabVIEW Real-Time applications I recommend using the RT-FIFO option.  Here's why: imagine you have a Real-Time application with two timed loops.  Timed loop 'A' calculates the time-critical control parameters that get written to hardware output in timed loop 'B'.  Loop 'A' writes the outputs to an RT-FIFO enabled variable with an RT-FIFO length of 50.  Loop 'B' reads the outputs from the shared variable, but if loop 'B' gets behind for some reason, the shared variable RT-FIFO will now contain several extra elements.  Unless loop 'B' runs extra fast to empty the RT-FIFO, loop 'B' will start outputting values that it should have output on previous cycles.  The actual desired behavior is that loop 'B' should output the most recent control settings, which means you should turn off buffering and set the RT-FIFO length to 1.
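    (A rough way to see the backlog effect Nick describes, modeled here with a plain Python deque standing in for the RT-FIFO; the loop timings and numbers are illustrative, not the LabVIEW API:)

        from collections import deque

        def run(fifo_len, cycles=8):
            # a deque with maxlen drops the OLDEST element when full, like an
            # RT-FIFO of fixed length; length 1 keeps only the newest value
            fifo = deque(maxlen=fifo_len)
            outputs = []
            for t in range(cycles):
                fifo.append(t)           # loop 'A': fresh set point every cycle
                if t % 2 == 0:           # loop 'B': running at half speed (behind)
                    outputs.append(fifo.popleft())  # reads the oldest queued element
            return outputs

        print(run(fifo_len=50))  # -> [0, 1, 2, 3] : stale values, cycles behind
        print(run(fifo_len=1))   # -> [0, 2, 4, 6] : always the most recent set point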
    There is also a clear distinction between buffering and the RT-FIFO option.  The RT-FIFO option is used to add a non-blocking layer between network communication and time-critical code in LabVIEW Real-Time applications.  It also provides a safe mechanism to share data between two loops running in a Real-Time application without introducing unnecessary jitter.  Network buffering is a feature that allows a client to receive data change updates from the server even if the client is reading the variable more slowly than the server is writing to it.  In the example I presented above you don't need to enable networking because the shared variable is used entirely within the Real-Time application.  However, it would be appropriate to send control set points from a Windows PC to the Real-Time application using network-published shared variables with the RT-FIFO option enabled.  If it is critical that the Real-Time application execute all commands in the sequence they were sent, then you could enable an appropriate buffer.  If the control application only needs the latest set point from the Windows host, then you can safely disable network buffering (but you should still enable the RT-FIFO option with a length of 1 element).
    Network buffering is especially good if the writer is 'bursty' and the reading rate is relatively constant. In the robot application, I can imagine buffering would be useful if you wanted to send a sequence of timed movements to the Real-Time controller using a cluster of timestamp and set point.  In this case, you may write the sequence values to the variable very quickly, but the Real-Time controller would read the set points out as it proceeded through the movements.
    The following document presents a good overview of shared variable options: http://zone.ni.com/devzone/cda/tut/p/id/4679
    -Nick
    LabVIEW R&D

  • Are clusters or individual elements better for shared variables?

    So...  I have some RT code that is being updated, and pulled out of the Stone Ages of LabVIEW.  It was originally written for an old FieldPoint controller operating in "headless" mode, and used the "publish" and datasocket methods for communications and external control.  I had to get clever way back then, and put together a parsing/unparsing system for strings to send sets of data back and forth between the controller and any HMI or other computer attached.
    Now, I'm completely rewriting the code for a cRIO system, and doing my best to leverage all of the strengths of the latest LabVIEW versions.  I have already done an intermediate stage, where I converted from the publish/datasocket method to using network shared variables for my strings, so I could keep some of the original control and calculation logic.  Now, however, I'm going back to the drawing board for most of the program, with only some of the proven logic being held over into the new version.  And, as I'm putting together the data structures I need for both internal control and external communication, I'm in a bit of a quandary...
    I have come upon a data structure dilemma:  should I use individual shared variables for my data, or assemble associated data into clusters?  My original program had a string (essentially a flattened cluster) for each sensor in use (up to 4), one for the system parameters and states, and one for the control parameters and states.  There was a certain advantage to keeping the data compartmentalized like that, it kept things organized and forced me to avoid too many random references of each data point.  And it kept the number of communications channels limited to just a handful.  Mimicking this structure with cluster shared variables would be easy.  But, I'm not sure it's the best or most network-efficient method.
    I know the bundling/unbundling will add some processor time in my code, that is not new to me (it will still be much faster than my old parsing routines).  But, if I have individual data points being thrown around, I can access them easily from things like Data Dashboard (which is great, but far too limited to be able to grab items in clusters and such).  Having all of my data points individually available would make my project messier, but open up easier access.  It would also dramatically increase the number of data points being thrown around on the network at any one moment.  For reference, I would probably have a maximum of 100 data points at one time, made up of a combination of integers, floats, booleans, integer arrays, boolean arrays.  Or I would have a maximum of 8 clusters that would contain those data points.
    Any suggestions on which way I should lean?  Are there any advantages/disadvantages between shared clusters like the ones I need vs. the number of individual shared variables I would need using the alternative methods?  Network traffic and efficiency are always a concern, particularly since this is a "headless" cRIO in a control situation that must maintain a fast scan rate...
    Thanks for any help.  I'm so stuck on this fence, and I can't figure out which side to fall off!

    Thanks Tim, that is a great source that I somehow missed in my hunt for information regarding my dilemma...
    I have to wonder, though: does that 25 number also include the I/O points on your cRIO?  Does anyone know?  Most of the I/O points are network-shared by default during initial configuration, and you could very quickly exceed 25 variables on an 8-slot rack (such as the one I use, a 9074).  Now I'm a bit worried that I'm overusing the variable engine, even before the communications clusters get figured in...

  • Peculiar behavior of Shared Variable RT FIFO

    I'm trying to "leverage" the enhanced TCP/IP and Shared Variable properties of LabVIEW 8.5.  My application involves (among other things) doing continuous sampling (16 channels, 1 kHz/channel) using 6-year-old PXIs (Pentium III) and streaming data to the host.  I developed a small test routine that was more than capable of handling this data rate, even when I had the host put a 20 ms wait between attending to the PXI (to simulate other processing on the host).  To do this, I enabled the "RT FIFO" property of the Shared Variable (which was an array of 16 I16 integers) and specified a buffer size of 50 (that's 50 arrays).  Key to making this work was figuring out the "error codes" associated with the SV RT FIFO, particularly the one that says the FIFO is empty (so don't save the "non-data" that is present).
    Flush with success, I started developing a more realistic routine that involves rather more traffic between Host and Remote, including the passing back and forth of "event" data.  These include, among other things, "state variables" to enable both host and remote to run state machines that stay "in sync"; in addition, the PXI also acquires digital data (button pushes, etc.) which are other "events" to be sent to the Host and streamed to disk.  I developed the dual state-machine model without including the "analog data" machine, just to get the design of the Host/Remote system down and deal with exchanging digital data through other Shared Variables.  Along the way, I decided to make these also use an RT FIFO, as I didn't want to "miss" any data.  One problem I had noticed when using Shared Variables is the difficulty of telling "is this new?", i.e. is the value currently present one that has already been read (and processed), or something that needs processing?  I ended up adopting something of a kludge for the events by including an incrementing "event ID" that could be tested to see if it was "new".
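    (The event-ID kludge, sketched in Python with a dict standing in for the shared variable -- an illustration of the pattern, not LabVIEW code:)

        last_seen_id = 0

        def poll(event_sv):
            """Return the event payload only if it hasn't been processed yet."""
            global last_seen_id
            if event_sv["id"] != last_seen_id:  # a new write bumped the counter
                last_seen_id = event_sv["id"]
                return event_sv["payload"]
            return None                         # same ID: stale read, ignore it

        event_sv = {"id": 1, "payload": "button 3 pressed"}  # writer bumps id per event
        print(poll(event_sv))  # -> button 3 pressed
        print(poll(event_sv))  # -> None (already seen)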
    Today, I put the two routines together by adding the "generate 16-channels of integer data at 1 KHz and send it to the Host via the Shared Variable" code to my existing Host/Remote state machine.  I used exactly the same logic I'd previously employed to monitor the RT FIFO associated with this Shared Variable (basically, the Host reads the SV, then looks at the error code -- a value of -2220 means "Shared Variable FIFO Read Buffer Empty", so the value you just read is an "old" value, so throw it away).  Very sad -- my code threw EVERYTHING away!  No matter how slowly the Host ran, the indicator always said that the Shared Variable FIFO Read Buffer was empty!  This wasn't true -- if I ignored the flag, and saved anyway, I saw reasonable-looking data (I was generating a sinusoid, and I saw numbers going up and down).  The trouble was that I read many more points than were actually generated, since I read the same values multiple times!
    Looking at the code, the error line coming into the Shared Variable (before it was read) was -2220, and it remained so after it was read.  How could this be?  One possibility is that my other Shared Variables were mucking up the error line, but I would have thought that the SV Engine handling reading my "analog data" SV would have set the error line appropriately for my variable.  On a hunch, I turned off the RT FIFO on the two Event shared variables, and wouldn't you know, this more-or-less fixed it!
    But why?  What is the point of having a shared variable "attached" to an error line and having it return "Shared Variable FIFO Read Buffer Empty" if it doesn't apply to its own Read Buffer?  This seems to me to be a very serious bug that renders this extremely useful feature almost worthless (certainly mega-frustrating).  The beauty of the new Shared Variable structure and the new code in Version 8.5 is that it does seem to allow better and faster communication in real-time using TCP/IP, so we can devote the PXI to "real-time" chores (data acquisition, perhaps stimulus generation) and let the PC handle data streaming, displays, controls, etc.
    Has anyone been successful in developing a data-streaming application using shared variables between a PXI and a PC, particularly one with multiple real-time streams (such as mine, where I have an analog stream from the PXI at 16 x 1 kHz, a digital stream from the PXI at irregular intervals, but possibly up to 300 Hz, and "control" information going between PC and PXI to keep them in step)?  Note that I'm attempting to "modernize" some Version 7 code that (in the absence of a good communication mechanism) is something of a nightmare, with data being kept in PXI memory, written on occasion to the PXI hard drive (!), and then eventually being written up to the PC; in addition, because the data "stayed" on the PXI, we split the signal and ran a second A/D board in the PC just so we could "see" the signal and create a display.  How much better to get the PXI to send the data to the PC, which can sock it away and take samples from the data stream to display as they fly by on their way to the hard drive!
    But I need to get Shared Variables (or something similar) working more "understandably" first ...
    Bob Schor

    Bob,
    The error lines passed into and out of functions are just clusters with a status boolean, an error code, and an error string, and are not "attached" to a particular function as you describe in your post.  Most functions have an error-in input and an error-out output, and most functions will simply do nothing except pass through the error cluster if the error-in status is True (to verify this for yourself, double-click a function such as a DAQmx Read or Write and look at the block diagram: if there is an error passed in, no read/write occurs).  This helps prevent unwanted code from executing when an error does arise in your program.  By wiring the error cluster from your other shared variables to your analog data variable, you're essentially telling LabVIEW that these functions are related and that your analog data variable requires that the other shared variables are functioning properly.  The error wire is a great way to enforce the flow of your program, but you must always consider how it will affect other functions if an error does arise.
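    (That pass-through convention can be mimicked in text form; a hedged Python sketch of the behavior described above, not NI's implementation:)

        def sv_read(error_in):
            """Mimic a LabVIEW node: skip all work if error-in is already set."""
            if error_in["status"]:      # upstream error: do nothing, pass it on
                return None, error_in
            value = 42                  # ...the actual SV read would happen here...
            return value, {"status": False, "code": 0, "source": ""}

        upstream = {"status": True, "code": -2220, "source": "event SV read"}
        value, error_out = sv_read(upstream)
        print(value, error_out["code"])  # -> None -2220 : the read never ran,
                                         # so -2220 persists along the wire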
    Anyways, it's great that you have things more or less working at the moment.  Keep us all updated!

  • Front Panel binding of shared variables very slow initialization / start

    Hello @ all,
    I am using a server running Windows 2000 and the LV 8 DSC run-time system for datalogging. All shared variables are deployed on that server.
    I am now facing the problem that all front panels running on the clients using the network shared variables on the server take a very long time to sync on startup. First the flags on the controls bound to the shared variables turn red; after up to ten minutes they start to turn green. The panels use up to 40 controls bound to the shared variables.
    All firewalls are turned off. I tried to connect the client to the same switch the server is connected to. Same problem. Does anybody have a clue?
    Thx for your quick answers.
    Carsten 

    While I can't offer any solution to your problem, I am having a similar issue running LV8.0 and shared variables on my block diagram (no DSC installed).
    When using network-published shared variables, it takes anywhere from 30 s to 4 min from VI start for any updates to be seen. Given enough time, they will all update normally; however, this 4-minute time lag is somewhat troublesome.
    I have confirmed the issue to be present when running the shared variable engine on windows and RT platforms, with exactly the same results.
    In my case, the worst offenders are a couple of double precision arrays (4 elements each). They will normally exhibit similar "spurty" behavior on startup, and eventually work their way up to continuous and normal update rates. Interestingly enough there are no errors generated by the shared variables on the block diagram.

  • Shared variables update in QuickClient but not in Variable Manager

    Greetings,
    I am using LV 8.6 Developer with the DSC module installed. Also, I'm using WinXP Service Pack 3.
    I am trying to create a VI that will interact with an OPC server connected to some equipment.
    I've created shared variables to read the value of tags in my OPC server exactly according to the instructions on this page:
    http://zone.ni.com/devzone/cda/tut/p/id/6346
    For some reason, though, the value of the shared variable never updates to reflect the value on the server.
    When I follow the tag on the OPC QuickClient, it updates just fine, but somehow the value of the shared variable isn't updating. 
    I also tried retrieving this data using the DataSocket method in the "Multiple OPC Items Monitor.vi" example, but running that VI crashes LabVIEW every time. 
    Also, I can create a shared variable and write to tags using a VI, but I can't read them.  
    It seems like I'm *this* close to getting the shared variables to work. Any suggestions?

    Hi Rob,
    To answer your questions...
    DataSocket: NI doesn't give me the option to investigate the error when I restart the program. I did a search for .cpp files, but none turned up. I did collect the error report that Windows generates, and I attached it. I'm not sure if that would be something useful or not.
    Shared variables: I went to use the Distributed System Manager to take a look at the variables. For both my shared variable and the tags on the I/O data source, the data type is listed as "advanced" and the quality is "unknown bad."  I collected the logs from the I/O server and attached them. Oddly, when I look through there, I see some Quality=BAD and some VT_ILLEGAL flags. The deadband is set to zero, and the update rate is 1,000 ms.
    Also, I've configured the DCOM and Local Security Policy settings as described in numerous support articles, and my Windows Firewall is turned off. The OPC server and LabVIEW reside on the same computer. 
    I suppose the confusing thing is why the QuickClient works fine but the SVE doesn't.  
    I've tried this on three different computers with the same results in all cases. Currently, LV and the OPC server are the only software installed on any machine. The only other OPC server I have access to is the LV server, and it seems to work okay. 
    Thanks,
    Wes
    Attachments:
    a793_appcompat.txt 133 KB
    screen.JPG 105 KB
    OPC I-O Server Diagnostic Information.txt 69 KB

  • Serializing read/write operations of network-published shared variables

    Hi all,
    I'm developing a distributed application (PC + CompactRIO), and using shared variables (SVs) for inter-device communication. Here's my journey so far:
    Intended procedure
    PC parses file
    PC writes processed file data (custom cluster, large) into the 1st SV 
    PC writes a "grab data" signal/command (enum) into the 2nd SV
    cRIO polls the 2nd SV
    cRIO sees the command, then reacts by reading the 1st SV
    Steps #2 and #3 were sequenced, using error wires.
    Unexpected results
    Even after the command was transmitted and the cRIO saw it, the cRIO could not read the data (which I wrote BEFORE the command) -- LabVIEW reported that the buffer was empty.
    The operation succeeded when I placed a wait (5 seconds) between steps #2 and #3.
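    (This smells like asynchronous propagation: the error wire sequences the two local writes, not their arrival at the cRIO. A hypothetical Python sketch of that effect, with dicts and threads standing in for the SVs and the network -- illustrative only:)

        import random, threading, time

        data_sv, cmd_sv = {}, {}  # stand-ins for the two shared variables

        def publish(sv, key, value):
            """The local write returns immediately; propagation lags behind."""
            def deliver():
                time.sleep(random.uniform(0.0, 0.2))  # independent network delay
                sv[key] = value
            threading.Thread(target=deliver).start()

        publish(data_sv, "profile", [1.0, 2.0, 3.0])  # step 2: large data write
        publish(cmd_sv, "cmd", "grab data")           # step 3: command write

        time.sleep(0.25)  # without a wait, a reader may see "grab data" while
                          # data_sv is still empty, matching the observed failure
        print(cmd_sv, data_sv)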
    Questions
    What am I doing wrong, and how do I achieve my desired outcome?
    Is SV I/O asynchronous by design?
    Is it possible to use event-driven programming to handle SV access? (i.e. does LabVIEW signal when the new SV value has propagated across the network?)

    BillMe wrote:
    Why do you have to "notify" the other end that data is available? The subscriber can simply sit in a loop doing a timed read just as if using a queue or notifier IPC. If it doesn't time out, you got new data. If it does time out, do some other processing if needed and then loop back for another timed read.
    My system architecture is command-driven -- the cRIO listens for instructions from the PC interface (sent as an enum via one SV) and performs tasks (motor control) in response. One of the commands (the one described in this thread) happens to be "download a new motion profile from the PC, by reading the other SV". Given that the cRIO is already polling the command channel, I felt there was no need to also poll the data channel (especially since the "download" command is issued very infrequently). Plus, I thought that polling two channels would increase the chances of race conditions or illegal state transitions, particularly if the app is developed over the long term.
    I am also new to LabVIEW, so my current programming style will heavily reflect my C++/Qt background while I get a feel for LabVIEW's strengths and weaknesses -- Qt is a heavily event-driven framework (even for networking!), where polling often means you're doing something wrong.
    Still, thank you for pointing out that I can use timeouts to determine whether new data has arrived -- my subscriber currently writes a null command back into the SV when it has consumed the command, but you showed me that I don't have to.
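    (For reference, the timed-read pattern BillMe describes, sketched with a Python queue standing in for the SV read-with-timeout -- an illustration, not the LabVIEW API:)

        import queue

        commands = queue.Queue()           # stand-in for the command channel SV
        commands.put("download profile")   # writer side: PC issues commands
        commands.put("stop")

        def subscriber_loop():
            while True:
                try:
                    cmd = commands.get(timeout=0.1)  # timed read, 100 ms
                except queue.Empty:
                    continue                  # timed out: no new data, poll again
                if cmd == "stop":
                    break
                print("handling:", cmd)      # fresh command arrived

        subscriber_loop()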

  • CFP-1808 shared variables timestamp

    Hi,
    I need to control and acquire data from a cFP Ethernet module (cFP-1808) to a PXI RT system. I have tested two ways of doing it:
    1. Bind shared variables to the FieldPoint channels. This works great except that I have to record the timestamp of every single channel in all FieldPoint modules, which I want to avoid. The reason I have to record all timestamps is that sometimes two consecutive reads are the same in value and in timestamp; so even if I make sure I get a value every 100 ms in my real-time application, I cannot trust that the shared variable engine will give a genuinely new value every 100 ms, because sometimes one read is the same as the previous one. My understanding of the technology is that the cFP network interface checks for an updated value of the channels of each module at the Advise Rate. If there is a new value, the cFP transmits it to the Shared Variable Engine running in the PXI controller, but if the value has not changed it doesn't transmit anything, so more efficiency is achieved in the network communication. This would explain the timestamp behavior of the shared variables bound directly to FieldPoint channels; since there is no new value, the shared variable is not updated, so the timestamp of two consecutive reads is the same.
    2. Create a Modbus server and then create shared variables bound to the Modbus server channels. With this method, any time my RT application asks for a new value, the shared variable engine provides a genuinely new value with an updated timestamp, and all timestamps in the same module are equivalent (within 1 usec). Therefore, it looks like with the Modbus solution I only have to record one timestamp per module.
    So option 2 probably fixes my problem, but I'd like to understand why. Both methods use the shared variable engine; shouldn't they behave the same way? There is one downside to option 2, which is having to deal with Modbus addresses and not having the ease of use of option 1, where you just browse to find the FieldPoint channel.
    Any comments?
    A final note: it'd be great to be able to use the FieldPoint VIs in LV Real-Time! Can this be achieved?
    Raimon

    Hi Raimon,
    How are you binding to your shared variables?  This KB reviews three methods:
    http://digital.ni.com/public.nsf/allkb/FA610367EC62574186257118005089F2?OpenDocument
    I don't believe using FieldPoint VIs on a PXI RT target is possible because FieldPoint is not a driver that you are able to install on the PXI RT target.  We do have the cFP-2xxx line of RT controllers for FieldPoint, which can communicate directly to the FieldPoint modules, ensuring determinism.
    Trey B
    Applications Engineering
    National Instruments

  • Initialize the shared variable

    "From the issue description it seems that the Shared variable needs to be reinitialize.
    Create a formula to initialize the shared variable value and place it in appropriate section of the Main Report based on the section in which you are displaying shared variable value of the Subreport."  (post from Poonam Thorat 7/7/08)
    This is a solution to my problem.  However, I'm not sure how to initialize the shared variable value nor do I know where to place it in the appropriate section of the main report.  Therefore, what should the formula look like and where should it go in the main report?
    Can anyone help me with this?
    Thanks much!

    Hello
    Let's say you have a report like this:
    Report Header
    Page Header
    Details
    Report Footer
    Page Footer
    If you want to create and initialise a shared variable, create a formula like this.
    For this example, call the formula Rate:
    whileprintingrecords;
    shared numbervar Rate := 1.5; // this can also be a field
    If you needed to use this shared variable in the Details section, you would place the Rate formula in the Report Header or Page Header (above Details, so that it is created before the Details section is created).
    In Details, let's say you have some other calculation - Commission. The Commission formula would look like this:
    whileprintingrecords;
    shared numbervar Rate; // a shared variable must be referenced in each formula in which it is used; it is NOT being initialised or set to any value here, just referenced
    {table.fieldname} * Rate
    Place the Commission formula in the Details section, and Rate will be applied to each record in the report.
    The same process applies for passing values from main report to sub report and vice versa
    There is also plenty of info in the Help and online about using shared variables
    Best regards
    Patrick

  • Shared variable engine OPC delay

    Hello All,
    I've got a bit of a problem with the delay time of updates between a non-NI OPC server and a shared variable engine OPC client. I am using the Red Lion OPC software OPCWorx with a database of around 120 tags to monitor and log data from a small pilot plant. The OPCWorx server has been configured with a 500 ms update rate, and the NI OPC client the same. So here is my problem: logged updates occur no faster than 2.5 s, which is fine as we are exporting in 3 s intervals (though I would still like to increase this to around 1 s), but the main nuisance is the delay in changing control registers in the PLCs. The controls and indicators are bound directly to the shared variable items, and the delay between a change in a boolean control on the front panel and the indicated response can range from 5 to sometimes 15 seconds. If I bind the same control and indicator directly to the OPC item through DataSocket, the delay in response is almost unnoticeable at around 1 s. I have increased the update/logging deadbands of the shared variables, which seemed to help, but only by a second or so.
    I'm wondering what the limitations of using the shared variable engine as an OPC client are, and whether this is expected or I have gone wrong somewhere. All the variables are configured as non-buffered and to have single writers.
    A few changes have been suggested;
    - Upgrading the PC. The current hardware is AMD Phenom 2 3.21GHz processor, 4GB of ram running windows XP.
    - Using Modbus over ethernet instead of OPC.
    This is the first major use of the DSC module and using the shared variable engine as an OPC client so any information or insight to help with this would be much appreciated. 

    Hi Kallen,
    The Shared Variable Engine can become bogged down when there are more than 100 shared variables.  This may be what you are experiencing.  We can check buffering for the variables and disable it, and that should improve performance slightly.  As it is, using datasockets may be the best method for over 100 shared variables.  I apologize for the inconvenience.
    Here are some KnowledgeBase articles that can be of use with regards to this issue:
    How Does Buffering Work for Shared Variables?
    http://digital.ni.com/public.nsf/allkb/5A2EB0E0BC56219C8625730C00232C09?OpenDocument
    Why Is Accessing IO Variables Through The Shared Variable Interface So Slow?
    http://digital.ni.com/public.nsf/allkb/F18AEE4BE7C9496B86257614000C43DF?OpenDocument
    Matt S.
    Industrial Communications Product Support Engineer
    National Instruments

  • Shared variable, missing data, the same timestamp for two consecutive data points

    Hello,
    I have a problem with missing data when I read from a network-published shared variable.
    Host VI:
    In a host VI on my laptop (HP with WinXP Prof.) I'm writing data to the shared variable "data". Between two consecutive write operations there is a minimum waiting time of 10 milliseconds. I use that because I want to be sure that the timestamp for each new data value is different from the previous one (the shared variable timestamp resolution is 1 ms).
    Target VI:
    The target VI on a cRIO-9012 real-time device reads only new data, by comparing the timestamp of a new value with the timestamp of the last value.
    Problem:
    Rarely, I miss a data point (sometimes everything works fine for several hours, transferring thousands of values correctly, before the failure suddenly happens). With a workaround I'm able to catch the missing data. I've discovered that the missing data point has exactly the same timestamp as the last read data point, and is therefore ignored by my "legal" data check.
    To sum up, the missed value is written to the shared variable on the host, but the target ignores it because its timestamp is wrong, i.e. the same as that of the last value, even though the host waits at least 10 milliseconds before writing each new value.
    Note:
    The shared variable is hosted on the laptop and configured with buffering.
    The example is simplified to show only the principal function; in the real application I also use handshaking, and I make sure there is no over- or underflow.
    Simplified Example:
    Question:
    Does anyone have an idea why two consecutive data points can have the same timestamp?
    Where does the (wrong) timestamp ultimately come from (the system?)?
    What would be a possible solution (at the moment, with shared variables)?
    -> I tried a workaround with clusters where each data point gets a unique ID. It works, but it is slower than comparing timestamps and I could get performance problems.
    Would it change anything if I hosted the shared variable on the RT system?
    Thanks for your help
    Regards
    Reto

    Hi Reto,
    I had a look at your modified example.
    Shared variables don't work like queues or notifiers: there is no event or interrupt when a new value has been written (and those are certainly not possible over a network). So if the reader is faster than the writer, you will see the code reading values with the same timestamp more than once (the polling problem). Because the timestamp is written along with the value, you are able to program the way you do: filter out the duplicates when you see the same timestamp.
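    (The duplicate filter described above, sketched in Python with (timestamp_ms, value) pairs standing in for polled SV reads -- an illustration, not NI code:)

        def dedupe(reads):
            """Keep a polled sample only when its timestamp moved forward."""
            last_ts = None
            for ts_ms, value in reads:
                if ts_ms == last_ts:
                    continue       # same timestamp: we polled faster than the writer
                last_ts = ts_ms
                yield value

        # the reader polls faster than the writer, so raw samples repeat:
        reads = [(100, 1.0), (100, 1.0), (112, 2.0), (112, 2.0), (125, 3.0)]
        print(list(dedupe(reads)))  # -> [1.0, 2.0, 3.0]

    (Reto's failure case is exactly when two distinct values share one 1 ms timestamp: this filter then drops the second value, which is why a per-write sequence ID is the more robust key.)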
    Everything is described here:
    http://zone.ni.com/devzone/cda/tut/p/id/4679#toc1
    Laurent talked about a second depth of buffer. Please have a look at the link as well; somewhere in the middle of the tutorial you will see the explanations of the buffer and the RT buffer.
    Regarding your question "Would it change something when I host the shared variable on the RT system?" --> No.
    In my experience, you should decide where to place the Shared Variable Engine after asking some questions about the application.
    You will find the answers to these 3 questions in the link as well:
    Does the application require datalogging and supervisory functionality?
    Does the computing device have adequate processor and memory resources?
    Which system is always online?
    And you're right: the smallest time interval you can see in the timestamp is 1 ms.
    What you can also do is work with an enabled timeout. This might be more performance-efficient than reading the timestamp.
    What I don't know, and haven't been able to find out so far, is whether LabVIEW or the OS adds the timestamp. It's taken from the system time, so it looks like LabVIEW takes the value and adds the timestamp itself.
    I hope this helps
    Alex
    NI Switzerland

  • Value of shared variable incorrect in calculations.

    Hi All,
    Hope someone can help as this has me beat!
    I am doing a simple sales projection report; the only complication is that, being a global report, it needs to be run in a user-selected currency.
    I have a parameter in the main report to allow currency selection, and then a subreport in the header that calls the exchange rate file and stores this in a shared variable.
    The detail sections then show potential sales in base currency and the report currency, i.e. potential value * rate shared variable.
    The problem is that the formula in the main report does not see the rate correctly. To test it I created two formulae - the first multiplies potential by rate, the other multiplies potential by another formula which is just a hard coded '1.47'.
    Selecting $ as the currency, I have the rate shared variable showing correctly in the detail as '1.47', but the formula that uses it sees it as 1.0018 for some reason, so instead of a correct value of $220,500.00 I get $220,107.17.
    Have spent nearly a day trying to track down the issue, so any suggestions at all welcome !

    Thanks for the suggestions.
    'WhilePrintingRecords' was already in my formulae so it is not that.
    I have tried your suggestion of doing the calculation in the subreport and that does not work either.
    To give a bit more detail,  I have 5 formulae -
    @Potential = the value from the database,
    @Rate = the exchange rate from the database (=1.47)
    @x Rate = @Potential x @Rate
    @1.47 = hard coded value of 1.47
    @x 1.47 = @Potential x @1.47
    These then print out in the order above, and as an example I see ...
    @Potential;   @Rate;   @x Rate;      @1.47;     @x 1.47
    219,818.79;   1.47;       322,561.00;   1.47;        323,133.74
    which I get whether I calculate in the main report or the subreport
    A calculator confirms 219,818.79 x 1.47 = 323,133.621 so there is a small error even in the direct calculation. (Another formula with simply '219818.79 * 1.47' gave the same error)
    Obviously, even though an error may be small, it immediately raises concerns with users as to how reliable calculated results are in any Crystal report.
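    (Worth checking: the figures above are consistent with the shared variable holding more precision than the two decimals displayed. The implied rate is 322,561.00 / 219,818.79 ≈ 1.46739, so the stored rate may actually be about 1.4674 and merely display rounded as 1.47. A quick Python check of the arithmetic:)

        potential = 219818.79
        print(322561.00 / potential)  # -> ~1.46739 : the rate actually applied
        print(potential * 1.47)       # -> 323133.62 : what a true 1.47 would give

    (This would explain the @x Rate discrepancy, though not the remaining 0.12 difference in the direct 1.47 calculation, which suggests the displayed @Potential may also differ slightly from the stored value.)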
