Shared variable sampling rate

Hi, I would like to read a network-published shared variable at 1 kHz in SignalExpress, but I'm currently limited to 200 Hz.
Is this possible?
Thanks.

Hi pattegain,
I'm not sure reading shared variables published on the network at 1 kHz is possible. 
I suggest you read this:
Buffered Network-Published Shared Variables: Components and Architecture
http://zone.ni.com/reference/en-XX/help/371268P-01/expresswb/read_shared_variable_se/
However, I tried the Shared Variable examples (Reader and Writer) and I was able to run the Reader with 0.002 s (500 Hz) as the Sample Period parameter. Did you try those examples?
Regards,
Jérémy C.
National Instruments France

Similar Messages

  • NI Scope 5122 Variable Sampling Rate

    Hello,
    I'm using an NI 5122 high-speed digitizer card to acquire data and would like to synchronize the sampling rate of the card to the frequency of the data.  For example, my first set of data will have a frequency of 13.6 MHz, so I'd like to sample at 13.6 MHz.  When I connect a 13.6 MHz signal to the CLK IN on the front panel of the card and write LabVIEW code to sample at this rate (via either the sample clock or reference clock), I receive error messages.  Does anyone know if it's possible to have a truly variable sample rate for this card?
    Thanks,
    Steve
    Solved!

    Hi Steve,
    Thanks for the post and I hope you're well today!
    I noticed you've not had any support thus far.
    I'm not very familiar with NI-SCOPE, but with DAQmx, once the task has been committed you wouldn't be able to alter the sample rate. With pulse train generation, though, you can create what effectively appears to be a different sample rate.
    So unless NI-SCOPE has this functionality built in, I'm not sure it would be possible. 
    Could you maybe attach your code and the error details?
    Kind Regards,
    James Hillman
    Applications Engineer 2008 to 2009 National Instruments UK & Ireland
    Loughborough University UK - 2006 to 2011
    Remember Kudos those who help!

  • Shared Variable Polling Rate

    Hi all,
    I use the NI-DSM on the Windows machine running an LV application that shares variables on the network.
    In the NI-DSM you can set up the update rate, and I can see that the rate changes as expected.
    BUT, on my iPad I still only get an update every 200 ms, thus 5 values per second.
    Bastian

    NI_MikeB wrote:
    Hi
    You cannot change the update rate for the shared variable on Data Dashboard. You can alter it if you use Poll Web Service, but it will still only go to 300 ms. Please bear in mind that, as a remote device, you typically want to limit the update rate. And, as of version 2, you can send arrays of values and supply multiple values at once.
    Mike
    As a remote device you typically want to limit update rate? Why?
    Here is my idea. The company I am working for produces position sensor integrated circuits. I build LV executables to access the IC's RAM, reading the sensor data via a serial interface. This should be done at a data rate of at least 25 values per second. So if you want to show someone how our sensor works, the updates should be faster than the eye can recognize value changes (smooth updates).
    Don't get me wrong. The Dash Board App is great. But I want more. 
    I like the idea of having just one PC running, measuring the data via a USB device and sending the data via WLAN to several iPads. Thumbs up!
    Bastian

  • DASYLAB QUERIES on Sampling Rate and global variable

    Hello,
    I'm a new user of DASYLab and I would like to manipulate the sample rate through a layout;
    using a global variable seems like a good idea, but the address of the sample rate is unknown.
    I hope someone will be able to help!
    Also, one other small thing: when I use a coded switch and I want to modify the text in the Switch window, I don't know how.
    Lots of thanks!

    Hi,
    There is a dedicated place in this forum where you'll certainly get more answers than here:
    http://forums.ni.com/t5/DASYLab/bd-p/50
    Regards,
    Da Helmut

  • Peculiar behavior of Shared Variable RT FIFO

    I'm trying to "leverage" the enhanced TCP/IP and Shared Variable properties of LabView 8.5.  My application involves (among other things) doing continuous sampling (16 channels, 1KHz/channel) using 6-year-old PXIs (Pentium III) and streaming data to the host.  I developed a small test routine that was more than capable of handling this data rate, even when I had the host put a 20msec wait between attending to the PXI (to simulate other processing on the host).  To do this, I enabled the "RT FIFO" property of the Shared Variable (which was an array of 16 I16 integers) and specified a buffer size of 50 (that's 50 arrays).  Key to making this work was figuring out the "error codes" associated with the SV RT FIFO, particularly the one that says the FIFO is empty (so don't save the "non-data" that is present).
    Flush with success, I started developing a more realistic routine that involves rather more traffic between Host and Remote, including the passing back and forth of "event" data.  These include, among other things, "state variables" to enable both host and remote to run state machines that stay "in sync"; in addition, the PXI also acquires digital data (button pushes, etc.) which are other "events" to be sent to the Host and streamed to disk.  I developed the dual state-machine model without including the "analog data" machine, just to get the design of the Host/Remote system down and deal with exchanging digital data through other Shared Variables.  Along the way, I decided to make these also use an RT FIFO, as I didn't want to "miss" any data.  One problem I had noticed when using Shared Variables is the difficulty of telling "is this new?", i.e. is the variable present one that has been already read (and processed) or something that needs processing.  I ended up adopting something of a kludge for the events by including an incrementing "event ID" that could be tested to see if it was "new".
    Today, I put the two routines together by adding the "generate 16-channels of integer data at 1 KHz and send it to the Host via the Shared Variable" code to my existing Host/Remote state machine.  I used exactly the same logic I'd previously employed to monitor the RT FIFO associated with this Shared Variable (basically, the Host reads the SV, then looks at the error code -- a value of -2220 means "Shared Variable FIFO Read Buffer Empty", so the value you just read is an "old" value, so throw it away).  Very sad -- my code threw EVERYTHING away!  No matter how slowly the Host ran, the indicator always said that the Shared Variable FIFO Read Buffer was empty!  This wasn't true -- if I ignored the flag, and saved anyway, I saw reasonable-looking data (I was generating a sinusoid, and I saw numbers going up and down).  The trouble was that I read many more points than were actually generated, since I read the same values multiple times!
    Looking at the code, the error line coming into the Shared Variable (before it was read) was -2220, and it remained so after it was read.  How could this be?  One possibility is that my other Shared Variables were mucking up the error line, but I would have thought that the SV Engine handling reading my "analog data" SV would have set the error line appropriately for my variable.  On a hunch, I turned off the RT FIFO on the two Event shared variables, and wouldn't you know, this more-or-less fixed it!
    But why?  What is the point of having a shared variable "attached" to an error line and having it return "Shared Variable FIFO Read Buffer Empty" if it doesn't apply to its own Read Buffer?  This seems to me to be a very serious bug that renders this extremely useful feature almost worthless (certainly mega-frustrating).  The beauty of the new Shared Variable structure and the new code in Version 8.5 is that it does seem to allow better and faster communication in real-time using TCP/IP, so we can devote the PXI to "real-time" chores (data acquisition, perhaps stimulus generation) and let the PC handle data streaming, displays, controls, etc.
    Has anyone been successful in developing a data-streaming application using shared variables between a PXI and a PC, particularly one with multiple real-time streams (such as mine, where I have an analog stream from the PXI at 16 * 1KHz, a digital stream from the PXI at irregular intervals, but possibly up to 300 Hz, and "control" information going between PC and PXI to keep them in step)?  Note that I'm attempting to "modernize" some Version 7 code that (in the absence of a good communication mechanism) is something of a nightmare, with data being kept in PXI memory, written on occasion to the PXI hard drive (!), and then eventually being written up to the PC; in addition, because the data "stayed" on the PXI, we split the signal and ran a second A/D board in the PC just so we could "see" the signal and create a display.  How much better to get the PXI to send the data to the PC, which can sock it away and take samples from the data stream to display as they fly by on their way to the hard drive!
    But I need to get Shared Variables (or something similar) working more "understandably" first ...
    Bob Schor

    Bob,
    The error lines passed into and out of functions are just clusters with a status boolean, an error code, and an error string, and are not "attached" to a particular function as you describe in your post.  Most functions have an error in input and an error out output, and most functions will simply do nothing except pass through the error cluster if the error in status is True (to verify this for yourself, double click on a function such as a DAQmx Read or Write and look at the block diagram.  If there is an error passed in, no read/write occurs).  This helps prevent unwanted code from executing when an error does arise in your program.  By wiring the error cluster from your other shared variables to your analog data variable, you're essentially telling LabVIEW that these functions are related and that your analog data variable requires that the other shared variables are functioning properly.  The error wire is a great way to enforce the flow of your program, but you must always consider how it will affect other functions if an error does arise.
    Anyways, it's great that you have things more or less working at the moment.  Keep us all updated!
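    As a rough illustration of the error-cluster pass-through behaviour described above, here is a minimal Python sketch (LabVIEW itself is graphical, so this is only a conceptual analogy; the names and the placeholder read are hypothetical):

    # Conceptual analogy of LabVIEW's error cluster: each "function" takes an
    # error in, does no work if an error is already set, and passes it along.
    from dataclasses import dataclass

    @dataclass
    class ErrorCluster:
        status: bool = False   # True means an upstream error is present
        code: int = 0          # e.g. -2220, "Shared Variable FIFO Read Buffer Empty"
        source: str = ""

    def read_shared_variable(name, error_in):
        """Hypothetical read: skips the read entirely if an error is wired in."""
        if error_in.status:
            return None, error_in        # pass the error through untouched
        value = 42                       # placeholder for an actual SV read
        return value, ErrorCluster()

    # Chaining the error wire makes the second read depend on the first:
    value, err = read_shared_variable("analog_data", ErrorCluster())
    value2, err2 = read_shared_variable("events", err)   # skipped if err.status is True

    This is why wiring the event variables' error output into the analog-data read can make a -2220 raised by one variable look as though it belongs to the other.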

  • Slow update of shared variables on RT (cRIO) after building exe

    Hi,
    I've been struggling with this for the past few days.  I am having a problem with slow updating of shared variables on my RT project....but only after building the application into exe's.
    The application consists of an RT target (cRIO 9073) sampling inputs once per second.  I have a host PC running the front panel that updates with the newly acquired values from the cRIO.  These values are communicated via shared variables.
    Once the cRIO samples the inputs, it writes the values to the shared variables, and then flags the data as 'ready to be read' using a boolean shared variable flag.  The hostPC polls this boolean shared variable and updates the indicators on the front panel accordingly.  
    Now, this worked fine during development, but as soon as I built the RT exe and the host exes, it stopped working properly and the shared variables ended up being updated very slowly, with roughly a 2-3 s update time.
    To give you some more background:
    I am running LabVIEW 2010 (v10.0).
    I am deploying the shared variable library on the RT device (as the system must work even without the host PC).  I have checked that it's deploying using the Distributed System Manager, as well as deploying it into the support directory on the cRIO and not the exe itself. 
    I have also disabled all firewalls and my antivirus, plus made sure that the IP's and subnets are correct and its DNS Server address is set to 0.0.0.0.
    There are 25 shared variables all in all, but over half of those are config values only used once or twice at startup.  Some are arrays, plus I don't have any buffering and none of them are configured as RT FIFOs either.
    The available memory on the cRIO is about 15MB minimum.
    What strikes me is that it works fine before building the exes, and it's not like the cRIO code is processor intensive; it's idling 95% of the time.

    That's exactly what I'm saying: it takes 2-3 seconds to update the values.
    I have tried taking out the polling on the PC side, and registered an event on the changing of that shared variable and that doesn't do anything to change the slow update time.  Even if I stop the PC, and just monitor the shared variables in DSM it updates slowly.
    I also tried utilising the "flush shared variables" vi to try to force the update....that does nothing.
    I wire all the error nodes religiously. Still no luck.
    It's very strange; I'm not too sure what's happening here.  These things should be able to update in 10 ms. 
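    For reference, the write-then-flag handshake described at the top of this thread is sketched below in Python (shared variables replaced by a lock-protected dict; purely illustrative of the pattern, not an explanation of the exe slowdown):

    import threading, time

    shared = {"values": None, "data_ready": False}   # stands in for the SV library
    lock = threading.Lock()

    def rt_target():                     # plays the role of the cRIO loop
        for i in range(3):
            with lock:
                shared["values"] = [i, i + 1, i + 2]   # write the new samples
                shared["data_ready"] = True            # then raise the flag
            time.sleep(1.0)                            # 1 s acquisition rate

    def host_pc():                       # plays the role of the host front panel
        seen = 0
        while seen < 3:
            with lock:
                if shared["data_ready"]:
                    print("new values:", shared["values"])
                    shared["data_ready"] = False       # acknowledge the flag
                    seen += 1
            time.sleep(0.05)                           # polling interval

    t = threading.Thread(target=rt_target)
    t.start()
    host_pc()
    t.join()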

  • Error codes for shared variables in Labview 8.5?

    I am trying to use Shared Variables in Labview 8.5 to enable real-time loops (similar to some of the examples in "Using the LabVIEW Shared Variable", published Aug 28, 2007).  I created it to hold the result of a 16-channel A/D converter (so a 16-element I16 array).  To avoid losing samples, I used buffering, with a buffer of 5.  To test this, I made a pair of VIs, one a producer that stuffs a 16-element I16 array into the shared variable "every so often" (controlled by a timed loop), and a consumer loop that reads the shared variable and does something with the data.
    If I think of the buffered shared variable as a Real Time FIFO (as the article suggests it is), I was curious how I would know (a) when the queue was empty, and (b) if the queue had overflowed.  Both are necessary if this is to be a practical means of exchanging data -- you want the producer and consumer to run more-or-less at the same rate, but only the producer is deterministic.  The consumer needs to be able to run "faster" if it falls behind (for example, because it is writing data to disk), but you don't want it to read data from the shared variable if there's nothing there.  [One can always read a shared variable, after all -- as the article states, it simply "holds" the last value written to it].
    Snooping around, I discovered that there are "error codes" associated with the shared variable.  In particular, a code of -2220 (FFFFF754) seems to signify an empty queue (or a shared variable that has not yet been written to), while a code of -1950678981 (8BBB003B) appears to be "buffer overflow".
    Is this documented anywhere?  Are there other "error codes" that would be helpful to know?  Is there some rationale to these seemingly-random numbers?  [It would help to develop code to utilize shared variables if there was a bit less "magic" and "mystery" involved].
    For what it is worth, with a buffer of size 40, I could generate 16 I16 values at 1 KHz (simulating sampling from a 16-channel A/D at 1 KHz) and pass it to a consumer node that (a) read from the shared variable until it was empty, then (b) "went to sleep" for 20 msec (simulating "doing something else non-deterministically"), and not miss any data (because I could then empty the Shared Variable RT-FIFO, which should have been half full, before it overflowed on me).  Not bad throughput -- I bet I could push it even higher.
    Bob Schor

    Hey Bob,
    The errors are documented in the LabVIEW help:
    Shared Variable Error Codes
    Real-Time Shared Variable Error Codes
    There are several error messages for buffer underflow/overflow depending on the settings of the network or RT FIFO buffers. In particular the -2220 and -2221 are useful for the producer/consumer use case. For example (as you probably know) the consumer can flush a variable using the error code (see the attached image).
    Gerardo
    Attachments:
    variable1.png ‏3 KB
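    As a rough illustration of the producer/consumer pattern discussed in this thread, here is a small Python sketch in which a bounded FIFO stands in for the buffered shared variable and queue.Empty plays the role of error -2220 (conceptual only; the numbers mirror Bob's 16-channel, 1 kHz example):

    import queue, threading, time

    fifo = queue.Queue(maxsize=40)           # stands in for the buffered variable

    def producer():                          # deterministic side: 1 kHz timed loop
        for i in range(1000):
            fifo.put([i] * 16)               # one 16-element I16 "sample" per tick
            time.sleep(0.001)

    def consumer():                          # non-deterministic side
        received = 0
        while received < 1000:
            try:
                while True:
                    fifo.get_nowait()        # read until the buffer reports empty
                    received += 1
            except queue.Empty:              # analogous to error -2220
                time.sleep(0.02)             # "do something else" for 20 ms
        print("received", received, "samples without loss")

    t = threading.Thread(target=producer)
    t.start()
    consumer()
    t.join()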

  • Are clusters or individual elements better for shared variables?

    So...  I have some RT code that is being updated, and pulled out of the Stone Ages of LabVIEW.  It was originally written for an old FieldPoint controller operating in "headless" mode, and used the "publish" and datasocket methods for communications and external control.  I had to get clever way back then, and put together a parsing/unparsing system for strings to send sets of data back and forth between the controller and any HMI or other computer attached.
    Now, I'm completely rewriting the code for a cRIO system, and doing my best to leverage all of the strengths of the latest LabVIEW versions.  I have already done an intermediate stage, where I converted from the publish/datasocket method to using network shared variables for my strings, so I could keep some of the original control and calculation logic.  Now, however, I'm going back to the drawing board for most of the program, with only some of the proven logic being held over into the new version.  And, as I'm putting together the data structures I need for both internal control and external communication, I'm in a bit of a quandary...
    I have come upon a data structure dilemma:  should I use individual shared variables for my data, or assemble associated data into clusters?  My original program had a string (essentially a flattened cluster) for each sensor in use (up to 4), one for the system parameters and states, and one for the control parameters and states.  There was a certain advantage to keeping the data compartmentalized like that, it kept things organized and forced me to avoid too many random references of each data point.  And it kept the number of communications channels limited to just a handful.  Mimicking this structure with cluster shared variables would be easy.  But, I'm not sure it's the best or most network-efficient method.
    I know the bundling/unbundling will add some processor time in my code, that is not new to me (it will still be much faster than my old parsing routines).  But, if I have individual data points being thrown around, I can access them easily from things like Data Dashboard (which is great, but far too limited to be able to grab items in clusters and such).  Having all of my data points individually available would make my project messier, but open up easier access.  It would also dramatically increase the number of data points being thrown around on the network at any one moment.  For reference, I would probably have a maximum of 100 data points at one time, made up of a combination of integers, floats, booleans, integer arrays, boolean arrays.  Or I would have a maximum of 8 clusters that would contain those data points.
    Any suggestions on which way I should lean?  Are there any advantages/disadvantages between shared clusters like the ones I need vs. the number of individual shared variables I would need using the alternative methods?  Network traffic and efficiency are always a concern, particularly since this is a "headless" cRIO in a control situation that must maintain a fast scan rate...
    Thanks for any help.  I'm so stuck on this fence, and I can't figure out which side to fall off!
    Solved!

    Thanks Tim, that is a great source that I somehow missed in my hunt for information regarding my dilemma...
    I have to wonder though, does that 25 number also include the I/O points on your cRIO?  Does anyone know?  Most of the I/O points are network-shared by default during initial configuration, and you could very quickly exceed 25 variables on an 8-slot rack (such as the one I use, a 9074).  Now I'm a bit worried that I'm overusing the variable engine, even before the communications clusters get figured in...
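    For what it's worth, the trade-off being weighed here can be sketched in a few lines of Python (conceptual only; publish() is a made-up stand-in for a network write, not an NI API):

    from dataclasses import dataclass, asdict

    @dataclass
    class SensorCluster:                 # stands in for a cluster-typed variable
        temperature: float
        pressure: float
        valve_open: bool
        counts: tuple

    def publish(name, value):
        """Hypothetical network write; here it just reports what would be sent."""
        print(f"write {name!r}: {value}")

    reading = SensorCluster(23.5, 101.3, True, (1, 2, 3, 4))

    # Option 1: one write carries everything (few variables, coarse-grained access)
    publish("sensor1", reading)

    # Option 2: one write per element (many variables, but clients such as
    # Data Dashboard can bind to each point individually)
    for field, value in asdict(reading).items():
        publish(f"sensor1.{field}", value)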

  • Front Panel binding of shared variables very slow initialization / start

    Hello @ all,
    I am using a server running Windows2000 and LV 8 DSC RTS for datalogging. All shared variables are deployed on that server.
    I am now facing the problem that all front panels running on the clients using the network shared variables on the server take very long to sync on startup. First the flags on the controls bound to the shared variables turn red; after up to ten minutes they start to turn green. The panels use up to 40 controls bound to the shared variables.
    All firewalls are turned off. I tried to connect the client to the same switch the server is connected to. Same problem. Does anybody have a clue?
    Thx for your quick answers.
    Carsten 

    While I can't offer any solution to your problem, I am having a similar issue running LV8.0 and shared variables on my block diagram (no DSC installed).
    When using network published shared variables, it takes anywhere from 30 sec to 4 min from the vi start for any updates to be seen. Given enough time, they will all update normally, however this 4 minute time lag is somewhat troublesome.
    I have confirmed the issue to be present when running the shared variable engine on windows and RT platforms, with exactly the same results.
    In my case, the worst offenders are a couple of double precision arrays (4 elements each). They will normally exhibit similar "spurty" behavior on startup, and eventually work their way up to continuous and normal update rates. Interestingly enough there are no errors generated by the shared variables on the block diagram.

  • Shared variables update in QuickClient but not in Variable Manager

    Greetings,
    I am using LV 8.6 Developer with the DSC module installed. Also, I'm using WinXP Service Pack 3.
    I am trying to create a VI that will interact with an OPC server connected to some equipment.
    I've created shared variables to read the value of tags in my OPC server exactly according to the instructions on this page:
    http://zone.ni.com/devzone/cda/tut/p/id/6346
    For some reason, though, the value of the shared variable never updates to reflect the value on the server.
    When I follow the tag on the OPC QuickClient, it updates just fine, but somehow the value of the shared variable isn't updating. 
    I also tried retrieving this data using the DataSocket method in the "Multiple OPC Items Monitor.vi" example, but running that VI crashes LabVIEW every time. 
    Also, I can create a shared variable and write to tags using a VI, but I can't read them.  
    It seems like I'm *this* close to getting the shared variables to work. Any suggestions?

    Hi Rob,
    To answer your questions...
    DataSocket: NI doesn't give me the option to investigate the error when I restart the program. I did a search for .cpp files, but none turned up. I did collect the error report that Windows generates, and I attached it. I'm not sure if that would be something useful or not.
    SharedVariables: I went to use the Distributed System Manager to take a look at the variables. For both my shared variable as well as the tags on the I/O data source, the data type is listed as "advanced" and the quality is "unknown bad."  I collected the logs from the I/O server and attached them. Oddly, when I look through there, I see some Quality=BAD  and some VT_ILLEGAL flags. The deadband is set to zero, and the update rate is 1,000ms.
    Also, I've configured the DCOM and Local Security Policy settings as described in numerous support articles, and my Windows Firewall is turned off. The OPC server and LabView reside on the same computer. 
    I suppose the confusing thing is why the QuickClient works fine, but the SVE doesn't.  
    I've tried this on three different computers with the same results in all cases. Currently, LV and the OPC server is the only software installed on any machine. The only other OPC server I have access to is the LV server, and it seems to work okay. 
    Thanks,
    Wes
    Attachments:
    a793_appcompat.txt ‏133 KB
    screen.JPG ‏105 KB
    OPC I-O Server Diagnostic Information.txt ‏69 KB

  • CFP-1808 shared variables timestamp

    Hi,
    I need to control and acquire data from a cFP Ethernet module (cFP-1808) to a PXI RT system. I have tested two ways of doing it:
    1. Bind shared variables to the Fieldpoint channels. Works great except for the fact that I have to record the timestamp of every single channel in all Fieldpoint modules. I have to avoid this. The reason I have to record all timestamps is that sometimes two consecutive reads are the same in value and in timestamp; so even if I make sure I get a value every 100 ms in my real-time application, I cannot trust that the shared variable engine will give a real new value every 100 ms, because sometimes one read is the same as the previous one. My understanding of the technology is that the cFP network interface checks for an updated value of the channels of each module at the Advise Rate. If there's a new value, the cFP transmits it to the Shared Variable Engine running in the PXI controller, but if the value has not changed it doesn't transmit anything, so more efficiency is achieved in the network communication. This would explain the timestamp behavior of the shared variables bound directly to Fieldpoint channels: since there is no new value, the shared variable is not updated, so the timestamp of two consecutive reads is the same.
    2.  Create a Modbus server and then create shared variables bound to the Modbus server channels. With this method any time my RT application asks for a new value the shared variable engine provides a real new value with an updated timestamp, and all timestamps in the same module are equivalent (in 1usec range).  Therefore, it looks like using the Modbus solution I  only have to record one timestamp per module.
    So option 2 probably fixes my problem, but I'd like to understand why. Both methods use the shared variable engine, shouldn't they behave the same way? There's one downside in option 2, which is having to deal with Modbus addresses and not having the ease of use of option 1 that just browses to find the Fieldpoint channel.
    Any comments?
    A final note, it'd be great to be able to use the Fieldpoint VIs in LV Real-Time! Can this be achieved?
    Raimon

    Hi Raimon,
    How are you binding to your shared variables?  This KB reviews three methods:
    http://digital.ni.com/public.nsf/allkb/FA610367EC62574186257118005089F2?OpenDocument
    I don't believe using FieldPoint VIs on a PXI RT target is possible because FieldPoint is not a driver that you are able to install on the PXI RT target.  We do have the cFP-2xxx line of RT controllers for FieldPoint, which can communicate directly to the FieldPoint modules, ensuring determinism.
    Trey B
    Applications Engineering
    National Instruments
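    Going back to the timestamp behaviour Raimon describes for option 1, the staleness check it implies looks roughly like this in Python (a hypothetical helper, not a FieldPoint or Shared Variable API):

    from datetime import datetime

    last_seen = {}   # channel name -> timestamp of the last accepted read

    def is_new_sample(channel, value, timestamp):
        """Treat a read as new only if its timestamp actually advanced."""
        if last_seen.get(channel) == timestamp:
            return False               # advise rate saw no change: stale repeat
        last_seen[channel] = timestamp
        return True

    # Two reads with identical timestamps count only once:
    t0 = datetime(2024, 1, 1, 12, 0, 0)
    print(is_new_sample("AI0", 1.23, t0))   # True  -> first real update
    print(is_new_sample("AI0", 1.23, t0))   # False -> same value, same timestamp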

  • How to build an array with high sampling rates (>1 kHz)

    Hi All:
    Now I am trying to develop a project with cRIO.
    But I am not sure how to build an array from a high-sampling-rate signal, like >1 kHz (single-point data).
    Previously I used "Build Array" and a "Shift Register" to build an array, but I found this does not work for high sampling rates.
    Is there any other good way to build a data array for high sampling rates?
    Thanks
    Attachments:
    Building_Array_high_rates.JPG ‏120 KB

    Can't give a sample of the FPGA right now but here is a sample bit of RT code I recently used. I am acquiring data at 51,200 samples every second. I put the data in a FIFO on the FPGA side, then I read from that FIFO on the RT side and insert the data into a pre-initialized array using "Replace Array Subset" NOT "Insert Into Array". I keep a count of the data I have read/inserted, and once I am at 51,200 samples, I know I have 1 full second of data. At this point, I add it to a queue which sends it to another loop to be processed. Also, I don't use the new index terminal in my subVI because I know I am always adding 6400 elements so I can just multiply my counter by 6400, but if you use the method described further down below, you will want to use the "new index" to return a value because you may not always read the same number of elements using that method.
    The reason I use a timeout of 0 and a wait until next ms multiple is because if you use a timeout wired to the FIFO read node, it spins a loop in the background that polls for data, which rails your processor. Depending on what type of acquisition you are doing, you can also use the method of reading 0 elements, then using the "elements remaining" variable, to wire up another node as is shown below. This was not an option for me because of my program's architecture and needing chunks of 1 second data. Had I used this method it would have overcomplicated things if I read more elements than I had available in my 51,200 buffer.
    Let me know if you have more questions.
    CLA, LabVIEW Versions 2010-2013
    Attachments:
    RT.PNG ‏36 KB
    FIFO read.PNG ‏4 KB
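    The replace-not-insert idea described above translates roughly to the following Python sketch (slice assignment into a pre-initialized list plays the part of "Replace Array Subset", and a queue hands each full second of data to another loop; on_fifo_read is a hypothetical stand-in for the FPGA FIFO read):

    import queue

    SAMPLE_RATE = 51_200                    # samples per second
    CHUNK = 6_400                           # elements returned per FIFO read

    process_q = queue.Queue()               # hands full seconds to another loop
    buffer = [0.0] * SAMPLE_RATE            # pre-initialized, never resized
    count = 0                               # how many chunks are in the buffer

    def on_fifo_read(chunk):
        """Called with each block of CHUNK new samples from the FIFO."""
        global count
        buffer[count * CHUNK:(count + 1) * CHUNK] = chunk   # replace, don't insert
        count += 1
        if count * CHUNK == SAMPLE_RATE:    # one full second collected
            process_q.put(list(buffer))     # queue a copy for processing
            count = 0

    # Simulate 8 reads of 6,400 samples = 51,200 samples = 1 s of data:
    for i in range(8):
        on_fifo_read([float(i)] * CHUNK)
    print("seconds queued:", process_q.qsize())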

  • Initialize the shared variable

    "From the issue description it seems that the Shared variable needs to be reinitialize.
    Create a formula to initialize the shared variable value and place it in appropriate section of the Main Report based on the section in which you are displaying shared variable value of the Subreport."  (post from Poonam Thorat 7/7/08)
    This is a solution to my problem.  However, I'm not sure how to initialize the shared variable value nor do I know where to place it in the appropriate section of the main report.  Therefore, what should the formula look like and where should it go in the main report?
    Can anyone help me with this?
    Thanks much!

    Hello
    Lets say you have a report like this
    Report Header
    Page Header
    Details
    Report Footer
    Page Footer
    If you want to create and initialise a shared variable, create a formula like this.
    For this example, call this formula Rate:
    whileprintingrecords;
    shared numbervar Rate := 1.5; // this can also be a field
    If you needed to use this shared variable in the Details section, then you would place the Rate formula in the Report Header, or Page Header (above Details so that it is created before the Details section is created)
    In the Details section, let's say you have some other calculation - Commission
    The Commission formula would look like this:
    whileprintingrecords;
    shared numbervar Rate; // a shared variable must be referenced in each formula in which it will be used.  It is NOT being initialised or set to any value here, just referenced
    {table.fieldname} * Rate
    Place the Commission formula in the Details section and Rate will be applied for each record in the report
    The same process applies for passing values from main report to sub report and vice versa
    There is also plenty of info in the Help and online about using shared variables
    Best regards
    Patrick

  • Shared variable engine OPC delay

    Hello All,
    I've got a bit of a problem with the delay time of updates between a non-NI OPC server and a Shared Variable Engine OPC client. I am using the Red Lion OPC software OPCWorx with a database of around 120 tags to monitor and log data from a small pilot plant. The OPCWorx server has been configured with a 500 ms update rate and the NI OPC client the same. So here is my problem: logged updates occur no faster than 2.5 s, which is fine as we are exporting in 3 s intervals (I would still like to increase the performance of this to around 1 s), but the main nuisance is the delay in changing control registers in the PLCs. The controls and indicators are directly bound to the shared variable items, and the time from a change in a Boolean control on the front panel to the indicated response can range from 5 to sometimes 15 seconds. If I bind the same control and indicator directly to the OPC item through the DataSocket, the delay in response is almost unnoticeable at around 1 s. I have increased the update/logging deadbands of the shared variables, which seemed to help, but only by a second or so.
    I'm wondering what the limitations of using the Shared Variable Engine as an OPC client are, and whether this is expected or I have gone wrong somewhere. All the variables are configured as non-buffered and to have single writers.
    A few changes have been suggested;
    - Upgrading the PC. The current hardware is AMD Phenom 2 3.21GHz processor, 4GB of ram running windows XP.
    - Using Modbus over ethernet instead of OPC.
    This is the first major use of the DSC module and using the shared variable engine as an OPC client so any information or insight to help with this would be much appreciated. 

    Hi Kallen,
    The Shared Variable Engine can become bogged down when there are more than 100 shared variables.  This may be what you are experiencing.  We can check buffering for the variables and disable it, and that should improve performance slightly.  As it is, using datasockets may be the best method for over 100 shared variables.  I apologize for the inconvenience.
    Here are some KnowledgeBase articles that can be of use with regards to this issue:
    How Does Buffering Work for Shared Variables?
    http://digital.ni.com/public.nsf/allkb/5A2EB0E0BC56219C8625730C00232C09?OpenDocument
    Why Is Accessing IO Variables Through The Shared Variable Interface So Slow?
    http://digital.ni.com/public.nsf/allkb/F18AEE4BE7C9496B86257614000C43DF?OpenDocument
    Matt S.
    Industrial Communications Product Support Engineer
    National Instruments

  • Shared Variable Conflict with RT Startup App

    I think I am running in circles trying to determine what my problem is.  Please look at the screenshot I have attached.
    I set up three different network shared variables (two in one library, the third in a second library) on my RT target.  I have deployed them manually probably half a dozen times.  I then build the executable, and tell it to make it a startup app.
    I then created alias shared variables on my user computer.  They are set up as PSP variables that I linked through the dialog boxes NI provides.  A sample URL of one of these variables is:  \\169.254.0.2\rtDataRouter\Data
    I deployed this library on my user computer.  From what I was reading, I am not sure if that is the correct thing to do if I am having the RT target host the main variables (we want to be able to hook up multiple clients at remote locations, hence have the RT host them).  So I tried it both ways, with the aliases deployed and without.
    If everybody is still with me: when I run purely in the development environment (LabVIEW 2009 SP1), it deploys everything to the RT target, I then run my local app, it says it is deploying items, and then it runs, and the shared variables seem to talk with no problem.  It doesn't seem to matter whether the aliases were deployed or not (I do have auto deploy turned off).
    Next, I try deploying the RT startup app, it reboots, I hit the run arrow on the local machine, and I get the conflict window!   Again, I try manually deploying the libraries on both the RT and the local machine, rebuild the executable, reboot the RT, run the local app (still in the development environment), and I get the conflict window!
    Part of me thinks, I am simply missing turning one thing on or off.  From the tutorials on NI's site, I think I am in the ballpark, what am I missing???
    My project is 99.99% done other than getting the executables done.  Any insight and help you can give would be HUGE for me to get this out the door!!!
    Attachments:
    screenshot.jpg ‏301 KB

    Make sure you have Autodeployment turned Off on your cRIO targets as well as your Windows Target.
