SLOW PERFORMANCE OF SHARED VARIABLES

Hello,
We have developed an application in LabVIEW 8.6.1 for testing mechanical equipment, using a cRIO 9074 (Real-Time controller).
In this application we use network-published shared variables for communication between the Real-Time controller and the host PC. All shared variables are deployed on the host PC.
Now we want to access the same application (the host PC application) from a tablet PC over Wi-Fi.
We have created two separate .exe files to run the application on the host PC and the tablet PC. The shared variable response between the host PC .exe and the RT controller is good, but when we run the tablet PC .exe the response is sluggish (slow).
In this case the shared variables (network-published) are created on the host PC, and we are trying to access them from both the host PC and the tablet PC.
In our application,  
1) Both the host PC and the tablet PC run Windows XP.
2) The programs on the host PC and the tablet PC are identical.
3) Both PCs are on the same subnet.
Please provide appropriate help on this issue.
Thanks in advance....
Sachin 

Have you seen this?

Similar Messages

  • Shared variable vs. data binding

    Hi,
    I am trying to write a program that communicates with motor drives over Ethernet, using an OPC server and shared variables created from the PLC. I created front panel indicators in a while loop with a 50 ms cycle to read motor encoder position, status, etc. When I compare wiring the shared variable directly to the indicator against setting the indicator's data binding property to browse the same variable from the project library, data binding seems to respond more slowly than wiring the shared variable to the indicator directly. Can anyone explain why?

    Hi mhn05, 
    I'm not exactly sure why it would be operating slower.  Do you have code that you can upload that benchmarks or tests the speed of these, side by side? 
    Thanks, 
    Dave T.
    National Instruments
    FlexRIO & R-Series Product Support Engineer
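    A rough way to produce the side-by-side numbers Dave asks for is to time each access path over many iterations and report reads per second. The forum code is LabVIEW, so the sketch below is only a Python stand-in for the benchmarking idea; read_via_wire and read_via_binding are hypothetical placeholders for the two access paths being compared.

        import time

        def benchmark(read_fn, iterations=10_000):
            """Time a read callable and return the achieved reads per second."""
            start = time.perf_counter()
            for _ in range(iterations):
                read_fn()
            return iterations / (time.perf_counter() - start)

        # Hypothetical stand-ins for the two access paths being compared.
        def read_via_wire():
            pass        # e.g. read the shared variable node wired to the indicator

        def read_via_binding():
            pass        # e.g. read through the indicator's data binding

        print(f"wired read:   {benchmark(read_via_wire):.0f} reads/s")
        print(f"data binding: {benchmark(read_via_binding):.0f} reads/s")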

  • Slow performance to read/write shared variables programmatically

    We are using DataSocket read and write functions to read and write shared variables programmatically (on the same machine), but we only achieve a performance of approximately 200 reads/writes per second. We are using LabVIEW 8.6 with DSC.
    Is it possible to get better results, or is that performance normal?
    Any help would be appreciated. Thank you in advance.

    Hi MMCDAT,
    I think this value can be normal, as you can see in this link:
    http://zone.ni.com/devzone/cda/tut/p/id/5037
    As you can see, the limit for DataSocket depends on your Ethernet limitations, even if you are using it just on one PC:
    http://digital.ni.com/public.nsf/websearch/6AC9E65734E53F9A8625672400637ECC?OpenDocument
    You can improve the performance by changing the update mode or the VI configuration:
    http://digital.ni.com/public.nsf/allkb/F8F7DE98856B50588625672400648045?OpenDocument
    http://digital.ni.com/public.nsf/allkb/2D9C6D73A160537986256B290076456E?OpenDocument

  • RT target slows down after updating with a shared variable

    I am using a shared variable that is defined in the RT target (cFP-2110) to transfer data from the host PC.  The shared variable is a cluster of scaling parameters that needs to be stored in the RT target.  The host PC is only connected temporarily for system setup and calibration. 
    There is only one instance of the shared variable in both the host PC VI and the target VI.  The shared variable write in the host is located in an event case that fires on a value change.  Everything works great until I make a value change in the shared variable.  The target VI's loop time promptly slows from 30 ms to about 500 ms and stays that way.  The only way I can get the RT target to run at full speed again is to cycle power or reset it.
    What am I doing wrong?
    Temporarily slowing down the RT target when the shared variable is updated is not a problem for this application.  The problem is that I have to restart the target to get it running at full speed again.
    I’m using Win-XP, LV 8.5.1 and FieldPoint 6.0.1 
    Thanks,
    Dave J.

    Hi Mark,
    The host is sending the shared variable via an event case.   When the event is activated, the data is sent to the target and it functions as expected, except that the target slows down to a crawl. I have to restart the target to get it working up to speed again.   There are no error codes.
    The shared variable is a cluster of several items; it is attached.
    In the meantime, to keep the project moving, I split the cluster into smaller clusters sent through several shared variables, and now I do not have a slowdown problem. In other words, it works great. As an experiment I sent the several small clusters at one time, and the slowdown problem came back.
    I've come to the conclusion that the large single cluster (which is attached) is too much to send at one time.
    Please advise if there is something else to learn or understand.
    Thanks,
    Dave J.
    Attachments:
    Host Data In.ctl ‏33 KB
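    For what it's worth, the workaround described above (splitting one large parameter cluster into several smaller ones and writing each separately) can be pictured generically. This is a minimal sketch, assuming a Python dictionary stands in for the LabVIEW cluster; the field names, the groupings, and the publish() helper are all made up for illustration.

        def publish(variable_name, value):
            """Placeholder for writing one (smaller) shared variable."""
            print(f"writing {variable_name}: {value}")

        # One large "cluster" of scaling/calibration parameters (field names invented).
        big_cluster = {
            "gain": 1.25, "offset": 0.02, "units": "psi",
            "alarm_high": 95.0, "alarm_low": 5.0,
            "cal_date": "2009-06-01", "operator": "DJ",
        }

        # Instead of one large write, group related fields and write each group
        # to its own smaller shared variable.
        groups = {
            "scaling":  {k: big_cluster[k] for k in ("gain", "offset", "units")},
            "alarms":   {k: big_cluster[k] for k in ("alarm_high", "alarm_low")},
            "metadata": {k: big_cluster[k] for k in ("cal_date", "operator")},
        }

        for name, subcluster in groups.items():
            publish(name, subcluster)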

  • Slow update of shared variables on RT (cRIO) after building exe

    Hi,
    I've been struggling with this for the past few days.  I am having a problem with slow updating of shared variables in my RT project, but only after building the application into executables.
    The application consists of an RT target (cRIO 9073) sampling inputs once per second.  I have a host PC running the front panel, which updates with the newly acquired values from the cRIO.  These values are communicated via shared variables.
    Once the cRIO samples the inputs, it writes the values to the shared variables and then flags the data as 'ready to be read' using a Boolean shared variable.  The host PC polls this Boolean shared variable and updates the indicators on the front panel accordingly.
    Now, this worked fine during development, but as soon as I built the RT and host executables it stopped working properly: the shared variables ended up updating very slowly, with roughly a 2-3 second update time.
    To give you some more background:
    I am running LabVIEW 2010 (v10.0).
    I am deploying the shared variable library on the RT device (as the system must work even without the host PC).  I have checked that it is deployed using the Distributed System Manager, and that it is deployed into the support directory on the cRIO rather than into the exe itself.
    I have also disabled all firewalls and my antivirus, and made sure that the IPs and subnets are correct and that the DNS server address is set to 0.0.0.0.
    There are 25 shared variables in all, but over half of them are configuration values only used once or twice at startup.  Some are arrays.  I don't have any buffering enabled, and none of them are configured as RT FIFOs either.
    The available memory on the cRIO is about 15 MB minimum.
    What strikes me is that it works fine before building the executables, and it's not as if the cRIO code is processor intensive; it's idling 95% of the time.

    That's exactly what I'm saying: it takes 2-3 seconds to update the values.
    I have tried taking out the polling on the PC side and registering an event on the change of that shared variable instead, and that does nothing to change the slow update time.  Even if I stop the PC and just monitor the shared variables in DSM, they update slowly.
    I also tried using the "flush shared variables" VI to try to force the update; that does nothing.
    I wire all the error nodes religiously. Still no luck.
    It's very strange; I'm not too sure what's happening here.  These things should be able to update in 10 ms.
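    The write-then-flag handshake described in this thread (the target writes the data variables, then raises a Boolean 'ready' flag that the host polls) can be sketched generically. The following is a minimal Python stand-in for that pattern, not the LabVIEW implementation; a plain dictionary plays the role of the network-published shared variables.

        import threading
        import time

        shared = {"samples": None, "data_ready": False}   # stand-in for the shared variables
        lock = threading.Lock()

        def rt_target():
            """Simulated cRIO loop: acquire once per second, publish, then raise the flag."""
            for i in range(5):
                time.sleep(1.0)                        # 1 s acquisition period
                with lock:
                    shared["samples"] = [i, i * 2.0]   # write the data variables first...
                    shared["data_ready"] = True        # ...then flag them as ready

        def host_pc():
            """Simulated host loop: poll the flag, read the data, clear the flag."""
            received = 0
            while received < 5:
                time.sleep(0.05)                       # 50 ms poll period
                with lock:
                    if shared["data_ready"]:
                        print("new samples:", shared["samples"])
                        shared["data_ready"] = False   # acknowledge so the next update is seen
                        received += 1

        worker = threading.Thread(target=rt_target)
        worker.start()
        host_pc()
        worker.join()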

  • Performance of Modbus using DSC Shared Variables

    I'm fairly new at using Modbus with LabVIEW.  Out of the roughly dozen tools and APIs that can be used, for one project I'm working on I decided to try using Shared Variables aliased to Modbus registers in the project, which is a DSC tool.  It seemed like a clever way to go.  I've used Shared Variables in the past, though, and am aware of some of the issues surrounding them, especially when the number of them begins to increase.  I'll only have about 120 variables, so I don't think it will be too bad, but I'm beginning to be a bit concerned...
    The way I started doing this was to create a new shared variable for every data point.  What I've noticed since then is that there is a mechanism for addressing multiple registers at once using an array of values.  (Unfortunately, even if I wanted to use the array method, I probably couldn't.  The Modbus points I am interfacing to are for a custom device, and the programmer didn't bother using consecutive registers...)  But in any case, I was wondering what the performance issues might be surrounding this API.
    I'm guessing that:
    1) All the caveats of shared variables apply.  These really are shared variables; it's only that DSC teaches the SV Engine how to go read them.  Is that right?
       And I'm wondering:
    2) Is there any performance improvement for reading an array of consecutive variables rather than reading each variable individually?
    3) Are there any performance issues above what shared variables normally have, when using Modbus specifically?  (E.g. how often can you read a few hundred Modbus points from the same device?)
    Thanks,
        DaveT
    David Thomson Original Code Consulting
    www.originalcode.com
    National Instruments Alliance Program Member
    Certified LabVIEW Architect
    There are 10 kinds of people: those who understand binary, and those who don't.
    Solved!
    Go to Solution.

    Anna,
        Thanks so much for the reply.  That helps a lot.
        I am still wondering about one thing, though.  According to the documentation, the "A" prefix in a Modbus DSC address means that it will return an array of data, whereas something like the F prefix is for a single precision float.  When I create a channel, I pick the F300001 option, and the address that is returned is a range:  F300001 - F365534.  The range would imply that a series of values will be returned, e.g. an array.  I always just delete the range and enter a single address.  Is that the intention?  Does it return the range just so you know the range of allowed addresses?
       OK, I'm actually wondering two things.  Is there a reason why the DSC addresses start with 1, e.g. F300001, instead of 0, like F300000?  For the old Modbus API from LV7, one of the devices we have that uses that API has a register at 0.  How would that be handled in DSC?
    Thanks,
        Dave
    David Thomson Original Code Consulting
    www.originalcode.com
    National Instruments Alliance Program Member
    Certified LabVIEW Architect
    There are 10 kinds of people: those who understand binary, and those who don't.
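    On the question of array reads of consecutive registers versus individual reads: at the protocol level, one Modbus Read Holding Registers request can return up to 125 consecutive registers, so a single block read replaces many request/response round trips. The sketch below is a generic Python illustration of that one-request framing over Modbus/TCP, not NI's DSC implementation; the host address and register span in the usage line are made up.

        import socket
        import struct

        def read_holding_registers(host, start_addr, count, unit_id=1, port=502):
            """Issue one Modbus/TCP Read Holding Registers (function 0x03) request.

            A single request may return up to 125 consecutive registers, which is
            why consecutive addressing is much cheaper than one request per point.
            """
            pdu = struct.pack(">BHH", 0x03, start_addr, count)        # function, start, quantity
            mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)  # transaction, protocol, length, unit
            with socket.create_connection((host, port), timeout=2.0) as sock:
                sock.sendall(mbap + pdu)
                header = sock.recv(9)                                 # MBAP (7) + function + byte count
                _, _, _, _, func, byte_count = struct.unpack(">HHHBBB", header)
                if func & 0x80:
                    raise IOError("Modbus exception response")
                data = sock.recv(byte_count)
                return list(struct.unpack(">" + "H" * count, data))

        # Hypothetical usage: one round trip instead of 60 single-register reads.
        # values = read_holding_registers("192.168.1.10", start_addr=0, count=60)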

  • Front Panel binding of shared variables very slow initialization / start

    Hello @ all,
    I am using a server running Windows 2000 and LV 8 DSC RTS for datalogging. All shared variables are deployed on that server.
    I am now facing the problem that all front panels running on the clients using the network shared variables on the server take very long to sync on startup. First the binding flags on the controls bound to the shared variables turn red; after up to ten minutes they start to turn green. The panels use up to 40 controls bound to the shared variables.
    All firewalls are turned off. I tried to connect the client to the same switch the server is connected to. Same problem. Does anybody have a clue?
    Thx for your quick answers.
    Carsten 

    While I can't offer any solution to your problem, I am having a similar issue running LV8.0 and shared variables on my block diagram (no DSC installed).
    When using network-published shared variables, it takes anywhere from 30 seconds to 4 minutes from VI start for any updates to be seen. Given enough time, they will all update normally; however, this 4-minute time lag is somewhat troublesome.
    I have confirmed the issue to be present when running the shared variable engine on windows and RT platforms, with exactly the same results.
    In my case, the worst offenders are a couple of double precision arrays (4 elements each). They will normally exhibit similar "spurty" behavior on startup, and eventually work their way up to continuous and normal update rates. Interestingly enough there are no errors generated by the shared variables on the block diagram.

  • Modbus and shared variable performance in large application

    Hi all,
    I am preparing to work on an application which is going to read up to 500 Modbus input registers on a CompactRIO over Modbus Ethernet using the LabVIEW RT Modbus I/O Server implementation.  I've put together some minor test VIs on the local network to test the Modbus connectivity and to understand the shared variable binding mechanism.
    To save potential headaches in the future, do you all have any best programming/project management practices for high channel count Modbus applications?  Has anyone done high channel count testing (similar to the link below) but for shared variables bound to a Modbus I/O Server?  Any caveats I should keep in mind?
    Performance Benchmarks for Network Published Shared Variables
    http://www.ni.com/tutorial/14675/en/
    Thanks,
    Chris
    d2itechnologies.com

    If your application can deal with it, I would recommend staying clear of the network-published option.
    When I started my Modbus development on cRIO, I left it enabled, and with ~100 shared variables on a 9074 the CPU was railing, and I saw buffering behavior on the shared variables (which was not desirable in my application).
    In my application I am using the old Modbus library (as opposed to the new API) for cRIO-to-slave comms, the cRIO being the master.
    I am also using the I/O Server to make the cRIO a slave to an external SCADA, and it passes essentially the same data arrays as I use with the Modbus library for my local HMI (not an NI product): two full Modbus frame writes (~120 words each, plus about 60 more words, for ~300 words outbound from the cRIO).
    The I/O Server slave was a recent addition and did not add much to the CPU load; only 16 bytes are high speed, and the balance of the total word package is at either 1 second or 3 seconds.
    So, in my experience, the network-published option adds significant CPU loading (on entry-level cRIOs). YMMV.
    I am a huge fan of the shared variable engine (some at NI were pushing the CVT, TCE, etc.). However, most of my shared variables are not the network-published variety; only the local module channels have remained network-published, for DSM (Distributed System Manager) use.
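    One practical detail for a 500-register map: a single Modbus read request is limited to 125 registers, so a contiguous span has to be split into a handful of block reads (and a non-consecutive custom register map may force more requests still). A small, generic sketch of that grouping, with made-up addresses:

        MAX_REGS_PER_READ = 125   # Modbus limit for one read-registers request

        def plan_block_reads(start_addr, total_regs, max_per_read=MAX_REGS_PER_READ):
            """Split a contiguous register span into (start, count) block-read requests."""
            requests = []
            addr, remaining = start_addr, total_regs
            while remaining > 0:
                count = min(remaining, max_per_read)
                requests.append((addr, count))
                addr += count
                remaining -= count
            return requests

        # 500 consecutive input registers starting at a hypothetical address 0
        # collapse into four requests instead of 500 single-point reads.
        print(plan_block_reads(0, 500))   # [(0, 125), (125, 125), (250, 125), (375, 125)]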

  • Slow update of Shared Variables on FP-2015

    My application consists of a VI running on an FP-2015 which collects and transmits data from/to various I/O modules (DI-301, AI-111, DO-401, etc.) and passes this data to a host PC using shared variables.  It runs embedded on the FP-2015 so that it can react to certain I/O conditions and implement the appropriate safety measures independently of the host PC.  When I try to change some of the values of the shared variables from the host PC, it seems to take a long time for them to update on the FieldPoint controller.  For instance, if I try to change the value of a digital output on the DO-401 from the host machine, it can take anywhere from 1-3 seconds before the I/O point actually toggles.  I am using an array of Booleans as the shared variable type.  I am pretty sure the code is running at the correct speed and that this delay is caused by the shared variables, because if I remove the shared variable and toggle the digital outputs from high to low and low to high, they switch at the appropriate rates.  Has anyone had any problems using shared variables on an FP-2015?  (The shared variables are hosted on the PC, not the 2015.)
    Also, I have seen some postings which say the FP-20xx does not have enough memory to run the shared variable server.  Should I be OK if the server is running on the host PC?  The first link below says the shared variable engine will not work properly on the FP-2015 because it does not have enough memory, but the second link says 32 MB are required (although 64 are recommended), which the FP-2015 has.  Can anyone confirm whether shared variables will work properly on the FP-2015?  In all, I presently have 6 shared variables, each one being an array with a length of 20 elements.
     http://digital.ni.com/public.nsf/allkb/6e37ac5435e44f9f862570d2005fef25
    http://zone.ni.com/devzone/cda/tut/p/id/4679
    Thanks
    Damien

    Thanks Tommy, I appreciate the advice.  However, I think I will be abandoning the network-published shared variables for now.  Before I left the office yesterday, I found the "communication wizard" for the Real-Time module.  It allowed me to select my top-level VI, and it automatically set up a communication routine to transmit all the controls and indicators from the RT target to the VI running on my host PC.  It allowed me to select from multiple communication protocols (I selected TCP), and everything is working well now.  I would have liked to use shared variables because they seem so simple to use, but I do not want to waste time trying to hunt down a problem I may never fix.  For now, out with the shared variables, in with TCP.
    P.S.  I didn't change any of my code (I was using timed loops, in case you were wondering), and by changing over to TCP communication all my delay problems were solved.  This reinforces my suspicion that the delays were caused by the shared variables.
    Damien 
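    The TCP fallback Damien describes boils down to a plain command/acknowledge exchange between the host and the controller. The sketch below is only a generic Python illustration of that kind of link (the host sends a Boolean-array command with a length prefix and the controller acknowledges it), not the code generated by the Real-Time communication wizard; the port number and framing are invented for the example.

        import socket
        import struct
        import threading

        PORT = 50007  # hypothetical port for the host-to-controller link

        def controller(server_sock):
            """Stand-in for the FP-2015 side: receive a Boolean-array command, apply it, ack."""
            conn, _ = server_sock.accept()
            with conn:
                n = struct.unpack(">I", conn.recv(4))[0]   # 4-byte length prefix
                bits = list(conn.recv(n))                  # one byte per Boolean output
                print("controller applying outputs:", bits)
                conn.sendall(b"ACK")

        def host(command_bits):
            """Stand-in for the host PC side: send the command and wait for the ack."""
            with socket.create_connection(("127.0.0.1", PORT)) as sock:
                payload = bytes(command_bits)
                sock.sendall(struct.pack(">I", len(payload)) + payload)
                print("host got:", sock.recv(3))

        with socket.create_server(("127.0.0.1", PORT)) as srv:   # bind before the host connects
            t = threading.Thread(target=controller, args=(srv,))
            t.start()
            host([1, 0, 1, 1])
            t.join()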

  • Shared Variable programmatic access error -1967362038

    I am trying to read/write about 300 tags from an AB ControlLogix 5561 processor from LabVIEW 2011 on Windows XP SP2 through the RSLinx Classic OEM OPC Server.
    At first, I tried to do everything through front panel data binding via DataSocket, but I've seen that it becomes slow/unreliable when the bound control count reaches beyond 100 tags or so. Wrong values are read/written, and some controls just won't bind initially or lose their binding; it's really odd. So I've put all of my faith in LabVIEW's DSC module to solve my problems.
    I installed it on a development system, played around with the programmatic access VIs using NI's test OPC server, and everything worked great. It really looked like the answer to all of my problems, but upon installing it on the target system I can't get it to work the same way!
    I've created a library of just 3 variables bound to the PLC through an I/O server pointing at the local instance of RSLinx. I am able to browse the PLC tag structure, so at least the link through RSLinx is working to some extent.
    Fairly often, when running the search variable container VI on that library, I get error -1967362038 with the explanation "IAK_SHARED". When this error occurs, the same error shows up in the Distributed System Manager and the shared variables can no longer be read. I can't seem to pin down a pattern, but it happens every few minutes, or roughly every 3rd or 4th time I start the VI. When it happens, the only way I've found to get things working again is to undeploy and redeploy the library while watching it in the Distributed System Manager, then continually loop the search variable container VI for a few seconds until, eventually, the variable comes back to life.
    Does this sound like an issue with RSLinx? One of the previous posts related to this error mentioned that it may be due to some corrupt files in MAX?
    Other network published shared variables (non-OPC) seem to be OK so I think this is a problem related to the DSC module.  
    I rebooted twice immediately after the DSC installation.
    I have uninstalled SQL Server 2005 and the DSC module and reinstalled both. Twice.
    What could be causing this error?
    Would it be worth while to port over to the NI OPC Server?

    Hi pjrose,
    First, I notice that this post is pretty similar to a service request that one of my colleagues has been working on. Are you working with another NI Applications Engineer on this issue?
    That said, the issue could very well be related to the DSC module. I doubt that the error is a MAX corruption error, at least at present. One thing that would be worth checking would be the connections on your network. Additionally, what are the types of variables that you are accessing, and how specifically are you coding the access to them? If you could post an example of how you're accessing them, that would be helpful.
    Best,
    Dan N
    Applications Engineer
    National Instruments 

  • Shared variable engine OPC delay

    Hello All,
    I've got a bit of a problem with the delay time of updates between a non-NI OPC server and a Shared Variable Engine OPC client. I am using the Red Lion OPC software OPCWorx, with a database of around 120 tags, to monitor and log data from a small pilot plant. The OPCWorx server has been configured with a 500 ms update rate and the NI OPC client to the same. So here is my problem: logged updates occur no faster than 2.5 s, which is fine as we are exporting in 3 s intervals (I would still like to increase this to around 1 s), but the main nuisance is the delay in changing control registers in the PLCs. The controls and indicators are directly bound to the shared variable items, and the delay from a change in a Boolean control on the front panel to the indicated response can range from 5 to sometimes 15 seconds. If I bind the same control and indicator directly to the OPC item through DataSocket, the delay in response is almost unnoticeable, at around 1 s. I have increased the update/logging deadbands of the shared variables, which seemed to help, but only by a second or so.
    I'm wondering what the limitations of using the Shared Variable Engine as an OPC client are, and whether this is expected or I have gone wrong somewhere. All the variables are configured as non-buffered and with single writers.
    A few changes have been suggested:
    - Upgrading the PC. The current hardware is an AMD Phenom II 3.21 GHz processor with 4 GB of RAM, running Windows XP.
    - Using Modbus over Ethernet instead of OPC.
    This is the first major use of the DSC module and of the Shared Variable Engine as an OPC client, so any information or insight to help with this would be much appreciated.

    Hi Kallen,
    The Shared Variable Engine can become bogged down when there are more than 100 shared variables.  This may be what you are experiencing.  We can check buffering for the variables and disable it, and that should improve performance slightly.  As it is, using datasockets may be the best method for over 100 shared variables.  I apologize for the inconvenience.
    Here are some KnowledgeBase articles that can be of use with regards to this issue:
    How Does Buffering Work for Shared Variables?
    http://digital.ni.com/public.nsf/allkb/5A2EB0E0BC56219C8625730C00232C09?OpenDocument
    Why Is Accessing IO Variables Through The Shared Variable Interface So Slow?
    http://digital.ni.com/public.nsf/allkb/F18AEE4BE7C9496B86257614000C43DF?OpenDocument
    Matt S.
    Industrial Communications Product Support Engineer
    National Instruments

  • Problem with shared variable

    I am a newbie to LabVIEW. After working on this program for 3 months with the help of NI people, the good news is that I finally have a running program. The bad news is that the program is especially slow, and of the 3 parallel loops only 1 runs. My code includes Modbus; however, Modbus works much faster than the FieldPoint shared variables.
    Hence my questions:
    1) Why are the shared variables not working? I am using network-published variables, as my host computer passes values to the FP target, and I set a buffer for the program. The program entails reading and writing of shared variables; however, I made sure that only 1 variable is allowed to be read and written at one time.
    2) Why does only 1 of the 3 parallel loops run? They have the same wait function, so no loop should eat up the whole time share of the processor.
    Attached are my programs.
    I really hope someone can help.
    Thanks a million!
    Attachments:
    Heater Control.vi ‏612 KB
    cFP- Heater Control.vi ‏331 KB
    cFP- Temperature Control.vi ‏532 KB

    Hi cfp!
    Are the three parallel loops you're talking about the ones in cFP - Temperature Control? If so, they're not parallel at all. You're wiring the error cluster from the top loop into the middle loop and then into the bottom loop. This creates a data dependency of the bottom loop on the middle loop, and of the middle loop on the top loop.
    Remember that a node (such as a VI, function, or loop structure) cannot execute until it receives all of its inputs. Furthermore, a node can't release its output values until it has completely finished executing. In the case of a while loop, this means the terminating condition must be met and the while loop must have stopped executing. So the middle loop must wait for the top loop to finish before it can even start, and the bottom loop must wait for the middle loop to stop before it can start.
    Delete the error cluster wires between the loops and you should have three bona fide parallel loops. Consider some other method of transferring the error information between the loops if it's pertinent to your code, such as local variables, queues, notifiers, etc.
    Jarrod S.
    National Instruments
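    Jarrod's point is pure dataflow: any wire from one loop's border to another's means the second loop cannot start until the first has completely finished. A loose Python analogy, with function calls and threads standing in for the LabVIEW loops, of serialized versus truly parallel execution:

        import threading
        import time

        def loop(name, duration):
            """Stand-in for one LabVIEW while loop."""
            print(f"{name} started")
            time.sleep(duration)
            print(f"{name} finished")

        # Serialized: like wiring the error cluster out of one loop into the next,
        # each call cannot start until the previous one has completely finished.
        loop("top", 0.2)
        loop("middle", 0.2)
        loop("bottom", 0.2)

        # Parallel: no data dependency between the loops, so they all start at once.
        threads = [threading.Thread(target=loop, args=(n, 0.2)) for n in ("top", "middle", "bottom")]
        for t in threads:
            t.start()
        for t in threads:
            t.join()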

  • Shared variable RT FIFO (multi element) - Read all at once

    Hi all!
    I'm trying to collect data from analog inputs on my cRIO with a 1 s time step. For that purpose I use a timed loop and a shared variable with the RT FIFO enabled (multi-element, array of doubles). In parallel with the timed loop, I also use a while loop that is significantly slower (let's say a 10 s period) that should read all the arrays stored in the FIFO and write them to a .txt file (everything is executed locally on the RT target). Is there a way to empty the RT FIFO shared variable at once, and if not, how can I get the number of arrays currently stored in the shared variable?
    What is the difference between an RT FIFO created by a shared variable and one created using the Real-Time/RT FIFO palette? I prefer using the shared variable instead of the Real-Time/RT FIFO since it also allows timestamping.
    Best regards,
    Marko.
    Solved!
    Go to Solution.

    Marko,
    We recommend that the non-deterministic loop run faster than your deterministic loop and pull off samples when they become available. If you don't want to do this, you can simply put your RT FIFO read in a For Loop set to execute 10 iterations and then place that in your while loop. For more information on the differences between the RT FIFO and Shared Variables with RT FIFO enabled, please see pages 32-35 of the CompactRIO Developers Guide linked below.
    http://www.ni.com/pdf/products/us/fullcriodevguide.pdf
    Hope this helps!
    Rob B
    FlexRIO Product Manager
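    The advice above (a fixed-count For Loop inside the slower loop) drains a bounded number of elements per pass. Another way to picture "empty whatever is there right now" is a non-blocking read repeated until the FIFO reports empty. The Python queue below is only a stand-in for the RT FIFO-enabled shared variable, used to show the draining idea:

        import queue

        # Stand-in for the RT FIFO-enabled shared variable.
        fifo = queue.Queue()

        # Fast (1 s) loop side: ten 1-second acquisitions have piled up.
        for i in range(10):
            fifo.put([i * 0.1, i * 0.2, i * 0.3])    # one array of doubles per acquisition

        def drain(q):
            """Slow (10 s) loop side: pull every element currently available, then stop."""
            items = []
            while True:
                try:
                    items.append(q.get_nowait())     # non-blocking read; stops when empty
                except queue.Empty:
                    return items

        batch = drain(fifo)
        print(f"logged {len(batch)} arrays this pass")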

  • Shared Variable Alias vs Programmatic Assignment, which is better?

    Hello,
    I am programming an application for a cRIO platform using the scan engine in LV2009SP1.  I'd like to use the cRIO as a standalone controller, and only connect a computer to it occasionally to download data.  I decided to do this using shared variables.  So far I've been successful at doing this in two ways, both involved creating nearly identical sets of shared variables on the cRIO and on the PC, which I think (correct me if I'm wrong) I understand is the way it should be done.  The differences are the following:
    The first method was to create a parallel loop on the RT VI which would assign the values passed through the shared variables to the individual I/O channels.
    The second method was to Enable Aliasing on each shared variable and bind it to the appropriate I/O channel.
    After deploying the RT code to the cRIO for each case and running the UI on the PC, I noticed that both worked, however, the second method showed a short lag between command and feedback (~200ms or so).  The test I ran was to wire an analog output to an analog input, command the AO, and read back the value on the AI.  I can deal with the delay on the UI since it's only for reference - what I'm really interested in is the data that's logged on the cRIO controller.  My question is:  Is one of the two methods better or more appropriate than the other?  Is there something even better?
    Thanks,
    -NS

    No Substitute wrote:
    I decided to do this using shared variables.  So far I've been successful at doing this in two ways, both involved creating nearly identical sets of shared variables on the cRIO and on the PC, which I think (correct me if I'm wrong) I understand is the way it should be done.
    Hi NS,
    No correction necessary.  These are both valid ways of sharing data between your Real-Time controller and your PC.  Here's a forum thread that discusses some similar options to accomplish the same task (Shared variable architecture for distributed crio system).
    Regarding the latency issue with method 2, take a look at KnowledgeBase: Why Is Accessing IO Variables Through The Shared Variable Interface So Slow?
    I hope this helps!
    - Greg J

  • Shared variable, missing data, the same timestamp for two consecutive data points

    hello
    I have a problem with missing data when I read from a network published shared variable.
    Host VI:
    In a host VI on my laptop (HP with WinXP Prof.) I am writing data to the shared variable "data". Between two consecutive write operations there is a minimum waiting time of 10 milliseconds. I use this because I want to be sure that the timestamp for each new data value is different from the previous one (the shared variable timestamp resolution is 1 ms).
    Target VI:
    The target VI on a cRIO-9012 real-time device reads only new data, by comparing the timestamp of each new value with the timestamp of the last value.
    Problem:
    Rarely, I miss a data point (sometimes everything works fine for several hours, transferring thousands of values correctly, before the failure suddenly happens). With a workaround I am able to catch the missing data, and I have discovered that the missing value has exactly the same timestamp as the last data point that was read, and is therefore filtered out of my "legal" data.
    To sum up, the missed value is written to the shared variable on the host, but the target ignores it because its timestamp is wrong, i.e. the same as that of the previous value, even though the host waits a minimum of 10 milliseconds before writing each new value.
    Note:
    The shared variable is hosted on the laptop and configured with buffering.
    The example is simplified to show only the principle; in the real application I also use handshaking and I make sure there is no over- or underflow.
    Simplified Example:
    Question:
    Does anyone have an idea why two consecutive values can have the same timestamp?
    Where does the (wrong) timestamp ultimately come from (the system?)?
    What would be a possible solution (at the moment, staying with shared variables)?
    -> I tried a workaround with clusters where each value gets a unique ID. It works, but it is slower than comparing timestamps and I could run into performance problems.
    Would it change anything if I hosted the shared variable on the RT system?
    Thanks for your help
    Regards
    Reto
    Solved!
    Go to Solution.

    Hi Reto,
    I had a look at your modified example.
    Because shared variables do not work like queues or notifiers (there is no event or interrupt when a new value is written, and those mechanisms are not available over a network anyway), the reading code will see the same value, with the same timestamp, more than once whenever the reader is faster than the writer; this is the usual polling problem. And because the timestamp is written together with the value, you can program it the way you do: filter out the duplicates whenever the timestamp has not changed.
    Everything is described in here:
    http://zone.ni.com/devzone/cda/tut/p/id/4679#toc1
    Laurent talked about a second level of buffering. Please also have a look at the link; somewhere in the middle of the tutorial you will find the explanation of the buffer and the RT buffer.
    Regarding your question "Would it change anything if I host the shared variable on the RT system?": no.
    In my experience, you should decide where to place the Shared Variable Engine after asking some questions about the application.
    You will find the answers to these 3 questions in the link as well:
    Does the application require datalogging and supervisory functionality?
    Does the computing device have adequate processor and memory resources?
    Which system is always online?
    And you're right, the smallest time interval you can see in the timestamp is 1 ms.
    What you can also do is work with the "timed out" output enabled. This might be more efficient than reading the timestamp.
    What I don't know, and have not found out so far, is whether LabVIEW or the OS adds the timestamp. It is taken from the system time, so it looks like LabVIEW takes the value and adds the timestamp.
    I hope this helps
    Alex
    NI Switzerland
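    Because the 1 ms timestamp resolution is what makes two distinct writes look identical, a common alternative to Reto's unique-ID cluster is a plain incrementing sequence counter bundled with each value: the reader accepts a sample whenever the counter has advanced, regardless of the timestamp. A minimal generic sketch of that idea, with a Python dictionary standing in for the shared variable's cluster:

        import itertools
        import time

        # Writer side: bundle a monotonically increasing sequence number with each value,
        # so the reader never has to rely on the 1 ms timestamp resolution.
        def make_writer():
            counter = itertools.count()
            def write(value):
                return {"seq": next(counter), "t": time.time(), "value": value}
            return write

        # Reader side: accept a sample only when the sequence number has advanced.
        def make_reader():
            last_seq = -1
            def read(sample):
                nonlocal last_seq
                if sample["seq"] > last_seq:
                    last_seq = sample["seq"]
                    return sample["value"]       # new data
                return None                      # duplicate poll, ignore
            return read

        write, read = make_writer(), make_reader()
        a = write(1.0)
        b = write(2.0)                           # could share the same millisecond timestamp as a
        print(read(a), read(a), read(b))         # 1.0 None 2.0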
