DataSocket / Shared Variable API

I ran into the following the other day whilst debugging some code:
We had a type mismatch when sending data. However, when testing, the DataSocket API did not return this error information, but the Shared Variable API did.
(DS just returned default data with a change in the timestamp.)
This got me thinking:
Is using Shared Variable API the way to go? Is it a better (more refined) API?
Are there different reasons for using one over the other? (I do like that DS's refnum is not datatyped, which means that I have to provide the datatype on the read as opposed to the SV-API).  
With respect to the type mismatch, I found that I could actually initialise the Variable with either datatype and it worked (only) for that datatype.
It would only error when the other datatype was used.
Does that mean the Variable does not know what its datatype actually is when a cluster is used?
Cheers
-JG
Certified LabVIEW Architect * LabVIEW Champion

charris wrote:
I'm not sure I quite follow your question regarding datatypes. It looks like the error is being returned because a variable cannot contain a cluster. You can do arrays, but not clusters, so that's probably why the SV API is complaining about the type input.
Hi Caleb
My question was relating to the fact that if the variable is not initialized on one PC, and I try reading from it on another PC, specifying the datatype to read back, then I get an error using the SV API but no error from DataSocket.
My use case was for a Cluster.
I didn't check whether it was Cluster-specific or occurs for every datatype.
I assumed Cluster-only as Clusters differ when setting up the Shared Variable.
I.e. All standard datatypes are available in a drop-down, but you need to select Custom Control for a Cluster.
So the last question was asking whether I could theoretically use any Cluster with a SV-Cluster as long as it is the same when I do a Read and a Write.
I.e. When using the DS/SV-API the Shared Variable does not know (/have information on) what its datatype actually is when a Cluster is used - it relies on what datatype is used to initialize it? 
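To make that last question a bit more concrete, here is the analogy I have in my head, written out in C (purely an analogy I made up - the struct names and the byte-blob store are mine, not how the SVE actually works):

/* Analogy only: a typeless "variable" is just a byte buffer.
 * The writer copies a struct in; the reader supplies whatever
 * struct type it expects. Nothing checks that they agree. */
#include <stdio.h>
#include <string.h>

typedef struct { double temperature; int unit_id; } ClusterA;
typedef struct { float voltage; short channel; } ClusterB;

static unsigned char variable[64];   /* the untyped value store     */
static size_t variable_size = 0;     /* how many bytes were written */

static void write_variable(const void *data, size_t size)
{
    memcpy(variable, data, size);
    variable_size = size;
}

/* Returns 0 on success, -1 if the caller-supplied type has the
 * wrong size (roughly the "type mismatch" error I saw). */
static int read_variable(void *out, size_t size)
{
    if (size != variable_size)
        return -1;
    memcpy(out, variable, size);
    return 0;
}

int main(void)
{
    ClusterA a = { 23.5, 7 }, a_back;
    ClusterB b_back;

    write_variable(&a, sizeof a);   /* "initialise" with ClusterA */
    printf("ClusterA read: %d\n", read_variable(&a_back, sizeof a_back)); /* 0  */
    printf("ClusterB read: %d\n", read_variable(&b_back, sizeof b_back)); /* -1 */
    return 0;
}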
Hope that makes sense.
Cheers
Jon
Certified LabVIEW Architect * LabVIEW Champion

Similar Messages

  • Datasocket Server vs. Shared Variables

    Does anyone have any thoughts on what is better - Datasocket Server vs. Shared Variables?  I have a table on my application that has text indicating application status, information, warning and debug messages, and I would like to view it remotely over a network.  The old way was to use the datasocket server and bind it to the other control.  Is the new shared variable engine more efficient?  These machines are at different sites.
    John

    Hi John,
    It definitely seems as if you want to gauge user experience on this issue, but since you've had no response I'll chime in and give the "National Instruments view" on the DataSocket/Shared Variable debate.
    Shared Variables were created to expand the functionality of DataSocket and simplify the programming style required to pass information between networked computers. We have extensive literature on this topic and the most pertinent is linked here.
    I hope some users will post to this forum to give you a less formal response than you get from me, but I am more than willing to answer more specifically if you have any more questions regarding this issue.
    | Michael K | Project Manager | LabVIEW R&D | National Instruments |

  • Why should you explicitly open and close shared variable connections?

    I'm looking into switching over from the old DataSocket API to the new Shared Variable API for programmatic access to shared variables, and I noticed that LV doesn't seem to have any problems executing Shared Variable Reads & Writes without first opening the connection explicitly. That is, I can just drop in a Shared Variable Read VI, wire a constant to the refnum input, and it will work. I'm wondering, then, what benefits are offered by explicitly opening the connection ahead of time...?
    I guess I could see some cases where you want to open all necessary connections in an initialization state of a top-level state machine, particularly if you want to use "Open & Verify Connection", so you could jump straight to an error case if any connections fail. But other than that, why else might one want to explicitly open the connections?
    And, along those lines, are there any problems with implicitly opening the connections? One reason why I am hesitant to open them explicitly is because for one of our applications, we need to be able to dynamically switch from one variable to another at runtime. It would be nice to just switch the variable refnum (wired to the input of the Read function), without having to manually close out the old connection and open a new one. A quick prototype of this seems to work. But am I shooting myself in the foot by doing so?
    Thanks in advance.

    I'd expect there's a very small number of people at NI that would know the answer to the detail you're asking for.  But, let's try to extrapolate from this rather old post to see if we can understand what they're forming their impression on.
    The shared variable has to have some sort of reference going on in the background.  It looks like they're calling this a connection.  It's how LabVIEW knows where to find this variable on the network.  We can also see this reference certainly exists if we're opening/closing the reference in the explicit method.  You see the connection as just a string referencing the variable by URL.  This "reference" has to be stored somewhere.  No matter how we're looking at this, we're aware there's a reference of some sort stored. 
    Now, we'd want to look at what would cause this reference to go away.  If you open/close explicitly, it's easy to see it goes away at the close.  If it's implicit, when would it make sense to close it out?  The VI can't be expected to guess where it's done being referenced and close it out.  This puts us into a situation where the soonest it could close is when the VI ends.  From my experience, references tend to be wiped when you close out LabVIEW.  It's this kind of idea that makes the FGV possible.  I wouldn't be surprised by Morgan's claim here.
    If we look at scalability, you're talking about two different topics.  You're talking about adding an extra open, close, read, etc rather than just a few wires.  That certainly would look a mess.  In terms of the dynamic swap that was being discussed, we wouldn't be adding all of those.  The concern would be if enough connections were opened it'd start to behave similar to a memory leak.  This could be something that works with a smaller number of variables.  If you continue to scale, it becomes problematic.  This is why they suggest it's not scalable. 
    To your questions:
    Does LV allocate some kind of session in memory using that string as a lookup?
    It would HAVE to allocate some memory to hold that string.  Otherwise, it'd be pointless to even have the reference. 
    Does it reuse that session if other parts of the application reference the same variable, or does it create a unique session for each referencing call to "Read Variable.vi"?
    I would expect this to be "it depends."  With the implicit method, I would expect it to open a new reference at each point where it isn't wired.  This is similar to int x = 5, y = 5;  x and y hold the same value but are their own unique memory locations.  If you wire the reference to the other points, it should use the same reference.  This would make more sense to me than the program seeking out any other potential usage of the variable in the application to see if there's already a reference open.  I could be wrong, though.
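    A quick runnable version of that analogy (plain C, nothing LabVIEW-specific; the pointer at the end stands in for an explicitly wired/opened reference):

    #include <stdio.h>

    int main(void)
    {
        int x = 5, y = 5;       /* same value, two separate memory locations */
        int *p = &x, *q = &x;   /* two names for the SAME location (a shared reference) */

        y = 6;                  /* changing y does not affect x */
        *q = 7;                 /* writing through q is visible through p */

        printf("x=%d y=%d *p=%d\n", x, y, *p);  /* prints: x=7 y=6 *p=7 */
        return 0;
    }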
    And what resources does this "connection" actually represent?
    At best, this is just the string.  At worst, it's the string and the TCP socket.  I'd lean towards the first.  Opening and closing sockets should be relatively easy in most applications.  But, it also wouldn't surprise me if it holds the socket.
    I'm sure others have a better understanding than I do.  But, that's what I'd expect for anything you've asked.

  • Do I have to Bind Shared Variables in Executables?

    I want to have shared variables work between two executables that will be installed on separate machines.  I've managed to get the code working, but am not satisfied with the approach.
    Attached is a .zip file with my project files that contain what I've been able to get to work.
    What I don't like, is how I need to have double the amount of network variables.  Right now I have a program that is a publisher and another is a subscriber (based on an example provided on this forum at some point).  The Publisher has its own shared variables inside a library, and the subscriber has its own shared variables inside a different library.  The Subscriber's shared variables are "bound" to the Publisher's shared variables in the project file and is a concrete physical network path that cannot be changed.
    Once the executable is made and installed, this works.  I've tried getting it to work without one of the executables using a shared variable library that contains bound shared variables through a concrete physical network path, but I obviously do not know what I am doing.  Ultimately, I would like to have a single shared variable library that both the publisher and subscriber use.  I want the publisher executable to post the variables to the variable manager running on the publisher's machine, and then have the subscriber program running on the subscriber's machine consume the shared variables from the variable manager on the publisher's machine.
    Please help me!
    -Nic

    Hey Nic,
    I am going to try and answer those questions for you.
    If you use the DataSocket API to connect to a variable that resides on another machine, is your local machine's variable engine involved at all?
    No, it should not be involved at all.
    If the Variable Engine is not used locally, does it need to be included with your application installer?
    Yes, it is going to need to be included with the installer.
    If I'm correct, whats better?  To use the variable engine locally, or to connect directly to a remote variable engine?
    Can you please give a little more information here?  I'm not quite sure if I understand your question.
    The default time-out on the DataSocket Read VI is 10 seconds.  If you are doing multiple DataSocket reads in the same polling loop, one variable may update but another DataSocket read is still waiting for its timeout before allowing the loop to iterate.  This creates a really laggy scenario (see progmatic subscriber.vi in the attached .zip).  Setting the timeout to something low, say 10 ms, reduces the "lag" but...
    Having the ability to set a decent time-out is a really nice performance feature that isn't available to you when you bind to a shared variable (I don't think).  It may make performance sense to not do any loop iterations until the data from the host has changed... how to manage this without separating each read into its own loop, however, is my question... ???
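    A rough sketch of the timing argument (plain C; the blocking reads are only simulated by their worst-case timeouts, and the four-variable count is just an example):

    /* Simulates a polling loop that reads N "variables" in series.
     * Each read can block for its full timeout when no update arrives,
     * so the worst-case loop period is N * timeout. */
    #include <stdio.h>

    int main(void)
    {
        const int num_reads = 4;
        const double timeout_s[] = { 10.0, 0.01 };   /* default 10 s vs. 10 ms */

        for (int i = 0; i < 2; i++) {
            double worst_case = num_reads * timeout_s[i];
            printf("timeout %.3f s -> worst-case loop period %.3f s\n",
                   timeout_s[i], worst_case);
        }
        return 0;
    }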
    Can you please explain this a little more as well?  If I understand you correctly, it sounds like you have many datasockets open in one while loop.  Are you using one datasocket per variable?
    Is there any unseen advantages with using the National Instrument's Shared Variable API over the DataSockets API?
    Shared Variables are in general easier to use, and provide a simple way to implement the idea of passing data between machines.  Obviously, using TCP, you can get more control by creating your own protocol to exchange information, but in general, Shared Variables are normally just easier to use.
    Feel free to post any questions or discussion issues here.
    Regards,
    Kevin H
    National Instruments
    WSN/Wireless DAQ Product Support Engineer

  • Datasocket and Shared Variables

    I am curious if there is any advantage to using Datasocket to read/write shared variables (as opposed to a direct read/write).  I'm specifically talking networked shared variables here.
    Is there any speed advantage to accessing shared variables thru the Datasocket functions?  Since both a direct read/write and a Datasocket PSP read/write talk to the same variable engine I assume they are equally efficient but I'm looking for confirmation here.  I've seen benchmarks for shared variable performance but none of them use DS/PSP to access the variables.
    Normally I would not even think of using Datasocket to access shared var's but where I currently work we have a large app that does this and it works great.  I suspect that this functionality only exists in LV8.x for backward compatibility and non-Windows OS compatibility and is not really meant to be used for new, Windows-based apps?   Am I off base on this?
    I am working in LV 8.5, BTW.....

    Hello Jared,
    Thank you for the reply with clarification. 
    Based on your comment, I changed the buffer parameters and also tried the programs with two different data types, previously StringArray and now String.
    In the attached LV8.6 project, you have all the programs, and shared variable library to review my tests. 
    There are two sets of two files - each set has a Write Shared Variable and Read Shared Variable file. One set is for StringArray type Shared Variable (named StrArr in the library), and the other set is for String type Shared Variable (named Str in the library).
    String Array example:
    MultipleDS-Write-SharedV-StrArr.vi / MultipleDS-Read-SharedV-StrArr.vi
    In my String Array shared variable, I use only a 4-element array, each element being a 4-character string - meaning 16 bytes per String Array value. I have two loops in the write file, both writing an array of 4 strings to the same variable; each loop continues until the loop index is >0. This means, depending on the processor speed, the variable will be written 3 or 4 times (the variable could have a new value before the loop condition is checked).
    So this means, if I have a buffer of 100 bytes (16*4=64<100), it's enough for 4 such arrays (of 4 elements, each element with 4 characters) to be buffered, giving the client (Read) program sufficient time to read them. 
    I am putting 2048 bytes in the buffer, which is much more than sufficient in my case. 
    The writer loops run with a 200 ms wait per iteration. The reader loop runs with a 100 ms DS timeout and a 100 ms wait timer. This gives results without any loss. However, if I run the reader loop with a 1000 ms wait per iteration, data is lost. The buffer does not seem to be maintained for the full 2048 bytes.
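    Just to show the arithmetic I am relying on (plain C; the sizes and loop periods are the ones from my test above, and the fill-time formula assumes the writer keeps writing continuously, which it does not in my short test):

    /* Producer/consumer buffer arithmetic for the StringArray test:
     * writer period 200 ms, reader period 100 ms or 1000 ms,
     * 16 bytes per written value, 2048-byte buffer. */
    #include <stdio.h>

    int main(void)
    {
        const double item_bytes   = 16.0;      /* 4 strings x 4 chars */
        const double buffer_bytes = 2048.0;
        const double write_period = 0.2;       /* s per item written  */
        const double read_periods[] = { 0.1, 1.0 };

        for (int i = 0; i < 2; i++) {
            double write_rate = 1.0 / write_period;      /* items/s produced */
            double read_rate  = 1.0 / read_periods[i];   /* items/s consumed */
            if (read_rate >= write_rate) {
                printf("read every %.1f s: reader keeps up, buffer never fills\n",
                       read_periods[i]);
            } else {
                double fill_s = (buffer_bytes / item_bytes) / (write_rate - read_rate);
                printf("read every %.1f s: buffer (%g items) fills only after ~%.0f s of continuous writing\n",
                       read_periods[i], buffer_bytes / item_bytes, fill_s);
            }
        }
        return 0;
    }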
    In the read program, just to make sure if all data is read or not, I am showing data in two different string indicators, showing data of each loop.
    String example:
    MultipleDS-Write-SharedV-Str.vi / MultipleDS-Read-SharedV-Str.vi 
    The String Array shared variable didn't show values in the Distributed System Manager. Hence, I created another simple variable with String datatype.
    The writer program writes strings of 4 characters, one-by-one, in two loops. Meaning, total 8 strings of 4 characters each are written in the "Str" Shared variable. 
    The reader program, however, doesn't always display all 8 strings. Although the wait timer is not high (slow), it still usually misses some data. Data is overwritten even before the buffer is filled (in the buffer, I have defined 50 strings with 4 elements each).
    In both of the Read programs, I read using DataSocket. I thought DataSocket had more ability to buffer. Earlier I had "BufferedRead" in DataSocket, which I have changed to just Read, because BufferedRead didn't give any special buffer advantage when reading the Shared Variable.
    ---- This is an update on the issue. 
    Ok, just while typing the last paragraph above, regarding DataSocket, something clicked in my mind, and I changed the DataSocket functions to plain Shared Variable reads (completely eliminating the DataSocket functions) in the read programs as well. And bingo, the buffer works as expected: even if the reading loops are very, very slow, there is no data loss in any of the program sets. 
    The two changed Read programs are also included in the attached project - MultipleSV-Read-SharedV-Str.vi and MultipleSV-Read-SharedV-StrArr.vi
    So this means, I can completely eliminate DataSockets (not even using PSP URLs in DataSocket Open/Read functions) from my programs. 
    One question here: what would be the advantage of this (or are there any side effects that I should keep in mind)?
    Vaibhav
    Attachments:
    DataSocket.zip (71 KB)

  • Datasocket read stop but shared variable can update

    Hello ,
    I program with DataSocket Open/Read. Sometimes the read stops, and after some time, maybe 10 s, it goes on reading. I compared this with a shared variable (DSC) and found
    that the shared variable keeps updating. I don't know what is happening in my program.
    (The reason I'd like to use DataSocket is that I need to change the URL dynamically; I have many OPC tags to read (100 PLCs times 80 tags per PLC).)
    thanks for your help!  
    Solved!
    Attachments:
    shared variable VS datasocket.jpg (23 KB)
    shared variable VS datasocket.vi (17 KB)

    Hi~
    The read function waits for items to be updated, and 10 s (10,000 ms) is the default time for this waiting. Setting "wait for updated value" to false will probably solve this issue.
    PS: URLs of variables could be changed programmatically, see this document if you like.
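    For the dynamic-URL part, something along these lines is all that is needed on the string side (plain C just to show the idea; the host name, OPC server ProgID and tag naming below are made up, and the exact opc:// URL syntax should be checked against the DataSocket help):

    /* Sketch of building item URLs programmatically (string handling only).
     * Substitute whatever your real OPC server and tag scheme use. */
    #include <stdio.h>

    int main(void)
    {
        char url[256];
        const char *host = "localhost";
        const char *opc_server = "My.OPC.Server";   /* hypothetical ProgID */

        for (int plc = 1; plc <= 2; plc++) {        /* 100 in the real case */
            for (int tag = 1; tag <= 3; tag++) {    /* 80 per PLC           */
                snprintf(url, sizeof url, "opc://%s/%s/PLC%03d.Tag%02d",
                         host, opc_server, plc, tag);
                printf("%s\n", url);
                /* ...pass 'url' to the DataSocket open/read of your choice... */
            }
        }
        return 0;
    }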
    Good day

  • Use buffering and psp with datasocket-VIs and without any binding and shared variable node

    Hello,
    I'm using LV 8.5.
    I'm trying to develop a multiplatform (Windows and Mac OS X) and multi-computer application. I want to get executables running on each device, communicating through the network. The communication includes data (such as images) and event messages (something like "Hello, I got an error" or "youyou, my work is done" or "I'm hereeeee!!!!...."). I do need communication without any loss of data.
    I worked a lot and wanted to test a psp-based design, without any binding nor shared variable node (mac os...) using data socket VIs and SVE buffering.
    I managed to :
    - deploy shared variable library dynamically (even in an executable)
    - communicate between two PCs with datasocket VIs
    However, I never managed to get buffering to work (even locally, with one VI doing the deployment and writing data and another one reading).
    I worked hard (dynamic buffering setting, dynamic buffering watching like in  http://zone.ni.com/reference/en-XX/help/371361D-01/lvconcepts/buffering_data/ and in the "DS send image" and "DS receive image" examples in the LabVIEW examples, trying to use "?sync=true" in the URL, etc...) but found no way to get things to work.
    I attached a JPEG of an example of receiver and sender. I use wait commands in both receiver and sender to test buffering.
    The receiver does receive data (the last value written), but buffering doesn't work.
    Did somebody do that before? (better than me...)
    Thanks
    Bo
    Attachments:
    Sender.JPG (87 KB)
    Receiver.JPG (96 KB)

    Hello,
    Indeed my problem has been solved. My error: in the While loop of the receiver VI, I kept re-writing the PacketsMaxBuffer and OctetsMaxBuffer parameters, which resets the buffer and makes it appear ineffective.
    I now set the PacketsMaxBuffer and OctetsMaxBuffer values only once at the beginning of the VI, and the PSP buffering works perfectly.
    Sorry for the inconvenience...
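    As a generic illustration of the mistake (a plain C buffer, not the actual SVE buffering mechanism):

    /* Re-initializing a buffer inside the read loop throws away everything
     * that was queued since the last read. Configure it once, before the loop. */
    #include <stdio.h>

    #define CAPACITY 8

    typedef struct { int data[CAPACITY]; int count; } Buffer;

    static void init_buffer(Buffer *b) { b->count = 0; }    /* resets the queue */
    static void push(Buffer *b, int v) { if (b->count < CAPACITY) b->data[b->count++] = v; }
    static int  pop_all(Buffer *b)     { int n = b->count; b->count = 0; return n; }

    int main(void)
    {
        Buffer b;
        init_buffer(&b);    /* correct: configure the buffer once, before the loop */

        for (int i = 0; i < 6; i++) {
            /* init_buffer(&b);     <- my mistake: re-configuring every iteration
                                       wipes whatever was queued since the last read */
            push(&b, i);                               /* writer side: 1 item/iter */
            if (i % 3 == 2)                            /* slow reader: every 3rd   */
                printf("iteration %d: read %d buffered items\n", i, pop_all(&b));
        }
        return 0;   /* prints 3 buffered items per read; with the mistake it would be 1 */
    }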
    Bo

  • Datasocket works, shared variables don't?

    Hi community,
    the title tells it all. I'd like to set up communication between the computers with shared variables. I am using the example project shipped with LabVIEW, which works fine on a local computer, but it doesn't when I try to use it on different PCs. When I try to browse for the variables published on the network, I don't even see that PC.
    Which is quite odd, because DataSocket communication works without an issue.
    Can you help me out with some hints?
    thanks!

    Hi 1984,
    Please check this paper regarding the issue. It gives a step-by-step guide on how to set up the communication using shared variables.
    http://digital.ni.com/public.nsf/allkb/7815BCE435DCC432862575DA006FEBF8
    Best regards,
    TuiTui

  • Not able to access Network Shared Variable when deployed in PC

    Initially, when we deploy a Network Published Shared Variable on the PC and try to access it (Read/Write) from RT, we get an error/warning "-1950679023". But when we access the same variable from the PC, it works fine. In the Distributed System Manager, we can see the variable getting deployed. Also, the value changes in Distributed System Manager if we write a value from the PC. The attached image "Shared variable Issue.png" will give more understanding of this issue. 
    The other way round works fine, i.e. deploying on the RT side and accessing from both RT and PC works. 
    We also see that the network adapter settings are not loading properly in MAX whenever we install any software on the RT target. With no software installed, they load properly. 
    The following were the steps that we tried to solve the issue: 
    1. Removing all the NI software and re-installing it again. 
    2. Formatting the RT (PXI). 
    3. Removing the EtherCAT card and testing. 
    4. Checking the network properties of the RT network. 
    5. Checking the IP/Subnet/Gateway settings. 
    6. Checking the firewall settings and whether the Shared Variable Engine is accessible.
    Attachments:
    Shared variable Issue.png (491 KB)

    First, the root cause needs to be identified before taking any action.
    I would suggest first checking whether you can access the shared variable hosted on the PC from the RT in other ways, such as the SVE API (Logos and PS protocols, DataSocket, etc.).
    Check if an antivirus or firewall is playing a role.
    Try the same experiment with some other PC if you can.
    You can also try creating another Shared Variable on the RT, binding it to the PC one, and trying to access it...
    Since you have already done all the reinstallations...
    Best Regards,
    Vijay.

  • How to use shared variables with native C programs

    Hello
    What is the way to use shared variables with native C programs?
    I have a c/c++ program that uses the NIDAQmx C-API to perform measurements. Now I want to communicate to a LabVIEW program via shared variables.
    Is there a C-API for shared variables as there is for the NIDAQmx functionality? Where can I find further documentation? The document "Using the LabVIEW Shared Variable" mentions that one "can read and write to shared variables in ANSI C", but there are no hints about how and where to look.
    Thanks in Advance

    Hi user42,
    with CVI 8.0, you cannot create or configure shared variables. However, you can read or write to an already configured LabVIEW 8.x shared variable from CVI using the DataSocket API.
    In order to do this you need to have DataSocket 4.3 or higher installed.
    Here's a forum post about using the DataSocket functions with LabVIEW Shared Variables:
    http://forums.ni.com/ni/board/message?board.id=180&message.id=24569&requireLogin=False
    With CVI 8.1 and Measurement Studio 8.1 it's possible to use Shared Variables via the Network Variable Library (check out the end of the "Network-Published Shared Variable" section within the "Using the LabVIEW Shared Variable" documentation and the following link).
    Datasocket with LabWindows/CVI and LabVIEW Real-Time:
    http://digital.ni.com/public.nsf/allkb/CC4343488413A2F586256E6200099638?OpenDocument
    Daniel
    NIG

  • I/O server shared variable not working in deployment system (error -1950679034 (0x8BBB0006) (Warning))

    Hello,
    I am using a shared variable from an OPC client in LabVIEW. When I run the exe on the development system it works fine, but when I run it on the deployment system it does not work. I am using the same configuration file in the OPC server on both the development and deployment systems. Error: -1950679034 (0x8BBB0006) (Warning).

    First, the root cause needs to be identified before taking any action.
    I would suggest first checking whether you can access the shared variable hosted on the PC from the RT in other ways, such as the SVE API (Logos and PS protocols, DataSocket, etc.).
    Check if an antivirus or firewall is playing a role.
    Try the same experiment with some other PC if you can.
    You can also try creating another Shared Variable on the RT, binding it to the PC one, and trying to access it...
    Since you have already done all the reinstallations...
    Best Regards,
    Vijay.

  • Updating "Shared Variable" Map address

    Hi Experts!
    I have a question regarding shared variables:
    At the moment I am creating a Shared Variable (81O_G01) in the library and this is bound to:
    My Computer\SVCREATION.lvlib\Modbus1\410611
    Question is: how can I change this SV address (410611) when the variable has already been opened? Is it possible?
    Thanks in advance!
    Anibal
    Solved!

    Hey Anibaldos,
    It sounds like you are basically attempting to change the binding of a variable from <MB I/O Server>/A1 to <MB I/O Server>/A2 at runtime. I think there are a few options for this:
    1) In LabVIEW DSC you can access a hosted variable and change the binding address using a property node. If you are running the modbus server on a windows machine, this is the best option. Otherwise, I am guessing you do not have DSC. If that is the case you can still change bindings by editing the library manually, but it sounds like this will not help you out.
    2) It may be possible to open a shared variable connection using the palettes (Data Communication >> Shared Variable) to that specific address, in which case you have no need to use the bound shared variable. You can simply open a connection to two different modbus addresses and read from one or the other as needed.
    3) If #2 does not work, then I believe you can still use the datasocket API to perform such an operation. There is a little bit of research you'll need to do to use it (for example, URL formatting and UI-thread issues) but it does work. I believe this is the approach taken by this document: https://decibel.ni.com/content/docs/DOC-13508 (from experience I know you have to dig a bit to find the VIs, but they are there).
    4) You can use the library on NI Labs: http://ni.com/labs

  • Performance of Modbus using DSC Shared Variables

       I'm fairly new at using Modbus with LabVIEW.  Out of the roughly dozen tools and API's that can be used, for one project I'm working on I decided to try using Shared Variables aliased to Modbus registers in the project, which is a DSC tool.  It seemed like a clever way to go.  I've used Shared Variables in the past, though, and am aware of some of the issues surrounding them, especially when the number of them begins to increase.  I'll only have about 120 variables, so I don't think it will be too bad, but I'm beginning to be a bit concerned...
       The way I started doing this was to create a new shared variable for every data point.  What I've noticed since then is that there is a mechanism for addressing multiple registers at once using an array of values.  (Unfortunately, even if I wanted to use the array method, I probably couldn't.  The Modbus points I am interfacing to are for a custom device, and the programmer didn't bother using consecutive registers...)  But in any case, I was wondering what the performance issues might be surrounding this API.
        I'm guessing that:
    1) All the caveats of shared variables apply.  These really are shared variables; it's only that DSC taught the SV Engine how to go read them.  Is that right?
       And I'm wondering:
    2) Is there any performance improvement for reading an array of consecutive variables rather than reading each variable individually?  (See the rough numbers sketched after question 3.)
    3) Are there any performance issues above what shared variables normally have, when using Modbus specifically?  (E.g. how often can you read a few hundred Modbus points from the same device?)
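    Some back-of-the-envelope numbers I'm weighing for question 2, sketched in plain C (the 5 ms per-transaction time is purely an illustrative guess; the 125-register cap is the Modbus FC 03 request limit):

    /* Rough comparison: one Modbus transaction per register vs. block reads.
     * FC 03 (Read Holding Registers) allows up to 125 registers per request,
     * so 120 consecutive registers would fit in a single transaction. The
     * per-transaction time below is an illustrative guess, not a measurement. */
    #include <stdio.h>

    int main(void)
    {
        const int    registers       = 120;
        const int    max_per_request = 125;   /* Modbus FC 03 limit                 */
        const double per_txn_s       = 0.005; /* assumed round-trip per transaction */

        int block_txns = (registers + max_per_request - 1) / max_per_request;
        printf("individual reads: %d transactions ~= %.2f s per scan\n",
               registers, registers * per_txn_s);
        printf("block reads:      %d transaction(s) ~= %.2f s per scan\n",
               block_txns, block_txns * per_txn_s);
        return 0;
    }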
    Thanks,
        DaveT
    David Thomson Original Code Consulting
    www.originalcode.com
    National Instruments Alliance Program Member
    Certified LabVIEW Architect
    There are 10 kinds of people: those who understand binary, and those who don't.
    Solved!

    Anna,
        Thanks so much for the reply.  That helps a lot.
        I am still wondering about one thing, though.  According to the documentation, the "A" prefix in a Modbus DSC address means that it will return an array of data, whereas something like the F prefix is for a single precision float.  When I create a channel, I pick the F300001 option, and the address that is returned is a range:  F300001 - F365534.  The range would imply that a series of values will be returned, e.g. an array.  I always just delete the range and enter a single address.  Is that the intention?  Does it return the range just so you know the range of allowed addresses?
       OK, I'm actually wondering two things.  Is there a reason why the DSC addresses start with 1, e.g. F300001, instead of 0, like F300000?  For the old Modbus API from LV7, one of the devices we have that uses that API has a register at 0.  How would that be handled in DSC?
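    My working assumption on the off-by-one question, sketched out (the mapping of NI's DSC F3xxxxx prefix to 0-based protocol addresses is my guess based on the usual Modbus convention, not something I've verified against DSC):

    /* Usual Modbus convention: 1-based data-model references map to 0-based
     * protocol (PDU) addresses. If DSC follows it, F300001 would be input
     * register PDU address 0, which is how a device with "register 0" would
     * be reached. This is my assumption, not verified. */
    #include <stdio.h>

    int main(void)
    {
        for (long ref = 300001; ref <= 300003; ref++) {
            long pdu_address = (ref - 300000) - 1;   /* 300001 -> 0, 300002 -> 1, ... */
            printf("DSC F%06ld -> PDU input-register address %ld\n", ref, pdu_address);
        }
        return 0;
    }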
    Thanks,
        Dave
    David Thomson Original Code Consulting
    www.originalcode.com
    National Instruments Alliance Program Member
    Certified LabVIEW Architect
    There are 10 kinds of people: those who understand binary, and those who don't.

  • Is there a way to reset the shared variable engine?

    I would like to be able to reset the SVE in a manner that is equivalent to what occurs during a hardware reset.
    My motivation for doing so is as follows:
    I have a cRIO app with lots of IOVs and NSVs that operate via the SV API and also with plenty of static nodes.  I am finding that on first run, from the DE, my CPU% = ~55%.  On all subsequent runs my CPU% = ~65%.  If I do a hardware reset or redeploy one particular NSV library, then my CPU usage will return to ~55%.  I have tried isolating the area of code that accounts for this and have found that the problem centers around my initialization of one library of NSVs, where I set their initial values via the SV API.  The NSV references are all closed after writing, without error.  So I am wondering if it is possible that the NSV refs are not actually closing, or possibly the setting of the initial value has some effect on the SVE so that subsequent runs get bogged down.
    Anybody have any ideas?

    Hello,
    If you redeploy your library, does that bring your CPU usage back down?  Perhaps you can use that as a suitable workaround?  How exactly are you setting the initial value of your shared variables?
    Tejinder Gill
    National Instruments
    Applications Engineer
    Visit ni.com/gettingstarted for step-by-step help in setting up your system.

  • Are clusters or individual elements better for shared variables?

    So...  I have some RT code that is being updated, and pulled out of the Stone Ages of LabVIEW.  It was originally written for an old FieldPoint controller operating in "headless" mode, and used the "publish" and datasocket methods for communications and external control.  I had to get clever way back then, and put together a parsing/unparsing system for strings to send sets of data back and forth between the controller and any HMI or other computer attached.
    Now, I'm completely rewriting the code for a cRIO system, and doing my best to leverage all of the strengths of the latest LabVIEW versions.  I have already done an intermediate stage, where I converted from the publish/datasocket method to using network shared variables for my strings, so I could keep some of the original control and calculation logic.  Now, however, I'm going back to the drawing board for most of the program, with only some of the proven logic being held over into the new version.  And, as I'm putting together the data structures I need for both internal control and external communication, I'm in a bit of a quandary...
    I have come upon a data structure dilemma:  should I use individual shared variables for my data, or assemble associated data into clusters?  My original program had a string (essentially a flattened cluster) for each sensor in use (up to 4), one for the system parameters and states, and one for the control parameters and states.  There was a certain advantage to keeping the data compartmentalized like that, it kept things organized and forced me to avoid too many random references of each data point.  And it kept the number of communications channels limited to just a handful.  Mimicking this structure with cluster shared variables would be easy.  But, I'm not sure it's the best or most network-efficient method.
    I know the bundling/unbundling will add some processor time in my code, that is not new to me (it will still be much faster than my old parsing routines).  But, if I have individual data points being thrown around, I can access them easily from things like Data Dashboard (which is great, but far too limited to be able to grab items in clusters and such).  Having all of my data points individually available would make my project messier, but open up easier access.  It would also dramatically increase the number of data points being thrown around on the network at any one moment.  For reference, I would probably have a maximum of 100 data points at one time, made up of a combination of integers, floats, booleans, integer arrays, boolean arrays.  Or I would have a maximum of 8 clusters that would contain those data points.
    Any suggestions on which way I should lean?  Are there any advantages/disadvantages between shared clusters like the ones I need vs. the number of individual shared variables I would need using the alternative methods?  Network traffic and efficiency are always a concern, particularly since this is a "headless" cRIO in a control situation that must maintain a fast scan rate...
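    To make the trade-off concrete for myself, here is the kind of bundling I mean, written out as plain C structs (the field names and counts are invented placeholders, not my real channel list):

    /* Option A: one "cluster" (struct) per subsystem, so one network update
     * carries all related fields. Option B: each field published on its own,
     * giving finer-grained access (e.g. for Data Dashboard) but many more
     * published items. */
    #include <stdio.h>

    typedef struct {
        double temperature_c;
        double pressure_kpa;
        int    valve_open;      /* boolean */
        int    fault_code;
    } SensorCluster;            /* Option A: 1 shared item, 4 values per update */

    int main(void)
    {
        SensorCluster s = { 21.5, 101.3, 1, 0 };

        int clusters = 8, points_per_cluster = 12;   /* roughly my 100-point case */
        printf("Option A: %d published items\n", clusters);
        printf("Option B: %d published items\n", clusters * points_per_cluster);
        printf("sample cluster: %.1f C, %.1f kPa, valve=%d, fault=%d\n",
               s.temperature_c, s.pressure_kpa, s.valve_open, s.fault_code);
        return 0;
    }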
    Thanks for any help.  I'm so stuck on this fence, and I can't figure out which side to fall off!
    Solved!

    Thanks Tim, that is a great source that I somehow missed in my hunt for information regarding my dilemma...
    I have to wonder, though: does that 25-variable number also include the I/O points on your cRIO?  Anyone know that in particular?  Most of the I/O points are network-shared by default during initial configuration, and you could very quickly exceed 25 variables on an 8-slot rack (such as the one I use, a 9074).  Now I'm a bit worried that I'm overusing the variable engine, even before the communication clusters get figured in...
