Shared Variable triggering

Hello everyone, I have two projects (one in 64-bit LabVIEW, the other in 32-bit LabVIEW) that I run side by side on the same machine. The 64-bit project controls a camera, and the 32-bit project calls some code to do processing. I would like to send a trigger so that the 64-bit program waits until some of the raw data buffers are processed before sending more data to the 32-bit project. Unfortunately, I cannot recompile the processing code for 64-bit.
Are there any good examples of doing this? I was thinking of using some sort of shared variable setup, but I really don't like shared variables.
Thanks 

You can send commands through TCP/IP between the two processes.
A network-published shared variable is probably slightly easier, but not by a lot.
/Y
LabVIEW 8.2 - 2014
"Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
G# - Free award winning reference based OOP for LV
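For illustration only, here is a minimal sketch of that TCP handshake idea in Python (a LabVIEW version would use the TCP Listen / Open Connection / Read / Write primitives); the port number and the "DONE" message are assumptions, not part of any NI API:

import socket

PORT = 50007  # arbitrary local port agreed on by both processes

def wait_for_trigger():
    # 64-bit side: block until the 32-bit process reports that a buffer was processed.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            return conn.recv(16) == b"DONE"

def send_trigger():
    # 32-bit side: tell the camera process it may send the next batch of raw buffers.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", PORT))
        cli.sendall(b"DONE")

The same loop works in either direction, so the 64-bit program can acknowledge back if a two-way handshake is needed.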

Similar Messages

  • DSC - Event triggering for Single Process Shared Variables

    Hello,
    I understand how to set up a Value Change Notification for Network Published Shared Variables so that an event will trigger when that particular Shared Variable changes. However, I can't figure out how to do the same for Single Process Shared Variables. Is this even possible? Can someone shine a light on this, please?
    Thanks in advance.
    - James Pham

    VRspace4,
    Hello! It is not possible to enable alarming for Single-Process Shared Variables. A workaround to set up a Value Change Notification would be to create a network-published shared variable that reads from your Single-Process variable, but at that point it might be worth simply replacing your variable with a network-published shared variable.
    Ben Sisney
    FlexRIO V&V Engineer
    National Instruments
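    Since single-process variables offer no built-in change events, the underlying pattern of any workaround is "poll, compare, notify." A rough, non-LabVIEW sketch of that idea in Python (read_value and on_change are placeholders for whatever read mechanism and event you actually use):

    import time

    def watch(read_value, on_change, period_s=0.1):
        # Call on_change(new_value) whenever read_value() returns a different value.
        last = read_value()
        while True:
            current = read_value()
            if current != last:
                on_change(current)
                last = current
            time.sleep(period_s)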

  • Shared Variables - Properties

    Hello,
    I have a couple of questions about the properties of Shared Variables (SVs). They can be configured either by using the dialog box (right-click on the SV -> Properties) or programmatically via the "SharedVariableIO" property node. Have a look at the NI examples to see how SVs can be created programmatically using the DSC module.
    1)  The dialog box offers a property "Variable Type" which can be "Network-published", "Single-Process" or "Time-Triggered". The "SharedVariableIO" property node gives no access to this property. Does anybody know why? The same question can be rephrased: How can I create a "Single-Process" SV programmatically?
    2) The "SharedVariableIO" property node has two items, "Network.OnScan" and "Network.ConnectionType", which have no direct correspondance to any of the properties available in the dialog box. What is the exact meaning of those properties? The on-line help isn't of much use here...
    3) Multiple SVs can be edited using the Multiple Variable Editor (Tools -> Shared Variable -> Multiple Variable Editor). A cool feature is the import/export of CSV files. I am interested in programmatically creating SVs from the information stored in such a CSV file. Reading the file is no problem, but connecting the file entries to the properties of the "SharedVariableIO" property node would be quite some job. In principle, NI has already solved this within the Multiple Variable Editor. I guess there is a bunch of VIs somewhere below "....Program Files/NI/.../vi.lib" that do exactly this. Does anybody know if these are available for public use (and which ones)?
    Regards,
       Dietrich

    dietrich wrote:
    1)  The dialog box offers a property "Variable Type" which can be "Network-published", "Single-Process" or "Time-Triggered". The "SharedVariableIO" property node gives no access to this property. Does anybody know why? The same question can be rephrased: How can I create a "Single-Process" SV programmatically?
    You can't.  The single-process shared variable is an old-style LabVIEW global placed under the shared variable abstraction.  There is currently no way to create those on the fly.  Time-triggered variables also cannot be created programmatically.  I confess I don't know enough about them to know why.
    dietrich wrote:
    2) The "SharedVariableIO" property node has two items, "Network.OnScan" and "Network.ConnectionType", which have no direct correspondance to any of the properties available in the dialog box. What is the exact meaning of those properties? The on-line help isn't of much use here...
    Network.OnScan = "read/write hardware?" - this allows you to programmatically control when you are reading from or writing to the configured hardware.
    Network.ConnectionType = "connect to hardware even when no one is viewing me?" - this has two choices, UpFront and OnDemand. UpFront means the variable will be connected to hardware even when no one is connected to the variable; this is useful when you want to reserve hardware or when logging is enabled. OnDemand means you disconnect from hardware whenever no one is reading the variable.
    dietrich wrote:
    3) Multiple SVs can be edited using the Multiple Variable Editor (Tools -> Shared Variable -> Multiple Variable Editor). A cool feature is the import/export of CSV files. I am interested in programmatically creating SVs from the information stored in such a CSV file. Reading the file is no problem, but connecting the file entries to the properties of the "SharedVariableIO" property node would be quite some job. In principle, NI has already solved this within the Multiple Variable Editor. I guess there is a bunch of VIs somewhere below "....Program Files/NI/.../vi.lib" that do exactly this. Does anybody know if these are available for public use (and which ones)?
    I would advise against using any of these.  National Instruments can change, or remove, those underlying files at any time as a result of modifications to the MVE.  That could cause issues for any code using those sub-VIs.
    Regards,
    Robert

  • RT Shared Variable Engine stops publishing

    I have an RT system (PXI-8106) that hosts quite a few network-published shared variables (NPSVs) for communicating with a Host PC. My Host and RT executables have been functioning fine on their respective systems for weeks, but just recently I started running into problems with my NPSVs. My Host application has indicators to display DAQ data being acquired on the RT target and transferred via NPSV, and lately the application will update normally for several hours and then the displays will suddenly stop updating. If I reboot the RT target, the communication is restored...for a couple of hours, anyway, until we get another NPSV lockup. I can see no rhyme or reason as to why it stops updating the variables when it does; it seems to just randomly crap out.
    It just happened again, so I brought my development laptop down to run the Distributed System Manager. Sure enough, the NPSVs all stopped updating (latest timestamp was about 30 min previous to my check). I know, however, that the RT system is still running (I can open remote front panels), and none of the NPSV calls use their error input clusters, so an upstream error would not be causing the issue. Also of note, I have a couple NPSVs that are aliased to PSP URLs (NI_SystemState\CPULoad), which do continue to update while the remaining NPSVs do not. So that seems to indicate to me that it is not a networking issue, nor is it a Shared Variable Engine issue.
    I also programmatically deploy the NPSV library from the Host to the RT target on initialization, so I know they are deployed properly. Redeploying does not kickstart the RT SVE into updating. The only thing that seems to solve the problem (albeit temporarily) is a reboot of the RT target.
    Does anyone have any ideas as to what might be causing the RT to stop updating the NPSVs? I have already checked the NPSV Troubleshooting KB (http://digital.ni.com/public.nsf/websearch/6E37AC5435E44F9F862570D2005FEF25?OpenDocument), and none of those suggestions appears applicable in this case (most address networking issues that result in never being able to read the NPSVs in the first place, whereas my problem is NPSV communication dropping out arbitrarily). Has anyone else run into such a problem?
    Thanks!

    I think I have an idea as to why my NPSV communication was freezing. One of my NPSVs is a 2D string array of Alarm History data, which just publishes information about alarms that have been triggered on the RT system; each new alarm appends a 10-element string array onto the 2D string array. For the most recent testing we were running, there was a recurring alarm condition that was not applicable (i.e. it did not need to be handled, so it just kept adding new elements onto the history array), and I am guessing that the array may simply have gotten too big. When the RT target kept updating the NPSV (at 1 Hz) with more and more 2D string data, it may have caused a buffer overflow for the shared variable.
    I will attempt to recreate the problem on another system to verify the cause.
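    If unbounded growth of that alarm history turns out to be the cause, capping the history is the usual fix. A minimal sketch of a bounded buffer (the 1000-entry limit and the function names are assumptions for illustration, not part of the original application):

    from collections import deque

    MAX_ALARMS = 1000                      # assumed cap; tune to available memory
    alarm_history = deque(maxlen=MAX_ALARMS)

    def add_alarm(record):
        # record is the 10-element string array describing one alarm
        alarm_history.append(record)       # oldest entry is dropped automatically once full

    def publish_snapshot():
        # return the 2D array (list of 10-element lists) to write to the shared variable
        return list(alarm_history)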

  • Shared Variable VI development on machine other than server

    I'm about to write my first shared variable code, and I read the white paper
    http://www.ni.com/white-paper/4679/en
    Something that isn't immediately clear to me is the expected workflow for actually creating ALL the VIs that use the shared variables.  I tried creating the project and the variable as per the instructions; easy.  But I'm going to have one of those VIs running as the server of the data and one as the client.  Would these be two separate project files with the same variable name and type?  Or one project file with separate builds for the server and client executables?
    Actually, SV might not even be the best way to do what I need.
    I have a PXIe machine whose slot 0 controller has a mux (PXI-2503) and a DMM (PXI-4072) for getting measurements from multiple channels.  Rather than calling via VI Server every time I want to grab a measurement, I'd rather have an application running that waits for commands.  I was thinking of creating a crude command/acknowledge, data-available/acknowledge scheme to synchronize the PXI slot 0 with the client that will be requesting the data, but I imagine there is probably a more suitable framework that someone else has no doubt already tackled?
    I just need to guarantee that the rack takes the measurement when I want it to (i.e., after some other action I've triggered the system to perform) and that the data I get back from it is identifiable.
    Ideally, I'd have a MXI controller and this gets really easy but I won't be able to get ahold of one for months if at all so I'm going this route for now.
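    To make the command/acknowledge idea concrete, here is a rough Python sketch of the kind of listener the PXI controller could run (the port, the "MEASURE" command name, and take_measurement() are hypothetical placeholders; a LabVIEW version would use the TCP primitives):

    import socket

    PORT = 50100  # placeholder port

    def serve(take_measurement):
        # Wait for a command, take the requested measurement, reply with tagged data.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("", PORT))
            srv.listen(1)
            while True:
                conn, _ = srv.accept()
                with conn:
                    cmd = conn.recv(64).decode().strip()           # e.g. "MEASURE 3"
                    if cmd.startswith("MEASURE"):
                        channel = int(cmd.split()[1])
                        value = take_measurement(channel)          # mux + DMM read goes here
                        conn.sendall(f"DATA {channel} {value}\n".encode())
                    else:
                        conn.sendall(b"ERR unknown command\n")

    Because the reply echoes the channel number, the client can always tie the returned value back to the request it made.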

    Hi Marcus,
    When I drag a shared variable from the project window into the block diagram and then left-click it and go to Browse, I'm only presented with variables on my local machine to choose from.  If I switch to programmatic access and then go to Browse, I'm presented with a nice list and am able to browse to the remote shared variables and select one for my constant/control.  This works fine and I'll remember to do this from now on; I just wonder why the two different behaviors.
    I have this working quite well provided I start the VI running, either as a built exe or through the VI.  As these instruments are ultimately going to be sitting in a headless (no monitor/mouse/keyboard) PXI with a slot 0 controller, I am now trying to figure out how to start these VIs running remotely.  I thought I'd just be able to drop the .exe into a Windows scheduled task and have it run on startup.  This works fine, but my client VI doesn't communicate.  If I log in to the machine I can see the exe running in the Task Manager.  If I kill that exe and start a new one, then I'm able to communicate with the VI, though with some initially odd behavior: almost as if, when I was running it on the client, the SVE had changed the states of the variables locally, and then when the new process was started it latched those states and caused race-like conditions.  If I then kill that exe and start another one, everything is in its expected "default" state and I don't have race-like conditions or unexpected states.  I can probably add some code on the client side to avoid this weird behavior, but I'd rather understand why things aren't working.  My guess is that the SVE and other NI processes on the headless box aren't starting properly until I actually log in?

  • Shared Variables stop working on cRIO-9074

       I have a Dell E5540 laptop running W7 and LV 2013.  The project talks to a cRIO-9074.  I use a handful of shared variables to communicate some of the data between the two platforms.  I have gotten this all working fine.  Then twice, something has gone wrong.  The shared variables just stop communicating.  They are hosted on the laptop.  The host program sets them just fine.  The Distributed System Manager can monitor them and sees them change.  But the RT program never gets the new values.  The RT connection is fine.  I can probe into the running RT VI, and I see that the shared variables do not get the correct values.
        The firewall is off.  I've undeployed and redeployed everything over and over.  Cycled power, rebooted, etc.
        In both cases, I finally was able to fix the issue by re-installing the entire suite of NI software on the cRIO.  After I did that, the program started up and behaved fine.  I have no idea what triggered the failure in either case.  Also, I have a second laptop that I've used to verify that the problem is with the cRIO and not with the laptop.  When things are broken, it fails with both laptops.   As soon as I reinstall the software on the cRIO, both laptops work with it fine.
    Has anyone seen this before?  Any ideas how the cRIO might be failing?  It seems like some software component on the cRIO must be getting corrupted, since the software reinstall is the only thing that fixes this.  Perhaps the cRIO is bad?  The one other possible clue I have is that during the second fix, I even went so far as to reformat the cRIO disk using MAX.  The format appeared to work for a while, but then an error popped up saying that the format had failed.  I was still able to reconnect, reinstall the software, and get things running.  I just wonder if that error code is another symptom of a bad piece of hardware...
    Thanks,
        DaveT
    David Thomson Original Code Consulting
    www.originalcode.com
    National Instruments Alliance Program Member
    Certified LabVIEW Architect
    There are 10 kinds of people: those who understand binary, and those who don't.

    Hello Dave,
    You can try to pinpoint where the problem is with some tests. First, try running the real-time VI on your host instead and see if the communication problem still occurs. If that seems to work fine, try running a LabVIEW shipping example for network shared variables on the host and target; one such example is Analog Input - Getting Started - Scan Mode.lvproj from the NI-RIO driver.  If this succeeds, the problem is most likely with your LabVIEW application. However, if it fails, there is a much higher chance that there is a problem with the cRIO.
    Have a good day,
    Siana A.
    Application Engineering
    National Instruments

  • I am trying to connect Dashboard shared variables to a server on a different subnet. Any ideas?

    My goal is to control a device that is connected to our wired network using an Android tablet via Dashboard.  I have created a VI with shared variables that controls the device as expected when it runs on a computer connected to the same wired network.  The problem is that my Android tablet running the Dashboard app cannot connect to the shared variables on the PC running the VI on the wired network.
    Our wireless network is on a separate subnet from our wired network.  I am able to ping my Android tablet from the PC on the wired network, but when I try to connect a variable in Dashboard, the PC running the SVE cannot be found.  I tried listing it in the alternative server settings window and it still did not work.  The only way I have been able to get around this is to run the VI that launches the server on a laptop connected to both the wireless network and the wired network at the same time.  My tablet can then find the server, and my VI can connect to the instrument on the wired network.  The laptop is effectively acting as a bridge between the subnets.  I need to find a way for the Dashboard app to connect directly to the PC on the wired network.  My PC IP address is 192.168.0.105.  My Android IP address is 192.168.10.93.

    Data Dashboard doesn't care about subnets, but it has to be able to access the server using the right ports. There is probably a firewall blocking the shared variable ports. This document explains how to configure a firewall to allow shared variables to be accessed. Your challenge will probably be to figure out where the firewall is and how to configure it.
    It is also possible that the router that your Android device is connected to doesn't know how to route to the other network. Again, that is an issue with your router that you need to resolve.

  • Performance of Modbus using DSC Shared Variables

    I'm fairly new to using Modbus with LabVIEW.  Out of the roughly dozen tools and APIs that can be used, for one project I'm working on I decided to try using Shared Variables aliased to Modbus registers in the project, which is a DSC feature.  It seemed like a clever way to go.  I've used Shared Variables in the past, though, and am aware of some of the issues surrounding them, especially when the number of them begins to increase.  I'll only have about 120 variables, so I don't think it will be too bad, but I'm beginning to be a bit concerned...
       The way I started doing this was to create a new shared variable for every data point.  What I've noticed since then is that there is a mechanism for addressing multiple registers at once using an array of values.  (Unfortunately, even if I wanted to use the array method, I probably couldn't.  The Modbus points I am interfacing to are for a custom device, and the programmer didn't bother using consecutive registers...)  But in any case, I was wondering what the performance issues might be surrounding this API.
        I'm guessing that:
    1) All the caveats of shared variables apply.  These really are shared variables; it's only that DSC taught the SV Engine how to go read them.  Is that right?
       And I'm wondering:
    2) Is there any performance improvement for reading an array of consecutive variables rather than reading each variable individually? (See the sketch after this post.)
    3) Are there any performance issues above what shared variables normally have, when using Modbus specifically?  (E.g. how often can you read a few hundred Modbus points from the same device?)
    Thanks,
        DaveT
    David Thomson Original Code Consulting
    www.originalcode.com
    National Instruments Alliance Program Member
    Certified LabVIEW Architect
    There are 10 kinds of people: those who understand binary, and those who don't.
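    Regarding question 2 above: reading a block of consecutive registers is one Modbus transaction instead of N, so the per-request overhead (framing, bus turnaround, latency) is paid once rather than N times. A rough Python sketch with the minimalmodbus library, purely to show the difference in transaction count (the COM port, slave address, and register addresses are placeholders):

    import minimalmodbus

    inst = minimalmodbus.Instrument("COM3", 1)   # placeholder port and slave address
    inst.serial.baudrate = 9600

    # Ten transactions: each call is a separate request/response on the bus.
    individual = [inst.read_register(addr, functioncode=3) for addr in range(0, 10)]

    # One transaction: the same ten registers in a single request/response.
    block = inst.read_registers(0, 10, functioncode=3)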

    Anna,
        Thanks so much for the reply.  That helps a lot.
    I am still wondering about one thing, though.  According to the documentation, the "A" prefix in a Modbus DSC address means that it will return an array of data, whereas something like the "F" prefix is for a single-precision float.  When I create a channel, I pick the F300001 option, and the address that is returned is a range:  F300001 - F365534.  The range would imply that a series of values will be returned, e.g. an array.  I always just delete the range and enter a single address.  Is that the intention?  Does it return the range just so you know the range of allowed addresses?
    OK, I'm actually wondering two things.  Is there a reason why the DSC addresses start with 1, e.g. F300001, instead of 0, like F300000?  One of our devices uses the old Modbus API from LV7 and has a register at 0.  How would that be handled in DSC?
    Thanks,
        Dave
    David Thomson Original Code Consulting
    www.originalcode.com
    National Instruments Alliance Program Member
    Certified LabVIEW Architect
    There are 10 kinds of people: those who understand binary, and those who don't.

  • Issue with use of shared variables in Crystal Reports 2008 Offline Viewer

    Hi,
    I have a report that contains a number of sub-reports which include drill-down functionality. The report returns data relating to an individual team, with the user able to view top-level summary information in each area from the parent report and then drill into the sub-reports to see more detail. The data returned by the sub-reports is filtered, using sub-report links, based on the team code parameter value given by the user. This parameter field resides in the main report.
    One of the values returned by the main report is the team name. This is passed to each sub-report using a shared variable and each sub-report displays this team name as part of a heading.
    This all works fine in Crystal Reports 2008, but when a report containing data is opened using the Crystal Reports 2008 Offline Viewer, there is a problem with the shared variable. The value is displayed correctly when the user initially drills into the sub-report. However, when the user begins to drill into grouped data within the sub-report, the value passed to the sub-report via the shared variable disappears.
    How can I ensure that, when a report is viewed using the Crystal Offline Viewer 2008, the value within the shared variable is not lost when users drill into grouped data within sub-reports?
    Thanks
    Stuart

    Please re-post if this is still an issue or purchase a case and have a dedicated support engineer work with you directly:
    http://store.businessobjects.com/store/bobjamer/DisplayProductByTypePage&parentCategoryID=&categoryID=11522300?resid=-Z5tUwoHAiwAAA8@NLgAAAAS&rests=1254701640551

  • Why should you explicitly open and close shared variable connections?

    I'm looking into switching from the old DataSocket API to the new Shared Variable API for programmatic access to shared variables, and I noticed that LV doesn't seem to have any problems executing Shared Variable Reads and Writes without first opening the connection explicitly. That is, I can just drop in a shared variable Read VI, wire a constant to the refnum input, and it will work. I'm wondering, then, what benefits are offered by explicitly opening the connection ahead of time...?
    I guess I could see some cases where you want to open all necessary connections in an initialization state of a top-level state machine, particularly if you want to use "Open & Verify Connection" so you could jump straight to an error case if any connections fail. But other than that, why else might one want to explicitly open the connections?
    And, along those lines, are there any problems with implicitly opening the connections? One reason I am hesitant to open them explicitly is that, for one of our applications, we need to be able to dynamically switch from one variable to another at runtime. It would be nice to just switch the variable refnum (wired to the input of the Read function) without having to manually close out the old connection and open a new one. A quick prototype of this seems to work. But am I shooting myself in the foot by doing so?
    Thanks in advance.

    I'd expect there are very few people at NI who would know the answer at the level of detail you're asking for.  But let's try to extrapolate from this rather old post to see if we can understand what they're forming their impression on.
    The shared variable has to have some sort of reference going on in the background.  It looks like they're calling this a connection.  It's how LabVIEW knows where to find this variable on the network.  We can also see this reference certainly exists if we're opening/closing the reference in the explicit method.  You see the connection as just a string referencing the variable by URL.  This "reference" has to be stored somewhere.  No matter how we're looking at this, we're aware there's a reference of some sort stored. 
    Now, we'd want to look at what would cause this reference to go away.  If you open/close explicitly, it's easy to see it goes away at the close.  If it's implicit, when would it make sense to close it out?  The VI can't be expected to guess where it's done being referenced and close it out.  This puts us into a situation where the soonest it could close is when the VI ends.  From my experience, references tend to be wiped when you close out LabVIEW.  It's this kind of idea that makes the FGV possible.  I wouldn't be surprised by Morgan's claim here.
    If we look at scalability, you're talking about two different topics.  You're talking about adding an extra open, close, read, etc. rather than just a few wires; that certainly would look a mess.  In terms of the dynamic swap that was being discussed, we wouldn't be adding all of those.  The concern would be that if enough connections were opened, it would start to behave like a memory leak.  This could be something that works with a smaller number of variables; if you continue to scale, it becomes problematic.  This is why they suggest it's not scalable.
    To your questions:
    Does LV allocate some kind of session in memory using that string as a lookup?
    It would HAVE to allocate some memory to hold that string.  Otherwise, it'd be pointless to even have the reference. 
    Does it reuse that session if other parts of the application reference the same variable, or does it create a unique session for each referencing call to "Read Variable.vi"?
    I would expect this to be "it depends."  With the implicit method, I would expect it to open a new reference at each point where it isn't wired.  This is similar to declaring int x = 5, y = 5; x and y hold the same value but occupy their own unique memory locations.  If you wire the reference to the other points, it should use the same reference.  This would make more sense to me than the program seeking out any other potential usage of the variable in the application to see if there's already a reference open.  I could be wrong, though.
    And what resources does this "connection" actually represent?
    At best, this is just the string.  At worst, it's the string and the TCP socket.  I'd lean towards the first.  Opening and closing sockets should be relatively easy in most applications.  But, it also wouldn't surprise me if it holds the socket.
    I'm sure others have a better understanding than I do.  But, that's what I'd expect for anything you've asked.
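    As a loose, non-authoritative analogy for the implicit-versus-explicit behavior discussed above (this is not how NI's Shared Variable API is actually implemented, just a toy model): think of a cache of connections keyed by the variable URL, where an explicit open creates and later releases an entry, while an implicit read silently creates an entry on first use that lives until the program exits.

    _connections = {}

    class Connection:
        def __init__(self, url):
            self.url = url          # in reality this might also hold a TCP socket

    def open_connection(url):
        # Explicit open: reuse the cached entry for this URL or create one.
        return _connections.setdefault(url, Connection(url))

    def read_variable(url_or_conn):
        # Implicit open: a bare URL silently creates a connection on first use.
        conn = url_or_conn if isinstance(url_or_conn, Connection) else open_connection(url_or_conn)
        # ... read from conn ...
        return conn

    def close_connection(conn):
        # Explicit close: drop the cache entry; implicit connections never hit this path.
        _connections.pop(conn.url, None)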

  • Is there a way to reset the shared variable engine?

    I would like to be able to reset the SVE in a manner that is equivalent to what occurs during a hardware reset.
    My motivation for doing so is as follows:
    I have a cRIO app with lots of IOVs and NSVs that operate via the SV API and also plenty of static nodes.  I am finding that on the first run from the DE my CPU usage is ~55%, and on all subsequent runs it is ~65%.  If I do a hardware reset, or redeploy one particular NSV library, my CPU usage returns to ~55%.  I have tried isolating the area of code that accounts for this and have found that the problem centers around my initialization of one library of NSVs, where I set their initial values via the SV API.  The NSV references are all closed after writing, without error.  So I am wondering if it is possible that the NSV references are not actually closing, or if setting the initial value has some effect on the SVE so that subsequent runs get bogged down.
    Anybody have any ideas?

    Hello,
    If you redeploy your library, does that bring your CPU usage back down?  Perhaps you can use that as a suitable workaround?  How exactly are you setting the initial value of your shared variables?
    Tejinder Gill
    National Instruments
    Applications Engineer
    Visit ni.com/gettingstarted for step-by-step help in setting up your system.

  • Are clusters or individual elements better for shared variables?

    So...  I have some RT code that is being updated, and pulled out of the Stone Ages of LabVIEW.  It was originally written for an old FieldPoint controller operating in "headless" mode, and used the "publish" and datasocket methods for communications and external control.  I had to get clever way back then, and put together a parsing/unparsing system for strings to send sets of data back and forth between the controller and any HMI or other computer attached.
    Now, I'm completely rewriting the code for a cRIO system, and doing my best to leverage all of the strengths of the latest LabVIEW versions.  I have already done an intermediate stage, where I converted from the publish/datasocket method to using network shared variables for my strings, so I could keep some of the original control and calculation logic.  Now, however, I'm going back to the drawing board for most of the program, with only some of the proven logic being held over into the new version.  And, as I'm putting together the data structures I need for both internal control and external communication, I'm in a bit of a quandary...
    I have come upon a data structure dilemma:  should I use individual shared variables for my data, or assemble associated data into clusters?  My original program had a string (essentially a flattened cluster) for each sensor in use (up to 4), one for the system parameters and states, and one for the control parameters and states.  There was a certain advantage to keeping the data compartmentalized like that, it kept things organized and forced me to avoid too many random references of each data point.  And it kept the number of communications channels limited to just a handful.  Mimicking this structure with cluster shared variables would be easy.  But, I'm not sure it's the best or most network-efficient method.
    I know the bundling/unbundling will add some processor time in my code; that is not new to me (it will still be much faster than my old parsing routines).  But if I have individual data points being thrown around, I can access them easily from things like Data Dashboard (which is great, but far too limited to be able to grab items in clusters and such).  Having all of my data points individually available would make my project messier, but would open up easier access.  It would also dramatically increase the number of data points being thrown around on the network at any one moment.  For reference, I would have a maximum of about 100 data points at one time, made up of a combination of integers, floats, booleans, integer arrays, and boolean arrays.  Or I would have a maximum of 8 clusters that would contain those data points.
    Any suggestions on which way I should lean?  Are there any advantages/disadvantages between shared clusters like the ones I need vs. the number of individual shared variables I would need using the alternative methods?  Network traffic and efficiency are always a concern, particularly since this is a "headless" cRIO in a control situation that must maintain a fast scan rate...
    Thanks for any help.  I'm so stuck on this fence, and I can't figure out which side to fall off!
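    Purely to see the data-shape tradeoff in miniature (this is not LabVIEW code, and write_item is a hypothetical publish function): bundling gives one network item whose whole payload is re-sent whenever any field changes, while individual points give many small items that clients such as Data Dashboard can subscribe to one at a time.

    from dataclasses import dataclass

    @dataclass
    class SensorCluster:
        temperature_c: float
        pressure_kpa: float
        fault: bool

    # Option A: one bundled item per subsystem; any field change republishes the whole record.
    def publish_cluster(write_item, c: SensorCluster):
        write_item("sensor1", c)

    # Option B: individual points; more items on the network, but each is addressable on its own.
    def publish_points(write_item, c: SensorCluster):
        write_item("sensor1.temperature_c", c.temperature_c)
        write_item("sensor1.pressure_kpa", c.pressure_kpa)
        write_item("sensor1.fault", c.fault)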

    Thanks Tim, that is a great source that I somehow missed in my hunt for information regarding my dilemma...
    I have to wonder, though: does that 25 number also include the I/O points on your cRIO?  Does anyone know?  Most of the I/O points are network-shared by default during initial configuration, and you could very quickly exceed 25 variables on an 8-slot rack (such as the one I use, a 9074).  Now I'm a bit worried that I'm overusing the variable engine, even before the communications clusters get figured in...

  • Print-time charting with shared variables on a report without groups...

    hello all,
    I have an interesting conundrum; I am working on a report (Crystal XI) with no groups to it, just several subreports. Presently the report is run weekly and sent to the manager of one of our departments.
    The report itself showcases all areas of the department with total tasks, longest outstanding task, average days outstanding, MTD tasks completed, and YTD tasks completed. Due to the variety of tables the data pulls from, I have it set up as subreports that simply display the data.
    In total, there are 20 different shared variables that are obtained from 10 different subreports. I have the shared variables set as the running totals from the subreports (hence, all are print-time). Since the running totals and all the subreports are time-based and are not actually linked to ANYTHING on the main report, I am struggling with how to set up the formulas described in the white paper, "Charting on Print Time Formulas".
    Setting up the two formulas ({@onchngof} and {@showval}) will be different for my situation, as all of the subreports run off of date information relative to when the report is run. I've got all of the subreports that the shared variables originate in early in the report; but where to from here?
    Would I set up maybe a date grouping or some other non-essential consistent grouping to make it work? The way this report is, it really doesn't need grouping, because the main report is just a display agent for the subreports...
    As always, any help is greatly appreciated!

    Let me ask a couple of questions before trying to resolve this.
    1. Which section is the chart you are trying to build placed in, and is it on the main report or in a sub-report of its own? This matters because if you are trying to access shared variable values in the main report, that section must be below the sections where the sub-reports are placed.
    2. Try placing a few test formulas in the main report and make sure the correct values are being pulled into the main report.
    If you are successfully pulling the variable values onto the report, try creating a simple chart in the report footer with the values you have pulled and see if that takes you somewhere...
    We would need more information on how those variables are being accessed, from sub-report to main report or from one sub-report to another; for example, are you making an array of those variables and splitting it, or bringing them in separately...

  • How do I set up shared variables between executables created in separate projects

    Hello,
    I have several separate projects, each with its own executable file, and I would like these executables to all share the same variable (one program controls the value of the variable, while the others read from it).
    I got this setup to work on my personal computer (by being able to access the Variable Manager, etc.), but I need to deploy these executables on computers that don't have the LabVIEW development environment. What steps do I need to take in order to be able to put these executables on any computer? (I'm assuming I need to set up a path for the shared variable that is always in the same folder, etc.)
    Thanks
    Vlad

    Hi Vlad,
    I think this article may answer some of your questions regarding shared variables in deployed applications.
    http://zone.ni.com/devzone/cda/tut/p/id/9900
    It sounds like you already have your executables built, but this article may answer some questions about deploying them to other machines.
    http://zone.ni.com/devzone/cda/tut/p/id/3303
    Jeff S.
    National Instruments

  • How to use shared variables to address multiple Watlow controllers on the same COM port

    Hello,
    I am trying to use LabVIEW 2010 to control 4 Watlow temperature controllers on one COM port; 3 are Model 96 and 1 is an EZ-ZONE controller. Each controller has a unique Modbus address, and I am trying to read from and write to individual registers (such as the closed-loop setpoint) using shared variables. I am getting return data when reading (although the data appears to be invalid), but I am unable to change the value in the register by writing. How can I be sure that the Modbus server is sending commands to the correct controller?
    Chuck

    Peter,
    Thanks for the reply. I have actually solved that problem. I realized that the Modbus server address has to be the same as the controller's Modbus address.
    I have, however, run into another problem; perhaps you could help me with that. I have a system with 4 Watlow controllers: 3 are Series 96 controllers (one is PID-only and 2 are ramping), and the 4th is an EZ-ZONE. I am using RS-485 for communications, and the controllers are all wired in parallel for communications and power.
    I have set up 2 Modbus servers for 2 of the controllers.
    This is the first time I have ever worked with Modbus-based communications. I have successfully programmed using the Modbus read/write VIs, and I am wanting to move to shared variables. My questions right now revolve around addressing, Modbus I/O servers, and COM ports. Specifically, at this point, I know the addresses need to match up between the server and the slave device (the Watlow controller in my case). How many servers can I create and use on one COM port? If the number is limited, is there a way I can specify an address that I want the server to talk to? Will the broadcast mode work to request data values from the controllers?
    I'd appreciate any information you can help me with, or if you could point me to some sort of concise 'How-To' for Modbus communication.
    Thanks.
    Chuck
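    On the addressing question: every controller on a shared RS-485 bus is distinguished only by its Modbus slave address, so each request carries exactly one address and only the matching controller answers. A hedged Python sketch with the minimalmodbus library, just to make the one-port/many-addresses idea concrete (the register number, decimal scaling, and serial settings are placeholders; take the real values from the Watlow register maps):

    import minimalmodbus

    PORT = "COM1"                                   # one serial port for the whole bus
    SLAVE_ADDRESSES = [1, 2, 3, 4]                  # must match each controller's Modbus address

    controllers = {addr: minimalmodbus.Instrument(PORT, addr) for addr in SLAVE_ADDRESSES}
    for inst in controllers.values():
        inst.serial.baudrate = 9600                 # placeholder; match the controllers' settings

    SETPOINT_REGISTER = 300                         # placeholder register number

    def read_setpoint(addr):
        return controllers[addr].read_register(SETPOINT_REGISTER, number_of_decimals=1)

    def write_setpoint(addr, value):
        controllers[addr].write_register(SETPOINT_REGISTER, value, number_of_decimals=1)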
