LV7.1 DSC tag engine vs. LV8.6 DSC shared variables

I'm currently running LV 7.1 with DSC and RT. To handle communications and logging of RT variables, I'm using the init/read/write publish VIs on the RT side and DataSockets on the HMI side (Windows XP). This has worked out great: new tags can be programmatically created in real time with the publish VIs, and then I go to the .scf file and use the Tag Configuration Wizard to add them to my .scf file and handle data logging. The wizard would organize all of the memory tags into folders by the block name used by the init publish VI, and I could select entire groups of tags and add hundreds at a time to the .scf file. Hardware tags worked in a similar fashion, organized into folders by controller and module.

Now, looking at LV 8.6, I found I can still use the init/read/publish VIs on the RT side, which is great. However, there is no tag configuration editor as in LV 7.1 to let me add large numbers of tags through a wizard. The closest thing I've found is to create a library to represent each block name from the RT init publish VI, then use the "create bound variables" option under the library to bind the new shared variables to the RT memory tags. I can browse to the tags on the controller by network items, but when I add them it doesn't bring in the block name of the tag as it did in 7.1, only the item name. I use a lot of PID loops that share the same tag names (i.e. P, I, D, mode, output), so not including the block name creates an organizational problem. This approach is also very labor intensive compared to the wizard in LV 7.1 DSC, especially when creating systems with thousands of RT memory tags.

There is a similar problem with hardware channels (I'm using Compact FieldPoint). To log channels via DSC, do I have to create a shared variable for each channel to access the DSC logging capabilities? Again, how do I add all of the hardware channels in some organized fashion? I hope I'm missing some tool that is an analog to the Tag Configuration Wizard that would bring in these channels and organize them.

Any help or suggestions would be appreciated. Thanks, Brad

Hi lb,
We're glad to hear you're upgrading, but because there was a fundamental change in architecture after version 7.1, some portions of your application will likely require a rewrite.
The run-time engine (RTE) needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared-variable approach used in LabVIEW 8 and later. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
You will also need to convert from tags to shared variables.  The change from tags to shared variables took place in the transition to LabVIEW 8.  The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables. 
Hope this gets you headed in the right direction.  Let us know if you have more questions.
Thanks,
Dave C.
Applications Engineer
National Instruments

Similar Messages

  • Modbus Ethernet read and write to a Eurotherm 6180XIO Modbus server using LV8.2 shared variables

    I am having EXTREME difficulty trying to establish communications with a Modbus device using LV8.2 shared variables.  The device is a Eurotherm 6180XIO Datalogger configured as a Modbus master.  The PC and a cFP-1804 are slaves.  All IP addresses are set correctly.  This approach using shared variables would seem simple, but I can't find any examples or proper guidance on how to get it working.  I am trying to avoid having to mess around with TCP/IP, OPC, or any other old-fashioned method.
    I have read many threads on related topics but none directly apply to this situation.  I have created a library containing a Modbus I/O server and shared variables bound to read and write holding registers.  I have followed all recommended tips for creating such variables, but I can neither read nor write data.  All data types are U16 due to Modbus protocol limitations.  I have also applied the LV x10 factor in the most significant digit of the register offset (6 digits instead of 5); see the sketch after this post.
    I have a cFP-1804 on the same network which reads into the datalogger OK.  The registers I use are 31000 (for CH0 on module 0, 31002 for CH1, etc.) and the data can be read as FLOAT32.  I have updated the firmware on the 1804 to the latest level.  I cannot even get shared variables to read SGL values.  Using registers 301001 for CH0 and 301002 for CH1, I can only read U16 values, and not a 2-word SGL.
    Third party Modbus simulation software is able to write to and read from registers very easily, but not LabVIEW.
    Some questions are:
    - do I use a Modbus master or slave as an I/O server in the library as a target for binding the shared variables?
    - is there some other weird translation in register offsets between LabVIEW and traditional Modbus?
    - is this actually possible using shared variables or am I wasting my time?

    Sending the whole 60-character string using a string or array would be the most efficient.  I have tried both methods, and these only cause the datalogger to flag a message log but no text is displayed.
    For a string variable, I have used the following binding "My Computer\Modbus Test.lvlib\ModbusServer6180\442305", where ModbusServer6180 is a Modbus I/O server configured with the logger IP address, and 42304 is the register offset at the start of the text block in the logger.  I need to write to 30 consecutive registers starting with this one.  I am not using buffering and have not enabled single writer.
    Can anyone confirm whether this method should work in 8.2?
    Does the string need a special termination character?
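
    A minimal sketch (plain Python, not NI-verified) of the arithmetic at issue in this post: the 5-digit-to-6-digit register address mapping, the two-U16-to-FLOAT32 decode, and packing a text block into consecutive registers. Word order for multi-register values varies by device, so treat the byte layout as an assumption to check against the Eurotherm manual:

        import struct

        def to_labview_address(traditional: int) -> int:
            """Map a traditional 5-digit Modbus address to the 6-digit form,
            e.g. input register 31001 -> 301001 (the x10 factor)."""
            kind, offset = divmod(traditional, 10000)
            return kind * 100000 + offset

        def regs_to_float32(high_word: int, low_word: int) -> float:
            """Combine two consecutive U16 registers into one FLOAT32.
            Swap the arguments if the device uses the opposite word order."""
            return struct.unpack(">f", struct.pack(">HH", high_word, low_word))[0]

        def string_to_regs(text: str, n_regs: int = 30) -> list[int]:
            """Pack an ASCII string into n_regs consecutive U16 registers
            (two characters per register, zero-padded), e.g. a 60-character
            message block written to 30 registers."""
            data = text.encode("ascii").ljust(2 * n_regs, b"\x00")
            return list(struct.unpack(f">{n_regs}H", data[:2 * n_regs]))

        print(to_labview_address(31001))  # 301001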

  • DSC "tag engine" "unable to start the citadel 5 service" "unable to start the service"

    Hi, I don't know why, but I'm not able to start the tag engine anymore. At first I suspected a corrupt .scf file, but that is not the case.
    To investigate the problem I wrote a little program; perhaps it is of interest, and perhaps someone else has had the problem and was able to find its cause. I have also included screenshots of the error messages.
    I'm working with LV 7.1.1 (but the problem was there before upgrading from 7.1 to 7.1.1). I saved the example code in previous versions of LV as well.
    Regards, Stefan
    Attachments:
    could not start tag engine.zip (1093 KB)

    Hi, I asked support for help, but they were not able to help; they suggested a new installation.
    To avoid that, I tried to reinstall (repair) only the Datalogging and Supervisory Control (DSC) Module, and afterwards I mass-compiled the whole NI directory.
    For the moment, the problem is solved. Regards, Stefan

  • Will using a multiprocessor system improve DSC Tag Engine performance?

    We are developing a multiple workstation vacuum chamber automation control application using the DSC.
    The chambers under control each have a set of process controllers (Opto22 "Ultimate Brains") running the fundamental interlock and process mechanisms via their own software. The brains are set up for communication via OPC, thus LabVIEW can monitor the IO states of the system as well as variable values in the brain software via DSC tags. In addition, LV can manipulate variables to make requests that the brain software branch to different subroutines. The other ("control") workstations in the system pass requests to the brains via the software on the monitoring workstation, so as to ensure that requests are enqueued properly.
    The problem is that, at this point, there are 1300 tags configured for the DSC, and the workstation responsible for monitoring them shows nearly 100% CPU load all the time, most of it taken by the DSC Engine. This is with only half of the final project's chambers installed and active. As a result, it sometimes takes several attempts for a control workstation to successfully pass a request to the brains via the monitoring workstation.
    We are concerned that performance will only worsen as we bring the additional chambers online.
    Would adding a second processor to the workstation improve performance? If dual processors would help, would additional processors help more?
    Note: we are examining which tags we monitor all the time and are going to try to reduce that list to those tags critical for normal operation, with an option to temporarily expand monitoring to the larger list for debugging purposes. I am concerned that even if that helps now, the problem will get worse again as we bring additional components on line. Is it the sheer number of tags defined for the DSC engine that gates the load on the engine, or the number that we are actively reading with our program?
    Thanks for any illumination you can offer.
    Kevin R
    Kevin Roche
    Advisory Engineer/Scientist
    Spintronics and Magnetoelectronics group
    IBM Research Almaden

    I have a partial answer. We've swapped in the dual processor machine and see some improvement. The processor load was still hovering around 100%, though.
    More importantly, we think we've learned something about how the DSC engine is actually working. The monitoring workstation not only runs the DSC engine to trade data with the other workstations, but an OPC server to handle transactions with the "brains". So any requests for data from the brains really are routed via the monitoring workstation.
    We had built one common tag database because we thought that would simplify programming. We did some tests today, however, and discovered that if we stop the tag engines on the control workstations, processor load drops dramatically on the monitoring workstation.
    What we've realized is that, apparently, if a read tag exists in a machine's database, the DSC fetches its value regardless of whether our LabVIEW software ever actually uses it. We deleted most of the brain tags from the control workstation databases, leaving only the LV memory tags and the few brain tags actually used by our VIs. So now the monitoring workstation is not being asked to query those 1000 tags by 3 different tag engines, only by the one that uses them.
    CPU load is down to about 73% now (because the monitoring workstation is still itself watching those 1000 tags). That's still high, but we have a better idea what is going on.
    So -- is there any way to have the DSC engine only fetch a tag value when you really need it, rather than always fetching every tag in the database?
    Kevin Roche
    Advisory Engineer/Scientist
    Spintronics and Magnetoelectronics group
    IBM Research Almaden

  • DSC TAG ENGINE doesn't run after using eval copy of LabView

    I used an evaluation copy of LabVIEW.
    I uninstalled the eval copy and bought and installed the full NI Developer Suite.
    Now LabVIEW 6i (6.0) works properly, but I'm unable to use DSC because the Tag Engine stops immediately with this message: "This evaluation copy of LabVIEW has expired. Engine will close".
    I'm using a Pentium III PC with 128 MB RAM and the Italian version of Win98.

    Hi Silvano,
    I found a similar case from another customer. National Instruments is aware of this issue, but the only workaround for now would be (proper version):
    1) Uninstall LabVIEW DSC and LabVIEW.
    2) Use regedit.exe to delete the registry key left over from the old LabVIEW eval version:
    [HKEY_LOCAL_MACHINE\Software\National Instruments\LabVIEW\lvedid]
    3) Reinstall LabVIEW and LabVIEW DSC.
    (Short version)
    1) Rename the lvedid key in
    [HKEY_LOCAL_MACHINE\Software\National Instruments\LabVIEW\lvedid] to e.g. ..\LabVIEW\1lvedid
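
    For reference, a scripted version of the proper-version cleanup (a sketch only, assuming lvedid is a subkey as the .reg-style path suggests; if it is instead a value under the LabVIEW key, use winreg.DeleteValue). Back up the registry and run with administrator rights:

        import winreg

        # Remove the leftover 'lvedid' entry from the expired LabVIEW eval install.
        PATH = r"Software\National Instruments\LabVIEW\lvedid"
        try:
            winreg.DeleteKey(winreg.HKEY_LOCAL_MACHINE, PATH)
            print("Deleted HKLM\\" + PATH)
        except FileNotFoundError:
            print("Key not found: HKLM\\" + PATH)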
    Hope this helps
    Roland

  • Migrating large project from DSC 7.1 to LabVIEW 2009 Shared Variables ... What's the next step after recreating my variables?

    I am in the process of migrating a large distributed (multi-workstation) automation system from the LabVIEW 7.1.1 DSC Engine on Windows XP to the LabVIEW 2009 Shared Variable Engine on Windows 7.
    I have about 600 tags which represent data or IO states in a series of Opto22 instruments, accessible via their OptoOPCServer. There are another 150 memory tags which are used so the multiple workstations can trade requests and status information to coordinate motion and process sequencing.  Only one workstation may be allowed to run the Opto22 server, because otherwise the Opto22 instruments are overwhelmed by the multiple communications requests; for simplicity, I'll refer to that workstation as the Opto22 gateway.
    The LabVIEW 2009 migration tool was unable to properly migrate the Opto22 tags, but with some help from NI support (thank you, Jared!) and many days of pointing and clicking, I have successfully created a bound shared-variable library connecting to all the necessary data and IO.  I've also created shared variables corresponding to the memory tags. All the variables have been deployed.
    So far, so good. After much fighting with Windows 7 network location settings,  I can open the Distributed System Manager on a second W7/LV2009 machine (I'll refer to it as the "remote" machine henceforth) and see the processes and all those variables on the Opto22 gateway workstation. I've also created a few variables on the remote workstation and confirmed that I can see them from the gateway workstation.
    Now I need to be able to use (both read and write) the variables in VIs running on the remote workstation machine. (And by extension, on more remote workstations as I do the upgrade/migration).
    I have succeeded in reading and writing them by creating a tag reader pointed at the URL for the process on the Opto22 gateway. I can see a way I could replace the old DSC tag reads and writes in my applications using this technique, but is this the right way to do this? Is this actually using the Shared Variable Engine, or is it actually using the DataSocket? I know for a fact that attempting to manipulate ~800 items via Datasocket will bog down the systems.
    I had the impression that I should be able to create shared variables in my project on the remote workstation that link to those on the Opto22 gateway workstation. When, however, I try to browse to find the processes on that workstation, I get an error saying that isn't possible.
    Am I on the right track with the tag reader? If not, is there some basic step I'm missing in trying to access the shared variables I created on the gateway workstation?
    Any advice will be greatly appreciated.
    Kevin
    Kevin Roche
    Advisory Engineer/Scientist
    Spintronics and Magnetoelectronics group
    IBM Research Almaden

    I have found the answer to part of my question: a relatively easy way to create a "remote" library of shared variables that connects to the master library on my gateway workstation.
    1. Export the variables from the master library as a CSV file and copy that file to the remote machine.
    2. Open the file on the remote machine (in Excel or the spreadsheet app of your choice) and, for safety's sake, immediately save it with a name marking it as the remote version.
    3. Find the network path column (it was "U" in my file).
    4. Replace the path for each variable (which will be either a long file path or a blank, depending on the kind of variable) with \\machine\'process name'\variable name, where machine is the name or IP address of the master (gateway) workstation (I used the IP address to make sure it uses my dedicated automation Ethernet network rather than our building-wide network) and process name is the name of the process with the deployed variables, visible in the Distributed System Manager on the gateway machine. NOTE the single quotes around the process name; they are required. The variable name is in the first ("A") column, so in Excel I could do this for line 2 with the formula =CONCATENATE("\\machine\'process name'\",A2). Once the formula worked on line 2, I could copy it into all the other lines.
    5. Save the CSV file.
    6. Import the CSV into the remote library to create the variables. Note: at this point, if you attempt to deploy the variables, it will fail; the aliases are not quite set properly yet.
    7. Open the properties for the first imported variable. There is probably an error message at the bottom saying the alias is invalid. In the alias section, you'll see it is set to "Project Variable" with the network path from step 4. Change the setting to "PSP URL" with the same path, and the error message should disappear.
    8. Close the properties box, save the library, and then export the variables to a new CSV file.
    9. Open the new CSV file in Excel and scroll sideways to the Network:ProjectBound field. You'll notice it is FALSE for the first variable and TRUE for the rest. Set the field to FALSE on all lines in the spreadsheet.
    10. Scroll sideways further; you'll notice there are two new columns between Network:ProjectPath and Network:UseBinding. The first is Network:SingleWriter; it should already be FALSE on all lines. The second is Network:URL, which needs to be set equal to the value of Network:ProjectPath on each line. You can do this with a formula as in step 4; in Excel it was =U2 for line 2, then cut and paste into all lines below it. There is a third new field, Path, which should already be set to the location of the variable library; you don't need to do anything with it.
    11. Save the edited CSV file.
    12. Go back to the remote library and import the variables from the just-edited CSV file. Once you have imported them and the Multiple Variable Editor opens, click Done.
    13. You should now be able to deploy the remote variable library without error. (Make sure to open the Distributed System Manager and start the local variable engine; it took me a few failures before I realized I had to do that before attempting a deployment.)
    Voila! You now have a "remote" library of shared variables that references all the shared variables on the master machine, and which should be deployable on other machines with very little difficulty. A scripted sketch of the CSV edits follows this post.
    It actually took longer to write out the process here than to perform these steps once I figured it out.
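
    A minimal Python sketch of steps 1-5, assuming the exported CSV uses the column headers named above (check your own export; the exact header set varies by LabVIEW version). The gateway address, process name, and file names are placeholders:

        import csv

        GATEWAY = "10.0.0.5"     # placeholder: gateway workstation name or IP
        PROCESS = "MasterLib"    # placeholder: process name from the Distributed System Manager

        with open("master_export.csv", newline="") as src, \
             open("remote_library.csv", "w", newline="") as dst:
            reader = csv.reader(src)
            writer = csv.writer(dst)
            header = next(reader)
            writer.writerow(header)
            path_col = header.index("Network:ProjectPath")  # column "U" in the post
            for row in reader:
                name = row[0]    # variable name is in the first ("A") column
                # Single quotes around the process name are required (step 4).
                row[path_col] = rf"\\{GATEWAY}\'{PROCESS}'\{name}"
                writer.writerow(row)

        # A second pass over the re-exported file (steps 9 and 10) would set
        # Network:ProjectBound to FALSE on every line and copy each line's
        # Network:ProjectPath value into the new Network:URL column.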
    Kevin Roche
    Advisory Engineer/Scientist
    Spintronics and Magnetoelectronics group
    IBM Research Almaden

  • DSC - Event triggering for Single Process Shared Variables

    Hello,
    I understand how to set up a Value Change Notification for Network-Published Shared Variables so that an event triggers when a particular Shared Variable changes. However, I can't figure out how to do the same for Single-Process Shared Variables. Is this even possible? Can someone shed some light on this, please?
    Thanks in advance.
    - James Pham

    VRspace4,
    Hello! It is not possible to enable alarming for Single-Process Shared Variables. A workaround to set up a Value Change Notification would be to create a network-published shared variable that reads from your Single-Process Variable, but at that point it might be worth simply replacing your variable with a network-published shared variable.
    Ben Sisney
    FlexRIO V&V Engineer
    National Instruments

  • DSC VI won't connect to shared variable outside of project

    My main VI will start up and connect to the shared variables if I first have my project open.  It will not connect to them if I try to run it by itself.  I am deploying the library programmatically, but that doesn't seem to help.
    Also, I can't get the main VI to run on startup.
    Thanks

    Hi Brian,
    Thank you for the additional information. Let's put aside the run-when-opened issue and try to work on having your variables deploy and connect successfully.
    When you run the VI within the project, are the variables successfully deployed as well?
    Where are you seeing that the path to the library is blank? Where do you see that the data binding path for the variable in the working VI gets changed?
    It would be quite useful for you to post a small example project, with one library, and we can tackle these questions individually, as with so many configurations (in project, not in project,  running when opened, deploying successfully), I am having a little trouble keeping them straight.
    Trying my best to help,
    -Sam F, DAQ Marketing Manager
    Learn about measuring temperature
    Learn how to take voltage measurements
    Learn how to measure current

  • Shared variable engine memory problem

    Hello,
    I have a lot of problems with DSC 8. The server that worked very well in DSC 7 is creating a lot of problems in DSC 8, starting with memory leaks in the tagsrv process. I have over 5000 shared variables, and when I publish them tagsrv uses around 90 MB of memory. After approx. 30 min it is around 130 MB, and so on until the computer becomes useless. If I restart the computer, the pattern repeats, even if I don't start the VI that updates the shared variables. I tried unchecking different things in the variable definitions (no alarming, no buffering, no scaling) with no improvement.
    Is there a way to undeploy the shared variables or processes manually, not from the LabVIEW project or the Variable Manager? Undeploying the mentioned library with the Variable Manager takes more than an hour (just to enumerate all the shared variables).
    Any suggestion is welcomed,
    cosmin

    cosmin,
    I have several thoughts for you. First, I recommend splitting your 5000-variable library into a set of smaller libraries. In contrast with the LabVIEW DSC 7 engine, the LabVIEW 8 Shared Variable Engine is optimized for multiple smaller libraries. I would recommend having at most about 1000 variables per library. You'll see some different recommendations from different people at NI, but in my personal experience 1000 is about as many shared variables as can be easily managed at once. For your case this would mean 1000 variables in each of 5 libraries. Splitting up your library gives you the added benefit of being able to undeploy/disable individual libraries without affecting the others. Unfortunately, the SCF migration tool will not do this for you, so it may take a little time to get everything organized manually. A scripted sketch of the split follows below.
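
    A minimal sketch of that reorganization, assuming the variable list can be exported to and re-imported from CSV through the variable editor (an assumption based on that workflow; file names are placeholders):

        import csv
        from itertools import islice

        CHUNK = 1000  # suggested upper bound on variables per library

        with open("big_library_export.csv", newline="") as src:
            reader = csv.reader(src)
            header = next(reader)
            part = 0
            while True:
                rows = list(islice(reader, CHUNK))
                if not rows:
                    break
                part += 1
                # Import each part file into its own new library.
                with open(f"library_part{part}.csv", "w", newline="") as dst:
                    writer = csv.writer(dst)
                    writer.writerow(header)
                    writer.writerows(rows)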
    Your comment about ever-increasing memory usage is concerning. Do you see this while deploying your library, or while writing values? If you write values very fast (as in an untimed loop) for an extended period of time, memory usage will increase as the variable buffers grow. Please elaborate more on this topic and let me know what happens if you separate your variables into multiple libraries. I was not able to reproduce this on a similarly spec'd computer using 10 libraries of 500 variables each.
    You can programmatically undeploy libraries using the delete process VI on the DSC Engine Control palette in LabVIEW. Use this VI in combination with the get process list VI to quickly remove all currently deployed libraries.
    Also, regarding your question about why variables sometimes show up in the Published Variable Monitor window: the different utilities you described may use different methods to get that variable list, and these methods behave differently depending on how stressed the Shared Variable Engine is. If you're in the middle of loading a library with 5000 variables, I'm not surprised that the list doesn't get populated immediately in the Published Variable Monitor window. After the library successfully loads, though, you should be able to refresh the view and see all the variables.
    Regards,
    Nick F
    LabVIEW R&D

  • Shared Variable Properties and DSC

    Is there a way to assign engineering units to a shared variable as a configuration parameter? This should be on the "Scaling" page of the shared variable properties; it seems a logical and convenient place to track units. Assigning units programmatically using the Scaling:Units property is awkward (to say the least).
    Along similar lines, why aren't shared variable properties automatically saved to the DSC historical database? Every trace should carry a set of information exposing ALL the shared variable properties that created it. Take something like engineering units, for example: then you would know what kind of historical trace you are looking at! This seems so basic I can't imagine why it was missed.
    Unless I missed something -- please enlighten me.
    Regards,
    David Moerman
    TruView Technology Integration Ltd.

    Hello David,
    You are correct that the units property is not exposed on the Scaling page of the Shared Variable properties dialog, and it sounds like you are already well aware of the existing method to access this property through the property node interface. As you have also discovered, the Citadel database that we use with DSC does not have a built-in provision for storing metadata about the shared variable from which a trace originates. You can emulate this with, for example, an array of strings which is also logged to the database, with each string containing the metadata for a particular trace.
    Since it seems these features would benefit you, I encourage you to let our DSC development group know your needs by filling out product feedback, accessible on our site at http://www.ni.com/contact . This will send a feature request directly to the appropriate R&D group, which reads and evaluates every suggestion made. This is your most direct way to let us know what features would best meet your needs.
    Cheers,
    Matt Pollock
    National Instruments

  • What's the meaning of "bad status" in the DSC alarm config for a Shared Variable?

    As the title says.
    The book "LabVIEW 8.20 Programming from Beginner to Expert" (LabVIEW 8.20程序设计从入门到精通) has been published; check it out!
    Visit the LabVIEW learning site: http://labviewstudy.blog.edu.cn

    Can I read the quality of an OPC tag that is bound to a Shared Variable by using this "status"?
    The book "LabVIEW 8.20 Programming from Beginner to Expert" (LabVIEW 8.20程序设计从入门到精通) has been published; check it out!
    Visit the LabVIEW learning site: http://labviewstudy.blog.edu.cn

  • Shared Variable Engine Crashing

    I'm using LabVIEW 8.6.1 and the DSC module 8.6.1.  The Shared Variable Engine just started crashing as soon as it tries to start.  I know you are going to say "reinstall LabVIEW", but it's in a remote location and the install disks are not there.  Is there anything else that can be done to fix this?  I've rebooted and restarted everything I can think of.

    OK, bear with me; a lot has happened here, and thank you for the help. The error that I was experiencing occurred any time Windows started. The first thing that would pop up would be an error message saying that the Shared Variable Engine had died. This error occurred before I even thought about opening LabVIEW, so I am sure the Shared Variable Engine was the culprit. It was the general Windows error message with the option to "send" or "don't send". If I then started LabVIEW and tried deploying my variables by right-clicking my library and pressing "Deploy All", LabVIEW would pop up the normal box that shows the status of variable deployment. It would almost immediately fail deployment, and a few seconds after that I would receive the same error I got when I started Windows (Shared Variable Engine failed). I went to the services in Windows and tried restarting the Shared Variable Engine from there. I would start it and then a few seconds later it would stop and give me the same error. I tried doing the same thing from the Distributed System Manager. Same result. Same error.
    I found a section on this site about people having problems with C++ runtime errors that caused basically the same thing: their Shared Variable Engine would fail. The advice I found there led me to the Distributed System Manager. It said to right-click on the libraries and remove all the processes ("end process"). I had two libraries there: one that I was using and one that I wasn't. I was able to end the process for the one I wasn't using but not for the one I was using for my current project. I removed that process, restarted LabVIEW, deployed my variables, and got the same error. No luck. Until I restarted LabVIEW again and deployed my variables again, and it magically healed itself. Maybe I didn't wait long enough for the process to stop; I'm not sure. My variables deployed, I opened up my VI, and everything was working great. I am assuming that by removing that unwanted process I found my fix, but I guess I can't be sure. That process, which was just a library I had created for another project and was no longer being used, had been there for a long time (6 months), and I was having no issues to speak of. So I guess my ultimate question is: what the heck happened that caused all this? I'm sure it's bad form to have a process running that isn't needed; that's my fault and I will certainly be more aware of that in the future. I just wish I knew what caused my problems so that they won't happen again.
    Any thoughts? 

  • Modbus I/O server & Shared Variable Engine

    I'm using LabVIEW for a school project and would like to ask a question. Currently I'm using LabVIEW to read data from a smart power meter, mainly with a Modbus I/O server (TCP/IP mode) and the Shared Variable Engine.
    Most of the material online, however, only teaches how to create an I/O server and configure shared variables. I'd like to know the theory behind them.
    For example: how does LabVIEW process Modbus TCP/IP packets and extract the desired data?
    Thanks for your advice.

    I suggest searching Google with the keywords: labview shared variable concept
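
    To make the packet handling concrete, here is an illustrative sketch (plain Python, not LabVIEW's actual implementation) of how a Modbus TCP response for "read holding registers" (function 0x03) is laid out and decoded. An I/O server does the equivalent of this parsing and then publishes the register values through the Shared Variable Engine:

        import struct

        def parse_read_holding_response(frame: bytes) -> list[int]:
            """Decode a Modbus TCP 'read holding registers' response."""
            # MBAP header (7 bytes): transaction id, protocol id, length, unit id
            txn_id, proto_id, length, unit_id = struct.unpack(">HHHB", frame[:7])
            # PDU: function code, byte count, then big-endian 16-bit registers
            func_code, byte_count = struct.unpack(">BB", frame[7:9])
            if func_code != 0x03:
                raise ValueError(f"unexpected function code: {func_code:#04x}")
            data = frame[9:9 + byte_count]
            return list(struct.unpack(f">{byte_count // 2}H", data))

        # Example: a response carrying one register (value 0x1234) from unit 1
        frame = bytes.fromhex("0001" "0000" "0005" "01" "03" "02" "1234")
        print(parse_read_holding_response(frame))  # [4660]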

  • Front Panel binding of shared variables very slow initialization / start

    Hello @ all,
    I am using a server running Windows 2000 and the LV 8 DSC RTS for datalogging. All shared variables are deployed on that server.
    I am now facing the problem that all front panels running on the clients, using the network shared variables on the server, take very long to sync on startup. First the flags on the controls bound to the shared variables turn red; after up to ten minutes they start to turn green. The panels use up to 40 controls bound to the shared variables.
    All firewalls are turned off. I tried connecting the client to the same switch the server is connected to. Same problem. Does anybody have a clue?
    Thanks for your quick answers.
    Carsten

    While I can't offer any solution to your problem, I am having a similar issue running LV 8.0 and shared variables on my block diagram (no DSC installed).
    When using network-published shared variables, it takes anywhere from 30 seconds to 4 minutes from the VI start for any updates to be seen. Given enough time, they will all update normally, but this 4-minute time lag is somewhat troublesome.
    I have confirmed the issue to be present when running the Shared Variable Engine on Windows and RT platforms, with exactly the same results.
    In my case, the worst offenders are a couple of double-precision arrays (4 elements each). They normally exhibit similar "spurty" behavior on startup, and eventually work their way up to continuous and normal update rates. Interestingly enough, there are no errors generated by the shared variables on the block diagram.

  • Upgrade to 8.5.1 on all computers using shared variables?

    Hello,
    We are currently using DSC 8.5 and hosting the shared variable engine on one 'server' computer with other computers subscribing to network shared variables.  Some of the computers are Windows XP and others are RT.
    When we upgrade to DSC 8.5.1, can we just upgrade the 'server' computer and leave all other computers at 8.5, or do we need to upgrade all systems to 8.5.1 at the same time?  Will computers running 8.5 be able to read/write network shared variables hosted on the DSC 8.5.1 system?  Is such a progressive upgrade not advised?  Thank you in advance.
       - Chris White
         ThinkG Consulting LLC

    Hi Chris,
    The variables pass through the Shared Variable Engine on your server before they are read by other subscribers; therefore, you do not have to upgrade DSC on all your subscribers. I hope that answers your question. For further reference on how network-published shared variables operate, as well as the role of the SVE, please refer to this link.
    Ipshita C.
    National Instruments
    Applications Engineer
