Programmatically change cRIO scan engine IOV scaling

Has anyone created any VIs that can do this?  I have located the configuration XML files on the cRIO that specify the scaling parameters, and the scaling can be changed by parsing and modifying the XML data, but it would be very nice to have an NI-approved tool or API to handle this.  In the real world, a system needs the ability to change the scaling of transducers.  For example, when a pressure transducer goes out for calibration, you need a way to easily change the scaling without having to rebuild the rtexe.

Hi sachsm,
I would recommend checking out this example.  If you dig in, you'll see property nodes and methods for creating your own variables programmatically.  It'll also give clues about what parameters we can change.
https://decibel.ni.com/content/docs/DOC-5929
Regards,
Che T.
Applications Engineer
National Instruments

Similar Messages

  • Programmatically modify scan engine scaling

    I need to be able to programmatically adjust the scaling factors for my cRIO analog channel.  I manage all
    my I/O from an external spreadsheet that gets compiled and ftp'd to 9
    cRIO targets.  Each cRIO will have different scaling factors.  I
    understand that this is a feature that will be forthcoming but I need
    to be able to do this within the next 2 months.  I already have in
    place a secondary scan engine of my own design that replicates all the
    I/O and includes such niceties as scaling, filtering,  zero offset and
    deadband updating.  Recently I noticed a file on the cRIO called
    'variables.xml' which quite clearly contains all of the I/O variable
    properties.  I am planning on writing a utility VI that can modify this
    XML file.  I believe I would have to call this VI from my cRIO code and
    then reboot to have it re-digested into the LOGOS(?) server or scan
    engine.  I understand that the development engineers are loath to support
    this type of activity, and I also would not want this permanently
    in my code, but only as a short-term solution until the proper API is
    released.  If anyone has a better idea I would love to hear about it.

    sachsm,
    While I definitely don't promote doing what you suggested, I tried it out and it looks like it should theoretically work after a reboot.  I only did this with a plain VI run from LabVIEW.  The XML file is generated when the scan engine variables are first deployed, and it is updated on an input-by-input basis based on what was edited in the project explorer.  If the XML file is edited and the device is rebooted (but the project not changed), then when you run the VI the 'updated' scaling will be present.  Once you edit the particular I/O in the project, that will override the manual edit; those scaling changes go into effect immediately and the device doesn't need to be rebooted.  Good idea, indeed; I'm not sure what the implications of this may be, though.  Definitely try this at your own risk.
    Regards,
    Jared Boothe
    Staff Hardware Engineer
    National Instruments
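    For anyone experimenting with the XML-editing workaround discussed in this thread, a minimal sketch in Python (run off-target against a copy of the file pulled down via FTP) might look like the following. The actual variables.xml schema is undocumented, so the "Variable"/"Scaling" tag names and "slope"/"offset" attributes below are placeholders only; inspect your own file and adjust the element paths to match.

```python
import xml.etree.ElementTree as ET

# NOTE: hypothetical schema. The real variables.xml layout on the cRIO is
# not documented; the tag and attribute names here are stand-ins that must
# be adjusted to whatever your own file actually contains.
def set_variable_scaling(xml_text, variable_name, slope, offset):
    """Return a copy of the XML with new slope/offset for one I/O variable."""
    root = ET.fromstring(xml_text)
    for var in root.iter("Variable"):
        if var.get("name") == variable_name:
            scaling = var.find("Scaling")
            if scaling is not None:
                scaling.set("slope", repr(slope))
                scaling.set("offset", repr(offset))
    return ET.tostring(root, encoding="unicode")
```

    As Jared notes above, the target still has to be rebooted for an edit like this to be picked up, and a later edit in the project explorer will overwrite it.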

  • How to prevent cRIO Scan Engine Warning 66030

    I find that when my cRIO CPU spikes briefly, I can trigger something bad in the scan engine: it will start spewing 66030 warnings and go into a Fault state.
    The only recourse at that point is to reset the cRIO, even though the CPU usage has returned to normal.  Is there any way to make the scan engine
    a little more forgiving of CPU spikes?

    sachsm:
    Is that 66030 or -66030?
    66030: This I/O variable is referencing an I/O device or channel that is not active in the current I/O mode.  Data read may be stale or invalid and data written may not be sent to the output.
    -66030: The operation cannot be completed because one of the scanned I/O buses is not in the required I/O mode.
    I'm assuming it's the non-negative one, but I just want to be sure.
    If so, I think your best bet is to clear that specific warning code and/or use the programmatic Scan Engine configuration and fault handling VIs to correct the Scan Engine.
    (The VIs are located in "Measurement I/O --> NI Scan Engine").
    Hope that helps!
    Caleb Harris
    National Instruments | Mechanical Engineer | http://www.ni.com/support

  • Can I programmatically change a virtual channel's scaling?

    I want to be able to read and write the scaling parameters for a virtual channel. Using NIDAQ 6.8 with MAX 2.0 and LV 6.0.2.
    Thanks

    This question actually belongs in one of the Measurement Devices categories. In the future, please post to the appropriate forum. You will find similar questions and get exposure to others with similar interests when you post directly into one of those categories.
    In answer to your question, you can read the virtual channel properties, but you cannot programmatically change them. If there were a finite number of changes, you could have multiple config files that contained the same virtual channel names. Then, you could programmatically switch which config file MAX was using.
    Alternatively, you can apply the scaling in your program if you read the raw value through the virtual channel (no scale) and then pass it into a scaling function.
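    The software-scaling workaround described above (read the raw, unscaled value and apply the scaling yourself) amounts to a simple linear map. A sketch, where the transducer range used is just an illustrative example:

```python
def scale_reading(raw, slope, intercept):
    """Apply linear scaling y = slope * x + intercept to a raw reading."""
    return slope * raw + intercept

# Example: a 0-5 V pressure transducer spanning 0-100 psi
volts = 2.5
psi = scale_reading(volts, slope=100.0 / 5.0, intercept=0.0)  # 50.0 psi
```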
    You can find examples for saving virtual channel properties to a text file and for changing config files in the NI Developer Zone. Start at the http://www.ni.com/support page and choose Example Programs from the Technical Resources pulldown menu. Then, type in keywords, such as "virtual channel" or "change NI-DAQ configuration".
    Regards,
    Geneva L.
    Applications Engineer
    National Instruments
    http://www.ni.com/support

  • Does the cRIO scan engine support TEDS

    I have a cRIO chassis with 9237 bridge amplifier modules and would like to use the TEDS feature of my transducers.  I know the module itself supports TEDS, but is this functionality incorporated into
    the new scan engine?
    Thanks

    Hi Sachsm,
    Currently, there is no support for TEDS with the CompactRIO Scan Interface.  However, you should be able to write an application that gets at the TEDS information you want by compiling two bitfiles: one that communicates with the modules using the TEDS interface supported by the LabVIEW FPGA Host Interface functions, and another that communicates with the modules through the Scan Interface.
    After the TEDS information is read with the first bitfile, you can use it to calibrate the data from the sensor.
    Let me know if you would like me to explain a little bit further or if you think the solution would work for you.
    Thanks,
    Basset Hound

  • Does the cRIO Scan Engine Support Offset and Shunt Calibration on the 9237 module

    Thanks.

    Sachsm,
    The documentation for the Set Shunt Calibration (Scan Interface).vi should have been in the VI help but it looks like it didn't make it so I'll make sure it gets updated.  I'll double check my understanding tomorrow morning but the documentation should be the following for the inputs of this VI:
    slot -  Specifies the chassis slot of the strain module (9237, 9235, 9236 ). Valid values are 1 through N, where N is the number of slots in the chassis.
    channel - Specifies the channel to be affected.  Valid values are 0-N where N is the channel for shunt calibration.
    value - Turns Shunt calibration on or off. Valid values are 0 or 1 where a value of 1 will turn shunt calibration on for the given slot and channel.
    Now, for your original question about offsets on the AI inputs, here are my additional thoughts:
    Unlike our SCXI products, the NI 9237 does not have internal hardware-nulling circuitry, because its input range is wide enough that the inputs will not saturate even with a very large initial bridge offset.  Since the 9237 has no hardware nulling circuitry, you have to perform offset nulling with software compensation and save the value for later use in the application, as you mentioned.  For example, when an offset null is performed in DAQmx, the entire offset is stored in the Initial Bridge Voltage property, in software.  You should see the same offset whether you use the module from DAQmx, the FPGA Interface, or the Scan Interface.  With that said, I believe what you're seeing is really just the resting measurement of the module and transducer together, and you should be able to zero out your measurement by performing software compensation.
    Regarding offset calibration, I think it's important to make clear exactly what it provides, as the name is a little misleading.  The Offset Cal feature of the 9237 is really more like an auto-zero used to eliminate offsets generated by an amplifier stage, and it does not behave like offset nulling.  In auto-zero mode, the 9237 shorts the input channel to ground and subtracts the obtained measurement from all subsequent samples.  Performing an auto-zero is a quick calibration technique that takes place at the beginning of a scan.  A typical use of the offset calibration feature is to set it true at the beginning of an acquisition and then leave it on indefinitely.
    Personally, I do not bother enabling offset cal in my FPGA applications because the measurement of the sensor at rest takes into account all sensor and module offsets.
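    The software-compensation approach described above (record the resting measurement, then subtract it from later readings) can be sketched as follows; the sample values are purely hypothetical:

```python
import statistics

def measure_offset(resting_samples):
    """Average a set of at-rest readings to estimate the bridge offset."""
    return statistics.fmean(resting_samples)

def apply_offset_null(samples, offset):
    """Subtract the stored offset from subsequent measurements."""
    return [s - offset for s in samples]

offset = measure_offset([0.5, 0.5, 0.5])        # resting bridge reading
nulled = apply_offset_null([2.0, 3.5], offset)  # [1.5, 3.0]
```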
    And finally, if you're looking for some information on how to perform shunt calibration, I like to refer to the example VI located here:
    C:\Program Files\National Instruments\LabVIEW 8.6\examples\CompactRIO\Module Specific\NI 9235\NI 9235 Getting Started
    It walks through the programming steps of setting up shunt cal and applying the gain factors.  It's written for LabVIEW FPGA, but it should be easy to adapt to the Scan Interface.
    Hope that helps a little bit,
    Let me know if you have questions and I'll help where I can or bug the right engineer.
    Basset Hound

  • 9236 enable shunt cal property crashes crio OS with scan engine

    I would like to inform users of the 9236 Quarter Bridge Strain Gauge Module of a bug. The Real-Time team is aware of this issue and I have been working with an app engineer on it for about a week and he has confirmed my findings. He says the problem is most likely a driver issue or scan engine issue and they are investigating it.
    There is a bug that completely crashes the cRIO operating system with the 9236 module. Currently, when a cRIO device is loaded with LV2009 SP1 or LV2010 and is using the scan engine interface, attempting to write the "shunt cal enable" property of a 9236 module completely crashes the VxWorks OS. If you try to access the property again, the cRIO crashes and enters an error state in which it has rebooted twice without being told to. The LED indicator blinks four times to indicate this.
    I have tested this with a few different hardware combinations, using both the cRIO-9014 and cRIO-9012 controllers combined with either the 8-slot 9104 backplane or the 4-slot 9102 backplane, with the same results. The app engineer was able to duplicate this issue as well.
    To duplicate the bug:
    Load a cRIO device with 2009 SP1 or LV2010 and configure the device for the scan engine. Locate the 9236 module in the project explorer and drag and drop it into a VI to create the I/O variable. Right-click the output terminal of the I/O variable and go to Create > Property > Enable Shunt Calibration on a Channel. Once the property is dropped, right-click on it and select Change to Write so that you can write a Boolean to the variable. It doesn't matter whether you write a true or a false; the results are the same. Hit run, watch it crash. When it reboots, hit run again. The cRIO is now in a state that can only be recovered by physically resetting the device.
    I love the cRIO stuff and I use it every day because it is very reliable and robust. This kind of thing is rare, which is why I am reporting it to the community as well as NI, as it is a pretty big bug that took me a while to narrow down.
    [will work for kudos]

    rex1030,
    Shunt calibration can be accessed using a property node.  The operation will be the same, you still need to acquire two sets of data and compute the scalar.
    You can obtain the necessary reference by dragging the module to the block diagram from the project.  Please see http://zone.ni.com/devzone/cda/tut/p/id/9351 for more information on programmatic configuration.
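    The two-measurement correction Sebastian mentions (comparing the reading taken with the shunt engaged against the value the shunt resistor should theoretically simulate) reduces to computing a gain scalar. A hedged sketch, with hypothetical values:

```python
def shunt_cal_gain(simulated_value, measured_value):
    """Gain correction factor: theoretical shunt response / measured response."""
    return simulated_value / measured_value

gain = shunt_cal_gain(simulated_value=100.0, measured_value=98.0)
corrected = 49.0 * gain  # apply the gain to subsequent strain readings
```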
    Let me know if you have any other questions.
    Sebastian

  • CRIO 9068 + Scan Engine Support Error - Even though it is installed on cRIO, after deploying error says its missing

    All,
    I have a cRIO-9068 that I am trying to use scan mode on. I have installed all the latest drivers and software as instructed. However, when I set my chassis to scan mode and then select Deploy All, I receive this error on my chassis and all of my modules:
    "The current module settings require NI Scan Engine support on the controller. You can use Measurement & Automation Explorer (MAX) to install a recommended software set of NI-RIO with NI Scan Engine support on the controller. If LabVIEW FPGA is installed, you can use this module with LabVIEW FPGA by adding an FPGA Target Item under the chassis, and drag and drop the module onto the FPGA Target Item."
    Has anyone experienced this, or does anyone know why LabVIEW won't recognize that the software is installed on my cRIO? Or is it not being installed correctly?

    So I found that the target's Scan Engine was in Configuration mode.  After placing it in Active, I was able to deploy all of the modules on the cRIO target.  However, now I cannot deploy any of my modules on my EtherCAT NI 9144 racks regardless of which scan mode I place the Scan Engine.  I verified that I have all software on the cRIO target I need.  I was also able to find the EtherCAT slaves and their modules on MAX and I was able to add them to the project.  The problem I am having is being able to deploy them.  I have gone through the procedure outlined in the manual provided with the EtherCAT racks and the following link with no issues:  http://www.ni.com/white-paper/10555/en/
    The top LEDs on the EtherCAT racks are solid yellow and the bottom are solid green.  Does anyone know why I may be having this problem?

  • Programmatically remove ethercat slaves from scan engine

    I am using an RT Target as the master of an EtherCAT chain.  It has several different sorts of third-party slaves in the chain.  All are correctly detected.  However, some are buggy (they're in development) and prevent the Scan Engine from switching to Active mode.  If I remove the buggy slaves from under the EtherCAT Master in the project (but leave them plugged into the chain), then the Scan Engine will switch to Active and the remaining targets are interrogated correctly.
    However, I want my RT app to be standalone and adapt to different chains of slaves. When I invoke the Refresh Modules method to find all the available slaves, it finds the buggy ones too, and these then prevent the Scan Engine from switching to Active mode.  I have not been able to find a way to programmatically remove these buggy slaves as I can from the project. Does such an option exist?
    Many thanks!

    I am not sure if you will be able to do this but I would suggest posting this question to the Industrial Communications board.
    http://forums.ni.com/t5/Industrial-Communications/bd-p/260
    If this is something that you are able to do, the people that frequent that board are much more likely to know about it.
    Matt J
    Professional Googler and Kudo Addict
    National Instruments

  • Can you still use soft motion without the scan engine on crio?

    Where are the trajectory generator property and invoke nodes in SoftMotion for LV 2011?
    These functions are no longer found in the palette.  Are they no longer supported?
    All the new SoftMotion examples use the scan engine.  Can I use SoftMotion without the scan engine?
    Steve
    SteveA
    CLD
    FPGA/RT/PDA/TP/DSC

    Hi Ian,
    I apologize that this wasn't stated in the release notes. While your code should have upgraded without breaking, the release documentation should have mentioned that the advanced trajectory generator functionality is no longer supported. If you still want to use the trajectory generator functions, they are still shipped with SoftMotion; they are just not on the palette. I have attached a zip file with 4 .mnu files that will get them on the palette. To install these, do the following:
    Close LabVIEW.
    Make a copy of the following directory: C:\Program Files (x86)\National Instruments\LabVIEW 2011\menus\Categories\VisionMotion\_nism. In case something goes wrong, we will want a copy of this directory so that we can restore it.
    Copy the 4 .mnu files from the attachment into the above _nism directory (the original, not the copy). If it asks you to replace any of the existing files, please do. You don't have to copy the readonly.txt file.
    Start LabVIEW. You should now have the trajectory generator functions in your SoftMotion Advanced palette.
    Keep in mind that we no longer support these functions. This means that they are not tested in new releases and any bugs will likely not get fixed.
    I would recommend that you use the new API for any new designs. You can still get most of the functionality of the old API but without the complexity. If you want to generate setpoints at 5ms then you will run the scan engine at 5ms. This is certainly easier than having to do the timing yourself, but it does take away some control from the user. If you give me a brief overview of what you mean by synchronization, I will let you know the best way to do it with the new API.
    Thanks, 
    Paul B.
    Motion Control R&D
    Attachments:
    _nism.zip ‏4 KB

  • NI Scan Engine, cRIO 9211 and CJC

    Does anyone know if the temperature returned by the scan engine uses the CJC and zero offset channels on the 9211?
    Thanks 

    This is what the 9211 user manual says about it.
    http://digital.ni.com/manuals.nsf/websearch/A24B9BB284234EDF862571CA00684AC7 
    Heat dissipated by adjacent modules or other nearby heat sources can cause errors in thermocouple measurements by heating up the terminals so that they are at a different temperature than the cold-junction compensation sensor used to measure the cold junction.
    I'm not sure what the different ways of implementing CJC are, but is there something that can be inferred from knowing that there is an actual sensor within the module?

  • RT Scan Engine Roadmap

    Greetings,
    First off, let's take our hats off to the team at NI for creating this technology, it is a powerful new set of features for cRIO and has tremendous potential.
    I have an application that will require 9 cRIO chassis. There will be 3 identical 'sections', each with 3 cRIOs. It would be ideal if each cRIO could read a disk config file containing scaling constants for each I/O variable. This would require programmatic access to the I/O variable properties, which is not currently available. It does appear that the hooks are in place under the hood but have not been wrapped in LabVIEW yet. For the next 6 months I will only be building one section of my system, but by Q2 2009 I will need to complete all 3 sections and would like some idea of whether these types of features will be forthcoming. My wish list would also include the ability to programmatically create or rename I/O variables, and perhaps even add inline filtering to the scan engine.
    Thanks,
    Mike Sachs
    Intelligent Systems
    www.viScience.com
    Onsite at the NASA Marshall Space Flight Center

    Mike,
    This is a great idea. Please submit a formal request to our Product Suggestion Center.
    Cheers.
    | Michael K | Project Manager | LabVIEW R&D | National Instruments |

  • Shared Variables, Scan Engine & Multiple Targets

    I am seeking some general advice about the structure of my LabVIEW project.
    The project consists of a laptop running LabVIEW with a joystick connected, and a CompactRIO connected via Ethernet. I had been running the cRIO in FPGA Interface mode; however, a change in circumstances forced the project to be shifted to scan mode.
    As of now, the code on the laptop updates shared variables on the cRIO and reads from shared variables on the cRIO for monitoring. I want the shared variables hosted on the cRIO because it will also need to operate without the laptop connected. Before switching the cRIO to scan mode, I found that I had to first run the laptop code and then run the cRIO code, or the shared variables would not communicate properly. Now that I have switched to scan mode, I have to run the cRIO code first, and even then the shared variables do not communicate properly for more than a few seconds, and are much laggier.
    My ideal project solution is a system that can run with or
    without the laptop connected, and obviously not having all these shared
    variable issues. I would like to autostart the code on the cRIO, and
    have the user run the laptop code if necessary, but in the past this did
    not seem to work with the shared variables.
    I am really confused about why this is happening. Hopefully I have explained my problem well enough. I don't really want to post the entire project on here, but I can email it to people if they are willing to take a look at it. Thank you for taking the time to read this.
    I am running LabVIEW 2010 SP1 with the Real-Time, FPGA, DSC, and Robotics modules. I have the Feb '11 driver set and NI-RIO 3.6.0 installed, and everything is fully updated on my RT cRIO.

    I do this type of stuff all the time...
    Move all your NSV libraries to the cRIO.  From the project you must deploy them to the cRIO, and from then on they are persistent until you reformat.
    From your Windows HMI app, you can place static NSV tags on the block diagram or use the dynamic SV API to read/write.  You can also bind HMI controls and indicators directly to cRIO NSVs (this is what I do).  I also create a 'mirror' library in the PC HMI that is bound to the cRIO library.  This library has DSC Citadel data logging enabled and will automatically save historical traces of all my important data, which is very nice.  PC-hosted libraries can be set to auto-deploy in the app build.
    The project also has an auto-deploy option for the development environment, which I normally turn off.  If you have PC-to-cRIO binding set up, then you need to be cautious about any sort of auto-deployment, since it can force the cRIO app to stop when you deploy.  To get around this, you can use PSP binding (IP address rather than project process name) and use the DSC deploy-library VIs in your HMI app.  Once you are using the scan engine, you can use the DSM (Distributed System Manager) app to view, probe and chart all of your IOVs and NSVs on your network.

  • Optimizing EtherCAT Performance when using Scan Engine

    Hello everyone,
    This week I have been researching and learning about the limitations of LabVIEW's Scan Engine. Our system is an EtherCAT setup (cRIO 9074) with two slaves (NI 9144). We have four 9235s, two 9237s and two 9239 modules per chassis, for a total of 144 channels. I have read that a conservative estimate for each channel's scan cost is 10 µs. With our setup, assuming we only scan, that works out to 1.44 ms per scan, or a rate of roughly 694 Hz. I know that when using a shared variable, the biggest bottleneck is transmitting the data. For instance, if you scan at 100 Hz it is difficult to transmit that quickly, so it's best to send packets of scans (which you can see in my code).
    With all of that said, I'm having difficulty scanning any faster than 125 Hz without railing out my CPU. I can record at 125 Hz at 96% CPU usage, but if I go down to 100 Hz I'm at 80%. I noticed that the biggest performance factor is the period of my top (scan) loop: scanning every period is much more demanding than scanning every other period. I have also adjusted the scan period in the EtherCAT preferences, with the same performance issues, and I have tried varying the transmission frequency (bottom loop), which doesn't affect performance at all.
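    The back-of-the-envelope rate estimate in the first paragraph can be written out explicitly. The 10 µs per-channel cost is the rough figure cited above, not a measured number, so this is only a ceiling:

```python
def max_scan_rate_hz(n_channels, usec_per_channel=10.0):
    """Upper-bound scan rate implied by a per-channel scan-cost estimate."""
    scan_time_s = n_channels * usec_per_channel * 1e-6
    return 1.0 / scan_time_s

# 144 channels * 10 us = 1.44 ms per scan, i.e. a ceiling of roughly 694 Hz
rate = max_scan_rate_hz(144)
```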
    Basically, I have a few questions:
    1. What frequency can I reasonably expect to obtain from the EtherCAT system using the Scan Engine with 144 channels?
    2. What percent of the CPU should be used when running a program (just because it can do 100%, I know you shouldn't go for the max. Is 80% appropriate? Is 90% too high?)
    3. Could you look through my code and see if I have any huge issues? Does my transmission loop need to be a timed structure? I know that transmitting is less important than scanning, so if the queue doesn't get sent it's not a big deal. This is my first time dealing with a real-time system, so I wouldn't be surprised if that were the case.
    I have looked through almost every guide I could find on using the scan engine and programming the cRIO (that's how I learned the importance of synchronizing the timing to the scan engine and other useful facts) and haven't really found a definitive answer. I would appreciate any help on this subject.
    P.S. I attached my scan/transmit loop, the host program and the VI where I get all of the shared variables (I use the same one three times to prevent 144 shared variables from being on the screen at the same time).
    Thanks,
    Seth
    Attachments:
    target - multi rate - variables - fileIO.vi ‏61 KB
    Get Strain Values.vi ‏24 KB
    Chasis 1 (Master).vi ‏85 KB

    Hi,
    It looks like you are using a 9074 chassis and two 9144 chassis, all three full of modules, and you are trying to read all the I/O channels in one scan?
    First of all, if you set the scan engine speed on the controller (9074), then you have to synchronize your Timed Loop to the Scan Engine and not use a different timebase as you do in your scan VI.
    Second, the best performance is achieved with I/O variables, not shared variables, and you should make sure not to allocate memory in your Timed Loop.  Memory will be allocated if an input of a variable is not connected (the error cluster, for example) or if you create arrays from scratch as you do in your scan VI.
    If you resolve all these issues, you can time the code inside your loop to see how long it really takes and adjust your scan time accordingly.  The 9074 does not have that much power, so you should not expect microsecond timing. 500 Hz is probably a good estimate for the maximum performance with 144 channels, depending on how much time the additional microstrain calculation takes.
    The EtherCAT driver ships with examples that show how to program these kinds of applications. Another way of avoiding the variables would be the programmatic approach using the variable API.
    DirkW

  • Setting thermocouple type in scan engine

    I am using a cRIO 9211 to acquire thermocouple data using the NI Scan Engine. I don't see any option to specify the thermocouple type other than right-clicking the module in the project and selecting Properties.
    Is there a way to programmatically set the thermocouple type?
    If this is not possible, can I read the CJC and offset channels, so that I can convert the raw voltage to temperature in my RT code?
    Will I be able to read the CJC and offset channels if I use NI Scan Engine Advanced I/O Access?
    "A VI inside a Class is worth hundreds in the bush"
    He's a tiger, I tell you!!!
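    If the CJC and offset channels do turn out to be readable, the RT-side conversion amounts to cold-junction compensation plus a voltage-to-temperature transfer. The sketch below uses a single linear sensitivity for brevity; a real implementation should use the NIST ITS-90 polynomials for the thermocouple type, and the 41 µV/°C figure is only the approximate type-K sensitivity near room temperature:

```python
TYPE_K_UV_PER_C = 41.0  # approximate type-K sensitivity, microvolts per deg C

def tc_temperature_c(tc_voltage_uv, cjc_temp_c,
                     sensitivity_uv_per_c=TYPE_K_UV_PER_C):
    """Cold-junction-compensated temperature from a raw thermocouple reading.

    Linear approximation only: adds the cold-junction temperature to the
    temperature difference implied by the measured thermoelectric voltage.
    """
    return cjc_temp_c + tc_voltage_uv / sensitivity_uv_per_c

temp = tc_temperature_c(tc_voltage_uv=410.0, cjc_temp_c=25.0)  # 35.0 deg C
```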

    Cross posted here.
    "A VI inside a Class is worth hundreds in the bush"
    He's a tiger, I tell you!!!
