Programmatically modify scan engine scaling

I need to be able to programmatically adjust the scaling factors for my cRIO analog channels.  I manage all
my I/O from an external spreadsheet that gets compiled and FTP'd to 9
cRIO targets, and each cRIO will have different scaling factors.  I
understand that this is a feature that will be forthcoming, but I need
to be able to do this within the next 2 months.  I already have in
place a secondary scan engine of my own design that replicates all the
I/O and includes such niceties as scaling, filtering, zero offset, and
deadband updating.  Recently I noticed a file on the cRIO called
'variables.xml' which quite clearly contains all of the I/O variable
properties.  I am planning on writing a utility VI that can modify this
XML file.  I believe I would have to call this VI from my cRIO code and
then reboot to have it redigested into the LOGOS? server or scan
engine.  I understand that the development engineers are loath to support
this type of activity, and I would not want this to be permanently
in my code — only as a short-term solution until the proper API is
released.  If anyone has a better idea I would love to hear about it.
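The utility-VI idea can be prototyped off-target first. Below is a hedged Python sketch of the XML edit; note that the real variables.xml schema is undocumented, so the element and attribute names here ("Variable", "scaleSlope", "scaleOffset") are placeholders — pull the actual file from your cRIO and substitute the names you find there.

```python
# Sketch only: update scaling attributes in an XML I/O-variable file.
# The tag/attribute names are ASSUMED, not the real variables.xml schema.
import xml.etree.ElementTree as ET

def update_scaling(xml_text, channel_name, slope, offset):
    """Return xml_text with the named channel's scaling attributes replaced."""
    root = ET.fromstring(xml_text)
    for var in root.iter("Variable"):
        if var.get("name") == channel_name:
            var.set("scaleSlope", repr(slope))
            var.set("scaleOffset", repr(offset))
    return ET.tostring(root, encoding="unicode")

sample = '<Variables><Variable name="AI0" scaleSlope="1.0" scaleOffset="0.0"/></Variables>'
print(update_scaling(sample, "AI0", 2.5, -1.0))
```

On the target the same parse/modify/rewrite sequence would be done with LabVIEW's XML or string VIs against the file on disk, followed by the reboot described above.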

sachsm,
While I definitely don't promote doing what you suggested, I tried it out and it looks like it should theoretically work with a reboot.  I only did this with a straight VI run from LabVIEW.  The XML file is generated when the scan engine variables are first deployed, and it is updated on an input-by-input basis based on what was edited in the project explorer.  If the XML file is edited and the device is rebooted (but the project not changed), then when you run the VI the 'updated' scaling will be present.  Once you edit that particular I/O in the project, it will override the manual edit; when properties are changed from the project, the scaling changes go into effect immediately and the device doesn't need to be rebooted.  Good idea, indeed — I'm just not sure what the implications of this may be.  Definitely try at your own risk on this one.
Regards,
Jared Boothe
Staff Hardware Engineer
National Instruments

Similar Messages

  • Programmatically change cRIO scan engine IOV scaling

Has anyone created any VIs that can do this?  I have located the config files on the cRIO that specify the scaling parameters, and these can be changed by parsing and modifying the XML data, but it would be very nice to have an NI-approved tool or API to handle this.  In the real world, a system needs the ability to change the scaling of transducers.  For example, when a pressure transducer goes out for calibration, you need a way to easily change the scaling without having to rebuild the RT executable.

    Hi sachsm,
    I would recommend checking out this example.  If you dig in, you'll see property nodes and methods for creating your own variables programmatically.  It'll also give clues about what parameters we can change.
    https://decibel.ni.com/content/docs/DOC-5929
    Regards,
    Che T.
    Applications Engineer
    National Instruments

  • How do I programmatically modify array element sizes?

    Hi All,
    I have a quick question about modifying the size of array elements. Hopefully someone can help, because I am at a dead end!
    I am logging intensities from a fibre array using a camera. To calibrate the system, I acquire an image from the camera and click points on the image to divide it into areas of interest. I overlay my image with a grid showing the regions of interest - for example, a 4x6 array. I then have to select the fibres - or ROIs - I want to log from.
    I have a cluster type-def (a number and a boolean) to specify the fibre number and to turn logging from that fibre on/off. I overlay a (transparent) array of this typedef over my image to correspond with the regions of interest. So here's my problem - I want to modify the dimensions of the array so each control matches my ROI. I can resize the elements by right-clicking on them on the front panel, but I can't find a way to do it programmatically. The Array Property Node>>Array Element>>Bounds won't 'change to write'... that's the first thing I tried.
    It's really only important that the elements align with my ROIs - so programmatically adding in gaps/spacings would also work for me... but again I can't figure out how to do this! I've attached a screenshot of part of my image with the array overlaid to show you all exactly what my problem is.
    Thanks in advance for your help,
    Dave
    PS I am running LabVIEW 8.6 without the Vision add-on.
    Solved!
    Go to Solution.
    Attachments:
    Array_Overlay.png ‏419 KB

    Here's my cheat (and cheap?) way. If you want to get fancy and center the numeric and boolean indicators, you could add spacers on the north and west sides, too.
    Attachments:
    ClusterSpacer.vi ‏13 KB

  • Is it possible to programmatically modify camera description files?

    Hi all,
    Basically I'm trying to figure out how to store to file the changes a user makes to the .iid and/or .icd values during an app run.
    For instance... I have utilities built up that help the user vary the black and white reference voltages (using an IMAQ 1408) interactively (they can see the results of each new value nearly instantly). The problem is that these changes only last for that program run... meaning that once the app is restarted, the values found in the .icd file (I presume) are re-loaded.
    I had thought that the .icd file was similar enough in format to the standard configuration file format to make the read/write config file VIs applicable. However, there are subtle differences that cause problems when re-writing the file.
    I am also not sure that the values stored in the .icd file are exact matches for those presented in MAX. For instance, the key names that seem to be related to the black and white reference voltages don't always reference numerical values that match those displayed in MAX.
    Does anyone know of a way to store programmatic changes to IMAQ properties without having to store all the modified values in a separate file?

    The only way I know to record changes in an ICD file is to use MAX, other than directly editing it as a text file. Any changes you make during the execution of a program will, as you said, revert to the default values the next time you run the program.
    To store the changes, the easiest way would probably be to have the user use MAX for permanent changes. The MAX interface is relatively easy to use (after a short learning curve). You can get immediate updates after changing values by using the grab button to continuously update the image.
    If you don't like the MAX interface, I think you are going to need to store the values in a separate file. You could try to change the contents of the ICD file using string manipulation, but I don't think it would be worth the amount of time it would take to develop the tools.
    Bruce
    Bruce Ammons
    Ammons Engineering
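The string-manipulation route Bruce mentions — replacing one key's value while leaving every other byte of the file untouched, since generic config-file writers reformat the file in ways the ICD reader may reject — could be sketched like this. The key name used here ("WhiteReference") is illustrative only, not the real ICD schema.

```python
# Sketch: replace a "Key = value" line in an .icd-style text file while
# leaving all other lines byte-for-byte intact. Key names are ASSUMED.
def set_key(text, key, value):
    lines = text.splitlines(keepends=True)
    for i, line in enumerate(lines):
        name, sep, _ = line.partition("=")
        if sep and name.strip() == key:
            eol = "\n" if line.endswith("\n") else ""
            lines[i] = f"{name}= {value}{eol}"   # keep original left-hand spacing
    return "".join(lines)

icd = "Interface = 1408\nWhiteReference = 1.27\n"
print(set_key(icd, "WhiteReference", "1.10"))
```

Whether the camera driver accepts a file edited this way is exactly the open question in the thread, so any such edit should be tested against a backup copy of the original file.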

  • Programmatically remove ethercat slaves from scan engine

    I am using an RT target as the master of an EtherCAT chain.  It has several different sorts of third-party slaves in the chain.  All are correctly detected.  However, some are buggy (they're in development) and prevent the Scan Engine from switching to Active mode.  If I remove the buggy slaves from under the EtherCAT Master in the project (but leave them plugged into the chain), then the Scan Engine will switch to Active and the remaining targets are interrogated correctly.
    However, I want my RT app to be standalone and adapt to different chains of slaves. When I invoke the Refresh Modules method to find all the available slaves, it finds the buggy ones too, and these then prevent the Scan Engine from switching to Active mode.  I have not been able to find a way to programmatically remove these buggy slaves as I can from the project - does one exist???
    Many thanks!

    I am not sure if you will be able to do this but I would suggest posting this question to the Industrial Communications board.
    http://forums.ni.com/t5/Industrial-Communications/bd-p/260
    If this is something that you are able to do, the people that frequent that board are much more likely to know about it.
    Matt J
    Professional Googler and Kudo Addict
    National Instruments

  • RT Scan Engine Roadmap

    Greetings,
    First off, let's take our hats off to the team at NI for creating this technology, it is a powerful new set of features for cRIO and has tremendous potential.
    I have an application that will require 9 cRIO chassis. There will be 3 identical 'sections', each with 3 cRIOs. It would be ideal if each cRIO could read a disk config file containing the scaling constants for each I/O variable. This would require programmatic access to the I/O variable properties, which is not currently available. It does appear that the hooks are in place under the hood but have not been wrapped in LabVIEW yet. For the next 6 months I will only be building one section of my system, but by Q2 2009 I will need to complete all 3 sections and would like to have some idea whether these types of features will be forthcoming. My wish list would also include the ability to programmatically create or rename I/O variables and perhaps even add inline filtering to the scan engine.
    Thanks,
    Mike Sachs
    Intelligent Systems
    www.viScience.com
    Onsite at the NASA Marshall Space Flight Center
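The per-target disk-config idea above can be sketched independently of the missing property API: keep a small file of scaling constants on each cRIO and apply them as a thin scaling layer in user code over the raw scan-engine reads. The CSV layout and channel names below are invented for illustration, not an NI format.

```python
# Sketch: load per-channel scaling constants from a config file and apply
# y = slope * raw + offset in user code. File layout is ASSUMED.
import csv, io

def load_scaling(csv_text):
    """Map channel name -> (slope, offset) from 'channel,slope,offset' rows."""
    return {row["channel"]: (float(row["slope"]), float(row["offset"]))
            for row in csv.DictReader(io.StringIO(csv_text))}

def scale(raw, channel, table):
    slope, offset = table[channel]
    return slope * raw + offset

config = "channel,slope,offset\nMod1/AI0,100.0,-3.0\nMod1/AI1,50.0,0.0\n"
table = load_scaling(config)
print(scale(0.05, "Mod1/AI0", table))   # 100.0 * 0.05 - 3.0 = 2.0
```

The same pattern (read file at startup, multiply-add per channel in the scan loop) maps directly onto an RT VI, and it is essentially the "secondary scan engine" approach described in the original post.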

    Mike,
    This is a great idea. Please submit a formal request to our Product Suggestion Center.
    Cheers.
    | Michael K | Project Manager | LabVIEW R&D | National Instruments |

  • 9236 enable shunt cal property crashes crio OS with scan engine

    I would like to inform users of the NI 9236 quarter-bridge strain gauge module of a bug. The Real-Time team is aware of this issue; I have been working with an applications engineer on it for about a week and he has confirmed my findings. He says the problem is most likely a driver or scan engine issue, and they are investigating it.
    There is a bug that completely crashes the cRIO operating system with the 9236 module. Currently, when a cRIO device is loaded with LV2009 SP1 or LV2010 and using the scan engine interface, attempting to write the "shunt cal enable" property of a 9236 module completely crashes the VxWorks OS. If you try to access the property again, the cRIO crashes and enters an error state in which it has rebooted twice without being told to; the LED indicator blinks four times to indicate this.
    I have tested this with a few different hardware combinations — both the cRIO-9014 and cRIO-9012 controllers, combined with either the 8-slot 9104 backplane or the 4-slot 9102 backplane — with the same results. The applications engineer was able to duplicate this issue as well.
    To duplicate the bug:
    Load a cRIO device with LV2009 SP1 or LV2010 and configure the device for the scan engine. Locate the 9236 module in the project explorer and drag and drop it into a VI to create the I/O variable. Right-click the output terminal of the I/O variable and go to Create > Property > Enable Shunt Calibration on a Channel. Once the property is dropped, right-click on it and select Change to Write so that you can write a boolean to it. It doesn't matter whether you write a true or a false; the results are the same. Hit run, watch it crash. When it reboots, hit run again. The cRIO is now in a state that can only be recovered by physically resetting the device.
    I love the cRIO platform and I use it every day because it is very reliable and robust. This kind of thing is rare, which is why I am reporting it to the community as well as to NI, as it is a pretty big bug that took me a while to narrow down.
    [will work for kudos]

    rex1030,
    Shunt calibration can be accessed using a property node.  The operation will be the same, you still need to acquire two sets of data and compute the scalar.
    You can obtain the necessary reference by dragging the module to the block diagram from the project.  Please see http://zone.ni.com/devzone/cda/tut/p/id/9351 for more information on programmatic configuration.
    Let me know if you have any other questions.
    Sebastian

  • Setting thermocouple type in scan engine

    I am using a cRIO with a 9211 to acquire thermocouple data using the NI Scan Engine. I don't see any option to specify the type of thermocouple other than by right-clicking on the module (in the project) and selecting Properties.
    Is there a way to programmatically set the thermocouple type?
    If this is not possible, can I read the CJC and offset channels, so that I can convert the raw voltage to temperature in my RT code?
    Will I be able to read the CJC and offset channels if I use NI Scan Engine Advanced I/O Access?
    "A VI inside a Class is worth hundreds in the bush"
    യവന്‍ പുലിയാണു കേട്ടാ!!!

    Cross posted here.
    "A VI inside a Class is worth hundreds in the bush"
    യവന്‍ പുലിയാണു കേട്ടാ!!!

  • Difference between FPGA and Scan Engine in Real-Time

    Hi everyone. I took a LabVIEW Real-Time course where we programmed the startup, monitoring, data collection, and shutdown of a sort of climate chamber made up of a fan and a light bulb. It was all very clear, but there is one problem: everything was done with the support of the Scan Engine.
    Instead, I have to program a stepper motor in real time with a cRIO and the NI 9501 module, which does not support the Scan Engine, so it has to be programmed with the FPGA.
    Can anyone who has experience with the 9501 module tell me what is meant by "programming with the FPGA" and how it is done?
    Having taken the course with the Scan Engine, I understand how the Scan Engine works; the controls and variables for the light bulb and the fan came up automatically.
    With the FPGA, how do you program the controls (in this case motor start, stop, direction of rotation, and angle of rotation) to put on the block diagram?
    In short, what is the difference between the Scan Engine and the FPGA?
    Can anyone help me?
    Thanks

    Thanks for the reply; the link you gave me is very useful. From what they told me in the course, in theory I don't need to fully learn FPGA programming, because for my application the FPGA is only needed in the initial VI for the motor controls, and then everything can be programmed normally in LabVIEW Real-Time. In practice they recommended these two VIs for the NI 9501 (attached), which are programmed with the FPGA, and suggested using them as subVIs in a normal LabVIEW Real-Time program. So, from what I understand, the FPGA part would already be done by these two examples. Is that possible, or have I misunderstood?
    I also tried to take a look at them, but I don't understand them. What do these programs actually do?
    Thanks
    Attachments:
    Setpoint Control (open loop) - NI 9501.lvproj ‏22 KB
    Setpoint Control (open loop) - NI 9501 (FPGA).vi ‏106 KB
    Velocity Control (open loop) - NI 9501.lvproj ‏25 KB

  • Optimizing EtherCAT Performance when using Scan Engine

    Hello everyone,
    This week I have been researching and learning about the limitations of LabVIEW's Scan Engine. Our system is an EtherCAT setup: a cRIO-9074 with two NI 9144 slaves. We have four 9235s, two 9237s, and two 9239 modules per chassis, for a total of 144 channels. I have read that a conservative estimate for each channel scan is 10 µs. With our setup, a full scan would therefore take 1.44 ms, which yields roughly 694 Hz. I know that when using shared variables, the biggest bottleneck is transmitting the data. For instance, if you scan at 100 Hz, it's difficult to transmit each scan that quickly, so it's best to send packets of scans (which you can see in my code).
    With all of that said, I'm having difficulty scanning any faster than 125 Hz without railing out my CPU. I can record at 125 Hz at 96% CPU usage, but if I go down to 100 Hz, I'm at 80%. I noticed that the biggest performance factor is the period of my top loop, the scan loop: scanning every period is much more demanding than scanning every other period. I have also adjusted the scan period in the EtherCAT preferences and see the same performance issues. I have also tried varying the transmission frequency (bottom loop), and this doesn't affect the performance at all.
    Basically, I have a few questions:
    1. What frequency can I reasonably expect to obtain from the EtherCAT system using the Scan Engine with 144 channels?
    2. What percent of the CPU should be used when running a program (just because it can do 100%, I know you shouldn't go for the max. Is 80% appropriate? Is 90% too high?)
    3.Could you look through my code and see if I have any huge issues? Does my transmission loop need to be a timed structure? I know that it's not as important to transmit as it is to scan, so if the queue doesn't get sent, it's not a big deal. This is my first time dealing with a real time system, so I wouldn't be surprised if that was the case.
    I have looked through almost every guide I could find on using the scan engine and programming the cRIO (that's how I learned the importance of synchronizing the timing to the scan engine and other useful facts) and haven't really found a definitive answer. I would appreciate any help on this subject.
    P.S. I attached my scan/transmit loop, the host program and the VI where I get all of the shared variables (I use the same one three times to prevent 144 shared variables from being on the screen at the same time).
    Thanks,
    Seth
    Attachments:
    target - multi rate - variables - fileIO.vi ‏61 KB
    Get Strain Values.vi ‏24 KB
    Chasis 1 (Master).vi ‏85 KB
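The back-of-envelope throughput estimate in the post (144 channels at an assumed 10 µs per channel per scan) checks out:

```python
# Verify the post's theoretical scan-rate estimate.
channels = 144
per_channel_s = 10e-6                       # assumed cost per channel per scan
scan_time_s = channels * per_channel_s      # total time for one full scan
max_rate_hz = 1.0 / scan_time_s             # theoretical ceiling on scan rate
print(round(scan_time_s * 1e3, 2), round(max_rate_hz))  # prints: 1.44 694
```

So ~694 Hz is an upper bound that assumes the CPU does nothing but scan; the observed 125 Hz ceiling reflects the per-iteration overhead the reply below points out.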

    Hi,
    It looks like you are using a 9074 chassis and two 9144 chassis, all three full of modules, and you are trying to read all the I/O channels in one scan?
    First of all, if you set the scan engine speed on the controller (9074), then you have to synchronize your timed loop to the Scan Engine, not use a different timebase as you do in your scan VI.
    Second, the best performance is achieved with I/O variables, not shared variables, and you should make sure not to allocate memory in your timed loop. Memory will be allocated if an input of a variable is not connected (the error cluster, for example) or if you create arrays from scratch, as you do in your scan VI.
    If you resolve all these issues you can time the code inside your loop to see how long it really takes and adjust your scan time accordingly. The 9074 does not have that much power, so you should not expect µs timing. 500 Hz is probably a good estimate for the maximum performance with 144 channels, depending on how much time the additional microstrain calculation takes.
    The EtherCAT driver ships with examples which show how to program these kinds of apps. Another way of avoiding the variables would be the programmatic approach using the variable API.
    DirkW

  • How to prevent cRIO Scan Engine Warning 66030

    I find that when my cRIO CPU spikes briefly, I can trigger something bad in the scan engine: it starts spewing 66030 warnings and goes into a fault state.
    The only recourse at that point is to reset the cRIO, even though the CPU usage has returned to normal. Is there any way to make the scan engine a little more forgiving of CPU spikes?

    sachsm:
    Is that 66030 or -66030?
    66030: This I/O variable is referencing an I/O device or channel that is not active in the current I/O mode.  Data read may be stale or invalid and data written may not be sent to the output.
    -66030: The operation cannot be completed because one of the scanned I/O buses is not in the required I/O mode.
    I'm assuming it's the non-negative one, but I just want to be sure.
    If so, I think your best bet is to clear that specific warning code and/or use the programmatic Scan Engine configuration and fault handling VIs to correct the Scan Engine.
    (The VIs are located in "Measurement I/O --> NI Scan Engine").
    Hope that helps!
    Caleb Harris
    National Instruments | Mechanical Engineer | http://www.ni.com/support

  • Error 2147138480 in Set Scan Engine Mode to Active

    Hi all,
    I'm fighting an error which keeps occurring and does not want to go away (even after trying everything)...
    The error (-214713840) happens when trying to set the Scan Engine Mode back to Active.
    It says: "The slave device could not be found. The positional addresses within the LabVIEW project are inconsistent with the actual network topology. [...]"
    I was trying to program the detection of the loss of a CompactRIO chassis when this error first occurred.
    I have 2 EtherCAT CompactRIOs connected to an RT computer.
    Thanks for all the help or suggestions you can provide me ;-)
    Cheers,
    Laurent

    Hi Laurent,
    I don't know if your problem has been solved, but I prefer to ask.
    Are you sure of the error code you encountered? I can't find it in the manual or anywhere else.
    Regards,
    Mathieu P. | Certified LabVIEW Associate Developer
    National Instruments France

  • NI Scan Engine Wont Run

    Hello, I am trying to deploy a program on the real-time controller of a CompactRIO. I decided to use scan mode for now. However, when I try to run my VI I get the following error:
    One or more items in this deployment require the NI Scan Engine and corresponding I/O driver(s) to be installed.  Go to MAX to install.
    If you continue to encounter this conflict, refer to the "Deploying and Running VIs on an RT Target" topic in the LabVIEW Help for information about resolving target deployment conflicts.
    I have LabVIEW Real-Time on my machine, so I don't know the reason for this, and I cannot find where in MAX to install the scan engine (again, I assume it is already installed).
    I am using LabVIEW 8.6.1, and the modules I use on my cRIO-9073 are the 9477 DO module and the 9205 AI module. Any help would be appreciated.
    Solved!
    Go to Solution.

    I tried to install the software in NI MAX, but it gives the error:
    "The selected target does not support this function"
    How do I fix this error? What is the problem?
    Thank you
    Msyafiq
    Attachments:
    install error.jpg ‏128 KB

  • Unable to Access iTunes Store through Cisco Content Scanning Engine

    Hi there, new to the Apple forums. We are having all sorts of problems downloading from the iTunes Store, and getting updates, through our Cisco ASA firewall.
    We have identified that the Trend Micro scanning engine module is causing the issue, and the only workaround is to disable the scanning engine altogether or to configure IP address exclusions from the scanning process. Using the firewall logs, I managed to locate some of the IPs related to iTunes - registered to Akamai Technologies, US (81.23.243.0/24) - and got things working, but now it has stopped working again. Is there a list of IP addresses that Apple and iTunes use?
    I would appreciate any pointers if anyone else has resolved similar issues.
    Thanks

    Apple uses a distributed server system and so may use a number of IP addresses, none of which have ever been officially published, so trying to allow specific addresses may not be possible. If your system will accept domain names and not just IP addresses, try using these two domains instead:
    phobos.apple.com
    phobos.apple.com.edgesuite.net
    Hope this helps.

  • Scan Engine power up time

    We have supplied a customer with a LabVIEW/cRIO system. There is a real-time program running with the Scan Engine, and a UI on the host.
    Everything is working fine except that they are observing very long power-up times, in the 2-minute range. In particular, after being powered off for several days, it takes 4 minutes. Part of the time is due to the Ethernet extender modems, which take 35-40 seconds to connect with each other.
    I have run some tests here, without the modems, and have observed that after applying power:
    1. I can ping the controller after 5 seconds.
    2. The system is operational after 35-40 seconds. This was measured both by the user interface being able to read shared variables and by the System Manager reconnecting and reading shared variables. I let it sit overnight and this time went up to 50 seconds, so there does appear to be a correlation between how long it was powered down and how long it takes to power up.
    I searched the NI forums but couldn't find any discussion on this. Does anyone have any ideas?

    Hey pjackson59,
    Quite a strange problem I must agree! Here's some of my ideas for you to try...
    1) In the Distributed System Manager, navigate to the cRIO on the network.  Choose NI_SystemState >> Memory >> Available (you may also want to note the values of the other three variables there as well).  Notice that when you click on it, a trend line appears on the right.  Leave this up and running overnight and check it in the morning.  If the trend line is going down, you're losing available memory and could have a memory leak in your real-time VI, which could affect startup time.
    2) In MAX, find the cRIO under Remote Systems and click on the System Settings tab.  Under the System Monitor section, note the free drive space.  If this value gets smaller and smaller over time, say dangerously close to no free disk space, then it's definitely possible it could be affecting start-up time.
    3) Ethernet communication has the lowest priority on a real-time system.  That being said, if your real-time VI is doing some heavy processing, especially more and more as time goes on, then the CPU isn't going to publish data out the Ethernet port as regularly as it should. This would explain why reading those shared variables takes a long time.
    4) You could have a combination of all three, in which case you'll probably (unfortunately) need to resort to some more intensive debugging and troubleshooting techniques - i.e., the Real-Time Execution Trace Toolkit.
    Rory
