Scan Engine power-up time

We have supplied a customer with a LabVIEW/cRIO system. There is a real-time program running with the Scan Engine, and a UI on the host. Everything is working fine except that they are observing very long power-up times, in the 2-minute range; in particular, after being powered off for several days, it takes 4 minutes. Part of the time is due to the Ethernet extender modems, which take 35-40 seconds to connect with each other.
I have run some tests here, without the modems, and observed the following after applying power:
1. I can ping the controller after 5 seconds.
2. The system is operational after 35-40 seconds. This was measured both by the user interface being able to read Shared Variables and by the Distributed System Manager reconnecting and reading Shared Variables. After letting it sit overnight, this time went up to 50 seconds, so there does appear to be a correlation between how long the system was powered down and how long it takes to power up.
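(For anyone who wants to repeat the ping half of this measurement, here is a minimal sketch of the script idea in Python; the controller address is a placeholder, and the Shared Variable half still has to be checked from the host UI or DSM.)

    # Time how long after power-up the controller answers ping. Start the
    # script at the moment you apply power. On Windows, swap the ping
    # flags for "-n 1 -w 1000".
    import subprocess
    import time

    CRIO_ADDR = "10.0.0.2"   # placeholder: your controller's IP address

    def ping_once(host: str) -> bool:
        """Return True if a single ping gets a reply within 1 second."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    start = time.monotonic()
    while not ping_once(CRIO_ADDR):
        time.sleep(0.5)
    print(f"Controller pingable after {time.monotonic() - start:.1f} s")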
I searched the NI forums but couldn't find any discussion of this. Does anyone have any ideas?

Hey pjackson59,
Quite a strange problem, I must agree! Here are some ideas for you to try...
1) In the Distributed System Manager, navigate to the cRIO on the network. Choose NI_SystemState >> Memory >> Available (you may also want to take note of the values of the other three variables there as well). Notice that when you click on it, a trend line appears on the right. Leave this up and running overnight and check it in the morning. If the trend line is going down, you're losing available memory and could have a memory leak in your real-time VI, which could affect startup time. (A quick way to confirm a trend from logged values is sketched after this list.)
2) In MAX, find the cRIO under Remote Systems and click on the System Settings tab. Under the System Monitor section, take note of the Free Drive Space. If this value gets smaller and smaller over time, say dangerously close to no free disk space, then it's definitely possible it is affecting start-up time.
3) Ethernet communication has the lowest priority on a real-time system. That being said, if your real-time VI is doing some hardcore processing, especially more and more as time goes on, then the CPU isn't going to publish data out the Ethernet port as regularly as it should. This would explain why those shared variables take a long time to update.
4) You could have a combination of all three, in which case you'll probably (unfortunately) need to resort to some more intensive debugging and troubleshooting techniques, e.g. the Real-Time Execution Trace Toolkit.
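If you log the Available value from step 1 to a file of (elapsed seconds, bytes free) samples, a least-squares slope is a quick way to confirm a downward trend. A minimal sketch in Python, assuming a two-column CSV you exported yourself:

    # Fit a line to logged (elapsed_s, bytes_free) samples; a clearly
    # negative slope sustained over hours suggests a memory leak.
    import csv

    def leak_slope(path: str) -> float:
        """Least-squares slope in bytes/second (negative = losing memory)."""
        t, y = [], []
        with open(path) as f:
            for row in csv.reader(f):
                t.append(float(row[0]))
                y.append(float(row[1]))
        n = len(t)
        mt, my = sum(t) / n, sum(y) / n
        num = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
        den = sum((ti - mt) ** 2 for ti in t)
        return num / den

    print(f"slope: {leak_slope('memory_log.csv'):.1f} bytes/s")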
Rory

Similar Messages

  • Optimizing EtherCAT Performance when using Scan Engine

    Hello everyone,
    This week I have been researching and learning about the limitations of LabVIEW's Scan Engine. Our system is an EtherCAT master (cRIO-9074) with two slaves (NI 9144). We have four 9235s, two 9237s, and two 9239 modules per chassis, for a total of 144 channels. I have read that a conservative estimate is 10 µs per channel per scan. With our setup, assuming we only scan, that works out to 1.44 ms, which would yield a rate of roughly 694 Hz. I know that when using a shared variable, the biggest bottleneck is transmitting the data. For instance, if you scan at 100 Hz, it'll be difficult to transmit that quickly, so it's best to send packets of scans (which you can see in my code).
    With all of that said, I'm having difficulty scanning any faster than 125 Hz without railing out my CPU. I can record at 125 Hz at 96% CPU usage, but if I go down to 100 Hz, I'm at 80%. I noticed that the biggest performance factor is the period of my top loop, the scan loop: scanning every period is much, much more demanding than scanning every other period. I have also adjusted the scan period in the EtherCAT preferences and see the same performance issues. I have also tried varying the transmission frequency (bottom loop), and this doesn't affect performance at all.
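    For clarity, here is that back-of-the-envelope estimate worked out (a trivial Python sketch; the numbers come straight from the paragraph above):

        # 144 channels at a conservative 10 microseconds per channel per scan.
        CHANNELS = 144
        SECONDS_PER_CHANNEL = 10e-6

        scan_time = CHANNELS * SECONDS_PER_CHANNEL   # 1.44 ms per full scan
        max_rate = 1.0 / scan_time                   # ~694 Hz theoretical ceiling
        print(f"scan time {scan_time * 1e3:.2f} ms -> ceiling {max_rate:.0f} Hz")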
    Basically, I have a few questions:
    1. What frequency can I reasonably expect to obtain from the EtherCAT system using the Scan Engine with 144 channels?
    2. What percentage of the CPU should be used when running a program? (Just because it can hit 100% doesn't mean I should go for the max. Is 80% appropriate? Is 90% too high?)
    3. Could you look through my code and see if I have any huge issues? Does my transmission loop need to be a timed structure? I know that transmitting is not as important as scanning, so if the queue doesn't get sent, it's not a big deal. This is my first time dealing with a real-time system, so I wouldn't be surprised if that were the case.
    I have looked through almost every guide I could find on using the scan engine and programming the cRIO (that's how I learned the importance of synchronizing the timing to the scan engine and other useful facts) and haven't really found a definitive answer. I would appreciate any help on this subject.
    P.S. I attached my scan/transmit loop, the host program and the VI where I get all of the shared variables (I use the same one three times to prevent 144 shared variables from being on the screen at the same time).
    Thanks,
    Seth
    Attachments:
    target - multi rate - variables - fileIO.vi ‏61 KB
    Get Strain Values.vi ‏24 KB
    Chasis 1 (Master).vi ‏85 KB

    Hi,
    It looks like you are using a 9074 chassis and two 9144 chassis, all three fully populated with modules, and you are trying to read all the I/O channels in one scan?
    First of all, if you set the Scan Engine speed on the controller (9074), then you have to synchronize your Timed Loop to the Scan Engine and not use a different timebase, as you do in your scan VI.
    Second, the best performance can be achieved with I/O variables, not shared variables, and you should make sure not to allocate memory in your Timed Loop. Memory will be allocated if an input of a variable is not connected (the error cluster, for example) or if you create arrays from scratch, as you do in your scan VI.
    If you resolve all these issues, you can time the code inside your loop to see how long it really takes and adjust your scan time accordingly. The 9074 does not have that much processing power, so you should not expect µs-level timing. 500 Hz is probably a good estimate for the maximum performance for 144 channels, depending on how much time the additional microstrain calculation takes.
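    That timing idea, sketched in Python rather than LabVIEW (the loop body below is just a stand-in for your I/O reads and microstrain math):

        # Measure each iteration's execution time, keep the worst case, and
        # then choose a scan period comfortably above that worst case.
        import time

        def do_one_scan():
            # stand-in for the real loop body (I/O reads + microstrain math)
            sum(i * i for i in range(1000))

        worst = 0.0
        for _ in range(1000):
            t0 = time.perf_counter()
            do_one_scan()
            worst = max(worst, time.perf_counter() - t0)
        print(f"worst-case iteration: {worst * 1e3:.2f} ms")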
    The EtherCAT driver ships with examples that show how to program these kinds of applications. Another way of avoiding the variables would be the programmatic approach, using the variable API.
    DirkW

  • 9236 enable shunt cal property crashes cRIO OS with scan engine

    I would like to inform users of the 9236 quarter-bridge strain gauge module of a bug. The Real-Time team is aware of this issue; I have been working with an applications engineer on it for about a week, and he has confirmed my findings. He says the problem is most likely a driver issue or scan engine issue, and they are investigating it.
    There is a bug that completely crashes the cRIO operating system with the 9236 module. Currently, when a cRIO device is loaded with LV2009 SP1 or LV2010 and using the scan engine interface, attempting to write the "shunt cal enable" property of a 9236 module completely crashes the VxWorks OS. If you try to access the property again, the cRIO crashes and enters an error state in which the cRIO has rebooted twice without being told to; the LED indicator blinks four times to indicate this.
    I have tested this with a few different hardware combinations, using both the cRIO-9014 and cRIO-9012 controllers combined with either the 8-slot 9104 backplane or the 4-slot 9102 backplane, with the same results. The applications engineer was able to duplicate this issue as well.
    To duplicate the bug:
    Load a cRIO device with LV2009 SP1 or LV2010 and configure the device for the scan engine. Locate the 9236 module in the Project Explorer and drag and drop it into a VI to create the I/O variable. Right-click the output terminal of the I/O variable and go to Create > Property > Enable Shunt Calibration on a Channel. Once the property is dropped, right-click on it and select Change to Write so that you can write a Boolean to the variable. It doesn't matter whether you write a true or a false; the results are the same. Hit Run, watch it crash. When it reboots, hit Run again. The cRIO is now in a state that can only be recovered by physically resetting the device.
    I love the cRIO platform and I use it every day because it is very reliable and robust. This kind of thing is rare, which is why I am reporting it to the community as well as to NI, as it is a pretty big bug that took me a while to narrow down.
    [will work for kudos]

    rex1030,
    Shunt calibration can be accessed using a property node.  The operation will be the same, you still need to acquire two sets of data and compute the scalar.
    You can obtain the necessary reference by dragging the module to the block diagram from the project.  Please see http://zone.ni.com/devzone/cda/tut/p/id/9351 for more information on programmatic configuration.
    Let me know if you have any other questions.
    Sebastian

  • NI Scan Engine Won't Run

    Hello, I am trying to deploy a program on the real time controller of a compactRIO. I decided to use scan mode for now. However, when I try to run my VI I get the following error:
    One or more items in this deployment require the NI Scan Engine and corresponding I/O driver(s) to be installed.  Go to MAX to install.
    If you continue to encounter this conflict, refer to the "Deploying and Running VIs on an RT Target" topic in the LabVIEW Help for information about resolving target deployment conflicts.
    I have LabVIEW Real-Time installed on my machine, so I don't know the reason for this, and I cannot find where in MAX to install the Scan Engine (again, I assume it is already installed).
    I am using LabVIEW 8.6.1 and the modules I use on my cRIO-9073 are the 9477 DO module and the 9205 AI module. Any help would be appreciated.

    I tried to install the software in NI MAX, but it gives the error:
    "The selected target does not support this function."
    How do I fix this error? What is the problem with it?
    Thank you,
    Msyafiq
    Attachments:
    install error.jpg ‏128 KB

  • Problems with Messaging Server and Symantec Scan Engine

    Hi!
    I have installed Symantec Scan Engine 5.0 to check for viruses in my Messaging Server, but when I try to open the administration interface of the Scan Engine on port 8004, these messages appear:
    "Loading Java Subprogram"
    "Do you want to install the subprogram distributed by Symantec? Yes"
    "Please wait..."
    "Subprogram com.symantec.gui.guidelines.ScanEngine Applet started"
    But the applet did not load at all.
    I have installed the right java version.
    bash-2.05# java -version
    java version "1.4.2_09"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_09-b05) Java HotSpot(TM) Client VM (build 1.4.2_09-b05, mixed mode)
    And I open it in Internet Explorer 6.0 sp2.
    bash-2.05# ./imsimta version
    Sun Java(tm) System Messaging Server 6.2 (built Dec 2 2004) libimta.so 6.2 (built 00:34:23, Dec 2 2004) SunOS projes 5.9 Generic_118558-11 sun4u sparc SUNW,Sun-Blade-1500
    I don't know what is wrong or why I cannot start the administration interface. Can someone help me?
    Thanks in advance.

    What "symantec is supported" means, is that we've tested our product with the scan engine, and it is known to work. It does not mean we have a clue how to support the product itself, only the integration with our product.
    The problem you describe appears to be within the Symantec product itself, not the integration with our product. It's just the integration we would know how to support.
    Also, please do understand:
    1. This forum is not an official Support offering. It's a public forum.
    2. Any answers given here are given by folk that donate their time, with no compensation.
    If you want "technical support", it's something you would have to open a support case for. In this case, what is likely to happen is what happened above. "it's a problem in your Symantec product. Please call them".
    I'm not trying to blow you off, it's just that your Symantec Scan Engine is a "black box" to us. We learned enough about it to integrate it into Messaging Server, but not enough to "support" it as a product, nor should we.

  • Shared Variables, Scan Engine & Multiple Targets

    I am seeking some general advice about the structure of my LabVIEW project.
    The project consists of a laptop with LabVIEW and a joystick connected, and a CompactRIO connected via Ethernet. I had been running the cRIO in FPGA Interface mode, but a change in circumstances meant the project had to be shifted to scan mode.
    As of now, the code on the laptop updates shared variables on the cRIO, and reads from shared variables on the cRIO for monitoring. I want the shared variables hosted on the cRIO because it will also need to operate without the laptop connected. Before switching the cRIO to scan mode, I found that I had to run the laptop code first and then the cRIO code, or the shared variables would not communicate properly. Now that I have switched to scan mode, I have to run the cRIO code first, and even then the shared variables do not communicate properly for more than a few seconds, and are much laggier.
    My ideal solution is a system that can run with or without the laptop connected, and obviously without all these shared-variable issues. I would like to autostart the code on the cRIO and have the user run the laptop code if necessary, but in the past this did not seem to work with the shared variables.
    I am really confused about why this is happening. Hopefully I have explained my problem well enough. I don't really want to post the entire project on here, but I can email it to people who are willing to take a look at it. Thank you for taking the time to read this.
    I am running LabVIEW 2010 SP1 with the Real-Time, FPGA, DSC, and Robotics modules. I have the Feb '11 driver set and NI-RIO 3.6.0 installed, and my RT cRIO is fully updated.

    I do this type of stuff all the time...
    Move all your NSV libraries to the cRIO. From the project you must deploy them to the cRIO, and from then on they are persistent until you reformat.
    From your Windows HMI app, you can place static NSV tags on the block diagram or use the dynamic SV API to read/write. You can also bind HMI controls and indicators directly to cRIO NSVs (this is what I do). I also create a 'mirror' library on the PC HMI that is bound to the cRIO library. This library has DSC Citadel data logging enabled and will automatically save historical traces of all my important data - very nice. PC-hosted libraries can be set to autodeploy in the app build.
    The project also has an autodeploy option for the development environment, which I normally turn off. If you have PC-to-cRIO binding set up, you need to be cautious about any sort of autodeployment, since it will potentially force the cRIO app to stop when you deploy. To get around this, you can use PSP binding (IP address rather than project process name) and use the DSC deploy-library VIs in your HMI app. Once you are using the scan engine, you can use the DSM (Distributed System Manager) app to view, probe, and chart all of your IOVs and NSVs on your network.

  • Why does New Gen sbRIO not support Scan Engine?

    The 'New Gen' sbRIOs use the LX25 and LX45 FPGAs, and it seems that none of them support the scan engine. cRIOs that use the same FPGAs do support the scan engine. Is it just a RIO driver issue? If so, when will this be resolved?

    Hi sachsm, 
    I was told initially that it was a performance issue, but after doing some extra research I have found that that cannot be the case. You are right that the cRIOs with the LX25 and LX45 FPGAs support the scan engine, so the answer lies outside the bounds of performance requirements. I spoke with some of our more senior engineers, who believe that R&D has phased out scan engine support for the newer sbRIOs based on the majority of use cases for which they are deployed. Notice on the product pages for these devices that only OEM quantities are available for purchase, meaning that most of our customers for these are companies producing many units, who generally want to use the FPGA interface (not the Scan Interface). Each sbRIO that currently supports the Scan Interface (the older models) has a custom scan engine bitfile that is written to the FPGA, and I gather someone on the development side decided that it wasn't worth the effort to write scan engine bitfiles for the new models. There is no indication that these will be supported by the Scan Interface in the future.
    cRIO, on the other hand, is intended to have custom and reconfigurable module setups, and thus will always require the scan engine for those who wish to use the real-time functionality without having to deal with the FPGA interface.
    I hope that this clears up some of the confusion!
    Kevin W.
    Applications Engineer
    National Instruments

  • Difference between FPGA and Scan Engine in Real-Time

    Hi everyone. I took a LabVIEW real-time course where we programmed the start-up, monitoring, data collection, and shutdown of a kind of climate chamber made up of a fan and a light bulb. It was all very clear, but there is a problem: everything was done with Scan Engine support.
    Instead, I need to program a stepper motor in real time with a cRIO and the NI 9501 module, which does not support the Scan Engine, so it has to be programmed with the FPGA.
    Can anyone who has experience with the 9501 module tell me what is meant by "programming with the FPGA" and how to go about it?
    Having taken the course with the Scan Engine, I understand how the Scan Engine works; the controls and variables for the light bulb and fan came out automatically.
    With the FPGA, how do you program the controls (in this case start, stop, direction of rotation, and angle of rotation of the motor) to put in the block diagram?
    In short, what is the difference between the Scan Engine and the FPGA?
    Can anyone help me?
    Thanks

    Thanks for the reply; the link you gave me is very useful. From what they told me on the course, in theory I don't need to fully learn FPGA programming, because for my application the FPGA is only needed in the initial VI for the motor controls, and then everything can be programmed normally in LabVIEW Real-Time. In practice, they recommended the two attached VIs for the NI 9501, which are programmed with the FPGA, and suggested using them as "subVIs" in a normal LabVIEW Real-Time program. So from what I understand, the FPGA part is already done by these two examples. Is that possible, or have I misunderstood?
    I also tried to take a look at them, but I don't understand them. What do these programs actually do?
    Thanks
    Attachments:
    Setpoint Control (open loop) - NI 9501.lvproj ‏22 KB
    Setpoint Control (open loop) - NI 9501 (FPGA).vi ‏106 KB
    Velocity Control (open loop) - NI 9501.lvproj ‏25 KB

  • Slow Scan Engine

    I'm trying to read an analog input signal from a function generator via the Scan Engine on a cRIO-9074, but the Scan Engine doesn't seem to be updating as quickly as it should. The input is a 10 Hz, ±3 V sine wave into a 9207 module. The Scan Engine settings are at their defaults, and I get good readings when hooked up to a DC power supply, so the hardware should all be good. The attached front-panel picture shows the results I'm getting, which I figure amount to about a 0.8 s update rate.
    Thanks for any help,
    ~Tyler
    Attachments:
    FP.jpg ‏219 KB
    ScanEngine Block Diagram.jpg ‏68 KB
    Settings.jpg ‏114 KB

    Hi Tyler,
    Thank you for your post.
    I notice that your "Network Publishing Period" is set to 100 ms. This means the target will update published values on the network only every 100 ms, even though you have the scan period set to 10 ms.
    Try setting the network publishing period closer to the scan period. Please tell me if this solves the issue.
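    To see why the mismatch matters: publishing every 100 ms samples a 10 Hz sine exactly once per cycle, so the published trace barely moves. A toy illustration in Python (the ±3 V amplitude matches your signal):

        # Sampling a 10 Hz sine at the 100 ms publishing period lands on
        # the same phase every time, so the published values hardly change.
        import math

        f_signal = 10.0        # Hz, the function-generator sine
        publish_period = 0.1   # s, the Network Publishing Period

        for k in range(8):
            t = k * publish_period
            v = 3.0 * math.sin(2 * math.pi * f_signal * t)
            print(f"t={t:.1f} s  published value = {v:+.3f} V")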
    Thank you,
    Eden S
    Applications Engineer
    National Instruments UK & Ireland

  • White keyboard needs to toggle power every time computer is started

    I need to toggle power every time the computer is started. OS X doesn't 'see' the keyboard when restarting or starting.
    ALSO
    When running XP in bootcamp, I can get XP to 'see' the keyboard, but I can't get it to work at all.
    Please help.

    Hello tak:
    I am assuming your Bluetooth module is built-in.
    There is no software setting to address what you describe. The KB may have a problem. However...
    Instead of switching the device on and off, try tapping a key or two.
    Try installing new batteries and/or removing and reinstalling them.
    I do not run Windows (and never plan to do so) so I have no feedback on the XP issue.
    Barry

  • Setting thermocouple type in scan engine

    I am using a cRIO 9211 to acquire thermocouple data with the NI Scan Engine. I don't see any option to specify the thermocouple type other than by right-clicking the module (in the project) and selecting Properties.
    Is there a way to programmatically set the thermocouple type?
    If this is not possible, can I read the CJC and offset channels, so that I can convert the raw voltage to temperature in my RT code?
    Will I be able to read the CJC and offset channels if I use NI Scan Engine Advanced I/O Access?
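    For reference, the conversion I have in mind is cold-junction compensation followed by an inverse polynomial, roughly like this sketch (the coefficients below are placeholders; the real ITS-90 inverse coefficients for each thermocouple type come from the NIST tables):

        # Placeholder coefficients c0, c1, c2 -- substitute the published
        # ITS-90 inverse-polynomial coefficients for your thermocouple type.
        COEFFS = [0.0, 2.5e4, -1.0e6]

        def volts_to_degc(v_measured: float, v_cjc_equiv: float) -> float:
            """Add the CJC-equivalent voltage, then evaluate T = sum(c_n * v**n)."""
            v = v_measured + v_cjc_equiv   # cold-junction compensation
            return sum(c * v**n for n, c in enumerate(COEFFS))

        print(volts_to_degc(0.004, 0.001))   # e.g. 4 mV measured + 1 mV CJC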
    "A VI inside a Class is worth hundreds in the bush"
    "He's a tiger, I tell you!!!"

    Cross posted here.
    "A VI inside a Class is worth hundreds in the bush"
    "He's a tiger, I tell you!!!"

  • Error 2147138480 when setting Scan Engine Mode to Active

    Hi all,
    I'm fighting an error that keeps occurring and does not want to go away (even after trying everything)...
    The error (-214713840) happens when trying to set the Scan Engine Mode back to Active.
    It says: "The slave device could not be found. The positional addresses within the LabVIEW project are inconsistent with the actual network topology. [...]"
    I was trying to program detection of the loss of a CompactRIO chassis when this error first occurred.
    I have two "EtherCAT" CompactRIOs connected to an RT computer.
    Thanks for all the help or suggestion you can provide me ;-)
    Cheers,
    Laurent

    Hi Laurent,
    I don't know if your problem has been solved, but I prefer to ask.
    Are you sure of the error code you encountered? I cannot find it in the manual or anywhere else.
    Regards,
    Mathieu P. | Certified LabVIEW Associate Developer
    National Instruments France

  • Programmatically modify scan engine scaling

    I need to be able to programmatically adjust the scaling factors for my cRIO analog channels. I manage all my I/O from an external spreadsheet that gets compiled and FTP'd to 9 cRIO targets, and each cRIO will have different scaling factors. I understand that this is a feature that will be forthcoming, but I need to be able to do this within the next 2 months. I already have in place a secondary scan engine of my own design that replicates all the I/O and includes such niceties as scaling, filtering, zero offset, and deadband updating. Recently I noticed a file on the cRIO called 'variables.xml' which quite clearly contains all of the I/O variable properties. I am planning on writing a utility VI that can modify this XML file; I believe I would have to call it from my cRIO code and then reboot to have the file re-digested into the LOGOS(?) server or scan engine. I understand that the development engineers are loath to support this type of activity, and I would not want this permanently in my code either, only as a short-term solution until the proper API is released. If anyone has a better idea I would love to hear about it.
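    The utility I have in mind would look roughly like this (sketched in Python rather than as a VI for brevity; every tag and attribute name below is a guess, so inspect your own variables.xml for the real schema and keep a backup before trying anything like this):

        # Rewrite a scaling value inside variables.xml. Tag/attribute names
        # ("Variable", "name", "Scaling", "slope") are assumptions, not the
        # documented schema -- verify against your actual file first.
        import xml.etree.ElementTree as ET

        def set_scale(xml_path: str, variable_name: str, slope: float) -> None:
            tree = ET.parse(xml_path)
            for var in tree.getroot().iter("Variable"):
                if var.get("name") == variable_name:
                    scale = var.find("Scaling")
                    if scale is not None:
                        scale.set("slope", str(slope))
            tree.write(xml_path)

        # example usage (then reboot the target so the change is re-read):
        # set_scale("variables.xml", "Mod1/AI0", 2.5)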

    sachsm,
    While I definitely don't promote doing what you suggested, I tried it out, and it looks like it should theoretically work with a reboot. I only did this with a plain VI run from LabVIEW. The XML file is generated when the scan engine variables are first deployed, and it is updated on an input-by-input basis based on what was edited in the Project Explorer. If the XML file is edited and the device rebooted (but the project not changed), then when you run the VI the 'updated' scaling will be present. Once you edit the particular I/O in the project, that will override the manual edit; when values are changed from the project, the scaling changes go into effect immediately and the device doesn't need to be rebooted. Good idea, indeed; I'm just not sure what the implications of this may be. Definitely try at your own risk on this one.
    Regards,
    Jared Boothe
    Staff Hardware Engineer
    National Instruments

  • RT Scan Engine Roadmap

    Greetings,
    First off, let's take our hats off to the team at NI for creating this technology, it is a powerful new set of features for cRIO and has tremendous potential.
    I have an application that will require 9 cRIO chassis: three identical 'sections', each with 3 cRIOs. It would be ideal if each cRIO could read a disk config file containing scaling constants for each I/O Variable. This would require programmatic access to the I/O Variable properties, which is not currently available. It does appear that the hooks are in place under the hood but have not been wrapped in LabVIEW yet. For the next 6 months I will only be building one section of my system, but by Q2 2009 I will need to complete all 3 sections, and I would like some idea of whether these types of features will be forthcoming. My wish list would also include the ability to programmatically create or rename I/O Variables, and perhaps even add inline filtering to the scan engine.
    Thanks,
    Mike Sachs
    Intelligent Systems
    www.viScience.com
    Onsite at the NASA Marshall Space Flight Center

    Mike,
    This is a great idea. Please submit a formal request to our Product Suggestion Center.
    Cheers.
    | Michael K | Project Manager | LabVIEW R&D | National Instruments |

  • Utility software for iPod - checking HDD power-on time?

    Hi !
    I'd like to buy an iPod Classic 160 GB from someone, and I wondered if there's any software (e.g. HD Tune) that can tell me how many hours/minutes the iPod was actually in use (power-on time). I wouldn't want to buy one that has been used 24/7 since the potential seller bought it a year ago.
    Thanks a lot !

    Reset the iPod - hold Select+Menu for > 6 seconds
    Enter diagnostics mode during reset - hold Select+Previous > 6 Seconds
    Press Menu for Manual Test
    Scroll down to IO, press Select
    Scroll down to HardDrive, press Select
    Scroll down to HDSMARTData, press Select
    Check PowerOn Hours and other stats for clues as to condition. I've had mine for a year, use it a few hours each day and would say my iPod is in reasonable condition. My stats are:-
    Retracts: 349
    Reallocs: 5
    Pending sectors: 0
    PowerOn Hours: 1252
    Start/Stops: 571
    Temp: Current 25C
    Temp: Min 10C
    Temp: Max 50C
    When finished - Menu, Menu, Menu, Scroll down to Reset, press Select.
    tt2
