NI Scan Engine Won't Run

Hello, I am trying to deploy a program on the real-time controller of a CompactRIO. I decided to use Scan Mode for now. However, when I try to run my VI I get the following error:
One or more items in this deployment require the NI Scan Engine and corresponding I/O driver(s) to be installed.  Go to MAX to install.
If you continue to encounter this conflict, refer to the "Deploying and Running VIs on an RT Target" topic in the LabVIEW Help for information about resolving target deployment conflicts.
I have LabVIEW Real-Time on my machine, so I don't know what the reason for this is, and I cannot find where in MAX to install the Scan Engine (again, I assume it is already installed).
I am using LabVIEW 8.6.1, and the modules I use on my cRIO-9073 are the 9477 DO module and the 9205 AI module. Any help would be appreciated.

I tried to install the software in NI MAX, but it gives the error:
"The selected target does not support this function"
How do I fix this error? What is the problem with it?
Thank you
Msyafiq
Attachments:
install error.jpg ‏128 KB

Similar Messages

  • 9236 Enable Shunt Cal property crashes cRIO OS with Scan Engine

    I would like to inform users of the 9236 Quarter Bridge Strain Gauge Module of a bug. The Real-Time team is aware of this issue and I have been working with an app engineer on it for about a week and he has confirmed my findings. He says the problem is most likely a driver issue or scan engine issue and they are investigating it.
    There is a bug that completely crashes the cRIO operating system with the 9236 module. Currently, when a cRIO device is loaded with LV2009 SP1 or LV2010 and using the Scan Engine interface, attempting to write the "shunt cal enable" property of a 9236 module completely crashes the VxWorks OS. If you try to access the property again after the reboot, the cRIO crashes and enters an error state, having rebooted twice without being told to. The LED indicator blinks four times to indicate this.
    I have tested this with a few different hardware combinations, with both the cRIO-9014 and cRIO-9012 controllers combined with either the 8-slot 9104 backplane or the 4-slot 9102 backplane, with the same results. The app engineer was able to duplicate this issue as well.
    To duplicate the bug:
    1. Load a cRIO device with LV2009 SP1 or LV2010 and configure the device for the Scan Engine.
    2. Locate the 9236 module in the Project Explorer and drag and drop it into a VI to create the I/O variable.
    3. Right-click the output terminal of the I/O variable and go to Create > Property > Enable Shunt Calibration on a Channel.
    4. Once the property is dropped, right-click it and select Change to Write so that you can write a Boolean to the variable. It doesn't matter whether you write a true or a false; the results are the same.
    5. Hit Run and watch it crash. When it reboots, hit Run again. The cRIO is now in a state that can only be recovered by physically resetting the device.
    I love the cRIO stuff and I use it every day because it is very reliable and robust. This kind of thing is rare, which is why I am reporting it to the community as well as to NI, as it is a pretty big bug that took me a while to narrow down.
    [will work for kudos]

    rex1030,
    Shunt calibration can be accessed using a property node.  The operation will be the same, you still need to acquire two sets of data and compute the scalar.
    You can obtain the necessary reference by dragging the module to the block diagram from the project.  Please see http://zone.ni.com/devzone/cda/tut/p/id/9351 for more information on programmatic configuration.
    Let me know if you have any other questions.
    Sebastian

  • Programmatically modify Scan Engine scaling

    I need to be able to programmatically adjust the scaling factors for my cRIO analog channels. I manage all my I/O from an external spreadsheet that gets compiled and FTP'd to 9 cRIO targets, and each cRIO will have different scaling factors. I understand that this is a feature that will be forthcoming, but I need to be able to do this within the next 2 months. I already have in place a secondary scan engine of my own design that replicates all the I/O and includes such niceties as scaling, filtering, zero offset, and deadband updating. Recently I noticed a file on the cRIO called 'variables.xml' which quite clearly contains all of the I/O variable properties. I am planning on writing a utility VI that can modify this XML file. I believe I would have to call this VI from my cRIO code and then reboot to have it redigested into the LOGOS? server or scan engine. I understand that the development engineers are loath to support this type of activity, and I also would not want this permanently in my code, but only as a short-term solution until the proper API is released. If anyone has a better idea I would love to hear about it.

    sachsm,
    While I definitely don't promote doing what you suggested, I tried it out, and it looks like it should theoretically work with a reboot. I only did this with a plain VI run from LabVIEW. The XML file is generated when the scan engine variables are first deployed, and it is updated on an input-by-input basis based on what was edited in the Project Explorer. If the XML file is edited and the device rebooted (but the project not changed), the 'updated' scaling will be present when you run the VI. Once you edit the particular I/O in the project, it will override the manual edit; when those are changed in the project, the scaling changes go into effect immediately and the device doesn't need to be rebooted. Good idea, indeed; I'm not sure what the implications of this may be, though. Definitely try at your own risk on this one.
    Regards,
    Jared Boothe
    Staff Hardware Engineer
    National Instruments
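    The utility-VI idea described above amounts to "find the I/O variable's entry in variables.xml, rewrite its scaling values, save the file, reboot". A minimal sketch of that logic, written in Python purely for illustration (the tag and attribute names 'Variable', 'name', 'slope', and 'offset' are assumptions -- inspect the actual variables.xml on your own cRIO to find the real ones):

    ```python
    import xml.etree.ElementTree as ET

    def set_channel_scaling(xml_path, variable_name, slope, offset):
        """Update the scaling of one I/O variable in a variables.xml-style
        file and write the file back in place.

        NOTE: 'Variable', 'name', 'slope', and 'offset' are hypothetical
        names used for this sketch, not the real schema.
        """
        tree = ET.parse(xml_path)
        root = tree.getroot()
        # Search every Variable element for the one we want to rescale.
        for var in root.iter("Variable"):
            if var.get("name") == variable_name:
                var.set("slope", str(slope))
                var.set("offset", str(offset))
                tree.write(xml_path)
                return True
        return False  # variable not found; file left untouched
    ```

    As Jared notes above, the rewritten file would only take effect after a reboot, and a later edit of that I/O in the project overrides the manual change. Try at your own risk.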

  • Scan Engine power up time

    We have supplied a customer with a LabVIEW/cRIO system. There is a real-time program running with the Scan Engine, and a UI on the host. Everything is working fine except that they are observing very long power-up times, in the 2-minute range. In particular, after being powered off for several days, it takes 4 minutes. Part of the time is because of the Ethernet extender modems, which take 35-40 seconds to connect with each other.
    I have run some tests here, without the modems, and have observed that after applying power:
    1. I can ping the controller after 5 seconds.
    2. The system is operational after 35-40 seconds. This was measured both by the user interface being able to read Shared Variables and by the System Manager reconnecting and reading Shared Variables. I let it sit overnight and this time went up to 50 seconds, so there does appear to be a correlation between how long it was powered down and how long it takes to power up.
    I searched the NI forums but couldn't find any discussion on this. Does anyone have any ideas?

    Hey pjackson59,
    Quite a strange problem, I must agree! Here are some ideas for you to try:
    1) In the Distributed System Manager, navigate to the cRIO on the network. Choose NI_SystemState >> Memory >> Available (you may also want to take note of the values of the other three variables there as well). Notice that when you click on it, a trend line appears on the right. Leave this up and running overnight and check it in the morning. If the trend line is going down, you're losing available memory and could have a memory leak in your real-time VI, which could affect startup time.
    2) In MAX, find the cRIO under Remote Systems and click on the System Settings tab. Under the System Monitor section, take note of the Free Drive Space. If this value gets smaller and smaller over time, say dangerously close to no free disk space, then it's definitely possible it could be affecting start-up time.
    3) Ethernet communication is at the lowest priority on a real-time system. That being said, if your real-time VI is doing some hardcore processing, especially more and more as time goes on, then the CPU isn't going to publish data out the Ethernet port as regularly as it should. This would explain why reading those shared variables takes a long time to update.
    4) You could have a combination of the three, in which case you'll probably (unfortunately) need to resort to some more intensive debugging and troubleshooting techniques, i.e. using the Real-Time Execution Trace Toolkit.
    Rory

  • Shared Variables, Scan Engine & Multiple Targets

    I am seeking some general advice about the structure of my LabVIEW project.
    The project consists of a laptop with LabVIEW, and a joystick connected, and a CompactRIO connected via ethernet. I had been running the cRIO in FPGA Interface mode, however a change in some things caused the project to have to be shifted to scan mode.
    As of now, the code on the laptop updates shared variables on the cRIO, and reads from shared variables on the cRIO for monitoring. I want the shared variables hosted on the cRIO because it will also need to operate without the laptop connected. Before switching the cRIO to scan mode, I found that I had to first run the laptop code, and then run the cRIO code, or the shared variables would not communicate properly. Now that I have switched to scan mode, I have to run the cRIO code first, and even then the shared vars do not communicate properly for more than a few seconds, and are much laggier.
    My ideal solution is a system that can run with or without the laptop connected, and obviously without all these shared variable issues. I would like to autostart the code on the cRIO and have the user run the laptop code if necessary, but in the past this did not seem to work with the shared variables.
    I am really confused about why this is happening. Hopefully I have explained my problem well enough. I don't really want to post the entire project on here, but I can email it to people who are willing to take a look at it. Thank you for taking the time to read this.
    I am running LabVIEW 2010 SP1 with the Real-Time, FPGA, DSC, and Robotics modules. I have the Feb '11 driver set and NI-RIO 3.6.0 installed, all completely updated on my RT cRIO.

    I do this type of stuff all the time...
    Move all your NSV libraries to the cRIO. From the project you must deploy them to the cRIO, and from then on they are persistent until you reformat.
    From your Windows HMI app, you can place static NSV tags on the block diagram or use the dynamic SV API to read/write. You can also bind HMI controls and indicators directly to cRIO NSVs (this is what I do). I also create a 'mirror' library in the PC HMI that is bound to the cRIO library. This library has DSC Citadel data logging enabled and will automatically save historical traces of all my important data - very nice. PC-hosted libraries can be set to autodeploy in the app build. The project also has an autodeploy option for the development environment, which I normally turn off. If you have PC-to-cRIO binding set up, you need to be cautious about any sort of autodeployment, since it will potentially force the cRIO app to stop when you deploy. To get around this, you can use PSP binding (IP address rather than project process name) and use the DSC deploy-library VIs in your HMI app. Once you are using the Scan Engine, you can use the DSM (Distributed System Manager) app to view, probe, and chart all of your I/O variables and NSVs on your network.

  • Error message: Norton AntiVirus could not load the scan engine

    Having installed OS X Leopard on my MacBook, Norton AntiVirus 10.0 displays the error message "Norton AntiVirus Auto-Protect could not load the scan engine. Please run LiveUpdate to get the latest version. (Code: 6)."
    I have reinstalled the software and tried updates, and I still get the same error message.
    Any suggestions on how to remedy this problem?

    http://service1.symantec.com/Support/num.nsf/docid/2007102700270911?OpenDocument&seg=hm&lg=en&ct=us
    If you have followed those steps, my next step would be to contact Norton.

  • Can you still use soft motion without the scan engine on crio?

    Where are the trajectory generator property and invoke nodes in SoftMotion for LV 2011?
    These functions are no longer found in the palette. Are they no longer supported?
    All the new SoftMotion examples use the Scan Engine. Can I use SoftMotion without the Scan Engine?
    Steve
    SteveA
    CLD
    FPGA/RT/PDA/TP/DSC

    Hi Ian,
    I apologize that this wasn't stated in the release notes. While your code should have upgraded without breaking, the release documentation should have mentioned that the advanced trajectory generator functionality is no longer supported. If you still want to use the trajectory generator functions, they are still shipped with SoftMotion; they are just not on the palette. I have attached a zip file with 4 .mnu files that will get them on the palette. To install these, do the following:
    1. Close LabVIEW.
    2. Make a copy of the following directory: C:\Program Files (x86)\National Instruments\LabVIEW 2011\menus\Categories\VisionMotion\_nism. In case something goes wrong, we will want a copy of this directory so that we can replace it.
    3. Copy the 4 .mnu files from the attachment into the above _nism directory (the original, not the copy). If it asks you to replace any of the existing files, please do. You don't have to copy the readonly.txt file.
    4. Start LabVIEW. You should now have the trajectory generator functions in your SoftMotion Advanced palette.
    Keep in mind that we no longer support these functions. This means that they are not tested in new releases and any bugs will likely not get fixed.
    I would recommend that you use the new API for any new designs. You can still get most of the functionality of the old API, but without the complexity. If you want to generate setpoints at 5 ms, then you will run the Scan Engine at 5 ms. This is certainly easier than having to do the timing yourself, but it does take away some control from the user. If you give me a brief overview of what you mean by synchronization, I will let you know the best way to do it with the new API.
    Thanks, 
    Paul B.
    Motion Control R&D
    Attachments:
    _nism.zip ‏4 KB

  • Officejet 8500 a909g will scan but won't even try to print (wirelessly)

    I have an Officejet 8500 Pro a909g and downloaded the latest software package from HP for Windows 7. It will scan but won't print; it immediately displays an error when I try to send a test print. I disabled my Norton anti-virus and Norton firewall, but it still won't print. It wants to run the printer troubleshooter, which says there is nothing wrong.

    Try restarting the printer, router and PC. If that does not do it, download and run this utility: http://h20180.www2.hp.com/apps/Nav?h_pagetype=s-926&h_lang=en&h_client=s-h-e17-1&h_keyword=dg-NDU&ju...
    Say thanks by clicking "Kudos" "thumbs up" in the post that helped you.
    I am employed by HP

  • Initiate scan engine scan

    Hi,
    We are attaching 3rd-party EtherCAT slaves to our cRIO. Is there any way to control the Scan Engine clocking so that we can reduce jitter (relative to an external pulse) while communicating with the slaves? Specifically, I would like to initiate scans when an external TTL pulse is received.
    Thanks,
    Steve

    Hey guys, 
    I have done this before on a PXI chassis with a time sync plugin. I used a 6682 card to drive the PXI Clock10 and synchronized the RT clock to the PXI Clock10. Because the Scan Engine runs off the RT clock, the two were synchronized. I have attached a project that shows the setup.
    I haven't done this on a cRIO before, but if we can find a way to drive the cRIO RT clock off a signal, the same idea should work. There is an FPGA timekeeper plugin, but I don't know if it is what we want. Let me and Daniel see if we can find anything further on the issue.
    Assuming we cannot drive the RT clock off your external signal, we could just read the same clock signal on one of the EtherCAT chassis. The distributed clock functionality should guarantee that the cRIO master and EtherCAT slave are synchronized. You can use post-processing to account for differences between the external clock and the cRIO RT clock. Can you tell us more about your system setup and requirements? How tightly do you need to be synchronized?
    Jesse Dennis
    Design Engineer
    Erdos Miller
    Attachments:
    Ethercat Synchronization.zip ‏208 KB

  • Optimizing EtherCAT Performance when using Scan Engine

    Hello everyone,
    This week I have been researching and learning about the limitations of LabVIEW's Scan Engine. Our system is EtherCAT-based (cRIO 9074) with two slaves (NI 9144). We have four 9235s, two 9237s and two 9239 modules per chassis, for a total of 144 channels. I have read that a conservative estimate is 10 µs per channel per scan. With our setup, scanning alone would therefore take 1.44 ms, which yields a rate of roughly 694 Hz. I know that when using a shared variable, the biggest bottleneck is transmitting the data. For instance, if you scan at 100 Hz, it's difficult to transmit that quickly, so it's best to send packets of scans (which you can see in my code).
    With all of that said, I'm having difficulty scanning any faster than 125 Hz without railing out my CPU. I can record at 125 Hz at 96% CPU usage, but if I go down to 100 Hz, I'm at 80%. I noticed that the biggest performance factor is the period of my top loop, the scan loop: scanning every period is much more demanding than scanning every other period. I have also adjusted the scan period in the EtherCAT preferences and see the same performance issues, and I have tried varying the transmission frequency (bottom loop), which doesn't affect performance at all.
    Basically, I have a few questions:
    1. What frequency can I reasonably expect to obtain from the EtherCAT system using the Scan Engine with 144 channels?
    2. What percent of the CPU should be used when running a program (just because it can do 100%, I know you shouldn't go for the max. Is 80% appropriate? Is 90% too high?)
    3.Could you look through my code and see if I have any huge issues? Does my transmission loop need to be a timed structure? I know that it's not as important to transmit as it is to scan, so if the queue doesn't get sent, it's not a big deal. This is my first time dealing with a real time system, so I wouldn't be surprised if that was the case.
    I have looked through almost every guide I could find on using the scan engine and programming the cRIO (that's how I learned the importance of synchronizing the timing to the scan engine and other useful facts) and haven't really found a definitive answer. I would appreciate any help on this subject.
    P.S. I attached my scan/transmit loop, the host program and the VI where I get all of the shared variables (I use the same one three times to prevent 144 shared variables from being on the screen at the same time).
    Thanks,
    Seth
    Attachments:
    target - multi rate - variables - fileIO.vi ‏61 KB
    Get Strain Values.vi ‏24 KB
    Chasis 1 (Master).vi ‏85 KB
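    The back-of-the-envelope rate estimate in the question can be reproduced with a quick calculation. Note that the 10 µs per-channel cost is the poster's conservative assumption, not a measured figure, and real throughput will also depend on CPU load and the microstrain calculation:

    ```python
    # Rough upper bound on the Scan Engine rate for the setup described above,
    # assuming a fixed per-channel scan cost.
    CHANNELS = 144               # channel count as stated in the post
    SECONDS_PER_CHANNEL = 10e-6  # assumed 10 us per channel

    scan_time = CHANNELS * SECONDS_PER_CHANNEL  # time for one full scan
    max_rate = 1.0 / scan_time                  # scans per second

    print(f"one scan: {scan_time * 1e3:.2f} ms")  # one scan: 1.44 ms
    print(f"max rate: {max_rate:.0f} Hz")         # max rate: 694 Hz
    ```

    This matches the ~694 Hz ceiling quoted in the question, and is consistent with DirkW's reply below that 500 Hz is a more realistic maximum once processing overhead is included.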

    Hi,
    It looks like you are using a 9074 chassis and two 9144 chassis, all three full of modules, and you are trying to read all the I/O channels in one scan?
    First of all, if you set the Scan Engine speed on the controller (9074), then you have to synchronize your Timed Loop to the Scan Engine and not use a different timebase, as you do in your scan VI.
    Second, the best performance can be achieved with I/O variables, not shared variables, and you should make sure not to allocate memory in your Timed Loop. Memory will be allocated if an input of a variable is not connected (the error cluster, for example) or if you create arrays from scratch, as you do in your scan VI.
    If you resolve all these issues, you can time the code inside your loop to see how long it really takes and adjust your scan time accordingly. The 9074 does not have that much power, so you should not expect µs timing. 500 Hz is probably a good estimate of the maximum performance for 144 channels, depending on how much time the additional microstrain calculation takes.
    The EtherCAT driver ships with examples that show how to program these kinds of applications. Another way of avoiding the variables would be the programmatic approach using the variable API.
    DirkW

  • How to install LV runtime engine and run .vi's in linux

    I am attempting to load the run-time engine and run a .vi on a Linux system.
    Here is what I have done so far.
    1. I don't have rpm on my system (gentoo) so I used the app on the install cd called INSTALL.norpm to install
    labview70-rte-7.0.1.i386.rpm. It looks like it worked, there are a lot of files copied to /usr/local. A couple of .so files and a directory called Labview-7.0 (12M of files). 
    2. Made a test vi on other linux box that has full development system installed (also gentoo but i have rpm on this system).
        Do I need this Application Builder I keep hearing about to make an executable, or can I run VIs with this run-time engine?
        What is the syntax to run a VI on Linux?  ./test.vi   ????
        Is Application Builder a separate program that I must purchase?
    When I attempt to run a VI I get garbage printed to the console, not unlike when you cat a binary file.
    Any help would be appreciated. Are there instructions somewhere? A manual, perhaps, that explains RTE installation and operation.
    D.A.M.

    You need the Application Builder to create an exe. The RTE by itself will not run a VI; the exe requires the RTE. It is possible to create a "loader" program that you can use to run separate VIs, but the loader program must itself be created with the Application Builder. The Application Builder is a separate add-on, or it comes included if you buy the Professional version of LabVIEW.

  • With iCloud set up on my 4GS I deleted photos from Camera Roll. I run an older PPC G5 with Lion which won't run iCloud. Can I transfer pics from Photo Stream to Camera Roll to allow a non-iCloud backup to my PPC? Precious photos are close to the 30-day cut-off

    With iCloud set up on my 4GS, I deleted photos from Camera Roll to make space, knowing I would have them backed up on iCloud via Photo Stream. I now find that my older PPC G5 with Lion won't run iCloud. I have an iPad 3 too, which of course does. I want to copy photos from Photo Stream back to Camera Roll on the phone so that I can back up "old school" to iPhoto on my ancient Mac. The controls seem to suggest that as an option, but when I select multiple photos in Photo Stream and select "Save to Camera Roll", the photos do not appear in Camera Roll as expected. I am running close to the 30-day limit and do not want to lose hundreds of precious photos (or have to email them to myself individually). Can the iPad 3 help at all? Any ideas, good people? Dan

    If only it were that easy, Winston. I connect with a USB cable, iPhoto opens, and I can only see the photos in Camera Roll, not Photo Stream. There does not seem to be a way of accessing other files on the phone from iPhoto and, as in my initial message, I cannot transfer the photos to Camera Roll on the phone. My iPhone and iPad software is up to date, but my iPhoto is the '08 version, 7.1.5, with my Mac running 10.5.8, which is the best I can do on the PPC without messing everything up. I don't have the dough for a new Mac or I would have gone for that option... am I stuck?

  • Premiere Elements 12 won't run on my laptop

    Hi,
    I have a new laptop and I've just installed Premiere Elements 12, but it won't run. When I create a new project I see the following message, and as you can see there are no clues as to what's causing the issue. Organiser works fine.
    I've run a Windows update, installed the latest graphics and sound drivers, and reinstalled, but nothing works.
    OS Name: Microsoft Windows 7 Enterprise
    Version: 6.1.7601 Service Pack 1 Build 7601
    System Type: x64-based PC
    Processor: Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz, 2501 Mhz, 2 Core(s), 4 Logical Processor(s)
    Installed Physical Memory (RAM): 8.00 GB
    Total Physical Memory: 7.90 GB
    Available Physical Memory: 4.45 GB
    Total Virtual Memory: 15.8 GB
    Available Virtual Memory: 10.6 GB
    Page File Space: 7.90 GB
    Page File: C:\pagefile.sys
    [Sound Device]
    Name: IDT High Definition Audio CODEC
    Manufacturer: IDT
    Status: OK
    Driver: c:\windows\system32\drivers\stwrt64.sys (6.10.6499.0, 539.00 KB (551,936 bytes), 09/07/2014 13:41)
    [Display]
    Name: Intel(R) HD Graphics Family
    PNP Device ID: PCI\VEN_8086&DEV_0A16&SUBSYS_198F103C&REV_0B\3&E89B380&0&10
    Adapter Type: Intel(R) HD Graphics Family, Intel Corporation compatible
    Adapter Description: Intel(R) HD Graphics Family
    Adapter RAM: (2,080,374,784) bytes
    Installed Drivers: igdumdim64.dll,igd10iumd64.dll,igd10iumd64.dll,igdumdim32,igd10iumd32,igd10iumd32
    Driver Version: 10.18.10.3540
    I have got local Admin rights and more free hard disk space than I know what to do with.
    Thanks for any help you can offer.

    Spurringirl
    Before we go off the deep end here, perhaps some clarification.
    What is the brand/model/settings of the Canon camera that you are using?
    I think that Hunt may be jumping ahead here. Let me explain...
    For example, the Canon SX1 IS generates files with names starting with MVI_XXXX.mov. These are AVCHD .mov files, 1920 x 1080p30, and they have no problems in Premiere Elements 12.
    So I would ask: "In your case, is MVI the file extension or the beginning of the file name?"
    Thanks.
    ATR

  • Oracle 8.1.7 on RH 7.0 installed fine, now won't run at all!

    I was able to install Oracle 8.1.7 on RH 7.0 using this fix... http://ftp.valinux.com/pub/support/hjl/glibc/sdk/2.1/README.Oracle8i
    The database initialized correctly and I was able to set up and connect to the database. I even ran a port scanner and checked memory... the thing was running! Then, after the reboot, the database won't run at all. When I type ./oracle it tries to load and then shows "Segmentation fault (core dumped)". When I try ./oracle start, nothing is echoed. I tried to copy the lib files
    ld-linux.so.2
    libc-2.1.3.so
    libdl.so
    libpthread.so
    to the $ORACLE_HOME/lib/ dir and followed John Smiley's howto but the relink script doesn't work! Even still I think something else is wrong other than the library. Can someone help?

    Never mind, figured it out :) Like duh, I should have read the Installation file :?

  • Photoshop CS4 won't run after installation on MacBook Pro

    I recently bought the Adobe Design Premium package with Flash, Illustrator, InDesign, Photoshop, etc.
    After installation, all the other applications work except Photoshop; it just won't start at all.
    It will pop up in the dock for a second or two and then just disappear. The program won't run at all. Other applications like Illustrator and InDesign all work just fine, but not Photoshop. I don't get what's wrong. No error or anything; it just pops up and then disappears.
    The specs of my MacBook Pro:
    Mac OS X 10.6.2
    2.53 GHz Intel Core 2 Duo
    4 GB 1067 MHz DDR3

    So have you done the basics?
    Toss the PS folder and run the installer again?
    Run the updater from an app you can open? (this will update all)
    etc...
