Network Streaming data acquisition (PC-cRIO)

Hello,
I have written some code using LabVIEW's network streams to transfer data from the cRIO to my computer and then save this data in a .csv file. I want to test it, but I'm confused about how to run or deploy it.
Right now I have the code divided into two VIs, one for each endpoint. I'm assuming the writer endpoint corresponds to the cRIO and the reader to my PC.
So my questions are:
- Where exactly in the project explorer am I supposed to add these files? (Please check the attachment and tell me if it is correct.)
- What do I need to deploy to the cRIO, and how do I run the whole application?
- Can I build a UI for both VIs (writer and reader)? Or, because the writer runs on the cRIO, can I only see the front panel of the reader?
Thank you very much!
Attachments:
project explorer.jpg ‏37 KB

It looks like you have the files in the right place. Just try to run each VI in its proper context. If you are connected to your cRIO, it will deploy and run for you.
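For readers coming from text-based languages: network streams are LabVIEW-specific, but the writer/reader endpoint relationship behaves like a one-way TCP producer/consumer. Below is a rough Python analogy (the port, payload format, and names are invented for illustration; this is not NI's actual protocol). The cRIO side plays the writer; the PC side plays the reader that logs what arrives.

```python
import socket
import struct
import threading

ready = threading.Event()
rows = []  # stand-in for the .csv log on the PC side

def reader_endpoint(port=6341):
    """PC role: listen for the writer, then pull elements as they arrive."""
    with socket.create_server(("127.0.0.1", port)) as srv:
        ready.set()                      # listening; the writer may now connect
        conn, _ = srv.accept()
        with conn, conn.makefile("rb") as f:
            while (chunk := f.read(8)):  # one 8-byte big-endian double per element
                rows.append(struct.unpack("!d", chunk)[0])

def writer_endpoint(port=6341):
    """cRIO role: connect to the reader and stream elements one way."""
    ready.wait()
    with socket.create_connection(("127.0.0.1", port)) as s:
        for v in range(5):
            s.sendall(struct.pack("!d", float(v)))

t = threading.Thread(target=reader_endpoint)
t.start()
writer_endpoint()
t.join()
print(rows)  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

The analogy matches the answer above: each end runs in its own context (the writer VI on the RT target, the reader VI on the PC), and the "connection" only exists once both endpoints are running.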

Similar Messages

  • Problem in data acquisition for cRIO-9076 with C Series drive interface module NI-9516

    I am using LabVIEW for my project, i.e., the speed control (using PID) of a motor, and want to create a VI for it.
    The specifications of the products being used are as follows:
    1) Motor: AKM24F (dc motor)
    2) CompactRIO: cRIO-9076
    3) C Series Servo Drive Interface: NI-9516
    I am facing a problem with the real-time interface between the motor and the PID block in LabVIEW, specifically in the data acquisition part. Please suggest a way in which I can successfully acquire the analog data (speed) from the motor, and vice versa, in the VI.

    What is the priority of the VI you're running? I'd be concerned that maybe you've starved out the Ethernet transmit thread or something.
    -Danny

  • Data acquisition using cRIO-9066 and C++

    Hi there!
    I want to write a C++ application that would acquire data from some modules installed in a cRIO-9066 chassis, and this application has to work without LabVIEW. How can I do it? Can I connect this chassis to my PC using NI-DAQmx? Is it possible?

    Hi aanodin,
    When using a device that uses our RIO architecture, it is usually best to use LabVIEW to develop your application. This way you can also program the FPGA using the LabVIEW FPGA module, and it makes programming the Real-Time processor much easier. In fact, your model of cRIO is only officially supported by our LabVIEW programming language, as seen on page 4 of the manual: (http://www.ni.com/pdf/manuals/376186a.pdf).  
    Because of the FPGA interface, you cannot use DAQmx with cRIO. Hope this helps.
    Best Regards,
    Roel F.
    Applications Engineer
    National Instruments

  • Network stream: two network adapter cards in PC, comms with cRIO

    Hi all,
    I have two network adapter cards in my PC, which I think is causing my problems when collecting data over TCP.
    I'm seeing error -314340 from Create Network Stream Writer/Reader Endpoint; my addresses match up and I'm using a template cRIO vibration data logger.
    I have two network cards in my PC. I think the error is a TCP one, and I'm wondering if this is causing the problem. I've given the cRIO a static IP address, 10.0.0.2, with mask 255.255.0.0.
    You can see the network adapter below and its IP address, 10.0.0.1, so I'm connected directly to this via Ethernet cable. However, from the `route print` command I don't see 10.0.0.2. Also, the gateway for the cRIO is the default, 0.0.0.0.
    Do I need to add the route to the list? I found the command for that, which I don't think I'm using right... route ADD 10.0.0.2 MASK 255.255.0.0 10.0.0.1
    Cheers
    Solved!
    Go to Solution.

    Hi, I'm using the cRIO data logger template from the Create Project list.
    I'll post my code and some pretty pictures to help explain myself.
    I'm using a different cRIO, so I created a new project and configured the FPGA VI for my hardware, then added the host side. I can run the code and see the waveforms; after 10 seconds it outright crashes, and I've been searching through all the error messages, tracking them back to some unfound reason. The irony is that sometimes the program creates a TDMS file and makes a deposit in the cRIO folder/deposit (this is defined from an XML file).
    I do notice that within a few seconds the CPU on the waveform panel increases to 70+% before the crash, so I'm also thinking the FIFO read is at fault, but it could be another red herring (I'm just installing the real-time execution trace package so I can trace this better).
    I've changed the number of channels from the default 4 to 2; please look in the VibConfig.xml file to see the change, as [Host] Create Config XML.vi doesn't like changing the channel count from 4 for some reason.
    My setup is [SENSORS] ==> [cRIO] ==ETHERNET==> [PC]. Please help!
    Attachments:
    LOG3.zip ‏1584 KB
    error63036.png ‏18 KB

  • Streaming data from cRIO

    Hi everybody,
    This is quite a general question and I am looking for some pointers\opinions\suggestions.
    I want to stream data from several channels at different rates from my cRIO to the host PC. As far as I can see there are three options:
    1)  Use a network shared variable (NSV) for each channel and read them in a loop on the host PC.
    2)  Use one NSV which uses a cluster of an enum and variant. In this case the enum specifies which channel (and therefore data type) the packet contains and the variant contains the data.
    3)  Use STM to stream the data over the network using TCP/IP
    These are my questions/thoughts:
    1)  Not very scalable, as each NSV has to be read individually; it also just seems messy!
    2)  Seems nice, clean, and scalable, and multiplexes the data into one channel. I am aware NSVs are not quite as efficient as directly using STM over TCP/IP, BUT... is there an additional (significant) overhead in converting all the data to variants?
    3)  According to the documentation on NSVs, using TCP/IP is the fastest way to stream the data. This method seems to offer the same advantages as (2): clean, and it multiplexes the data. BUT it does add complexity in developing the code. Also, the data is still flattened to a string; is this much different from converting to a variant?
    THE QUESTION: is method (2) a reasonable compromise if the absolute highest data rates are not required?
    Many thanks,
    Steve.

    Hi Steve,
    I have been looking into this problem for you. It seems that each of the three options you highlighted could be used to achieve your goal, though since you want to stream multiple channels, method 1 would be highly inefficient and therefore should not be considered for this application. I would suggest that method 3, which uses TCP/IP to stream the data over the network, is the optimal solution in this case, though as you have alluded to, this does add a level of complexity. If you decide to pursue this option, I have found a really useful link that provides more details about this methodology and a LabVIEW example which should help you get started with the coding. I have also found this link to a forum thread which may be of interest to you (especially the final post by Kurt).
    I would also like to add that I completely agree with Brian K, in that I believe method 2 is a very acceptable compromise, especially if high data rates are not necessary. 
    I hope this helps.
    Best Regards,
    Christian Hartshorne
    Applications Engineering
    National Instruments
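Method (2), multiplexing tagged data through a single channel, can be sketched in a few lines. This hypothetical Python analogy uses an Enum for the channel tag and an in-process queue standing in for the single NSV; the payload type may differ per tag, which is what the variant buys you in LabVIEW (names invented for illustration):

```python
from enum import Enum
from queue import Queue

class Chan(Enum):      # the enum half of the cluster names the channel
    TEMP = 1
    PRESSURE = 2

stream = Queue()       # stand-in for the single NSV / network channel

# Writer side: every channel is multiplexed into one stream as (tag, payload).
stream.put((Chan.TEMP, 21.5))
stream.put((Chan.PRESSURE, [101.3, 101.4]))  # payload type can differ per tag
stream.put((Chan.TEMP, 21.7))

# Reader side: demultiplex on the tag, like a case structure on the enum.
received = {}
while not stream.empty():
    tag, payload = stream.get()
    received.setdefault(tag, []).append(payload)

print(received[Chan.TEMP])      # [21.5, 21.7]
print(received[Chan.PRESSURE])  # [[101.3, 101.4]]
```

The trade-off discussed above lives in the payload step: in LabVIEW the payload must be converted to a variant (or flattened to a string for method 3) and back, which adds some per-element cost.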

  • FPGA to Real Time using DMA to Host using Network Stream

    Hi All,
    So I am working on a project where I am monitoring various characteristics of a modified diesel engine being driven by a dynamometer. I am trying to log different pressures, temperatures, and engine speed. It basically breaks down into two streams of data acquisition.
    Fast (125kS/ch/s): 
         In cylinder pressure using an NI 9223
         Engine speed via shaft encoder/MPU using same NI 9223
    Slow (1kS/ch/s)
         Other pressures (oil, coolant, tank, etc...) using an NI 9205
         Various temperatures using an NI 9213
    My basic architecture is simultaneous data acquisition on the FPGA for both streams. Each stream is fed into a separate DMA FIFO, which passes it to the RT side. On the RT side, each DMA FIFO is simultaneously read and transferred into one of two separate network streams, which feed the host PC. The host PC then does all of the analysis and logs the data to disk. I had this code working on Thursday with great success; then I tried to package the RT VI as an executable so that it could be deployed to the RIO and called programmatically from the host VI. After trying this approach I was told that we needed to do some testing, so I undid the changes, and now the slow stream is not working properly.
    Since then I have installed LabVIEW 2013 and NI-RIO 13 so that I could have a fresh slate to work with. It didn't help, however, and now I'm having issues just working in Scan Mode and with simple FPGA applications. Does anyone know if there are some settings that I could have messed up either building the executable application or deploying it? Or why the fast acquisition is still working while the slow one isn't, even though they are exactly the same?
    Sorry to be so scatterbrained with this whole issue. I really have no idea how it could be broken, considering the fast stream is working and the code is practically identical. Let me know if you need more information. I'll upload my code as well.

    Hopefully these files work.
    The "fast" stream gives data points every 8 µs without fail, as that is the scan period of a 125 kHz sample rate. The "slow" stream on Thursday was giving out data points every 1 ms; however, now it gives out data points at a very sporadic interval. Also, the data that it does give out doesn't make any sense: the tick count goes in the negative and then positive direction, for example.
    I did uninstall all of the old RIO drivers before installing the new set as well. I'll give it another shot though. :/
    Thanks for the reply.
    Attachments:
    Pressure-HighSpeedRT.vi ‏665 KB
    Pressure-HighSpeedFPGA.vi ‏680 KB
    Pressure-Comp.vi ‏28 KB

  • How to connect an sbRIO to a host using Network Streams

    Hi,
    I am using a program to transfer data from the PC to the Starter Kit using Network Streams. Can anybody tell me how the IP of the PC / sbRIO should be configured in order to connect successfully?

    Hey,
    The first thing to do is make sure that the sbRIO can be seen in MAX under Remote Systems. There is lots of info on the web for connecting a cRIO, which will be the same for an sbRIO. Try these:
    Install NI LabVIEW, LabVIEW Real-Time and FPGA Modules, and NI-RIO Driver
    Assemble Your NI CompactRIO System
    Configure Your NI CompactRIO System for First Use
    Configure NI CompactRIO for DHCP
    Configure NI CompactRIO With a Static IP Address
    Install Software on Your NI CompactRIO Controller
    Introduction to NI LabVIEW

  • Simple Network Streams - Example VIs

    In the example VI Simple Network Streams - Host.vi, the receive loop has a choice of two timers: 10 and 20 milliseconds. In addition, the Read Single Element from Stream VI has a timeout of 100 milliseconds connected to it.
    In Simple Network Streams - Target.vi the receive loop has no timer, but the Read Single Element from Stream VI has a timeout of 1000 milliseconds connected.
    How can the target receive loop work without a timer?  What is the purpose of the 100 millisecond timeout of the Host receive loop if the loop has a timer of at most 10 milliseconds?  I have this example working on my cRIO-9030, but I don't understand it.  I think this gets down to my lack of understanding of the timeout function of reads and writes for streams and FIFOs.  Does the VI sleep until just before the timeout to check for data, or does it hog the processor the entire time looking for data?  It doesn't seem to be the latter since the processors on the target are only partially occupied in this example.

    First of all, where is this example?  You didn't post the code, and I didn't find it in the LabVIEW Examples, so it's difficult to explain "invisible" code.
    However, "sight-unseen", I may be able to explain some of your issues. If you think about Network Streams as a Producer/Consumer pattern "across the network", the Consumer (the receive loop) might not need a timer, as it is being "driven" (and hence "timed") by the Producer sending data to it. It probably should have a timeout just in case the Producer stops sending (say, it exits).
    Functions such as dequeues and receives "block" and don't do anything if they have no input.  Recall that LabVIEW uses data flow, and can have multiple parallel processes (loops) running at the same time.  When a function blocks, the code section (let's call it a "loop") containing the function stops using CPU cycles, allowing the other parallel loops to use the time.  This lets processes run "at their own speed" as long as they are sufficiently "decoupled" by being placed in parallel loops.  When there is no data, the loop essentially "waits", and lets all the other loops use the time instead.
    Bob Schor
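The blocking behavior Bob describes is the same as a blocking queue read in text-based languages: the call sleeps, consuming no CPU, until data arrives or the timeout expires, and it wakes immediately when data shows up rather than at the timeout. A small Python sketch of that semantics (timings are illustrative):

```python
import queue
import threading
import time

q = queue.Queue()

def producer():
    time.sleep(0.2)      # the producer "drives" the timing
    q.put("sample")

threading.Thread(target=producer).start()

t0 = time.monotonic()
item = q.get(timeout=1.0)    # blocks (sleeps) until data arrives or 1 s passes
elapsed = time.monotonic() - t0

print(item)                  # sample
print(elapsed < 1.0)         # True: woke when data arrived, not at the timeout
```

While the `get` is blocked, the thread is descheduled by the OS, which is why the target's processors show only partial load in the example: the read is waiting, not polling.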

  • Reconnect network stream

    Good evening.
    I am developing a distributed application that executes on a cRIO and a Windows computer. I want the cRIO to pump data to the Windows computer. Both the cRIO and the Windows computer run LabVIEW 2012.
    It has been suggested that I use a network stream to provide the data to the Windows computer. I have implemented a solution based on examples provided in the Example Finder. Executing the host and the target works.
    I need to be able to disconnect the Windows computer from the cRIO and then, at some later time, reconnect to the cRIO. I have not been able to do that without restarting both the cRIO and the Windows programs. When I attempt to reconnect without restarting the cRIO execution, the Windows computer software reports error -1950678941.
    I am unclear where I am going wrong. My assumption is that when the Windows computer disconnects from the cRIO, I need to destroy the network stream on both ends, and then when the Windows computer reconnects I need to re-establish the reader and writer. I have also tried to have the writer (on the cRIO) continue to pump data (of course this doesn't work while the Windows computer is disconnected) without destroying the writer endpoint. This also failed.
    Does anyone have any suggestions on how to "start" and "stop" network streams without stopping and restarting the software running on the cRIO?
    Thanks,
    Hamilton Woods

    Check out this white paper on Lossless Communication with Network Streams: http://www.ni.com/white-paper/12267/en/. It has information about how to architect the reconnection behavior you want. Have a great day.
    Alex D
    Applications Engineer
    National Instruments
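The reconnection pattern the white paper recommends boils down to a loop on the writer side: create the endpoint, stream until the far end disappears, destroy the endpoint, and try again. Below is a hypothetical Python socket analogy of that control flow (function names, timings, and the flaky test reader are all invented for illustration; this is not the NI API):

```python
import socket
import threading
import time

def run_writer(host, port, get_sample, stop, retry_delay=0.05):
    """Reconnect loop: (re)create the writer endpoint whenever the reader goes away."""
    while not stop():
        try:
            with socket.create_connection((host, port), timeout=2) as s:
                while not stop():
                    s.sendall(get_sample())   # stream until the link drops
                    time.sleep(0.01)
        except OSError:
            time.sleep(retry_delay)           # endpoint "destroyed"; retry later

connections = []

def flaky_reader(srv):
    """Accept two separate connections, dropping the first to force a reconnect."""
    for _ in range(2):
        conn, _ = srv.accept()
        connections.append(conn)
        conn.recv(64)
        conn.close()                          # the reader side goes away

srv = socket.create_server(("127.0.0.1", 0))  # OS-assigned free port
port = srv.getsockname()[1]
reader = threading.Thread(target=flaky_reader, args=(srv,))
reader.start()

run_writer("127.0.0.1", port, lambda: b"x", stop=lambda: len(connections) >= 2)
reader.join()
srv.close()
print(len(connections))  # 2: the writer reconnected after the first drop
```

The key point, which matches Hamilton's own conclusion, is that after a disconnect the old endpoint must be torn down on both sides; the writer cannot keep pumping into a dead endpoint and expect the next reader to attach to it.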

  • RT-Target running out of memory while writing to Network Stream

    Hey there,
    I have a program that transfers acquired data from the FPGA to the host PC. The RT VI reads the data from the DMA FIFO and writes it onto a network stream (BlockDiagram.png).
    Now I am experiencing a phenomenon where the RT target fills its RAM until it's full, then crashes.
    I have no idea why this happens; the buffer of the network stream is empty, all elements are read by the host, and there is no array being built by indexing or the like.
    Does anybody have an idea how I can handle this?
    Best regards,
    Attachments:
    BlockDiagram.png ‏43 KB
    DSM.png ‏78 KB

    Hey there,
    I got the problem solved. The problem was that the buffer of the sender endpoint was too big. Unlike this problem: http://digital.ni.com/public.nsf/allkb/784CB8093AE30551862579AB0050C429, it wasn't memory growth from dynamic memory allocation;
    it's just the normal speed of the cRIO while allocating the buffer memory. With the sender buffer set much smaller, memory growth stops at a specific level (DSM2.png).
    It's only strange that memory usage grows that slowly, despite creating the endpoint with a preallocated buffer, while usage sinks rapidly when the VI execution stops...
    Best regards...
    Attachments:
    DSM2.png ‏63 KB

  • Data acquisition problem with NI-DAQmx 9205 and SignalExpress

    Hi everyone,
    I am using an NI-DAQmx 9205 connected via Ethernet to my computer, with LabVIEW SignalExpress running to acquire data. I am working with EMG, and I use an amplifier system from Grass Technologies (http://www.grasstechnologies.com/products/ampsystems/ampsystems.html). The way it works is that the electrodes are plugged into the amplifier system and the amplifier is plugged into the NI-DAQmx 9205.
    I don't know how to set up the system to be able to read the EMG signal properly in SignalExpress.
    Does anyone know how to use it?
    Thank you for your help
    John

    Hi John,
    If I were an Applications Engineer I'd probably be able to figure this out on my own, but would you mind elaborating on your hardware setup a little? You mention plugging into a "NI-DAQmx 9205"; however, NI-DAQmx is the data acquisition driver, while the [NI] 9205 is an analog input voltage module. Do you mean to be plugging into a CompactDAQ or cRIO?
    Also, I'm not familiar with EMG readings or the amplifiers/electrodes you're working with.
    What is the signal type going into the amplifier from the electrodes, and what is the signal type and range (amplitude) of the signal coming out of the amplifier: AC, DC? I assume it's all voltage?
    Straight voltages are pretty easy to work with, provided you can scale them to usable units.
    Let's get your hardware and signal types figured out; then we'll be able to tell SignalExpress how to handle those signals.

  • While loop and data acquisition timing worries

    Hello everyone, 
    I apologize in advance if this is a silly question, but I could not find the answer around here. 
    I have made a VI to record continuously from 64 analog channels at a 5 kHz sampling rate. I then stream this data to a TDMS file. The data acquisition and write functions are in a while loop.
    In the same loop, I have a bunch of other loops that each run on their own wait timers to help limit the amount of memory they take up. I am now worried that this may somehow affect my data acquisition timing.
    If I put a bunch of timed loops within another loop, does the outer loop run at the same pace as the slowest of the inner loops? And could that mess up my sampling rate?
    I have attached my VI, in case what I just wrote makes no sense at all. 
    Thanks for any tips...
    Attachments:
    Record_M8viaDAQv3.vi ‏237 KB

    Well, looking at your code, you will only write to your TDMS file one time. You have multiple infinite loops within the main/outer loop. That means the main loop will only run a single iteration, because it cannot complete an iteration until all the code within it completes. With at least two infinite loops inside the loop, it will never complete. Not to mention, the only way to stop your code is to hit the stop/abort button, which is not a good way to stop your code. As someone once said, using the abort button to stop your code is like using a tree to stop your car: it will work, but it's not advised.
    As Ben mentioned, try to understand data flow better. You have unnecessary sequence frames in your code where normal data flow will control the execution sequence.
    Mark Yedinak
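The dataflow rule behind Mark's diagnosis holds in any language: an enclosing loop's iteration cannot complete until everything inside its body returns. A trivial Python illustration (the loop bodies here are hypothetical stand-ins for the structures on the diagram):

```python
iterations = {"outer": 0, "inner": 0}

def inner_loop(n):
    """Stands in for a loop placed inside the outer loop's frame."""
    for _ in range(n):
        iterations["inner"] += 1

for _ in range(3):        # the outer "while" loop
    inner_loop(5)         # the outer iteration blocks until this returns
    iterations["outer"] += 1

print(iterations)  # {'outer': 3, 'inner': 15}
# If inner_loop never returned (an infinite inner loop), the outer loop would
# be stuck on its first iteration, which is why the TDMS write ran only once.
```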

  • Network stream fxp excess memory usage and poor performance

    I'm trying to stream some data at a high rate (3 channels at 1 MB/s) from my 9030 to my Windows host. Because I don't need to use the data on the RT side, I chose to forward FXP <+-,24,5> values to my host through a network stream.
    To avoid data loss I chose to use a wide buffer of 6,000,000 elements; with this buffer my memory usage grows from 441 MB to 672 MB, and my RIO is unable to stream the data.
    With SGL or DBL, memory usage goes from 441 MB to 491 MB, and the data can be streamed continuously.
    Has anyone encountered this problem?


  • Network Stream Error -314340 due to buffer size on the writer endpoint

    Hello everyone,
    I just wanted to share a somewhat odd experience we had with the network stream VIs.  We found this problem in LV2014 but aren't aware if it is new or not.  I searched for a while on the network stream endpoint creation error -314340 and couldn't come up with any useful links to our problem.  The good news is that we have fixed our problem but I wanted to explain it a little more in case anyone else has a similar problem.
    The specific network stream error -314340 should seemingly occur if you are attempting to connect to a network stream endpoint that is already connected to another endpoint or in which the URL points to a different endpoint than the one trying to connect. 
    We ran into this issue on attempting to connect to a remote PXI chassis (PXIe-8135) running LabVIEW real-time from an HMI machine, both of which have three NICs and access different networks.  We have a class that wraps the network stream VIs and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between these machines.  The class can distinguish between messaging streams that handle clusters of control and status information and also data streams that contain a cluster with a timestamp and 24 I16s.  It was on the data network streams that we ran into the issue. 
    The symptoms of the problem were that if we attempted to use the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return with an error of -314340, indicating the writer endpoint was pointing to a third location. Leaving the URL on the writer endpoint blank and running in real-time interactive mode or as a startup VI made no difference. However, the writer endpoint would return without error and eventually catch a remote-endpoint-destroyed error. To make things more interesting, if you specified the URL on the writer endpoint instead of the reader endpoint, the connection would be made as expected.
    Ultimately, through experimenting with it, we found that the buffer size of the Create Network Stream Writer Endpoint for the data stream was causing the problem, and that we had fat-fingered the constants for this buffer size. Also, pre-allocating or allocating the buffer on the fly made no difference. We imagine that it may be because we are using a complex data type (a cluster with an array inside of it), and it can be difficult to allocate a buffer for this data type. We guess that the issue may be that when the reader endpoint establishes the connection to a writer with a large buffer size specified, the writer endpoint ultimately times out somewhere in the handshaking routine that is hidden below the surface.
    I just wanted to post this so others would have a reference if they run into a similar situation and again for reference we found this in LV2014 but are not aware if it is a problem in earlier versions.
    Thanks,
    Curtiss

    Hi Curtiss!
    Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
    Regards,
    Kelly B.
    Applications Engineering
    National Instruments

  • VISA Resource Name in cluster passed to Network Stream writer causes error 42.

    If I try and pass this "motor data" cluster with an embedded VISA resource name:
    to a Network Stream Writer in this manner:
    Then I get this error from the "Write Single Element to Stream" VI
    If I change the motor data cluster typedef so that a string control is used instead of the VISA resource name, the error disappears and the data passes over the network stream without problem.
    Is this expected behavior? 
    I thought that the "data in" (type = "POLY"?) like the one found on the "Write Single Element to Stream" VI was supposed to accept pretty much anything...

    Doug
    I would consider this a bug, as the memory location (more precisely, the VISA refnum or session) of a VISA resource means nothing on a potentially remote system. Also, I was under the impression that most streaming methods, like binary file I/O but also network streams, use the LabVIEW flattened format for data clusters, and for that the VISA resource name would be the only logical choice instead of the underlying VISA refnum; but I might be wrong in this specific case, and the default behaviour when flattening VISA resources might always have been to use the VISA refnum.
    Considering that the VISA control can resurrect the VISA session from a VISA resource name if it is already opened, flattening to the VISA resource name instead would be more logical, but using a string control in the cluster type definition is a reasonable workaround, of course.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
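Rolf's distinction can be made concrete outside LabVIEW: a session handle is only meaningful inside the process that opened it, so the portable thing to flatten is the resource name, from which the receiver can open its own session. A hypothetical Python sketch (the session table and function names are invented for illustration, not VISA's API):

```python
import struct

sessions = {}  # process-local table: resource name -> open "session" object

def open_session(resource_name):
    """Stand-in for opening a VISA session; returns a process-local refnum."""
    sessions[resource_name] = object()
    return id(sessions[resource_name])

def flatten_by_name(resource_name):
    # Portable: the receiver can reopen its own session from the name.
    return resource_name.encode()

def flatten_by_refnum(refnum):
    # Not portable: 8 opaque bytes with no meaning on a remote machine.
    return struct.pack("!Q", refnum)

refnum = open_session("ASRL1::INSTR")
name_wire = flatten_by_name("ASRL1::INSTR")
refnum_wire = flatten_by_refnum(refnum)
print(name_wire)         # b'ASRL1::INSTR'
print(len(refnum_wire))  # 8
```

This is exactly why swapping the VISA control for a string control in the cluster works: the string carries the name, which is the only part the far side can use.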
