LV2010 Network Streams Bug

It appears that you cannot create a Network Stream datatype of a cluster that contains a variant.
For example, a datatype of a cluster containing a string, a variant, and a boolean does not work correctly.
I am not sure if this is a bug or a restriction in the range of valid datatypes for this API.

This appears to be an editor bug in LV 2010 related to how the data type of the terminal gets updated.  While I can't reproduce the issue you've described exactly, there are definitely issues with getting the cluster to update when changing the names of the cluster elements after wiring the cluster to the data type terminal of the Create VI.  I'm generally able to get the cluster to update when adding elements to the cluster, but it's certainly possible I'm missing something from how you're interacting with the diagram.  We hope to fix this issue for the LV 2011 release.  In the meantime, you should be able to use Unbundle instead of Unbundle By Name to work around the problem.  Another workaround would be to delete the wire from the data type terminal of the Create VI, save the VI, close LV, and restart LV.  After restarting, the Unbundle By Name should update appropriately once you wire the data type terminal of the Create VI again (at least it did in the few tests I conducted).

Similar Messages

  • Visa Resource Name in cluster passed to Network Stream writer causes error 42.

    If I try and pass this "motor data" cluster with an embedded VISA resource name:
    to a Network Stream Writer in this manner:
    Then I get this error from the "Write Single Element to Stream" VI
    If I change the motor data cluster typedef so that a string control is used instead of the VISA resource name, the error disappears and the data passes over the Network Stream without problem.
    Is this expected behavior? 
    I thought that the "data in" (type = "POLY"?) like the one found on the "Write Single Element to Stream" VI was supposed to accept pretty much anything...

    Doug
    I would consider this a bug, since the memory location (more precisely the VISA refnum, or session) of a VISA resource means nothing on a potentially remote system. I was also under the impression that most streaming methods, like binary file I/O but also network streams, use the LabVIEW flattened format for data clusters, and for that the VISA resource name would be the only logical choice instead of the underlying VISA refnum; but I might be wrong in this specific case, and the default behaviour for flattening VISA resources might always have been to use the VISA refnum.
    Considering that a VISA control can resurrect the VISA session from a VISA resource name if the session is already open, flattening to the VISA resource name would be more logical, but using a string control in the cluster type definition is a reasonable workaround, of course.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
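    The flatten-by-name idea Rolf describes can be sketched outside LabVIEW. This is a hypothetical Python analogue (not the actual LabVIEW flattening code, and the `Resource` class is invented for illustration): serialize a resource by the name it was opened from, never by its opaque handle, so a remote peer can re-open it locally.

    ```python
    import json


    class Resource:
        """Stands in for a VISA session: an opaque handle plus the name it was opened from."""

        _next_handle = 1

        def __init__(self, name):
            self.name = name                      # e.g. "GPIB0::22::INSTR"
            self.handle = Resource._next_handle   # meaningless on any other machine
            Resource._next_handle += 1


    def flatten(resource):
        # Send the name, never the handle: the handle only identifies a
        # session inside this process.
        return json.dumps({"resource_name": resource.name})


    def unflatten(payload):
        # The receiver "resurrects" a session by re-opening from the name.
        return Resource(json.loads(payload)["resource_name"])
    ```

    The receiving side gets a different handle, but the same instrument, which is exactly why flattening the refnum itself cannot work across machines.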

  • How to run/test application using network streams

    Hello,
    I'm confused about what steps to follow to test an application using the network streams feature. After reading a lot and going through other discussions, I found two simple VIs in the last post here http://forums.ni.com/t5/LabVIEW/Network-streams-problem-with-cRIO-9022/td-p/1401576 (I attached the VIs, which I modified just a bit). I believe they have been tested before and work, so I tried to do the same on my desktop, and it does not work. I get the same scenario as when I was trying to test the code I wrote: no bugs, both VIs run, but nothing happens.
    According to the link above this is what is supposed to happen: "a PC hosts a GUI (see mainGUI.VI) that sends an array of doubles to the cRIO via one stream and reads a double value back via another. The cRIO, via the scan interface, reads the array via the first stream and writes an analog value to the host via the second (see mainRT.VI). "
    The current set up is: 
    1. Add both VIs to the blank project with the target device cRIO (see attached screenshot)
    2. Deploy mainRT.vi on the cRIO
    3. Run both at the same time
    Am I missing something?
    Also, I have been reading that there are two ways of testing this. One running both VIs on the PC and the other actually using the targeted device, in this case cRIO. Is that correct?
    Please help!
    Thank you
    Attachments:
    project explorer.jpg ‏36 KB
    mainRT.vi ‏62 KB
    mainGUI.vi ‏20 KB

    Connections have TimeOuts, so you have a little "wiggle room" in deciding which to load first.
    In my situation, one side is a Remote Real Time system ("Remote"), which has the behavior that when it starts up, it runs the code that I want, which includes establishing the Remote side of the Network Stream.  I make all (four) streams use the Remote as the site whose URL (or IP) I need to know.  I set them up so that if they time out (I think I may use 15 seconds), they simply close the connections and try again, exiting when all four connections are established, and going on to run the rest of the Remote code.
    On the Host, the expectation is that the code is already running on the Remote.  Accordingly, on the Host side, the Initialization sequence establishes the four connections, all using the IP (URL) of the Remote.  I also use a 15-second timeout, but run it inside a second loop to allow three tries before I give up (and generate an Error Exit).  Otherwise, if all the connections get made, I've got my Network Streams connected and can continue.
    In this scheme, both the Host and Remote code are in the same Project.  What I typically do is Build the Remote code, deploy it, and set it to run as the Startup.  If you are not dealing with a Real-Time System, you might not have that option.  But you should deploy your Remote code, and start both programs within each other's respective TimeOut windows.  Since in my scheme the Remote's timeout window is effectively infinite, I (automatically) start it first, then start the Host.  I haven't timed it, but I'd guesstimate that the connections are established in at most a few tenths of a second.
    Bob Schor
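    Bob's connect-with-retry scheme can be sketched in a few lines. This is a minimal Python analogue using plain TCP sockets in place of Network Stream endpoints (the timeout and retry count mirror his description; the function name is my own):

    ```python
    import socket


    def connect_with_retries(host, port, timeout_s=15.0, max_tries=3):
        """Try to reach the remote side, giving up after max_tries timeouts."""
        last_error = None
        for attempt in range(max_tries):
            try:
                return socket.create_connection((host, port), timeout=timeout_s)
            except OSError as e:
                last_error = e       # timed out or refused: close and try again
        raise ConnectionError(f"no connection after {max_tries} tries") from last_error
    ```

    The Remote side would call this in an endless loop (infinite retries), while the Host gives up after a few tries and raises an error, exactly as in the scheme above.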

  • Network streams prevent LabVIEW executables from closing properly

    If I build an application that contains network stream operations, the application process is not closed after the application stops. This happens even if I use the Quit LabVIEW call!
    To examine this I used a simple add function, with and without a sequenced 'Create Network Stream Reader Endpoint' call. The executable with a network stream call stays in the process list after the application stops.
    It doesn't matter that the lone create-stream call is incomplete and therefore without any functionality; due to a timeout error no stream is ever created, yet the process still lingers as described. So this scenario only demonstrates the behaviour of the executable.
    Any ideas ?
    Thanks in advance
    Christian

    For anyone who is interested I attached a sample project. I think this is a bug without workaround.
    Attachments:
    BugScenarioNetworkStreams.zip ‏13 KB

  • Network stream FXP excess memory usage and poor performance

    I'm trying to stream data at a high-speed rate (3 channels at 1 MB/s) from my 9030 to my Windows host. Because I don't need to use the data on the RT side, I chose to forward FXP <+-,24,5> to my host through a network stream.
    To avoid data loss I chose to use a wide buffer of 6,000,000 elements; with this buffer my memory usage grows from 441 MB to 672 MB and my RIO is unable to stream the data.
    With SGL or DBL, memory usage goes from 441 MB to 491 MB and the data can be streamed continuously.
    Has anyone encountered this problem?

    SQL Developer is java based and relies on the jvm's memory management.
    I'm not aware of any memory leaks as such, but memory tends not to be returned to the system.
    Queries which return large return sets tend to use a lot of memory (SQL Developer has to build a java table containing all the results for display).
    You can restrict the maximum memory allocated by modifying settings in <sqldeveloper>\ide\bin\ide.conf
    The defaults are -
    AddVMOption -Xmx640M
    AddVMOption -Xms128M

  • Network Stream Error -314340 due to buffer size on the writer endpoint

    Hello everyone,
    I just wanted to share a somewhat odd experience we had with the network stream VIs.  We found this problem in LV2014 but aren't aware if it is new or not.  I searched for a while on the network stream endpoint creation error -314340 and couldn't come up with any useful links to our problem.  The good news is that we have fixed our problem but I wanted to explain it a little more in case anyone else has a similar problem.
    The specific network stream error -314340 should seemingly occur if you are attempting to connect to a network stream endpoint that is already connected to another endpoint or in which the URL points to a different endpoint than the one trying to connect. 
    We ran into this issue on attempting to connect to a remote PXI chassis (PXIe-8135) running LabVIEW real-time from an HMI machine, both of which have three NICs and access different networks.  We have a class that wraps the network stream VIs and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between these machines.  The class can distinguish between messaging streams that handle clusters of control and status information and also data streams that contain a cluster with a timestamp and 24 I16s.  It was on the data network streams that we ran into the issue. 
    The symptoms of the problem were that if we would attempt to use the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return with an error of -314340, indicating the writer endpoint was pointing to a third location.  Leaving the URL on the writer endpoint blank, and running in real-time interactive mode or as a startup VI, made no difference.  However, the writer endpoint would return without error and eventually catch a remote endpoint destroyed.  To make things more interesting, if you specified the URL on the writer endpoint instead of the reader endpoint, the connection would be made as expected.
    Ultimately, through experimenting with it, we found that the buffer size of the create writer endpoint for the data stream was causing the problem, and that we had fat-fingered the constants for this buffer size.  Also, pre-allocating or allocating the buffer on the fly made no difference.  We imagine it may be because we are using a complex data type (a cluster with an array inside of it), for which it can be difficult to allocate a buffer.  We guess that when the reader endpoint establishes the connection to a writer with a large buffer size specified, the writer endpoint ultimately times out somewhere in the handshaking routine that is hidden below the surface.
    I just wanted to post this so others would have a reference if they run into a similar situation and again for reference we found this in LV2014 but are not aware if it is a problem in earlier versions.
    Thanks,
    Curtiss

    Hi Curtiss!
    Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
    Regards,
    Kelly B.
    Applications Engineering
    National Instruments

  • FPGA to Real Time using DMA to Host using Network Stream

    Hi All,
    So I am working on a project where I am monitoring various characteristics of a modified diesel engine being driven by a dynamometer. I am trying to log different pressures, temperatures, and engine speed. It basically breaks down into two streams of data acquisition.
    Fast (125kS/ch/s): 
         In cylinder pressure using an NI 9223
         Engine speed via shaft encoder/MPU using same NI 9223
    Slow (1kS/ch/s)
         Other pressures (oil, coolant, tank, etc...) using an NI 9205
         Various temperatures using an NI 9213
    My basic architecture is simultaneous data acquisition on the FPGA for both streams. Each stream is fed into a separate DMA FIFO, which passes it to the RT side. On the RT side, each DMA FIFO is simultaneously read and transferred into one of two separate network streams feeding the host PC. The host PC then does all of the analysis and logs the data to disk. I had this code working on Thursday with great success; then I tried to package the RT VI as an executable so that it could be deployed to the RIO and called programmatically from the host VI. After trying this approach I was told that we needed to do some testing, so I undid the changes, and now the slow stream is not working properly.
    Since then I have installed LV2013 and installed NI RIO 13 so that I could have a fresh slate to work with. It didn't help however, and now I'm having issues just working in scan mode and with simple FPGA applications. Does anyone know if there are some settings that I could have messed up either building the executable application or deploying it? Or why the fast acquisition is still working while the slow doesn't, even though they are exactly the same?
    Sorry to be so scatterbrained with this whole issue. I really have no idea how it could be broken, considering the fast stream is working and the code is practically identical. Let me know if you need more information. I'll upload my code as well.

    Hopefully these files work. 
    The "fast" stream gives data points every 8us without fail, as that is the scan period of a 125kHz sample rate. The "slow" stream on Thursday was giving out data points every 1ms, however, now it gives out data points in a very sporadic interval. Also, the data that it does give out doesn't make any sense, tick count going in the negative followed by positive direction for example. 
    I did uninstall all of the old rio drivers before installing the new set as well. Ill give it another shot though. :/
    Thanks for the reply.
    Attachments:
    Pressure-HighSpeedRT.vi ‏665 KB
    Pressure-HighSpeedFPGA.vi ‏680 KB
    Pressure-Comp.vi ‏28 KB

  • Am getting this message when i try to play music " unable to configure network stream"

    am getting this message when i try to play music " unable to configure network stream"

    Hold on... advice to someone downloading from 1337x !!!
    Until I saw your comment and googled the term, I had no idea that this was a "downloaded" file. I assumed the "1337x" reference was to dimensions/resolution and Noir referred to genre. (I.e., I use similar notations to differentiate files compressed for TV playback comparisons.) In any case, the advice is the same irrespective of the source.
    er, isn't it illegal to download movies ?
    Not necessarily. Thousands of movies are probably downloaded daily as legal purchases from such vendors as iTunes. Thousands more are available for free in the public domain, open source/community videos listings, and under Creative Commons Licensing. In short, there are many legal sources of downloadable content and you should not automatically jump to the conclusion that everyone is a crook.

  • Network streams vs shared variables

    I send data from a PXI RT System to users on different Windows computers via Shared Variable and Network Stream.  The user that receives the data via Network Stream writes the data to a disk file (aka DAQ computer).  The users that receive the data via Shared Variable displays it on front panels (aka Observers). 
    The data consists of a 1D SGL array where elements 0-3 are the timestamp, element 4 is the counter, and elements 5-1000+ are data.  The timestamp is GPS time and is displayed on all computers.  When I look at the timestamp on the DAQ it is slowly falling behind the current GPS time.  After 4 hours it can be up to a minute behind.  When I look at the timestamp on the Observers it is always displaying current GPS time.  When I look at the code on the PXI System, it is always sending the current GPS time.  The counter on the DAQ computer is also behind.
    I am using the Read/Write Single Element Stream functions with the default read/write buffer size of 4096.  The 'timed out?' output is always FALSE for both functions.  No errors are generated.  LabVIEW memory usage is constant during the whole time.
    On the PXI RT System the Network Stream and Shared Variable are being written to inside a Timed While Loop.  The users read the data within a standard While Loop.  Everyone is using LabVIEW 2011.
    It sounds like a buffer is slowly being filled up somewhere, but where?

    On the PXI RT System:
    How often is data sent?
    Are you using a “Flush Stream” function after your “Write Single Element to Stream”?
    On the “DAQ Computer”:
    Are you buffering the reading of the data (i.e. feeding it to a queue)?
    You might try using a property node to read “Available Elements for Reading” to see if they are stacking up here.
    The buffer size is another option to consider.
    steve
    Help the forum when you get help. Click the "Solution?" icon on the reply that answers your
    question. Give "Kudos" to replies that help.
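    Steve's suggestion of decoupling the stream read from the disk write, and watching the backlog, can be sketched like this. A rough Python analogue (the queue size is illustrative, and `backlog()` plays the role of reading the "Available Elements for Reading" property):

    ```python
    import queue

    stream_buffer = queue.Queue(maxsize=4096)   # stands in for the stream's read buffer


    def reader_iteration(element):
        # Fast loop: pull an element from the stream, hand it off immediately.
        stream_buffer.put(element)


    def logger_iteration():
        # Slow loop: write one element to disk per pass (here, just return it).
        return stream_buffer.get()


    def backlog():
        # Equivalent of "Available Elements for Reading": if this number
        # keeps growing, the consumer side is the bottleneck.
        return stream_buffer.qsize()
    ```

    A steadily growing backlog on the DAQ computer would explain the timestamp drifting further and further behind real time.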

  • How can a dangling network stream endpoint be destroyed

    I have a cRIO application that creates a network stream writer endpoint, which communicates with a Windows-hosted app that creates a matching network stream reader endpoint.  I have found that on occasion an error in my software occurs: the RT VI that created the writer endpoint stops, and the Windows app also stops without destroying the endpoint.  The problem is that the RT VI that creates the stream endpoint is called via VI Server and exists in the RT startup hierarchy, so the endpoint name is preserved in memory.  When the process is repeated and the stream is created again, the create endpoint VI errors out saying that the stream already exists.  Since the original stream refnum is lost, my question is how one can deal with this kind of situation.  Unfortunately, the create network stream VIs do not have an option to overwrite any existing endpoints.  I will agree that this sort of thing should not happen in a working application, but during debug it is quite frustrating.

    Hey Sachsm,
    You will want to set up your program such that in the event of an error your application does not exit immediately so that you can still close your network stream references.  You can do this by wiring the error wire through your program and when you detect an error on it stop your loop, close all references, and then pass the error to an error handler. If you don't close your references before the program exits (either by an error or the Abort button) then you will probably have to close LabVIEW to get the references to close.  Post back if you have any questions on this.
    Regards,
    Kevin
    Product Support Engineer
    National Instruments
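    Kevin's advice, in rough Python terms: make sure every endpoint reference is closed before the program exits, whether the run succeeded or errored. The `Endpoint` class here is a stand-in for a stream endpoint refnum, not a real API:

    ```python
    class Endpoint:
        def __init__(self, name):
            self.name = name
            self.open = True

        def close(self):
            self.open = False


    def run(endpoints, body):
        """Run body; on error, still close every endpoint before re-raising."""
        try:
            body()
        finally:
            for ep in endpoints:          # runs on success *and* on error
                ep.close()
    ```

    The `finally` clause is the textual equivalent of routing the error wire through the close calls before the error handler, so no dangling endpoints survive the run.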

  • Network Stream Fails Before Timeout

    Is there any reason why a network stream would fail before the timeout?  I was using streams on an sbRIO-9632, and throughout testing I had no problems.  But when I hooked the network up to NASA's dedicated network cable (which goes through one or two switches to eventually connect to my router, the same one used for testing, at NASA's Lunabotics Mining Competition; I think they said it was a virtual network), the streams, as far as I could tell, were failing anywhere between immediately and the 15-second timeout.  If I recall correctly, it rarely made it beyond the timeout value.
    Unfortunately, I did not have the time to successfully debug the system while connected to that network cable and switches, so I was not able to get the exact error that was occurring.  I do know that the error was originating from the sbRIO, because on my computer it was showing that the network streams were closed.
    If you are curious about the architecture being used, it was very similar to "Teleop - Host Acquisition" in the robotics module project wizard.

    Unfortunately, I've been unable to recreate the issue without being connected to the NASA setup (something I won't be able to do for the next year).  The best that I can tell you is that the "fix" that I implemented was to simply re-establish the network stream and prevent it from stopping the rest of the program using a simple state machine for each end of each stream.  It was able to re-connect fairly quickly so I never lost the connection for an appreciable amount of time (which would be required to see a loss in connection in MAX I'm assuming; maybe I'm wrong assuming this?).
    Anyways, because I cannot replicate the issue and therefore test it again, I will try to not take up too much of your time on this one.  I was just hoping to find out if there was any known circumstance that would cause the stream to fail/timeout prematurely. 
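    The re-establish "fix" described above can be sketched as a tiny state machine per stream end. This is a toy Python version (the state and event names are my own, not from the original code):

    ```python
    # Instead of stopping the whole program when a stream fails, each end
    # of each stream cycles through these states until shutdown.
    CONNECTING, STREAMING, DONE = "connecting", "streaming", "done"


    def next_state(state, event):
        transitions = {
            (CONNECTING, "connected"): STREAMING,
            (CONNECTING, "timeout"): CONNECTING,       # just try again
            (STREAMING, "stream_failed"): CONNECTING,  # re-establish, don't stop
            (STREAMING, "shutdown"): DONE,
        }
        return transitions.get((state, event), state)
    ```

    Because a failed stream only moves the machine back to CONNECTING, a premature failure costs one reconnect cycle rather than taking down the rest of the program.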

  • How to Connect sbRIO to Host Using Network Streams

    Hi,
    I am using a program to transfer data from the PC to the Starter Kit using Network Streams. Can anybody tell me how the IPs of the PC and sbRIO should be configured in order to connect successfully?

    Hey,
    The first thing to do is make sure that the sbRIO can be seen in MAX under Remote Systems.  There is lots of info on the web about connecting a cRIO, which is the same for an sbRIO.  Try these:
    Install NI LabVIEW, LabVIEW Real-Time and FPGA Modules, and NI-RIO Driver
    Assemble Your NI CompactRIO System
    Configure Your NI CompactRIO System for First Use
    Configure NI CompactRIO for DHCP
    Configure NI CompactRIO With a Static IP Address
    Install Software on Your NI CompactRIO Controller
    Introduction to NI LabVIEW
    Lewis Gear CLD
    Check out my LabVIEW UAV

  • Network stream Two network adaptor cards in PC comms with crio

    Hi all,
    I have two network adaptor cards in my PC, which I think is causing my problems when collecting data over TCP.
    I'm seeing an error message -314340 from the create network stream writer/reader; my addresses match up, and I'm using a template cRIO vibration data logger.
    I think the error is a TCP one, and I'm wondering if the two network cards are causing the problem. I've given the cRIO a static IP address of 10.0.0.2 with mask 255.255.0.0.
    You can see the network adaptor below and its IP address, 10.0.0.1, so I'm connected to the cRIO directly via Ethernet cable. However, from the route print cmd I don't see 10.0.0.2. Also, the gateway for the cRIO is left at the default of 0.0.0.0.
    Do I need to add the route to the list? I found the cmd for that, which I don't think I'm using right… route ADD 10.0.0.2 MASK 255.255.0.0 10.0.0.1
    Cheers

    Hi, I'm using the cRIO data logger template from the create project list.
    I'll post my code and some pictures to help explain myself.
    I'm using a different cRIO, so I created a new project and configured the FPGA VI for my hardware, then added the host side. I can run the code and see the waveforms, but after 10 seconds it outright crashes, and I've been searching through all the error messages, tracking it back to some unfound reason. The irony is that sometimes the program creates a TDMS file and makes a deposit in the crio folder/deposit (this is defined from an XML file).
    I do notice that within a few seconds the CPU shown on the waveform panel increases to 70+% before the crash, so I'm also thinking the FIFO read is at fault, but it could be another red herring (I'm just installing the real-time execution trace package so I can trace this better).
    I've changed the number of channels from the default 4 to 2; please look in the VibConfig.xml file to see the change, as the [Host] Create Config XML.vi doesn't like changing the channel count from 4 for some reason.
    My set up is [SENSORS] ==> [cRIO] ==ETHERNET==> [PC] Please help 
    Attachments:
    LOG3.zip ‏1584 KB
    error63036.png ‏18 KB

  • Can't create Network Stream connections between Win and RT

    In the attached example, why can't I get all four links to connect without errors? Here's a representative example of the result I'm getting:
    Attachments:
    network_stream_test.zip ‏18 KB

    Hi Bob -
    Yes, one pair of the streams failed to connect. That's what I'm asking about. The timeout is more than enough for them to connect if they will, as evidenced by the pair that did successfully make a connection. (I've also used Network Streams in a couple of applications already, and even across subnets they normally connect within 1 second, so a 10 second timeout isn't an issue here.)
    Yep, I'm aware of the Context Name. But as the Network Streams whitepaper says, and as you reiterated, the default context is assumed when it's omitted from the URL. That's why one pair of the streams was able to connect in my screenshot. And I'm running my example in the dev environment anyway, so contexts shouldn't be necessary in the URL at all.
    If you like, you can grab the example I attached and run it yourself to help me investigate. Do all the endpoints connect successfully on your Win/RT setup?
    On a side note, can anyone tell me how to query the current VI's context? I'm certain there's a Server or Scripting node somewhere that does it, but I can't find it at the moment...

  • Network Streams child's play ?

    Hi all
    after a day of playing with network streams I am afraid network streams are not child's play, or rather they are!
    I tried to re-enact a rather awkward but working communication scheme between one server and several clients that I built some time ago.
    Background: we want to have instances of the same exe reporting to one UI.
    The idea is that the UI does not know beforehand how many exes are being started and where they are originating from.
    So, as before, I had a listener that would only recognize a connection request and then spawn a re-entrant handler to do the rest.
    What I found is below, and maybe someone can comment:
    - When a "Create Network Stream Reader Endpoint" is answered, it is not possible to get the URL of your partner.
    - I tried to get my own URL beforehand to send it over to the new partner. Getting a fully qualified URL with host and application seems awkward.
    - I tried to have an executable start up a connection to a partner under LabVIEW, but it would not work at all, or it gave an error like "duplicate connection". How specific must the URL be that you use when setting up connections? The whole concept of a "default environment" seems a bit quick and very dirty.
    Knowing that shared variables sometimes give problems when the processor is running at high load, and that network streams use the same protocol, I am considering reviewing my old STM scheme, although the port juggling might also be difficult on a single PC.
    We'll see tomorrow (CEST).
    Gabi 
    7.1 -- 2013
    CLA

    I'll suggest an often-overlooked but valid option going back to LV 6i:
    VI Server.
    Action engines served via VI Server allow multiple connections and bring with them all of the capabilities of TCP/IP. They handle all of the data conversions and adaptation as data types morph over time.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
