Network Stream Error -314340 due to buffer size on the writer endpoint

Hello everyone,
I just wanted to share a somewhat odd experience we had with the network stream VIs.  We found this problem in LV2014 but don't know whether it is new to that version.  I searched for a while on the network stream endpoint creation error -314340 and couldn't find any links that were useful for our problem.  The good news is that we have fixed our problem, but I wanted to explain it in a little more detail in case anyone else runs into something similar.
The network stream error -314340 is supposed to occur when you attempt to connect to a network stream endpoint that is already connected to another endpoint, or when the URL points to a different endpoint than the one trying to connect.
We ran into this issue while connecting from an HMI machine to a remote PXI chassis (PXIe-8135) running LabVIEW Real-Time, both of which have three NICs and access different networks.  We have a class that wraps the network stream VIs, and we have deployed this class across four machines (Windows and RT) to establish over 30 network streams between them.  The class distinguishes between messaging streams, which carry clusters of control and status information, and data streams, which carry a cluster containing a timestamp and an array of 24 I16s.  It was on the data streams that we ran into the issue.
The symptom was that if we ran the HMI computer with a reader endpoint specifying the URL of the writer endpoint on the real-time PXI, the reader endpoint would return error -314340, indicating the writer endpoint was pointing to a third location.  Leaving the URL on the writer endpoint blank, and running the RT code either interactively or as a startup VI, made no difference.  Meanwhile, the writer endpoint would return without error and would eventually report that the remote endpoint had been destroyed.  To make things more interesting, if we specified the URL on the writer endpoint instead of the reader endpoint, the connection was made as expected.
Ultimately, through experimentation, we found that the buffer size wired into the create writer endpoint for the data stream was causing the problem: we had fat-fingered the constant for this buffer size.  Pre-allocating the buffer versus allocating it on the fly made no difference.  We suspect it is related to the fact that we are using a complex data type (a cluster with an array inside it), for which allocating a buffer can be difficult.  Our guess is that when the reader endpoint establishes a connection to a writer with a very large buffer size specified, the writer endpoint times out somewhere in the handshaking routine that is hidden below the surface.
I just wanted to post this so others have a reference if they run into a similar situation.  Again, we found this in LV2014 and do not know whether it affects earlier versions.
Thanks,
Curtiss

Hi Curtiss!
Thank you for your post!  Would it be possible for you to add some steps that others can use to reproduce/resolve the issue?
Regards,
Kelly B.
Applications Engineering
National Instruments

Similar Messages

  • Why do I get Network Streams Error -314351 (0xFFFB3411)?

    I've had network streams working great (love them!!) for a while with a type def cluster of some numerics and strings.  I've changed the type def numerous times and haven't had a problem.  The last time I changed it, however, I started getting this error from the "Write Single Element to Stream" primitive.
    "LabVIEW: Unable to read or write the stream endpoint with the specified data type.  The data type isn't compatible with the data type used to create the stream."
    I don't get an error when I initialize the stream with "Create Network Stream Writer Endpoint".  Yes, the same datatype is being passed to both primitives.  It's the same code that has worked hundreds of times until now.
    My best guess is that the Network Streams Engine is somehow caching the old datatype.  I've restarted both the local and remote (LV RT) computers multiple times, to no avail.
    Maybe this is a red herring, but I'm using LV 2010 SP1 with LV RT 2010 (no SP1).  The computer is in another state and not easily upgraded, but I'm working on that as a possible solution.  I doubt that's the problem, however, since it has worked up till now.
    What would cause me to get this error even though I'm using the same type def for both the initialize and write?
    Robert C. Mortensen
    Certified LabVIEW Architect
    Certified LabVIEW Embedded Systems Developer
    Endigit

    Hi Robert,
    It very well might be that the Network Stream is somehow still registering the old data type. If that is the case, try deleting the Create Network Stream, the Write Single Element, and one of the type defs. Then place down a new Create Network Stream and Write Single Element, and make sure the same copy of the type def is wired to both the initialization and the write. If replacing the Network Stream VIs fails to do the trick, I'd like to see whether I can replicate the error you're getting on my end. To replicate your setup, can you tell me the data type of your type def?
    On a final note, I searched through the 'Known Issues' for LabVIEW 2010 and 2010 SP1, and found that there is a known issue which might help explain the error you're receiving:
    233205: Network Streams do not support enums, data types w/ units, substrings, subarrays, & fixed-size array...
    Sanjay C.
    Embedded Software Product Manager| National Instruments

  • Want to know the buffer size on the 5D Mk III...

    ... and/or the number of full res raw files it can hold. Wondering if I usually shoot a max of six shot bursts, does the card write speed even matter?

    dbltapp wrote:
    ... and/or the number of full res raw files it can hold. Wondering if I usually shoot a max of six shot bursts, does the card write speed even matter?
    At 6 shots it doesn't matter.
    See:  http://www.learn.usa.canon.com/resources/articles/2012/eos_understanding_burst_rates.htmlp
    The conservative estimate for the buffer size (in RAW mode) is that the buffer will hold about 13 shots before the camera has to wait for data to be written out in order to clear enough buffer space for another shot.   I've actually tested this with my 5D III and found that in practice the number is a bit higher -- about 18 shots before it slowed down due to buffer limits.
    Tim Campbell
    5D II, 5D III, 60Da

  • Error Msg? "current buffer overwritten by the acquisition"

    I got an error message while doing IMAQ 1394 acquisition; it happens randomly.
    Anyone had an idea on what the possible reason is?
    Thanks
    The message is as follows:
    Error -1074364365 occurred at an unidentified location
    Possible reason(s):
    NI-IMAQ IEEE-1394:  (Hex 0xBFF68033) Requested buffer unavailable. Contents
    of the current buffer overwritten by the acquisition.
    NI Software :  LabVIEW  version 7.1.1
    NI Hardware :  Image Acquisition (IMAQ) device NI IMAQ for 1394
    Driver Version :  2.0.1
    OS :  Windows XP

    I've never done 1394 acquisition, but with respect to normal DAQ there's a similar message which occurs when you're doing buffered I/O and you don't read out the buffer quickly enough.  Once the acquisition has filled the buffer, it overflows (or overwrites, to be more exact) and then either gets very unstable or gives an error message.
    If you're doing buffered I/O, check that your buffer is large enough and that you're reading the data from the buffer quickly enough.  And of course, lowering the acquisition rate will often also help solve this particular problem (if this is the problem at all).
    Again, I've never done 1394 acquisition, but the message kind of rings a bell.
    Hope this helps
    Shane.
    Message Edited by shoneill on 11-22-2005 10:29 AM
    Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)

  • What is the maximum PDO read buffer size using the Series 2 CAN cards?

    Does anyone know the maximum size for the PDO read buffer when using a Series 2 PCI NI-CAN card?

    Hi
    The maximum size for a single PDO does not depend on the series of your board; it depends on what else you are doing with the CANopen Library.
    The board uses a specific shared memory to transfer messages between the driver and the hardware. This memory holds roughly 350 messages.
    The CANopen Config takes 100 messages for different services such as NMT. That means the maximum buffer size for a single PDO would be approximately 250 messages,
    or about 50 each for 5 different PDOs. Normally, though, you can leave the buffer size at zero, so that PDO Read always returns the newest data.
    This calculation is per board: you have 350 messages per board, and the two ports share that memory.
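    A rough sketch of that arithmetic is below. The 350-message shared memory and the 100 messages reserved for CANopen Config services are the approximate, board-specific figures quoted above, not official API constants; the class and method names are only illustrative.
    // Sketch only: the 350/100 figures come from the post above and are approximate.
    public class PdoBufferEstimate {
        static final int SHARED_MEMORY_MESSAGES = 350; // per board, shared by both ports
        static final int CONFIG_RESERVED = 100;        // used by CANopen Config services (NMT, etc.)

        // Largest read buffer each PDO can get if the remainder is split evenly.
        static int maxBufferPerPdo(int pdoCount) {
            int available = SHARED_MEMORY_MESSAGES - CONFIG_RESERVED; // roughly 250 messages left
            return available / pdoCount;
        }

        public static void main(String[] args) {
            System.out.println(maxBufferPerPdo(1)); // ~250 for a single PDO
            System.out.println(maxBufferPerPdo(5)); // ~50 each for five PDOs
        }
    }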
    DirkW

  • Default buffer size to read/write LOBs

    What is the default buffer size used by the iFS APIs when reading/writing LOBs?

    What is the default buffer size used by the iFS APIs when
    reading/writing LOBs?
    64K

  • HT4972 I keep getting an error when trying to update my iPad software to iOS 5. The error is a network timeout error when in fact my connection to the internet is stable and strong. Any suggestions?

    I've tried to update my iPad software to the new iOS 5 but keep getting internet connection errors.
    Initially I thought it was related to my internet connection, but I've tried with a full connection and am still getting the same error.
    Any suggestions would be a great help.
    PS - from South Africa

    Thanks King_Penguin - I've disabled both my firewall and antivirus software to no avail.
    Any other suggestions?
    Using Windows Vista - not sure if that makes any difference???

  • Help on error : OIP-04902 - Invalid buffer length for LOB write op

    We have a bit of VBA in Excel 2003 writing a BLOB to an 11.2.0.2 DB; for certain users it throws the above message. I can't find any useful info on it; does anyone know the cause/resolution, please?

    Resolved - a 0 byte object was being passed, which caused the error.

  • Error -1950678945 (Network Streams)

    I coded up a local network streams example where a reader endpoint waits on the writer and got error -1950678945.  My firewall is turned off, so this article wasn't much help.  Since I plan on supporting multiple endpoints across multiple applications on multiple targets, I'm trying to make the wrapper API flexible, so I'm using the full URL to designate endpoints.  I discovered that if you wire a URL to the "reader name" terminal of a Create Reader Endpoint node, that URL cannot contain the local IP address.
    Certified LabVIEW Architect
    Wait for Flag / Set Flag
    Separate Views from Implementation for Strict Type Defs

    Dear Luis,
    1. The link you posted is this article.
    2. I already use TCP/IP for one server and multiple clients, as shown in the attached images. In my application the server queries data from the clients and broadcasts data to the clients. The clients generate data continuously, and sometimes, when the user presses a button, a client sends additional data to the server. However, my server uses a state machine to query the clients, so in most situations the server will not receive that client data. How can I resolve this problem? Thank you.
    B/R
    Ancle
    Attachments:
    server.PNG 42 KB
    client.PNG 38 KB

  • HttpServer error in reading buffer size via keyboard input - HELP

    I've written a simple HttpServer program that reads keyboard input to set the size of the buffer used to copy the requested file into the socket's output stream. I've done the string-to-integer conversion using BufferedReader and Integer.parseInt. However, when I go to use the int later in the program, I keep getting the message "variable b may not have been initialized." Can anyone tell me what's missing from the code below? Thanks.
    private static void sendBytes(FileInputStream fis, OutputStream os) throws Exception
    {
        //Construct a buffer via console input
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        String str;
        int b;
        System.out.println("Enter desired buffer size or CTRL-C to break.");
        //Convert entry to an integer
        do
        {
            str = br.readLine();
            try
            {
                b = Integer.parseInt(str);
            }
            catch (NumberFormatException e)
            {
                System.out.println("Invalid entry.");
            }
        } while ((str = br.readLine()) != null);
        //Construct a buffer
        byte[] buffer = new byte[b];
        int bytes = 0;
        //Begin timing HTML page delivery
        long start, end;
        System.out.println("Timing for Web page delivery");
        start = System.currentTimeMillis();
        //Copy requested file into the socket's output stream
        while ((bytes = fis.read(buffer)) != -1)
            os.write(buffer, 0, bytes);
    }

    As the message suggests, what is missing is code to initialize the variable b. The first mention ("int b;") does not initialize it. The second mention ("b = Integer.parseInt(str);") only initializes it if no exception is thrown. So it's possible for b to be uninitialized when you actually try to use it.
    What do you need to change? First you need to decide what's to be done if the keyboard input isn't a valid integer. Do you have a default value in mind? If so, put that where you declare the variable ("int b = 42;"). If not, just initialize the variable to zero ("int b = 0;").
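    A minimal sketch of that fix, assuming a default is acceptable (the value 4096 and the class/variable names here are illustrative, not from the original post):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class BufferSizePrompt {
        public static void main(String[] args) throws Exception {
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            int b = 4096; // default used if the input is not a valid integer
            System.out.println("Enter desired buffer size:");
            String str = br.readLine();
            try {
                b = Integer.parseInt(str);
            } catch (NumberFormatException e) {
                System.out.println("Invalid entry, using default of " + b);
            }
            // b is guaranteed to hold a value on every path, so this compiles
            byte[] buffer = new byte[b];
            System.out.println("Allocated a " + buffer.length + "-byte buffer");
        }
    }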

  • How can a dangling network stream endpoint be destroyed

    I have a cRIO application that creates a network stream writer endpoint that communicates with a Windows-hosted app that creates the matching network stream reader endpoint.  I have found that on occasion an error in my software occurs: the RT VI that created the writer endpoint stops, and the Windows app also stops without destroying the endpoint.  The problem is that the RT VI that creates the stream endpoint is called via VI Server and exists in the RT startup hierarchy, so the endpoint name is preserved in memory.  When the process is repeated and the stream is created again, the create endpoint VI errors out saying that the stream already exists.  Since the original stream refnum is lost, my question is how one can deal with this kind of situation.  Unfortunately, the create network stream VIs do not have an option to overwrite any existing endpoints.  I will agree that this sort of thing should not happen in a working application, but during debugging it is quite frustrating.

    Hey Sachsm,
    You will want to set up your program so that, in the event of an error, your application does not exit immediately and you can still close your network stream references.  You can do this by wiring the error wire through your program; when you detect an error on it, stop your loop, close all references, and then pass the error to an error handler. If you don't close your references before the program exits (either because of an error or the Abort button), then you will probably have to close LabVIEW to get the references to close.  Post back if you have any questions on this.
    Regards,
    Kevin
    Product Support Engineer
    National Instruments

  • IMP-00020 long column too large for column buffer size (22)

    Hi friends,
    I exported (through the conventional path) a complete schema from Oracle 7 (SCO Unix platform).
    Then I transferred the export file from the Unix server to a laptop (Windows platform)
    and tried to import this file into Oracle 10.2 on Windows XP.
    (Database Configuration of Oracle 10g is
    User tablespace 2 GB
    Temp tablespace 30 Mb
    The rollback segment of 15 mb each
    undo tablespace of 200 MB
    SGA 160MB
    PGA 16MB)
    All the tables imported successfully except 3 tables, each of which has around 1 million rows.
    The error messages that come up during import for these 3 tables are the following:
    imp-00020 long column too large for column buffer size (22)
    imp-00020 long column too large for column buffer size (7)
    The main point here is that none of the 3 tables has a long or timestamp column (only varchar/number columns are there).
    To solve the problem I tried the following options:
    1. Increased the buffer size up to 20480000/30720000.
    2. COMMIT=Y INDEXES=N (in this case the tables do not import completely).
    3. First exported the table structures only, and then the data.
    4. Created the tables manually and tried to import the data into them.
    But all of these attempts failed;
    I am still getting the same errors.
    Can someone help me with this issue?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both the export and the import were made on an 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, and some of them do not support Data Pump. By the way, shouldn't EXP/IMP work anyway?

  • VISA Set I/O Buffer Size fails with all but one value on Linux RT

    I was unable to initialize a serial port on a cRIO-9030 using code that works fine on VxWorks and Windows, and I tracked it down to this somewhat strange behaviour:
    if you call VISA Set I/O Buffer Size on Linux RT (at least on the 9030 device), you will get error code 1073676424 for all size values other than 0.
    That is a bit strange (what will the buffer size be then, I might add...), but something even uglier is that if you leave the function's buffer size input unwired, you will also get the error (because the function's default is 4096).
    MTO

    Under the hood VISA is using the POSIX serial interface for Mac OS X (same as for Linux and Solaris). This interface does not support changing the buffer size. Hence, the buffer size is fixed to the internal OS buffer size. The only thing that changing the buffer size will do (for the out buffer) is to have VISA not flush the data after every write. This is a limitation in the serial API for Mac OS X. Therefore, VISA reports a warning.

  • Unable to connect network stream

    Hi,
    I have a PXI-8196 as the RT system and a host PC, and I'm trying to create a network stream between them.
    When I boot the RT system, run the RT VI first and then the host VI, and connect, there is no problem. The problem comes when I stop the VIs: if I then run both VIs again, the host VI is unable to connect to the RT VI, and the system returns error -314004 when the host VI tries to create the endpoint...  If I then shut the PXI system down and boot it up again, it works...
    Thanks in advance for the answer!

    "It's all in the wrist" (V White).  You need to decide who is "hosting" the Network Stream (in my case, it's the PXI), and which direction you are streaming (I actually have 4 streams, one PC -> PXI and three PXI -> PC).  The key step is that you need to be sure that the PXI (the system running the Network Stream "engine") is running first.
    Note that the error you saw, -314004, (probably) means that the PXI is not, in fact, "listening" for the Host to make a connection.  If you suspect there's a "race" condition (you accidentally start the Host first), you can simply say "Oh, that error means I need to wait 10 seconds for the PXI to get ready, then try again".
    If all else fails, you can get the PXI to restart itself, ensuring a clean system.

  • ErrorMsg[1024]: Getting buffer size programmatically

    I am using TestStand 3.5 and LabWindows/CVI 8. The prototypes for my functions called from TestStand look like this:
    "void __declspec(dllexport) __stdcall myFunc (short *errorOccurred, long *errorCode, char errorMsg[1024])"
    It is nice to be able to pass errorMsg as a fixed-size array, as opposed to just a pointer, so the burden of allocating/deallocating memory is removed from me.  The size of the buffer (1024) is defined in the sequence editor, and I can generate the prototype and code for my function, but it seems that the two are somewhat separate. By that I mean that I can change the literal number in either place and (accidentally) get them not to agree.
    It would be nice if I could specify the 1024 buffer size in the TestStand sequence editor (as I am doing now) and then have this buffer and the buffer size passed down to my function.  So I'd like to have my function read "... char errorMsg[], int errorMsgBuflen ...". Or, alternatively, is there some property/function call that I can make to get the size of the errorMsg buffer?
    While I have your "ears", is there anything magical about 1024 for the buffer size? That seems to be the number I've seen used in all of the examples.
    Thanks
    Don Frevele

    Hi Don,
    I've attached an example of passing in the array length of errorMsg from TestStand into a CVI dll.  There is nothing magical about the size of 1024.  It sets the maximum length of the error message. 
    Cheers,
    David Goldberg
    National Instruments
    Software R&D
    Attachments:
    myFunc.zip 720 KB
