Network Stream Fails Before Timeout

Is there any reason why a network stream would fail before the timeout?  I was using streams on an sbRIO-9632, and throughout testing I had no problems.  But when I hooked up to NASA's dedicated network cable at NASA's Lunabotics Mining Competition (it goes through one or two switches before eventually reaching my router, the same one used for testing; I think they said it was a virtual network), the streams, as far as I could tell, were failing anywhere from immediately up to the 15-second timeout.  If I recall correctly, they rarely made it beyond the timeout value.
Unfortunately, I did not have time to debug the system while connected to that network cable and those switches, so I was not able to get the exact error that was occurring.  I do know that the error originated from the sbRIO, because my computer showed that the network streams were closed.
If you are curious about the architecture being used, it was very similar to "Teleop - Host Acquisition" in the robotics module project wizard.

Unfortunately, I've been unable to recreate the issue without being connected to the NASA setup (something I won't be able to do for the next year).  The best I can tell you is that the "fix" I implemented was simply to re-establish the network stream and prevent the failure from stopping the rest of the program, using a simple state machine for each end of each stream.  It was able to reconnect fairly quickly, so I never lost the connection for an appreciable amount of time (which I'm assuming would be required to see a loss of connection in MAX; maybe I'm wrong about that?).
Anyway, because I cannot replicate the issue and therefore test it again, I'll try not to take up too much of your time on this one.  I was just hoping to find out whether there is any known circumstance that would cause a stream to fail or time out prematurely.
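Network Streams are a LabVIEW feature, so the actual fix lives on a block diagram, but the reconnect state machine described above can be sketched in Java terms. This is only an illustration of the pattern; the two supplier arguments stand in for the endpoint create and read/write calls and are not a real Network Streams API.

```java
import java.util.function.BooleanSupplier;

// Each end of each stream runs this little state machine: a failure while
// streaming drops it back to CONNECTING instead of stopping the program.
class ReconnectingEndpoint {
    enum State { CONNECTING, STREAMING }

    private State state = State.CONNECTING;

    State getState() { return state; }

    // Run one iteration. tryConnect/tryTransfer are hypothetical stand-ins
    // for "create endpoint" and "read/write element" calls.
    void step(BooleanSupplier tryConnect, BooleanSupplier tryTransfer) {
        switch (state) {
            case CONNECTING:
                if (tryConnect.getAsBoolean()) state = State.STREAMING;
                break;                        // else: stay here and retry
            case STREAMING:
                if (!tryTransfer.getAsBoolean()) state = State.CONNECTING;
                break;                        // failed transfer: reconnect
        }
    }
}
```

Because the reconnect happens inside the loop, a transient stream failure costs a few iterations rather than taking down the rest of the application.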

Similar Messages

  • [svn:osmf:] 18069: Fix FM-1035: Stream reconnect fails to timeout when reconnectTimeout is zero.

    Revision: 18069
    Author:   [email protected]
    Date:     2010-10-07 11:28:56 -0700 (Thu, 07 Oct 2010)
    Log Message:
    Fix FM-1035: Stream reconnect fails to timeout when reconnectTimeout is zero.  This is similar to the other (recent) stream reconnection bugs, in that it only seems reproducible when NetStream.bufferLength is non-zero upon receipt of the Buffer.Empty event.
    Ticket Links:
        http://bugs.adobe.com/jira/browse/FM-1035
    Modified Paths:
        osmf/trunk/framework/OSMF/org/osmf/net/NetLoader.as

    "Is it intended to cope with NetConnection.Connect.IdleTimeOut of FMS disconnecting clients after some time of inactivity?"
    The answer is no. Stream reconnect is all about trying to reconnect in the event of a dropped network connection. For the IdleTimeOut scenario, listen for the MediaErrorEvent.MEDIA_ERROR event on the MediaPlayer and check for an error of MediaErrorCodes.NETCONNECTION_TIMEOUT. You can handle that however you wish in your player.
    - charles

  • The engine failed to start before timeout. Restart the application (HDB 10005)

    Hi everyone,
    I experienced many times this type of error starting a SAP Lumira Document (version is 1.17, but even trying with the new version 1.19, right after the installation):
    "An Error Occured: Data Source creation failed .The engine failed to start before timeout. Restart the application. (HDB 10005)".
    I've already found (also in this community) some solutions, but they seems to me like temporary fixing, because sometimes I have to deal with this problem again. Can someone please explain me why this problem happens, and if a definitive fixing is in the works?
    Thank You.

    I am working on a Lenovo Helix with an Intel i5 processor and 4 GB of memory. Before I set the virtual memory (paging file) size on my laptop, paging was switched off entirely. So maybe that was the root cause of the issue for me.
    /Marcin

  • FPGA to Real Time using DMA to Host using Network Stream

    Hi All,
    So I am working on a project where I am monitoring various characteristics of a modified diesel engine being driven by a dynamometer. I am trying to log different pressures, temperatures, and engine speed. It basically breaks down into two streams of data acquisition.
    Fast (125kS/ch/s): 
         In cylinder pressure using an NI 9223
         Engine speed via shaft encoder/MPU using same NI 9223
    Slow (1kS/ch/s)
         Other pressures (oil, coolant, tank, etc...) using an NI 9205
         Various temperatures using an NI 9213
    My basic architecture is simultaneous data acquisition on the FPGA for both streams. Each stream is fed into a separate DMA FIFO and passed to the RT side. On the RT side, each DMA FIFO is read simultaneously and transferred into one of two separate network streams feeding the host PC. The host PC then does all of the analysis and logs the data to disk. I had this code working on Thursday with great success; then I tried to package the RT VI as an executable so that it could be deployed to the RIO and called programmatically from the host VI. After trying this approach I was told that we needed to do some testing, so I undid the changes, and now the slow stream is not working properly.
    Since then I have installed LV2013 and NI RIO 13 so that I could have a fresh slate to work with. It didn't help, however, and now I'm having issues just working in Scan Mode and with simple FPGA applications. Does anyone know if there are some settings that I could have messed up either building the executable application or deploying it? Or why the fast acquisition still works while the slow one doesn't, even though they are exactly the same?
    Sorry to be so scatterbrained with this whole issue. I really have no idea how it could be broken, considering the fast stream is working and the code is practically identical. Let me know if you need more information. I'll upload my code as well.

    Hopefully these files work. 
    The "fast" stream gives data points every 8us without fail, as that is the scan period of a 125kHz sample rate. The "slow" stream on Thursday was giving out data points every 1ms, however, now it gives out data points in a very sporadic interval. Also, the data that it does give out doesn't make any sense, tick count going in the negative followed by positive direction for example. 
    I did uninstall all of the old rio drivers before installing the new set as well. Ill give it another shot though. :/
    Thanks for the reply.
    Attachments:
    Pressure-HighSpeedRT.vi ‏665 KB
    Pressure-HighSpeedFPGA.vi ‏680 KB
    Pressure-Comp.vi ‏28 KB

  • Socket read() call returning before timeout.

    I have a couple of applications which use the following basic socket reading approach:
    Socket socket = new Socket("192.168.0.1", 54321);
    socket.setSoTimeout(5000);
    InputStream stream = socket.getInputStream();
    byte[] data = new byte[232];
    boolean done = false;
    while (!done) {
        int res = stream.read(data);
        if (res < data.length) {
            System.err.println("Error reading packet data - not enough data received: " + res);
        }
        // process and output the data
    }
    try { stream.close(); } catch (Exception e) {}
    try { socket.close(); } catch (Exception e) {}
    The problem I am having is that sometimes read(byte[]) returns before the full array's worth of data has been read, and before a timeout has occurred. I never get a SocketTimeoutException, but I do get my debugging output. I have recorded the network traffic with a packet sniffer (Wireshark), and what I am seeing is that in the instances where the read returns prematurely, it stops at the end of a TCP packet. The rest of the data is in another packet that arrives shortly afterwards (~1.5 ms or less).
    I know that with normal (file) input streams, read can return whenever it wants regardless of how many bytes were read, however I was under the impression that network streams are supposed to block until all the data arrives (or a timeout occurs, if set). Is this incorrect?

    djpeaco wrote:
    "I know that with normal (file) input streams, read can return whenever it wants regardless of how many bytes were read"
    That's correct, and that's exactly the reason you see the behavior you see.
    "however I was under the impression that network streams are supposed to block until all the data arrives (or a timeout occurs, if set)."
    Why? Why do you think that network streams behave differently in this regard? Why shouldn't the stream give you the data as soon as it's available, when every other stream works this way?
    "Is this incorrect?"
    Yes: you must assume that the streams of a socket follow the general InputStream/OutputStream contract, and that includes possibly returning from read() before the full array is filled.
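The standard fix for these partial reads is to loop until the buffer is full; java.io.DataInputStream.readFully does exactly this. A minimal sketch of that loop (the helper name here is mine, not from the original post):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

class FullReader {
    // Keep calling read() until buf is completely filled. A single read()
    // may legally return at a TCP segment boundary with fewer bytes; an
    // SO_TIMEOUT still applies to each underlying read() call.
    static void readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off);
            if (n < 0) throw new EOFException("stream ended after " + off + " bytes");
            off += n;
        }
    }
}
```

In practice you can simply wrap the socket stream: `new DataInputStream(socket.getInputStream()).readFully(data);`.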

  • Can't create Network Stream connections between Win and RT

    In the attached example, why can't I get all four links to connect without errors? Here's a representative example of the result I'm getting:
    Attachments:
    network_stream_test.zip ‏18 KB

    Hi Bob -
    Yes, one pair of the streams failed to connect. That's what I'm asking about. The timeout is more than enough for them to connect if they will, as evidenced by the pair that did successfully make a connection. (I've also used Network Streams in a couple of applications already, and even across subnets they normally connect within 1 second, so a 10 second timeout isn't an issue here.)
    Yep, I'm aware of the Context Name. But as the Network Streams whitepaper says, and as you reiterated, the default context is assumed when it's omitted from the URL. That's why one pair of the streams was able to connect in my screenshot. And I'm running my example in the dev environment anyway, so contexts shouldn't be necessary in the URL at all.
    If you like, you can grab the example I attached and run it yourself to help me investigate. Do all the endpoints connect successfully on your Win/RT setup?
    On a side note, can anyone tell me how to query the current VI's context? I'm certain there's a Server or Scripting node somewhere that does it, but I can't find it at the moment...

  • How to run/test application using network streams

    Hello,
    I'm confused about what steps to follow to test an application that uses the Network Streams feature. After reading a lot and going through other discussions, I found two simple VIs in the last post here: http://forums.ni.com/t5/LabVIEW/Network-streams-problem-with-cRIO-9022/td-p/1401576 (I attached the VIs, which I modified just a bit). I believe they have been tested before and work, but when I try the same thing on my desktop it does not work. I get the same scenario as when I was trying to test the code I wrote: no bugs, both VIs run, but nothing happens.
    According to the link above this is what is supposed to happen: "a PC hosts a GUI (see mainGUI.VI) that sends an array of doubles to the cRIO via one stream and reads a double value back via another. The cRIO, via the scan interface, reads the array via the first stream and writes an analog value to the host via the second (see mainRT.VI). "
    The current set up is: 
    1. Add both VIs to the blank project with the target device cRIO (see attached screenshot)
    2. Deploy mainRT.vi on the cRIO
    3. Run both at the same time
    Am I missing something?
    Also, I have been reading that there are two ways of testing this. One running both VIs on the PC and the other actually using the targeted device, in this case cRIO. Is that correct?
    Please help!
    Thank you
    Attachments:
    project explorer.jpg ‏36 KB
    mainRT.vi ‏62 KB
    mainGUI.vi ‏20 KB

    Connections have TimeOuts, so you have a little "wiggle room" in deciding which to load first.
    In my situation, one side is a Remote Real Time system ("Remote"), which has the behavior that when it starts up, it runs the code that I want, which includes establishing the Remote side of the Network Stream.  I make all (four) streams use the Remote as the site whose URL (or IP) I need to know.  I set them up so that if they time out (I think I may use 15 seconds), they simply close the connections and try again, exiting when all four connections are established, and going on to run the rest of the Remote code.
    On the Host, the expectation is that the code is already running on the Remote.  Accordingly, on the Host side, the Initialization sequence establishes the four connections, all using the IP (URL) of the Remote.  I also use a 15-second timeout, but run it inside a second loop to allow three tries before I give up (and generate an Error Exit).  Otherwise, if all the connections get made, I've got my Network Streams connected and can continue.
    In this scheme, both the Host and Remote code are in the same Project.  What I typically do is Build the Remote code, deploy it, and set it to run as the Startup.  If you are not dealing with a Real-Time system, you might not have that option.  But you should deploy your Remote code and start both programs within each other's respective timeout windows.  Since in my scheme the Remote retries indefinitely, I (automatically) start it first, then start the Host.  I haven't timed it, but I'd guesstimate that the connections are established in at most a few tenths of a second.
    Bob Schor
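The connect-retry-give-up scheme Bob describes is language-independent; as a rough Java sketch of the Host-side loop (three attempts, then an error exit). The attempt supplier is a placeholder for the real endpoint-creation step, which in LabVIEW blocks for its own timeout (e.g. 15 seconds) per try.

```java
import java.util.function.BooleanSupplier;

class ConnectWithRetry {
    // Try to establish a connection up to maxTries times. Each attempt is
    // expected to block internally for its own timeout before reporting
    // failure. Returns true on success, false if every attempt failed.
    static boolean connect(BooleanSupplier attempt, int maxTries) {
        for (int i = 0; i < maxTries; i++) {
            if (attempt.getAsBoolean()) return true;  // connected
            // attempt timed out: close any half-open endpoint and retry
        }
        return false;  // caller generates the error exit
    }
}
```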

  • Simple Network Streams - Example VIs

    In the example VI Simple Network Streams - Host.vi, the receive loop has a choice of two timers: 10 and 20 milliseconds.  In addition, the Read Single Element from Stream VI has a timeout of 100 milliseconds connected to it.
    In Simple Network Streams - Target.vi, the receive loop has no timer, but the Read Single Element from Stream VI has a timeout of 1000 milliseconds connected.
    How can the target receive loop work without a timer?  What is the purpose of the 100 millisecond timeout in the Host receive loop if the loop has a timer of at most 10 milliseconds?  I have this example working on my cRIO-9030, but I don't understand it.  I think this gets down to my lack of understanding of the timeout behavior of reads and writes for streams and FIFOs.  Does the VI sleep until just before the timeout to check for data, or does it hog the processor the entire time looking for data?  It doesn't seem to be the latter, since the processors on the target are only partially occupied in this example.

    gary_janik wrote:
    "How can the target receive loop work without a timer?  What is the purpose of the 100 millisecond timeout of the Host receive loop if the loop has a timer of at most 10 milliseconds?  Does the VI sleep until just before the timeout to check for data, or does it hog the processor the entire time looking for data?"
    First of all, where is this example?  You didn't post the code, and I didn't find it in the LabVIEW Examples, so it's difficult to explain "invisible" code.
    However, "sight-unseen", I may be able to explain some of your issues.  If you think about Network Streams as a Producer/Consumer pattern "across the network", the Consumer (the receive loop) might not need a timer, as it is being "driven" (and hence "timed") by the Producer sending data to it.  It probably should have a time-out just in case the Consumer just stops sending (say, it exits).
    Functions such as dequeues and receives "block" and don't do anything if they have no input.  Recall that LabVIEW uses data flow, and can have multiple parallel processes (loops) running at the same time.  When a function blocks, the code section (let's call it a "loop") containing the function stops using CPU cycles, allowing the other parallel loops to use the time.  This lets processes run "at their own speed" as long as they are sufficiently "decoupled" by being placed in parallel loops.  When there is no data, the loop essentially "waits", and lets all the other loops use the time instead.
    Bob Schor
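The sleep-until-data-or-timeout behavior Bob describes is the same contract a blocking queue's timed poll gives you in textual languages. As a loose analogy only (a java.util.concurrent queue standing in for the stream read; this is an assumption about the mechanism, not LabVIEW's actual implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

class TimedReadDemo {
    // poll() parks the calling thread (no busy-waiting) until an element
    // arrives or the timeout elapses; returns null on timeout. This mirrors
    // a stream/FIFO Read with a timeout: the loop uses no CPU while waiting.
    static Integer timedRead(BlockingQueue<Integer> q, long timeoutMs)
            throws InterruptedException {
        return q.poll(timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

A read that times out simply reports "no data" and the loop goes around to wait again, which is why the target's CPUs stay only partially occupied.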

  • Failover Cluster Network Name Failed and Can't be Repaired

    I have an issue that seems to be different from any others have encountered. I've scoured everything I can find and nothing has fixed my problem.
    The problem starts with the common one of the cluster network name failing on my 2-node Server 2012 file server cluster.  The computer object was still in AD and appeared to be fine, so it was not the common problem of the object getting deleted somehow.  At the time, there was no other object with that name in the recycling bin, so I don't think it was mistakenly deleted and quickly recreated to cover any tracks, so to speak.
    Following one guide, I tried to find the registry key that corresponded with the GUID of the object, but neither node in the cluster had it in its registry (which may be part of the problem).
    Since it was in the failed state, I tried to do the repair on the object, to no avail.
    We run a "locked down" DC environment, so all computer objects have to be pre-provisioned.  They were all pre-provisioned successfully and successfully assigned during cluster creation.  The cluster was running with no issues for a month or so before this problem came up.
    When I do a repair on the object while taking diagnostic logs, the following 4609 error appears:
    The action 'Repair' did not complete. - System.ApplicationException: An error occurred resetting the password for 'Cluster Name'. ---> System.ComponentModel.Win32Exception: Unknown error (0x80005000)
    There appears to be a corresponding 4771 error, with failure code 0x18, in the security log of the DC, stating that there was a Kerberos pre-authentication failure for the cluster network name object (Domain\Clustername$).
    I believe this is what is causing the repair failure.  All the information I found related to security error 4771 involved either bad credentials for a user account, or a fix of reconnecting the computer to the domain.  I can't seem to find a way to do this with the cluster network name.  If there's a way, please let me know.
    I've tried a number of things, like resetting the object, disabling it, deleting and creating a new object with the same name, deleting that new object and recovering the original, etc...
    Can anyone shed some light on what is going on, and hopefully how to fix it other than rebuilding the cluster?  I'm quite close to just tearing it down and building it back up, but am hesitant because this cluster is currently in production...
    Any help would be appreciated.

    Hi,
    I haven't seen an issue quite like yours. Based on my experience, the 4609 error is often caused by a CSV disk issue, and the 0x80005000 error is sometimes caused by a repetitive computer object in the OU. Please check those related parts, or run the validation test and then post the error information.

    Although I do have a CSV, there doesn't seem to be any problem with it, and it ran just fine for a month or so before the problem started.  I double-checked and there are no duplicate computer objects; maybe I don't understand what you mean by "repetitive", could you explain further?
    The cluster validates successfully with a few warnings:

    Validating cluster resource Name: DT-FileCluster.
    This resource is marked with a state of 'Failed' instead of 'Online'. This failed state indicates that the resource had a problem either coming online or had a failure while it was online. The event logs and cluster logs may have information that is helpful in identifying the cause of the failure.
    - This is because the cluster name is in the failed state.

    Validating the service principal names for Name: DT-FileCluster.
    The network name Name: DT-FileCluster does not have a valid value for the read-only property 'ObjectGUID'. To validate the service principal name the read-only private property 'ObjectGuid' must have a valid value. To correct this issue make sure that the network name has been brought online at least once. If this does not correct this issue you will need to delete the network name and re-create it.
    - This is definitely related to the problem; the GUID probably got removed when we attempted a fix by resetting the object and trying the repair from the Failover Cluster Manager.

    The user running validate does not have permissions to create computer objects in the 'ad.unlv.edu' domain.
    - This is correct; we run a restricted domain.  I have a delegated OU that I can pre-provision accounts in.  The account was pre-provisioned successfully and was at one point set up and working just fine.

    There are no other errors or warnings.

  • How can a dangling network stream endpoint be destroyed

    I have a cRIO application that creates a network stream writer endpoint, communicating with a Windows-hosted app that creates the matching network stream reader endpoint.  I have found that on occasion an error in my software occurs: the RT VI that created the writer endpoint stops, and the Windows app also stops without destroying the endpoint.  The problem is that the RT VI that creates the stream endpoint is called via VI Server and exists in the RT startup hierarchy, so the endpoint name is preserved in memory.  When the process is repeated and the stream is created again, the Create Endpoint VI errors out, saying that the stream already exists.  Since the original stream refnum is lost, my question is how one can deal with this kind of situation.  Unfortunately, the Create Network Stream VIs do not have an option to overwrite existing endpoints.  I will agree that this sort of thing should not happen in a working application, but during debugging it is quite frustrating.

    Hey Sachsm,
    You will want to set up your program so that, in the event of an error, your application does not exit immediately, so that you can still close your network stream references.  You can do this by wiring the error wire through your program; when you detect an error on it, stop your loop, close all references, and then pass the error to an error handler. If you don't close your references before the program exits (whether through an error or the Abort button), you will probably have to close LabVIEW to get the references to close.  Post back if you have any questions on this.
    Regards,
    Kevin
    Product Support Engineer
    National Instruments
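Kevin's advice (make every exit path flow through the close calls) is what try-with-resources enforces in Java; a loose analogy, with an InputStream standing in for the network stream endpoint:

```java
import java.io.IOException;
import java.io.InputStream;

class CleanShutdown {
    // Run a transfer loop and guarantee the endpoint is closed whether the
    // loop finishes normally or dies with an error -- the equivalent of
    // routing the error wire into a common "close all references" step.
    static boolean runAndAlwaysClose(InputStream endpoint) {
        try (InputStream in = endpoint) {   // closed on every exit path
            while (in.read() != -1) {
                // ... process data ...
            }
            return true;                    // clean finish
        } catch (IOException e) {
            return false;                   // error path: endpoint still closed
        }
    }
}
```

In LabVIEW, the same discipline means wiring the error cluster through to a Destroy Stream Endpoint call that executes on both the normal and error paths, rather than letting an error (or the Abort button) skip it.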

  • Remote Speaker (AirTunes) Error - "the network connection failed"

    I am getting a strange error message when I try to stream music through an AirTunes remote speaker. iTunes tells me this:
    *An error occurred while connecting to the remote speaker "NN". The network connection failed.*
    I can't say what has changed for this error to occur. Last time I used the system it worked fine, now it doesn't. I do know I have upgraded iTunes a couple of times since, though.
    I am connecting to an 802.11g AirPort Express running firmware version 6.3.
    Both our computers (MacBook & MacBook Pro) run Mac OS version 10.5.8.
    In my system, the AirPort Express functions solely as a wireless speaker client, as the WLAN network itself is based around an 802.11n AirPort Extreme.
    The AP shows a green light, and reports no problems in AP Utility.
    When I change the device's "iTunes Speaker Password" setting in AP Utility, iTunes in both our computers immediately reflect the change, so apparently the various parts in the system do speak to one another just fine. They just can't transmit music, for some reason...
    Anyone have any idea what might be going on, here?

    I have a similar problem. I use AE(g) with a Powerbook G4 and always get this error, which kind of bugs me, since all the settings appear to be correct. Unlike you though, I didn't make it work, not a single time — closing lid, opening and closing iTunes, resetting AE, yet no luck. Tried it all only with current versions of software.
    However, the AE does sense when I unplug the speakers, since it first says "no speakers found", and then "the network connection failed". Strange is that the error has no number, nor does Console provide any clues. Sometimes it is pretty annoying when OS X just says "error". What error, where, why... Feels like good ol' windows times.
    Update: Whoa, looks like we're not alone here. http://discussions.apple.com/thread.jspa?threadID=2154052&start=0&tstart=0
    http://www.ilounge.com/index.php/news/comments/itunes-9-causing-airtunes-connect ion-problems/
    Message was edited by: tochka

  • Network stream Two network adaptor cards in PC comms with crio

    Hi all,
    I have two network adaptor cards in my PC, which I think are causing my problems when collecting data over TCP.
    I'm seeing error -314340 from Create Network Stream Writer/Reader Endpoint. My addresses match up, and I'm using a template cRIO vibration data logger.
    I think the error is a TCP one, and I'm wondering if the dual network cards are causing the problem. I've given the cRIO a static IP address of 10.0.0.2 with mask 255.255.0.0.
    You can see the network adaptor below and its IP address, 10.0.0.1, so I'm connected directly to it via Ethernet cable. However, from the route print command I don't see 10.0.0.2. Also, the gateway for the cRIO is the default, 0.0.0.0.
    Do I need to add the route to the list? I found the command for that, which I don't think I'm using right... route ADD 10.0.0.2 MASK 255.255.0.0 10.0.0.1
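As a side note on that command: if the PC adapter really is 10.0.0.1 with mask 255.255.0.0, then 10.0.0.2 is on the same subnet and should be reachable with no manual route at all. If a route is added anyway, Windows `route` requires the destination's host bits to be zero under the given mask, so the command as quoted would be rejected. Assuming the addresses above, the two usual forms would be:

```shell
# Network route: all of 10.0.0.0/16 via the adapter at 10.0.0.1
route ADD 10.0.0.0 MASK 255.255.0.0 10.0.0.1

# Or a host route for just the cRIO at 10.0.0.2
route ADD 10.0.0.2 MASK 255.255.255.255 10.0.0.1
```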
    Cheers
    Solved!
    Go to Solution.

    Hi, I'm using the cRIO data logger template from the Create Project list.
    I'll post my code and some pretty pictures to help explain myself.
    I'm using a different cRIO, so I created a new project, configured the FPGA VI for my hardware, and then added the host side. I can run the code and see the waveforms, but after 10 seconds it outright crashes, and I've been searching through all the error messages trying to track it back to some unfound reason. The irony is that sometimes the program creates a TDMS file and makes a deposit in the crio folder/deposit (this is defined from an XML file).
    I do notice that within a few seconds the CPU shown on the waveform panel increases to 70+% before the crash, so I'm also thinking the FIFO read is at fault, but it could be another red herring (I'm just installing the real-time execution trace package so I can trace this better).
    I've changed the number of channels from the default 4 to 2; please look in the VibConfig.xml file to see the change, as the [Host] Create Config XML.vi doesn't like changing the channel count from 4 for some reason.
    My set up is [SENSORS] ==> [cRIO] ==ETHERNET==> [PC]. Please help.
    Attachments:
    LOG3.zip ‏1584 KB
    error63036.png ‏18 KB

  • Network Streams child's play ?

    Hi all
    after a day of playing with network streams, I am afraid network streams are not child's play... or rather, they are!
    I tried to re-create a rather awkward but working communication scheme between one server and several clients that I built some time ago.
    The background is that we want instances of the same exe reporting to one UI.
    The idea is that the UI does not know beforehand how many exes are being started or where they are originating from.
    So, as before, I had a listener that would only recognize a connection request and then spawn a re-entrant handler to do the rest.
    Here is what I found; maybe someone can comment:
    - When a "Create Network Stream Reader Endpoint" is answered, it is not possible to get the URL of your partner.
    - I tried to get the URL of myself beforehand to send it over to the new partner. Getting a fully qualified URL with host and application seems awkward.
    - I tried to have an executable start up a connection to a partner under LabVIEW, but it would not work at all, or gave an error like "duplicate connection". How specific must the URL be that you use when setting up connections? The whole concept of a "default environment" seems a bit quick and very dirty.
    Knowing that shared variables sometimes give problems when the processor is running at high load, and that network streams use the same protocol, I am considering reviewing my old STM scheme, although the port juggling might also be difficult on a single PC...
    We'll see tomorrow (CEST).
    Gabi 
    7.1 -- 2013
    CLA

    I'll suggest an often-overlooked but valid option going back to LV 6i:
    VI Server.
    Action engines served via VI Server allow multiple connections and bring with them all of the capabilities of TCP/IP. They handle all of the data conversions and adaptation as data types morph over time.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • An error occurred while connecting to the remote speaker. The network connection failed.

    I can't get my itunes to connect to the remote speakers via airport express anymore  - I did not change any settings - but did update iTunes to 10.2.2 and now things are not working and I get his message - "An error occurred while connecting to the remote speaker “Living Room”. The network connection failed."
    Does anybody have a fix for this problem?
    thanks

    Same problem for me. Ever since I upgraded to 10.6.3 and iTunes 9.1, about once per week I'll get the error message when trying to get iTunes/Airtunes to connect to my older Airport Express (running firmware 6.3) "An error occurred while connecting to the remote speaker <insert speaker name here>. The network connection failed." (No error codes are given.)
    Airport Utility sees the offending Airport Express just fine (and there's a green light on said AE).
    Curiously, a newer AE-n, elsewhere in the house (running firmware 7.4.2) does NOT exhibit this problem.
    Simply restarting the offending AE does NOT clear the problem. Restarting the main router - a Time Capsule DOES clear the problem. Alternatively, toggling off the Airtunes functionality on the offending AE, restarting it, then toggling Airtunes back on, and finally restarting it DOES clear the problem (as an alternate to restarting the router). No matter how the problem is cleared, it returns within a week. It's frustrating that the problem cannot be recreated immediately.
    Probably not relevant, but the computer on which I'm running iTunes (a Mac Mini) is connected to my TC via ethernet, not wirelessly.
    I've tried the suggestion of disabling ipv6. No effect.
    I fear the only solution will be to scrap my older Airport Express and replace it with a newer n-model. But before I resign myself to the $100 fix, any more ideas?

  • Network Stream On PXI W/RTOS works in debug but not deployed

    I have a new problem: recently my network streams have decided not to work.
    I have a PXI with an 8102 controller running the NI RTOS. If I run the program on the PXI from the development environment, the streams work as expected; however, if I build the exe and run it as startup, the communicating program times out and the target (PXI) is unresponsive.
    This has worked previously. We had a power outage, and I had to re-format the PXI HD and re-install the RTOS. I believe it is back to the original state, software-wise, from before the power issue, but as stated, my network streams seem to be unresponsive as an app.
    Network Streams 1.1 is loaded according to MAX.
    Any ideas?  The fact that it works like a champ in debug, but not as an exe, is very frustrating, especially since it was all working beautifully before the power outage.
    oh yeah, using REAL-TIME 11.0.1 and LabVIEW 2011 SP1
    TIA

    The sub VI is setting up and controlling the 2536 switch and 6133 capture cards based on the command received by the VI from the host program.
    It also talks with the 7811.
    I thought that I was using slot numbers to reference the cards before the corruption, but now it is device numbers.
    I was playing around in MAX, and it caused LabVIEW to crash if I tried to view the block diagram of the sub VI.
    I changed everything back in MAX, and now the app is running. That is, I can now establish a network stream with it and communicate with the 7811. My data captures, however, seem not to be working and cause the PXI to go off in the weeds: the CPU loads go up and stay up, and I can't re-establish the network streams without a reboot.
    I just took a look at MAX while the PXI is in la-la land; it has little red x's next to the 6133s and 2536.
