Is something intercepting my TCP connections?

I have BT Infinity and have been testing my network latency, using ping for ICMP RTT and Nping to measure TCP SYN/ACK RTT and UDP RTT.
I have found that my TCP connect latency, i.e. the time to create a TCP connection to a remote host, is always about 10 ms, even when the ping is much higher. For example, my ICMP ping to the time server 0.uk.pool.ntp.org (192.238.80.20) is about 160 ms, but my TCP ping is about 10 ms. I get the same result for almost any server on the internet, anywhere in the world, on any port: the TCP ping varies between 3 ms and 12 ms.
This implies that something local to my internet connection is intercepting my TCP connections, responding to my TCP connection requests and forwarding the request on to the remote host, e.g. a TCP proxy server or firewall. The ping and TCP connection time to my broadband router is 1 ms or under, so it's not my equipment. So I assume BT has some device on my connection path.
Has anyone experienced similar issues, or does anyone know what is causing this? Is it the Parental Controls network filter? I thought these worked by blocking DNS requests, not by proxying and filtering traffic.
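The TCP connect RTT that NPing reports can be approximated in plain Java by timing the three-way handshake; this is a sketch for cross-checking the measurements above, with host and port as placeholders:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TcpConnectTimer {
    // Measures the wall-clock time of the TCP three-way handshake.
    // An intercepting proxy that answers the SYN locally will make this
    // number much smaller than the true end-to-end ICMP RTT.
    public static long connectMillis(String host, int port, int timeoutMs) throws IOException {
        long start = System.nanoTime();
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
        }
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Calling `connectMillis("0.uk.pool.ntp.org", 123, 5000)` against the server mentioned above and comparing the result with the ICMP ping would show the same discrepancy the poster describes.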

Similar Messages

  • Something blocks new tcp connection (Leopard)

    Hi all,
    I'm almost at a loss over my issue.
    I'm using an Xserve as a file server on the internet, running proftpd (http://www.proftpd.org/) on Mac OS X 10.5.6 Server.
    With proftpd on Tiger Server, everything worked well.
    After building a new server with Leopard Server and proftpd, something seems to have gone wrong.
    Every ftp client I tried (Cyberduck, command-line ftp, the Windows one) gives the same result: the data connection is blocked in a certain situation.
    For example, from a command line ftp client,
    - First attempt:
    Log in to the ftp server (Leopard with proftpd) and get the file list (issue the "ls" command) in active (PORT) mode.
    => success.
    - Second attempt:
    Soon after the successful file listing, from another ftp client, log in to the same ftp server with a different account and get the file list in active mode.
    => fail. The client seems to stall for about 10 seconds, then the ftp connection is forcibly disconnected (by the server).
    The above is always reproducible in my environment (from the internet and from the internal LAN).
    In passive (PASV) mode, nothing wrong happens.
    Getting the file list is just a convenient way to open a new ftp data connection; it isn't specific to file listing, and plain uploading/downloading behaves the same.
    The problem is in active (PORT) mode.
    Proftpd's log says "Failed binding to the IP address, port 20: Address already in use".
    Of course I'm not running any daemons on port 20, and this issue appears in both standalone ftp daemon mode and launchd (inetd) mode.
    Even weirder: after waiting 30 seconds from the last successful file listing, the next file listing (active mode) succeeds. And
    using the same account (ftp user) for the first and second attempts above, it succeeds too.
    Yes, I suspected the built-in firewalls, so I disabled every firewall I could: ipfw -flush, killed emond, killed socketfilterfw, but nothing changed. (Of course, the firewall preferences in Server Admin were already disabled.)
    Weirder still, the issue doesn't seem to be IP-address related, but user-account related.
    Guessing from these symptoms, I suspect some authentication function in Leopard.
    Proftpd uses PAM for its authentication against OS X.
    /var/log/system.log shows below at accepting clients:
    Mar 27 19:42:02 HOSTNAME org.proftpd.proftpd[94198]: launchproxy[94198]: /usr/local/sbin/proftpd: Connection from: XXX.XXX.XXX.XXX on port: 51108
    and /var/log/secure.log shows:
    Mar 27 19:42:03 HOSTNAME com.apple.SecurityServer[36]: checkpw() succeeded, creating credential for user USER_A
    Mar 27 19:42:03 HOSTNAME com.apple.SecurityServer[36]: checkpw() succeeded, creating shared credential for user USER_A
    Mar 27 19:42:03 HOSTNAME com.apple.SecurityServer[36]: Succeeded authorizing right system.login.tty by client /usr/local/sbin/proftpd for authorization created by /usr/local/sbin/proftpd.
    Does something block access to the system resources for 30 seconds?
    Can such a thing happen?
    I've asked on proftpd's forum, but they couldn't reproduce the issue.
    Sadly, does this issue occur only in my environment?
    #Well, I'm leaving aside the usual debate about using an Xserve as an ftp server and about using proftpd.
    #They say there are complex circumstances behind it.. (^^;;
    Very sorry for this long post and my poor English.
    Any information, any suspicious possibilities, would be appreciated.
    Help me please...
    Kind regards,
    kura

    etresoft,
    thanks for the link. It seems to be very close.
    The problem seems to be proftpd-related; with the latest version of it, the problem appears to be gone.
    Sorry for wasting your time, and apologies for my sloppy testing.

  • TCP connection error when sending MODBUS commands to WAGO 750-881 controller after 113655 bytes of data have been sent

    Hi all,
    I am new to the world of LabVIEW and am attempting to build a VI which sends commands to a WAGO 750-881 controller at periodic intervals of 10 ms.
    To set all of the DOs of the WAGO at once, I attempt to send the Modbus FC15 command every 10 ms using the standard LabVIEW TCP Write module.
    When I run the VI it works for about a minute before I receive an Error 56 message telling me the TCP connection has timed out. Thinking this strange, I decided to record the number of bytes sent via the TCP connection while running the program. In doing so I noticed that the connection broke after exactly 113655 bytes of data had been sent, every time.
    Thinking that I may have been sending too many messages, I increased the while-loop delay from 10 ms to 20, 100 and 200 ms, but the error remained. I also tried playing with the TCP connection timeout and the TCP write timeout, but neither had any effect on the problem.
    I cannot see why this error is occurring, as the program works perfectly up until the 113655-byte mark.
    I have attached a screenshot of the basic VI (simply showing a MODBUS command being sent every second) and of a more advanced VI (where I am able to control each DO of the WAGO manually by setting a frequency at which the DO should switch between ON and OFF).
    If anybody has any ideas on where the problem lies, or what I could do to further debug the program, it would be greatly appreciated.
    Solved!
    Go to Solution.
    Attachments:
    Basic_VI.png ‏84 KB
    Expanded_VI.png ‏89 KB

    AvdLinden wrote:
    Hi ThiCop,
    Yes, the error occurs after exactly 113655 bytes every time. The timeout control I would like to use is 10 ms; however, even increasing this to 1 s or 10 s does not remove the error, which leads me to believe that this is not the issue. (Furthermore, not adding any delay to the while loop, thus letting it run at maximum speed, has shown that the TCP connection is able to send all 113655 bytes in under 3 seconds, again pointing towards the timeout control not being the issue here.)
    I attempted Marco's suggestion but am having difficulty translating the returned string into something readable (right now the response given is "      -#   +   ").
    As to your second suggestion, I implemented something similar where I created a sub-VI to build a TCP connection, send a message and then close the connection. I now build each message and then send the string to this sub-VI, which successfully sends the command to my application. While not the most elegant method of solving the issue, it has resolved the timeout problem, meaning I am able to send as many commands as I want. So in that sense the problem has been solved.
    If you still have tips on how to correctly read the TCP Read output, I would like to see if I could get my first program to work, as it is slightly more robust in terms of timing.
    Modbus TCP is a binary protocol, as you show in your Basic VI, where you format the data stream using byte values. So you have to interpret the returned answer according to the Modbus spec. Now, what is most likely happening is that the connection hangs after a while because you do NOT read the data the device sends in response to your commands. The TCP/IP stack buffers those bytes, and at some point the internal buffers overflow and the connection is blocked by the stack. So adding a TCP Read at strategic places (usually after each write) is the proper solution. Is there any reason you didn't use the NI-provided Modbus TCP library?
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions
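Rolf's point, that unread replies eventually fill the local receive buffer and stall the connection, can be sketched in Java. This is an illustration only: the 4-byte length prefix below is an assumed framing, not the real Modbus/TCP MBAP header.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class DrainAfterWrite {
    // Sketch: after every request, read (and here discard) the reply so the
    // peer's responses never pile up in the local receive buffer, which is
    // what eventually blocks the connection in the scenario above.
    public static void sendAndDrain(Socket sock, byte[] request) throws IOException {
        DataOutputStream out = new DataOutputStream(sock.getOutputStream());
        DataInputStream in = new DataInputStream(sock.getInputStream());
        out.write(request);
        out.flush();
        int len = in.readInt();          // assumed: reply starts with a 4-byte length
        byte[] reply = new byte[len];
        in.readFully(reply);             // consume the reply instead of leaving it queued
    }
}
```

The same pattern applies in LabVIEW: place a TCP Read after each TCP Write so every response is consumed.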

  • MAIL USING PL/SQL PROCEDURE TCP CONNECTION ERROR

    I was trying to send an e-mail using the demo-mail helper package, which uses the UTL_SMTP package, and on execution it gives the following TCP connection error. Does it have something to do with the mail configuration?
    This is the sample code I was trying to run.
    demo_mail.mail(
        sender     => 'Me <[email protected]>',
        recipients => 'Someone <[email protected]>, ' ||
                      '"Another one" <[email protected]>',
        subject    => 'Test',
        message    => 'Hi! This is a test.');
    And this is the error I am getting.
    class oracle/plsql/net/TCPConnection does not exist
    at "SYS.UTL_TCP", line 537
    at "SYS.UTL_TCP", line 199
    at "SYS.UTL_SMTP", line 102
    at "SYS.UTL_SMTP", line 121
    at "VNARAYA.DEMO_MAIL", line 159
    at "VNARAYA.DEMO_MAIL", line 119
    at "VNARAYA.DEMO_MAIL", line 105
    at "VNARAYA.SEND_MAIL", line 2
    at line 1

    The Java library needed by UTL_TCP is not created properly. You may just run $ORACLE_HOME/rdbms/admin/initplsj.sql as SYS to install it:
    cd $ORACLE_HOME/rdbms/admin
    sqlplus sys/<sys-password> @initplsj.sql

  • Resolving a TCP connection "slowdown" problem

    SuSE 9.3, stock kernel
    Intel architecture
    Jrockit-R26.4.0-jdk1.5.0_06-linux-ia32
    I have a problem that appears to be localized in Jrockit (or
    localized in the application, which is localized in Jrockit), where a
    persistent and high-volume TCP connection slows down over the course
    of about an hour--and eventually, effectively halts.
    The TCP traffic is a stream of data, arriving at a near-constant rate
    of about 16K bytes per second, with the receiving end (with the JVM
    and app) strictly sending TCP ACKs in reply.
    Restarting the sending process, or shutting down and restarting the
    connection with the JVM / app, both restore the connection to full
    speed until, over the course of perhaps an hour (sometimes more,
    sometimes less) the same symptoms appear.
    The symptoms in network packet traces are that when the connection is
    first opened, the sender transmits packets at the full MTU of the
    Ethernet segment. Gradually, the full-MTU-sized packets are replaced
    with much smaller packets, until most packets range from 1 to 4 bytes,
    with the occasional 3xx-4xx byte packet and the odd outlier of a
    full-MTU packet.
    Supporting symptoms of interest:
    1) The TCP window does not shrink
    2) The CPU on the JVM/app side tops out at around 20%, even with
    mySQL running on the machine
    3) The interval between successive ACKs transmitted from the JVM/app
    side generally narrows over the course of the connection
    4) TCP send queue on the sender becomes saturated (pegged at 90+ K)
    5) TCP receive queue on the JVM/app side is almost always 0, and when
    it is not zero is bursts up to a low number (<50) and then almost
    immediately returns to 0
    6) The app does not appear to present any general symptoms of
    slowness; the rate of writes to the database does not appear to slow.
    The writes are threaded and multiplexed
    [4] strongly implies that the slowness is caused by the JVM/app side,
    since if the sender app were slowing down for some reason its TCP send
    queue would not be saturated.
    I can copiously document everything stated, and can provide much
    additional detail.
    Any guidance on how to suss out what role JRockit or the app is playing
    in this little drama would be very deeply appreciated.

    Asked around and it seems unlikely that this is a JVM issue. We have never heard of this behavior before, and the network layer in the JVM doesn't do anything with MTU iirc with the possible exception of manual changes to socket options. It seems more likely that this is caused by the IP stack, the NIC device driver or something in the network configuration. Try making some changes here and see what happens. For instance:
    1) Run client and server on the same machine, communicating through loopback
    2) Try another Linux distro (CentOS 4.3, for instance)
    3) Try a different NIC and/or a different device driver
    In your Java code, check that you are closing all Socket objects properly. Leaving them to be closed by a finalizer can delay closing sockets resulting in a native resource leak. I don't see how that would cause the issue you describe, but you never know...
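The closing advice in that reply (close sockets deterministically rather than leaving them to a finalizer) can be sketched in Java; try-with-resources is one idiomatic way to guarantee it:

```java
import java.io.IOException;
import java.net.Socket;

public class ExplicitClose {
    // The advice above: close sockets deterministically rather than leaving
    // them to a finalizer, which may release the native descriptor much later.
    public static String readAll(String host, int port) throws IOException {
        try (Socket sock = new Socket(host, port)) {   // closed on every exit path
            StringBuilder sb = new StringBuilder();
            int b;
            while ((b = sock.getInputStream().read()) != -1) {
                sb.append((char) b);
            }
            return sb.toString();
        }   // the descriptor is freed here, not at some future GC cycle
    }
}
```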

  • Outgoing TCP connections from VM have very low firewall state idle timeout -- how do you adjust?

    When I create a TCP connection from a VM to the internet, if I'm idle for more than a few minutes (say a SSH session), the TCP flow is torn down by some AZURE networking element in between.
    Incoming connections from the internet don't seem to be affected.
    I assume this is an Azure firewall timeout somewhere.
    Is there any way to raise this?

    Hi,
    Thanks for posting here.
    Here are some suggestions:
    [1] - You can make sure the TCP connection is not idle. To keep your TCP connection active, keep sending some data before 60 seconds pass. This could be done via chunked transfer encoding: send something, or just send blank lines to keep the connection active.
    [2] - If you are using WCF based application please have a look at below link:
    Reference:
    http://code.msdn.microsoft.com/WCF-Azure-NetTCP-Keep-Alive-09f50fd9
    [3] - If you are using TCP sockets, you can also try ServicePointManager.SetTcpKeepAlive(true, 30000, 30000). TCP keep-alive packets will keep the connection from your client to the load balancer open during a long-running HTTP request. For example, if you're using .NET WebRequest objects in your client, you would set ServicePointManager.SetTcpKeepAlive(...) appropriately.
    Reference -
    http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.settcpkeepalive.aspx
    Hope this helps you.
    Girish Prajwal
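Suggestions [1] and [3] translate to plain Java roughly as follows. This is a sketch, not Azure-specific; the probe bytes and interval are assumptions that the application protocol must tolerate:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class IdleKeeper {
    // Sketch of both suggestions above: enable TCP keep-alive at the socket
    // level, and additionally send a tiny application-level payload well
    // inside the (assumed) ~60 s idle window of the intermediate firewall.
    public static ScheduledExecutorService keepAlive(Socket sock, byte[] probe, long periodMs)
            throws IOException {
        sock.setKeepAlive(true);  // OS-level keep-alive; interval is OS-configured
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleAtFixedRate(() -> {
            try {
                OutputStream out = sock.getOutputStream();
                out.write(probe);   // e.g. a blank line the protocol tolerates
                out.flush();
            } catch (IOException e) {
                exec.shutdown();    // connection is gone; stop probing
            }
        }, periodMs, periodMs, TimeUnit.MILLISECONDS);
        return exec;
    }
}
```

The application-level probe is the more reliable of the two, since middleboxes are free to ignore TCP keep-alive segments.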

  • TCP connect consumes 80% of CPU load

    Hello!
    I've done some profiling on a Java application and I see that 80% of the application's time on the CPU is used for establishing TCP connections.
      Socket c = new Socket();
      c.connect(new InetSocketAddress(host, port));
    The application performs several hundred connects per minute; most of them take about 5 seconds or less, but some use the CPU for more than 130 seconds!!
    I suspect it could be a problem with the network card being overloaded and will try with multiple NICs. But does anybody have any experience with this??? And can you maybe point out some improvements?

    Hello!
    Should have mentioned it.. we are using a
    ThreadPoolExecutor with 1000 threads in the core. But
    shall look into that. thanx :-)
    There may be some improvements that can be done in that area
    This doesn't have anything to do with what I am talking about. I am talking about a pool of open connections (sockets), not a pool of worker threads.
    Here is the thing: when you (in a program) tell a socket to close, the OS is told that the socket is no longer in use and should be closed. The OS then closes the socket but doesn't actually release the allocated resources until it feels like it. It feels like it when it (a) has some spare time with nothing else to do or (b) has to reclaim the resources because something else needs them.
    In short, the OS's final cleanup of socket resources is rather akin to garbage collection by the JVM... it will happen, but not necessarily right away, and you really can't do anything about that.
    Let me make this 100% clear. This is NOT a Java problem. This is an OS "problem". Although it's not really a problem either: under most sensible scenarios that model works just fine. Your problem is that you are doing something foolish.
    Creating and disposing of sockets is an intensive task for the OS. The queue of sockets waiting to be finally disposed of grows and grows while the OS is busy creating new sockets for you, until eventually it HAS to clean up in order to give you a new socket. Then you pay a big penalty in time while the OS does its garbage collection.
    So in summary, as it stands this situation is unsolvable. What you need to do is stop creating and destroying hundreds of sockets every second. How can you do this? By using a pool of already-open sockets that are shared among your various threads.
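The pooling idea the reply recommends can be sketched minimally in Java. This is a hypothetical illustration; reconnection, health checks and eviction of stale sockets are omitted:

```java
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SocketPool {
    // Minimal sketch of the pooling idea above: open N sockets to one
    // host:port up front, hand them out to worker threads, and return them
    // instead of closing, so the OS never churns through connect/close cycles.
    private final BlockingQueue<Socket> pool;

    public SocketPool(String host, int port, int size) throws IOException {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(new Socket(host, port));
        }
    }

    public Socket acquire() throws InterruptedException {
        return pool.take();          // blocks until a socket is free
    }

    public void release(Socket s) {
        pool.offer(s);               // hand the open socket back; don't close it
    }
}
```

A worker thread would call `acquire()`, do its I/O, and `release()` the same socket in a finally block.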

  • Windows XP doesn't detect broken TCP connection

    Hi Guys,
    I have a client and a server. When both are on Linux or both are on Windows machines, there is no problem.
    When the server is on Linux and the client is on Windows XP, and the server goes down, the client does not receive a SocketException: Connection reset like it should.
    It works fine otherwise. Does anyone know why Windows can't see the TCP connection breaking when the server is on Linux?

    Java's platform independence cannot shield you from all variations in the implementation of the TCP/IP stack in the underlying operating system.
    If you need to do special tricks for different operating systems chances are you are doing something in a non-optimal way. For the vast majority of programmers TCP/IP "Just Works". You open your sockets, do I/O on them, eventually close them.
    If you write into a socket whose peer has done receive shutdown you eventually get an IOException (I'll guesstimate within one or three minutes absolute worst case). If you don't get the IOException then likely your program has a bug, such as ignoring exceptions. Far lesser likelihood is OS bug.
    When you say "server goes down" do you mean the OS goes down (as in unplug the network connector), or the OS is shut down with a "shutdown" command, or the program is closed? TCP/IP will behave differently if the remote OS is forcefully brought down vs. the program exits and OS remains running. A "shutdown" command may exhibit either behaviour, depending on OS implementation and random timing effects.
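The reply's claim, that a write into a connection whose peer has gone away eventually raises an IOException, can be demonstrated on the loopback interface. This is a sketch; exactly which write fails is OS- and timing-dependent:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class BrokenPipeDemo {
    // Sketch of the behaviour described above: after the peer closes its end,
    // the first local write typically still "succeeds" (it only reaches the
    // local send buffer and provokes an RST); a later write then fails with
    // an IOException. Returns how many writes went through before the error.
    public static int writesUntilError(Socket sock, int maxTries) {
        int ok = 0;
        try {
            OutputStream out = sock.getOutputStream();
            for (int i = 0; i < maxTries; i++) {
                out.write(0);
                out.flush();
                ok++;
                Thread.sleep(100);   // give the peer's RST time to arrive
            }
        } catch (IOException | InterruptedException e) {
            // expected once the broken connection is detected
        }
        return ok;
    }
}
```

If the remote OS dies without sending a FIN or RST (power loss, pulled cable), no exception arrives until TCP retransmission gives up, which matches the minutes-long delays discussed above.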

  • Unable to establish TCP connection when enabling internet sharing (Phone acting as hotspot)

    I have developed a small .NET desktop application which hosts a TCP service. In addition, it sends UDP multicasts to announce that service.
    The service consumer is a Windows Phone 8.1 app, which listens for these UDP multicasts and, when it receives one, creates a TCP connection to the service.
    The following snippet shows how the Phone 8.1 app listens for UDP packets and then connects via TCP
    const string UdpPort = "42";
    const string TcpPort = "43";
    const string MultiCastIp = "224.5.6.7";
    const int TimeoutForServerLookup = 10000;
    //-- listen for UDP multicasts
    HostName hostNameOfServer = null;
    var udp = new DatagramSocket();
    var clientFound = false;
    udp.MessageReceived += delegate(DatagramSocket sender, DatagramSocketMessageReceivedEventArgs args)
    {
        try
        {
            hostNameOfServer = args.RemoteAddress;
        }
        catch (Exception ex)
        {
            s_Log.Error("Error in handling UDP packet", ex);
        }
    };
    await udp.BindServiceNameAsync(UdpPort);
    udp.JoinMulticastGroup(new HostName(MultiCastIp));
    var waitedTimeInMs = 0;
    const int delayInMs = 250;
    while (waitedTimeInMs < TimeoutForServerLookup && hostNameOfServer == null)
    {
        waitedTimeInMs += delayInMs;
        await Task.Delay(TimeSpan.FromMilliseconds(delayInMs));
    }
    udp.Dispose();
    //-- we now have the address of the TCP server (hostNameOfServer)
    //-- connect now to the server. this works when in a "normal" WLAN, does not work when the phone is the hotspot
    var socket = new StreamSocket();
    await socket.ConnectAsync(hostNameOfServer, TcpPort);
    This works well when the desktop computer and the Windows Phone 8.1 device are on the same WLAN. For some customers it is important that this also works when the Windows Phone 8.1 device acts as a hotspot (by enabling internet sharing on the phone) and the desktop computer (or laptop) connects itself to this WLAN. In this scenario the phone still receives the UDP packets, but when trying to connect via TCP, the connection attempt fails.
    Any ideas how to get rid of this problem?

    Have you tracked down if the phone actually sends the TCP connection to the desktop? I'm feeling that this is a firewall issue that shows itself when the firewall zone changes (public/private/domain).
    Matt Small - Microsoft Escalation Engineer - Forum Moderator
    If my reply answers your question, please mark this post as answered.
    NOTE: If I ask for code, please provide something that I can drop directly into a project and run (including XAML), or an actual application project. I'm trying to help a lot of people, so I don't have time to figure out weird snippets with undefined
    objects and unknown namespaces.

  • WRT55AGv2 dropping idle TCP connections

    I have a WRT55AGv2 and it is constantly dropping idle TCP connections. This has made it totally useless for me until I can find some fix. I usually have multiple SSH and FTP sessions that I just leave open all day.
    This happens on both the wired and wireless interfaces.
    Can anyone suggest something that could fix this?

    I don't use any P2P applications and I do not use anything that generates a huge number of connections in short periods.
    I've already changed the MTU setting to 1300.
    I've tried disabling the wireless interfaces and just use a wired connection and it still happens. Idle TCP connections just get dropped.
    What else could I try?

  • Array of tcp connections

    hello everyone,
    I want to create an array of servers to connect to (configurable host:port), send a command to be invoked, like "something\n",
    and then close the TCP connection. Where is a good place to start reading about this? A snippet of code or two, at least for the array part, would also be welcome. Any references are great... thank you.

    I think reading the manual and learning would be a much better choice than asking for a very specific case, as it improves your knowledge and skill. Read these and you will be able to write your own stuff:
    Socket documentation:
    http://java.sun.com/j2se/1.5.0/docs/api/java/net/Socket.html
    Collections framework documentation:
    http://java.sun.com/j2se/1.5.0/docs/guide/collections/reference.html
    Tutorial on how to use Sockets:
    http://java.sun.com/docs/books/tutorial/networking/sockets/index.html
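Alongside those references, a minimal Java sketch of what the question asks for might look like this; the host:port entries and the command are placeholders:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class CommandFanout {
    // Sketch: a configurable array of "host:port" entries; connect to each,
    // send one command, close the connection, and move on to the next server.
    public static void sendToAll(String[] hostPorts, String command) {
        for (String entry : hostPorts) {
            String[] parts = entry.split(":");
            try (Socket sock = new Socket()) {
                sock.connect(new InetSocketAddress(parts[0], Integer.parseInt(parts[1])), 3000);
                OutputStream out = sock.getOutputStream();
                out.write(command.getBytes(StandardCharsets.US_ASCII));
                out.flush();
            } catch (IOException e) {
                // one unreachable server shouldn't abort the whole sweep
                System.err.println("Failed for " + entry + ": " + e.getMessage());
            }
        }
    }
}
```

Usage would be something like `sendToAll(new String[]{"10.0.0.5:4000", "10.0.0.6:4000"}, "something\n")`, with the array typically loaded from a configuration file.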

  • Random TCP connections created in Selector.open (in NIO)

    I'm currently running a production app which has several independently running application server classes going at any given time. Each of these uses one Selector to provide support for asynchronous I/O operations. Lately I noticed that when bouncing one of these servers I'd have problems bringing it back up because of sequential "ghost listeners" and "ghost connections" colliding with the ports I was interested in.
    So, I got out a local port scanner and did some digging. To my chagrin I discovered that every time I made a call to Selector.open(), a new TCP connection was made from my application to my application on an internal port. In Java 1.4.2_02 this occurred on the "primary" network adapter; in Java 1.5 it occurred on the loopback adapter. Unfortunately for me neither is acceptable, because my app regularly binds and unbinds for listening on varying adapters, including the wildcard adapter (0.0.0.0), and I can't have my own process colliding with itself trying to listen on ports.
    Okay, so then I did some forum searching with the help of a couple of co-workers. It turns out these connections are "normal" and related to something called the "wakeup pipe". Also, this seems somewhat related to something we call the "runaway select event" in-house (where Selector.select(x) returns 0 before the timeout is up, over and over again, which we long since worked around to support Java 1.4.2_02).
    This problem occurs on windows 2000 and windows server 2003. I've attached a code-snippet below that will duplicate the problem (and flood a system with extraneous TCP connections if left running long enough).
    My questions are:
    1) Why in the world did this wakup pipe have to be implemented as a TCP connection (rather than in-memory)?
    2) Why is this not documented anywhere in the Java API's, or am I missing the documentation of it?
    3) Is there some way to control the behaviour of this "wakup pipe"? (ie: make it be in-memory only, a file, or specify port-range, IP etc...)
    4) Isn't it dangerous to create a library based on undocumented and randomly allocated TCP connections that can't be controlled via configuration?
    import java.nio.channels.Selector;
    import java.util.ArrayList;
    public class NIOSelectorOpenExample implements Runnable {
        protected boolean shouldRun = true;
        public void shutdown() { shouldRun = false; }
        public void run() {
            try {
                ArrayList<Selector> selectors = new ArrayList<Selector>();
                while (shouldRun) {
                    selectors.add(Selector.open());
                    Thread.sleep(500);
                }
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
    Basically, on #4 I want to know why/if running this code wouldn't be a major problem on any system that opens and closes ports for listening regularly. (And yes, aside from the fact that it doesn't explicitly clean up the selectors before exiting.)

    Hmmm ...
    We had an issue in production where one of the ServerSocketChannels failed to bind because it collided with the wakeup pipe range. Of course, this was on Java 1.4.2_02, which binds on the primary adapter for the system and not the loopback adapter.
    This seems back to front. By default Java binds to INADDR_ANY, which is all the interfaces, which is why you got the collision on the loopback port which was already there. If it bound the socket to a specific non-loopback NIC there would be no collision with any loopback port; they are different number spaces.
    Are you able to create all the ServerSockets before any of the Selectors?
    Or, if your hosts aren't multihomed, is it practical for the application to bind its ServerSockets to the primary NIC (i.e. the non-loopback one)?
    Yes, we can repeatedly try to bind on a port and perform other work-arounds, but why should we have to? How could we have expected this behavior? (It may be a Windows limitation that caused Sun to choose their implementation method, but non-Java TCP apps on Windows don't have these problems...)
    Agreed, but then again non-Java TCP apps don't try to implement select() for arbitrary numbers of sockets to agree with *nix platforms; they can generally live with <= 64.
    Note: the problem appears exacerbated by the listen ports of these wakeup pipe connections staying open for long periods of time (rather than closing as soon as the pipe is established).
    Would this help? There would still be the connected port with the same number, and this might inhibit a new listening port with that number. I haven't tried this myself.
    Well, considering the behavior changed between 1.4.2_02 and 1.5, it can't be all that inaccessible a fix. Perhaps using an extra TCP connection was necessary in some cases, but obviously binding to ("127.0.0.1", 0) isn't the only choice, since it has changed recently and those values could easily be made configurable (given access to the code involved).
    It changed from binding to 0, i.e. INADDR_ANY, in 1.4 to binding to 127.0.0.1 in 1.5, probably in an effort to vacate the port space for the physical NICs.
    Given access to the code involved you can change anything. In the SCSL code it is sun.nio.ch.WindowsSelectorImpl.java in src/windows/classes.
    Actually, I'm also wondering if a single (known-default/configurable) listen port wouldn't be adequate for all of these wakeup pipe TCP connections.
    Me too.
    Regards
    EJP

  • Establish TCP connection in SIT framework

    I am using LabVIEW 8.6 with the Simulation Interface Toolkit (SIT), running a model on PXI.  I need to establish communications between host and driver that are unrelated to the model.  I am having trouble establishing a TCP connection.  I have tried 3 different ports (6342, 7342 and 55555), carefully avoiding 6011. The IP address (on the driver side) is the address of the host PC.   I used TCP Communicator - Active.vi and TCP Communicator - Passive.vi as examples.  The host VI (passive side) hangs at TCP Listen, waiting for a connection.  If I run the driver VI along with "TCP Communicator - Passive.vi", communications are established.  Is there something SIT is doing on the host side that is preventing this communication?

    Hi DinaDemara,
    You should be able to use SIT and communicate via TCP.
    1. Make sure that you are performing your TCP functionality outside of the control loop in SIT. Preferably, implement your TCP functionality in a separate VI outside of the SIT control loop.
    2. I highly recommend for you to use the two NI Example Finder VIs called Simple Data Client.vi and Simple Data Server.vi to test your network. After you get these two VIs to communicate successfully, adapt them to your application.
    3. Turn off all firewalls to test.
    Let me know if you are still having problems.
    Aldo A
    Applications Engineer
    National Instruments

  • Quick detection of failed tcp connection

    I apologize for asking this question, sense I know variants of it are asked often, but none of the standard solutions I read about seem to apply to my situation.
    I have an application acting as a server for a client application written by another organization. Most of our communication is done via UDP, but the client application insists on a TCP connection which usually hangs out unused. I need to be able to detect when this TCP connection goes down. Sense I am only writing the server I can't implement a heartbeat or similar method which requires server and client to work together. Likewise doing a read and waiting for a -1 doesn’t work, as the reads seem to time out and fail even when the network is not down due to the client not sending anything to be read across the TCP connection.
    So far the best solution I have come up with is to intermittently write a 0 byte when I believe there is a network error (i.e. when our UDP communication stops) and capture an exception when this fails. This method does work, but I have to wait for the exception, which can take 2-3 minutes from when I kill the network. I would prefer the exception to occur sooner than that if possible. I've tried setting the SO_TIMEOUT option, but since write is usually non-blocking this doesn't have any effect. Is there some other method of controlling how long it takes for the TCP socket to fail if it doesn't get an ACK back?
    I have also thought of other methods of doing this, including killing the TCP socket the moment UDP fails, or having my serverSocket listen for and accept any reconnect attempts, then killing the old socket once the new one is established. However, both methods seem like imperfect workarounds; I would rather be able to use the absence of ACKs to detect that the TCP connection has stopped working.
    And allow me to offer a preemptive thank you for your help.

    Is there some other method of controlling how long it takes for the TCP socket to fail if it doesn't get an ACK back?
    The traditional Berkeley socket API offers no way to adjust the timeout.
    Some operating systems (perhaps even most major ones) have system-global settings for it - registry, /dev/somemagicname, ndd(1), etc. Google for something like windows adjust tcp timeout or solaris modify tcp retransmission or some such. Be aware that you would be setting a parameter that will affect all programs running on that system. Mostly I would just accept the 2-minute timeout; that's just the way things are.
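    The probe-write approach described in the question can be sketched in plain Java. This is a minimal sketch, not the poster's code, and it assumes the peer tolerates a stray zero byte. Note that the first write to a dead peer usually succeeds locally, because the data is only queued in the kernel's send buffer; the failure typically surfaces on a later write, after retransmissions give up or an RST arrives, which is exactly the multi-minute delay described above.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

// Probe an idle TCP connection by writing a single zero byte.
// The peer must tolerate (ignore) this byte for the probe to be safe.
public class ConnectionProbe {
    public static boolean isAlive(Socket socket) {
        try {
            OutputStream out = socket.getOutputStream();
            out.write(0);    // may succeed even if the peer is gone:
            out.flush();     // the byte is only queued in the send buffer
            return !socket.isClosed();
        } catch (IOException e) {
            return false;    // the write failed, so the connection is down
        }
    }
}
```

    A complementary option is `socket.setKeepAlive(true)`, which makes the OS probe idle connections itself, though the default keepalive interval is measured in hours unless tuned at the system level, as the answer above notes.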

  • Sharing a TCP Connection

    I have an embedded processor that opens a TCP socket to a program running on a web server and dumps real time measurements on to the server. I am trying to design an applet that opens a TCP connection to this program and thus gain access to the information dumped by the embedded system. This works fine for a single applet. But when I try doing it with several applets at a time, I run into trouble since all these applets try to read from the socket to the processor.
    My question is how do you share a single open TCP connection for reading?
    Thanks in advance.
    Prathap

    There is absolutely nothing that stops one from connecting to a single server using multiple sockets from a single application.
    So you are doing something wrong in your socket code.
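    That said, if the applets must all see the *same* data from the *same* socket, the usual pattern is to give the connection a single owner: one thread reads from the socket's InputStream and fans each chunk out to every subscriber, so readers never compete for the stream. A minimal sketch of that idea (hypothetical class names, not a standard API):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// One thread owns the socket's InputStream and distributes every chunk
// it reads to all subscriber queues, so each consumer sees all the data.
public class SocketFanOut implements Runnable {
    private final InputStream in;
    private final List<BlockingQueue<byte[]>> subscribers = new CopyOnWriteArrayList<>();

    public SocketFanOut(InputStream in) { this.in = in; }

    // Each consumer gets its own queue; reads from it never touch the socket.
    public BlockingQueue<byte[]> subscribe() {
        BlockingQueue<byte[]> q = new LinkedBlockingQueue<>();
        subscribers.add(q);
        return q;
    }

    @Override
    public void run() {
        byte[] buf = new byte[4096];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                byte[] chunk = Arrays.copyOf(buf, n);   // private copy per read
                for (BlockingQueue<byte[]> q : subscribers) {
                    q.offer(chunk);                      // every subscriber gets every chunk
                }
            }
        } catch (IOException ignored) {
            // connection closed; subscribers simply stop receiving
        }
    }
}
```

    In the applet scenario, the server-side program would run one SocketFanOut per embedded-processor connection and hand each connecting applet its own subscriber queue.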
