Socket latency

I am experiencing long round-trip latency when chaining many Solaris processes together with INET blocking sockets. On NT, I see little latency.
Is there any way of tuning the Solaris internals to make the socket response times faster? Is there an underlying poll timeout I can change?
my setup is 4 processes:
server: has server socket.
proxy 2: has server socket and client socket to the server.
proxy 1: has server socket and client socket to proxy 2.
client: has connection to proxy 1
The client app sends 40 100-byte messages. These are proxied up to the server, which then responds with a 100-byte response message for each request. All sockets are blocking, with dedicated reader and writer threads. All apps are threaded C++ apps that have been Purify'ed extensively.
If I run the processes on separate Solaris boxes, I see the time from the first message sent by the client to the last response received at about 0.5 seconds! I soon deduced that this was due to the Nagle algorithm, so I sent an extra few placebo messages after the last response to effectively flush it. That got it down to ~300 ms.
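For reference, a minimal sketch of the usual alternative to placebo messages: disabling Nagle on every socket in the chain. The apps here are C++, where the equivalent is setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, ...); in Java the same idea looks like this:

import java.io.IOException;
import java.net.Socket;

// Hedged sketch: every socket in the client -> proxy 1 -> proxy 2 -> server chain
// gets Nagle disabled, on both the connecting and the accepted side, so small
// request/response messages are sent immediately instead of being coalesced.
public final class NoDelay {
    static void configure(Socket s) throws IOException {
        s.setTcpNoDelay(true);
    }
}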
I have now tried running with all processes on the same SunFire-880 machine and the timing is still ~300ms. If I do the same on my Pentium IV Win2000 machine I get round-trip times of ~100 ms or less.
Does anyone know of any issues that affect the response times of sockets?
I don't believe this is my code, as I can sustain transaction rates of ~6000 requests and responses per second! This is what makes these small-burst issues so frustrating.
Neil

This is what I get, what do you get:
java -server Loopback
1000000 messages in 3004 ms
1000000 messages in 2752 ms
1000000 messages in 2706 ms
1000000 messages in 2728 ms
import java.io.*;
import java.net.*;

public class Loopback {
    static final int PORT = 6666;

    public static void main(String args[]) throws Exception {
        new Listener().start();
        Socket socket = new Socket("localhost", PORT);
        DataOutputStream out = new DataOutputStream(new BufferedOutputStream(socket.getOutputStream()));
        int count = 0;
        long start_time = System.currentTimeMillis();
        while (true) {
            out.writeBytes("hello world");
            out.flush();
            if (++count % 1000000 == 0) {
                long now = System.currentTimeMillis();
                System.out.println("1000000 messages in " + (now - start_time) + " ms");
                start_time = now;
            }
        }
    }

    static class Listener extends Thread {
        ServerSocket server_socket;

        Listener() throws IOException {
            server_socket = new ServerSocket(PORT);
        }

        public void run() {
            try {
                while (true) {
                    Socket socket = server_socket.accept();
                    InputStream in = socket.getInputStream();
                    byte buf[] = new byte[8192];
                    while (true) {
                        if (in.read(buf) == -1)
                            break;
                    }
                    in.close();
                    socket.close();
                }
            } catch (IOException e) {
                System.out.println("error in listener: " + e);
            }
        }
    }
}

Similar Messages

  • Write to socket latency improvements

    Hello,
    I have an application receiving events, and reacting to them by sending a message to a server through sockets / TCP...
    I'm trying to reduce the time between the reception of the event and the confirmation the message has been sent.
    Right now I'm interested in the part that writes to the socket:
    public void sendRawMessage(String msg) {
        try {
            _connection.send(msg);
        } catch (java.io.IOException ioe) {
            ioe.printStackTrace(System.out);
        }
    }
    and my 'connection' object:
    _socket = new Socket(_host, _port);
    _socket.setSoTimeout(3000);
    _out = new DataOutputStream(new BufferedOutputStream(_socket.getOutputStream()));

    public void send(String msg) throws IOException {
        if (!_exit) {
            _out.writeBytes(msg);
            _out.flush();
        }
    }
    I'm measuring the time elapsed before and after my sendRawMessage method (with System.nanoTime()).
    The machine is an HP x86 server, dual CPU, quad-core 3GHz, running Solaris x86 with Gigabit Ethernet, and the server I'm sending my messages to is running on the same machine.
    I get an average of 60us for messages around 180 bytes long, and I'm trying to improve that, as I hear of people getting much better results for similar tasks...
    CPU usage averages close to 0% over the life of the application, so resources are available and not too clogged up. I imagine the improvement should come from better coding or configuration of the server / TCP stack, so I'm asking the gurus here for help :-)
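    One configuration-level experiment worth trying (an assumption on top of what the thread states, not a confirmed fix) is to disable Nagle and hand the stack one pre-encoded write per message, avoiding the extra copy through the buffered stream. A minimal sketch; host, port and charset are placeholders:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Hedged sketch: one write() per message on a TCP_NODELAY socket.
    public class LowLatencySender {
        private final Socket socket;
        private final OutputStream out;

        public LowLatencySender(String host, int port) throws IOException {
            socket = new Socket(host, port);
            socket.setTcpNoDelay(true);      // disable Nagle: small writes leave immediately
            out = socket.getOutputStream();  // unbuffered: no intermediate copy before the kernel
        }

        public void sendRawMessage(String msg) throws IOException {
            byte[] payload = msg.getBytes(StandardCharsets.US_ASCII);
            out.write(payload);              // a single write per message
        }
    }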
    Do you have any ideas or any resources to point me towards?
    thanks

    This is what I get, what do you get:
    java -server Loopback
    1000000 messages in 3004 ms
    1000000 messages in 2752 ms
    1000000 messages in 2706 ms
    1000000 messages in 2728 ms
    import java.io.*;
    import java.net.*;

    public class Loopback {
        static final int PORT = 6666;

        public static void main(String args[]) throws Exception {
            new Listener().start();
            Socket socket = new Socket("localhost", PORT);
            DataOutputStream out = new DataOutputStream(new BufferedOutputStream(socket.getOutputStream()));
            int count = 0;
            long start_time = System.currentTimeMillis();
            while (true) {
                out.writeBytes("hello world");
                out.flush();
                if (++count % 1000000 == 0) {
                    long now = System.currentTimeMillis();
                    System.out.println("1000000 messages in " + (now - start_time) + " ms");
                    start_time = now;
                }
            }
        }

        static class Listener extends Thread {
            ServerSocket server_socket;

            Listener() throws IOException {
                server_socket = new ServerSocket(PORT);
            }

            public void run() {
                try {
                    while (true) {
                        Socket socket = server_socket.accept();
                        InputStream in = socket.getInputStream();
                        byte buf[] = new byte[8192];
                        while (true) {
                            if (in.read(buf) == -1)
                                break;
                        }
                        in.close();
                        socket.close();
                    }
                } catch (IOException e) {
                    System.out.println("error in listener: " + e);
                }
            }
        }
    }

  • Socket reading & writing - minimize latency

    Hello,
    I'm having a big application which reads & writes on a regular socket, and I'm trying to reduce latency to the minimum.
    To understand & measure things a bit better, I've written a simple client/server pair of apps: the client connects to the server and sends messages of a specified length; the server echoes the messages back to the client, which reads them.
    I'm running both client & server on the same box so that I have no timestamp issues.
    For this test, the sending rate is 1 message every 400 ms and the message size is 512 bytes.
    I get an average round-trip time of 560 microseconds, but the variation is impressive, with values between 28 microseconds and 1.5 milliseconds.
    I'd like to lower this latency to the minimum possible, and would like to hear what the FASTEST & most jitter-free ways of doing so are; I'm ready to experiment.
    On the client I do initially:
    _out = new DataOutputStream(_socket.getOutputStream());
    then write regularly with:
    String msg = "whatever_512_characters_msg";
    _out.writeBytes(msg);
    Server side:
    _in = new BufferedReader(new InputStreamReader(_socket.getInputStream()));
    _out = new PrintWriter(_socket.getOutputStream(), true);
    Reading is done with _in.readLine() and writing is done with _out.println(msg). Even though things are working OK, I'm thinking there must be a faster way to do things...
    The goal is not to optimize throughput or bandwidth, but really latency, for messages of roughly 512 bytes.
    Any pointers would be appreciated...
    thanks

    - How are PrintWriter and BufferedOutputStream supposed to compare? Don't use a PrintWriter for network operations: it swallows exceptions that you need to know about. You can use a BufferedWriter if the other end is going to use a Reader.
    - What would be the advantage of using a byte array (I have seen this in several places, but don't know exactly how to implement it)? If the data is already in a byte[] array, just write it using OutputStream.write(), and read it at the other end using InputStream.read(byte[]). If you do this you need to loop when reading until you've assembled a complete message, whatever that means (see the sketch after these points). In your case you may be better off sticking with readLine().
    - People mentioned NIO; how would that be useful in such a case, knowing that I'm mostly going to handle single connections? Don't bother. NIO is for thousands of connections, not one.
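    To illustrate the read loop mentioned in the byte[] point above, a generic sketch (the class name is illustrative and a fixed message length is assumed; the actual framing is something the poster would have to define):

    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    // Hedged sketch: read() may return fewer bytes than requested, so loop until
    // a complete fixed-size message has been assembled.
    public final class MessageReader {
        static byte[] readMessage(InputStream in, int messageLength) throws IOException {
            byte[] buf = new byte[messageLength];
            int got = 0;
            while (got < messageLength) {
                int n = in.read(buf, got, messageLength - got);
                if (n == -1) {
                    throw new EOFException("stream ended after " + got + " of " + messageLength + " bytes");
                }
                got += n;
            }
            return buf;
        }
    }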

  • NIO Socket Reading - Intermittent Latency in High-Speed Data Reading

    Our application is reading data very fast over TCP/IP sockets in Java. We are using the NIO library with non-blocking sockets and a Selector to indicate readiness to read. On average, the overall processing time for reading and handling the read data is sub-millisecond. However, we frequently see spikes of 10-20 milliseconds (running on Linux).
    Using tcpdump we can see the time difference between tcpdump's reading of 2 discrete messages, and compare that with our application's time. We see tcpdump seems to have no delay, whereas the application can show 20 milliseconds.
    We are pretty sure this is not GC, because the GC log shows virtually no Full GC, and in JDK 6 (from what I understand) the default GC is parallel, so it should not be pausing the application threads (unless doing Full GC).
    It looks almost as if there is some delay for Java's Selector.select(0) method to return the readiness to read, because at the TCP layer, the data is already available to be read (and tcpdump is reading it).

    Have you tried profiling your application to confirm that the wait is indeed in the Selector.select() method, as you think?
    PS: Selector.select(0) will not return immediately if no channels are selected, as you may be assuming. select(0) is the same as a regular select(): it will block until at least 1 channel is selected.
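    For reference, a minimal sketch of those semantics (assuming a Selector whose channels are already registered for OP_READ):

    import java.io.IOException;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;

    // Hedged sketch: select(0) blocks indefinitely just like select(); a positive
    // timeout bounds the wait, and selectNow() is the truly non-blocking variant.
    public final class SelectLoop {
        static void poll(Selector selector, long timeoutMs) throws IOException {
            int ready = selector.select(timeoutMs);  // passing 0 here would block until a channel is ready
            if (ready == 0) {
                return;                              // nothing became readable within the timeout
            }
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isReadable()) {
                    // read from key.channel() here
                }
            }
            selector.selectedKeys().clear();
        }
    }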

  • A quick primer on audio drivers, devices, and latency

    This information has come from Durin, Adobe staffer:
    Hi everyone,
    A  common question that comes up in these forums over and over has to do  with recording latency, audio drivers, and device formats.  I'm going to  provide a brief overview of the different types of devices, how they  interface with the computer and Audition, and steps to maximize  performance and minimize the latency inherent in computer audio.
    First, a few definitions:
    Monitoring: listening to existing audio while simultaneously recording new audio.
    Sample: The value of each individual bit of audio digitized by the audio  device.  Typically, the audio device measures the incoming signal 44,100  or 48,000 times every second.
    Buffer Size: The "bucket" where samples are placed before being passed to the destination. An audio application will collect a buffer's-worth of samples before feeding it to the audio device for playback. An audio device will collect a buffer's-worth of samples before feeding it to the audio application when recording. Buffers are typically measured in samples (common values being 64, 128, 512, 1024, 2048...) or in milliseconds, which is simply a calculation based on the device sample rate and buffer size.
    Latency: The time span that occurs between  providing an input signal into an audio device (through a microphone,  keyboard, guitar input, etc) and when each buffers-worth of that signal  is provided to the audio application.  It also refers to the other  direction, where the output audio signal is sent from the audio  application to the audio device for playback.  When recording while  monitoring, the overall perceived latency can often be double the device  buffer size.
    ASIO, MME, CoreAudio: These are audio driver models, which simply specify the manner in which an audio application and audio device communicate.  Apple Mac systems use CoreAudio almost exclusively which provides for low buffer sizes and the ability  to mix and match different devices (called an Aggregate Device.)  MME  and ASIO are mostly Windows-exclusive driver models, and provide  different methods of communicating between application and device.  MME drivers allow the operating system itself to act as a go-between and  are generally slower as they rely upon higher buffer sizes and have to  pass through multiple processes on the computer before being sent to the  audio device.  ASIO drivers provide an audio  application direct communication with the hardware, bypassing the  operating system.  This allows for much lower latency while being  limited in an applications ability to access multiple devices  simultaneously, or share a device channel with another application.
    Dropouts: Missing  audio data as a result of being unable to process an audio stream fast  enough to keep up with the buffer size.  Generally, dropouts occur when  an audio application cannot process effects and mix tracks together  quickly enough to fill the device buffer, or when the audio device is  trying to send audio data to the application more quickly than it can  handle it.  (Remember when Lucy and Ethel were working at the chocolate  factory and the machine sped up to the point where they were dropping  chocolates all over the place?  Pretend the chocolates were samples,  Lucy and Ethel were the audio application, and the chocolate machine is  the audio device/driver, and you'll have a pretty good visualization of  how this works.)
    Typically, latency is not a problem if  you're simply playing back existing audio (you might experience a very  slight delay between pressing PLAY and when audio is heard through your  speakers) or recording to disk without monitoring existing audio tracks  since precise timing is not crucial in these conditions.  However, when  trying to play along with a drum track, or sing a harmony to an existing  track, or overdub narration to a video, latency becomes a factor since  our ears are far more sensitive to timing issues than our other senses.   If a bass guitar track is not precisely aligned with the drums, it  quickly sounds sloppy.  Therefore, we need to attempt to reduce latency  as much as possible for these situations.  If we simply set our Buffer  Size parameter as low as it will go, we're likely to experience dropouts  - especially if we have some tracks configured with audio effects which  require additional processing and contribute their own latency to the  chain.  Dropouts are annoying but not destructive during playback, but  if dropouts occur on the recording stream, it means you're losing data  and your recording will never sound right - the data is simply lost.   Obviously, this is not good.
    Latency under 40ms is  generally considered within the range of reasonable for recording.  Some  folks can hear even this and it affects their ability to play, but most  people find this unnoticeable or tolerable.  We can calculate our  approximate desired buffer size with this formula:
    (Samples per second / 1000) * Desired latency in milliseconds
    So,  if we are recording at 44,100 Hz and we are aiming for 20ms latency:   44100 / 1000 * 20 = 882 samples.  Most audio devices do not allow  arbitrary buffer sizes but offer an array of choices, so we would select  the closest option.  The device I'm using right now offers 512 and 1024  samples as the closest available buffer sizes, so I would select 512  first and see how this performs.  If my session has a lot of tracks  and/or several effects, I might need to bump this up to 1024 if I  experience dropouts.
    Now that we hopefully have a pretty  firm understanding of what constitutes latency and under what  circumstances it is undesirable, let's take a look at how we can reduce  it for our needs.  You may find that you continue to experience dropouts  at a buffer size of 1024 but that raising it to larger options  introduces too much latency for your needs.  So we need to determine  what we can do to reduce our overhead in order to have quality playback  and recording at this buffer size.
    Effects: A  common cause of playback latency is the use of effects.  As your audio  stream passes through an effect, it takes time for the computer to  perform the calculations to modify that signal.  Each effect in a chain  introduces its own amount of latency before the chunk of audio even  reaches the point where the audio application passes it to the audio  device and starts to fill up the buffer.  Audition and other DAWs  attempt to address this through "latency compensation" routines which  introduce a bit more latency when you first press play as they process  several seconds of audio ahead of time before beginning to stream those  chunks to the audio driver.  In some cases, however, the effects may be  so intensive that the CPU simply isn't processing the math fast enough.   With Audition, you can "freeze" or pre-render these tracks by clicking  the small lightning bolt button visible in the Effects Rack with that  track selected.  This performs a background render of that track, which  automatically updates if you make any changes to the track or effect  parameters, so that instead of calculating all those changes on-the-fly,  it simply needs to stream back a plain old audio file which requires  much fewer system resources.  You may also choose to disable certain  effects, or temporarily replace them with alternatives which may not  sound exactly like what you want for your final mix, but which  adequately simulate the desired effect for the purpose of recording.   (You might replace the CPU-intensive Full Reverb effect with the  lightweight Studio Reverb effect, for example.  Full Reverb effect is  mathematically far more accurate and realistic, but Studio Reverb can  provide that quick "body" you might want when monitoring vocals, for  example.)  You can also just disable the effects for a track or clip  while recording, and turn them on later.
    Device and Driver Options: Different devices may have wildly different performance at the same buffer size and with the same session. Audio devices designed primarily for gaming are less likely to perform well at low buffer sizes than those designed for music production, for example. Even if the hardware performs the same, the driver mode may be a source of latency. ASIO is almost always faster than MME, though many device manufacturers do not supply an ASIO driver. The use of a third-party, device-agnostic driver such as ASIO4ALL (www.asio4all.com) allows you to wrap an MME-only device inside a faux-ASIO shell. The audio application believes it's speaking to an ASIO driver, and ASIO4ALL has been streamlined to work more quickly with the MME device, or even to allow you to use different inputs and outputs on separate devices, which ASIO would otherwise prevent.
    We  also now see more USB microphone devices which are input-only audio  devices that generally use a generic Windows driver and, with a few  exceptions, rarely offer native ASIO support.  USB microphones generally  require a higher buffer size as they are primarily designed for  recording in cases where monitoring is unimportant.  When attempting to  record via a USB microphone and monitor via a separate audio device,  you're more likely to run into issues where the two devices are not  synchronized or drift apart after some time.  (The ugly secret of many  device manufacturers is that they rarely operate at EXACTLY the sample  rate specified.  The difference between 44,100 and 44,118 Hz is  negligible when listening to audio, but when trying to precisely  synchronize to a track recorded AT 44,100, the difference adds up over  time and what sounded in sync for the first minute will be wildly  off-beat several minutes later.)  You are almost always going to have  better sync and performance with a standard microphone connected to the  same device you're using for playback, and for serious recording, this  is the best practice.  If USB microphones are your only option, then I  would recommend making certain you purchase a high-quality one and have  an equally high-quality playback device.  Attempt to match the buffer  sizes and sample rates as closely as possible, and consider using a  higher buffer size and correcting the latency post-recording.  (One  method of doing this is to have a click or clap at the beginning of your  session and make sure this is recorded by your USB microphone.  After  you finish your recording, you can visually line up the click in the  recorded track with the click in the original track by moving your clip  backwards in the timeline.  This is not the most efficient method, but  this alignment is the reason you see the clapboards in behind-the-scenes  filmmaking footage.)
    Other Hardware: Other hardware in your computer plays a role in the ability to feed or store audio data quickly. CPUs are so fast, and with multiple cores capable of spreading the load, that the bottleneck for good performance - especially at high sample rates - tends to be your hard drive or storage media. It is highly recommended that you configure your temporary files location, and session/recording location, to a physical drive that is NOT the same one your operating system is installed on. Audition and other DAWs have absolutely no control over what Windows or OS X may decide to do at any given time, and if your antivirus software or system file indexer decides it's time to start churning away at your hard drive at the same time that you're recording your magnum opus, you raise the likelihood of losing some of that performance. (In fact, it's a good idea to disable all non-essential applications and internet connections while recording to reduce the likelihood of external interference.) If you're going to be recording multiple tracks at once, it's a good idea to purchase the fastest hard drive your budget allows. Most cheap drives spin around 5400 rpm, which is fine for general use cases but does not allow for the fast read, write, and seek operations the drive needs to do when recording and playing back from multiple files simultaneously. 7200 RPM drives perform much better, and even faster options are available. While fragmentation is less of a problem on OS X systems, you'll want to defragment your drive on Windows frequently - this process realigns all the blocks of your files so they're grouped together. As you write and delete files, pieces of each tend to get placed in the first location that has room. This ends up creating lots of gaps or splitting files up all over the disk. The act of reading or writing to these spread-out areas causes the operation to take significantly longer than it needs to and can contribute to glitches in playback or loss of data when recording.

    There is one point in the above that needed a little clarification, relating to USB mics:
    _durin_ wrote:
     If  USB microphones are your only option, then I would recommend making  certain you purchase a high-quality one and have an equally high-quality  playback device.
    If you are going to spend that much, then you'd be better off putting a little more money into an external device with a proper mic pre, and a little less money by not bothering with a USB mic at all, and just getting a 'normal' condenser mic. It's true to say that over the years, the USB mic class of recording device has caused more trouble than any other, regardless.
    You  should also be aware that if you find a USB mic offering ASIO support,  then unless it's got a headphone socket on it as well then you aren't  going to be able to monitor what you record if you use it in its native  ASIO mode. This is because your computer can only cope with one ASIO device in the system - that's all the spec allows. What you can do with most ASIO hardware though is share multiple streams (if the  device has multiple inputs and outputs) between different software.
    Seriously, USB mics are more trouble than they're worth.

  • Windows TCP Socket Buffer Hitting Plateau Too Early

    Note: This is a repost of a ServerFault Question edited over the course of a few days, originally here: http://serverfault.com/questions/608060/windows-tcp-window-scaling-hitting-plateau-too-early
    Scenario: We have a number of Windows clients regularly uploading large files (FTP/SVN/HTTP PUT/SCP) to Linux servers that are ~100-160ms away. We have 1Gbit/s synchronous bandwidth at the office and the servers are either AWS instances or physically hosted
    in US DCs.
    The initial report was that uploads to a new server instance were much slower than they could be. This bore out in testing and from multiple locations; clients were seeing stable 2-5Mbit/s to the host from their Windows systems.
    I broke out iperf -s on an AWS instance and then ran from a Windows client in the office:
    iperf -c 1.2.3.4
    [ 5] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 55185
    [ 5] 0.0-10.0 sec 6.55 MBytes 5.48 Mbits/sec
    iperf -w1M -c 1.2.3.4
    [ 4] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 55239
    [ 4] 0.0-18.3 sec 196 MBytes 89.6 Mbits/sec
    The latter figure can vary significantly on subsequent tests (vagaries of AWS) but is usually between 70 and 130Mbit/s, which is more than enough for our needs. Wiresharking the session, I can see:
    iperf -c: Windows SYN - Window 64kb, Scale 1 - Linux SYN, ACK: Window 14kb, Scale: 9 (*512)
    iperf -c -w1M: Windows SYN - Window 64kb, Scale 1 - Linux SYN, ACK: Window 14kb, Scale: 9
    Clearly the link can sustain this high throughput, but I have to explicitly set the window size to make any use of it, which most real-world applications won't let me do. The TCP handshakes use the same starting points in each case, but the forced one scales.
    Conversely, from a Linux client on the same network, a straight iperf -c (using the system default 85kb) gives me:
    [ 5] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 33263
    [ 5] 0.0-10.8 sec 142 MBytes 110 Mbits/sec
    Without any forcing, it scales as expected. This can't be something in the intervening hops or our local switches/routers, and it seems to affect Windows 7 and 8 clients alike. I've read lots of guides on auto-tuning, but these are typically about disabling scaling altogether to work around terrible home networking kit.
    Can anyone tell me what's happening here and give me a way of fixing it? (Preferably something I can stick into the registry via GPO.)
    Notes
    The AWS Linux instance in question has the following kernel settings applied in sysctl.conf:
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.core.rmem_default = 1048576
    net.core.wmem_default = 1048576
    net.ipv4.tcp_rmem = 4096 1048576 16777216
    net.ipv4.tcp_wmem = 4096 1048576 16777216
    I've used dd if=/dev/zero | nc redirecting to /dev/null at the server end to rule out iperf and remove any other possible bottlenecks, but the results are much the same. Tests with ncftp (Cygwin, native Windows, Linux) scale in much the same way as the above iperf tests on their respective platforms.
    First fix attempts.
    Enabling CTCP - This makes no difference; window scaling is identical. (If I understand this correctly, this setting increases the rate at which the congestion window is enlarged rather than the maximum size it can reach)
    Enabling TCP timestamps. - No change here either.
    Nagle's algorithm - That makes sense, and at least it means I can probably ignore those particular blips in the graph as any indication of the problem.
    pcap files: Zip file available here: https://www.dropbox.com/s/104qdysmk01lnf6/iperf-pcaps-10s-Win%2BLinux-2014-06-30.zip (Anonymised with bittwiste; extracts to ~150MB as there's one from each OS client for comparison)
    Second fix attempts.
    I've enabled ctcp and disabled chimney offloading: TCP Global Parameters
    Receive-Side Scaling State : enabled
    Chimney Offload State : disabled
    NetDMA State : enabled
    Direct Cache Access (DCA) : disabled
    Receive Window Auto-Tuning Level : normal
    Add-On Congestion Control Provider : ctcp
    ECN Capability : disabled
    RFC 1323 Timestamps : enabled
    Initial RTO : 3000
    Non Sack Rtt Resiliency : disabled
    But sadly, no change in the throughput.
    I do have a cause/effect question here, though: The graphs are of the RWIN value set in the server's ACKs to the client. With Windows clients, am I right in thinking that Linux isn't scaling this value beyond that low point because the client's limited CWIN
    prevents even that buffer from being filled? Could there be some other reason that Linux is artificially limiting the RWIN?
    Note: I've tried turning on ECN for the hell of it; but no change, there.
    Third fix attempts.
    No change following disabling heuristics and RWIN autotuning. Have updated the Intel network drivers to the latest (12.10.28.0) with software that exposes functionality tweaks via device manager tabs. The card is an 82579V Chipset on-board NIC - (I'm going to
    do some more testing from clients with realtek or other vendors)
    Focusing on the NIC for a moment, I've tried the following (Mostly just ruling out unlikely culprits):
    Increase receive buffers to 2k from 256 and transmit buffers to 2k from 512 (Both now at maximum) - No change
    Disabled all IP/TCP/UDP checksum offloading. - No change.
    Disabled Large Send Offload - Nada.
    Turned off IPv6, QoS scheduling - Nowt.
    Further investigation
    Trying to eliminate the Linux server side, I started up a Server 2012R2 instance and repeated the tests using iperf (Cygwin binary) and NTttcp.
    With iperf, I had to explicitly specify -w1m on both sides before the connection would scale beyond ~5Mbit/s. (Incidentally, the BDP of ~5Mbit/s at 91ms latency is almost precisely 64kb. Spot the limit...)
    The ntttcp binaries showed no such limitation. Using ntttcpr -m 1,0,1.2.3.5 on the server and ntttcp -s -m 1,0,1.2.3.5 -t 10 on the client, I can see much better throughput:
    Copyright Version 5.28
    Network activity progressing...
    Thread Time(s) Throughput(KB/s) Avg B / Compl
    ====== ======= ================ =============
    0 9.990 8155.355 65536.000
    ##### Totals: #####
    Bytes(MEG) realtime(s) Avg Frame Size Throughput(MB/s)
    ================ =========== ============== ================
    79.562500 10.001 1442.556 7.955
    Throughput(Buffers/s) Cycles/Byte Buffers
    ===================== =========== =============
    127.287 308.256 1273.000
    DPCs(count/s) Pkts(num/DPC) Intr(count/s) Pkts(num/intr)
    ============= ============= =============== ==============
    1868.713 0.785 9336.366 0.157
    Packets Sent Packets Received Retransmits Errors Avg. CPU %
    ============ ================ =========== ====== ==========
    57833 14664 0 0 9.476
    8MB/s puts it up at the levels I was getting with explicitly large windows in iperf.
    Oddly, though, 80MB in 1273 buffers = a 64kB buffer again. A further wireshark shows a good, variable RWIN coming back from the server (Scale factor 256) that the client seems to fulfil; so perhaps ntttcp is misreporting the send window.
    Further PCAP files have been provided here: https://www.dropbox.com/s/dtlvy1vi46x75it/iperf%2Bntttcp%2Bftp-pcaps-2014-07-03.zip
    Two more iperfs, both from Windows to the same Linux server as before (1.2.3.4): one with a 128k socket size and default 64k window (restricts to ~5Mbit/s again) and one with a 1MB send window and default 8kb socket size (scales higher).
    One ntttcp trace from the same Windows client to a Server 2012R2 EC2 instance (1.2.3.5). Here, the throughput scales well. Note: NTttcp does something odd on port 6001 before it opens the test connection. Not sure what's happening there.
    One FTP data trace, uploading 20MB of /dev/urandom to a near-identical Linux host (1.2.3.6) using Cygwin ncftp. Again the limit is there. The pattern is much the same using Windows Filezilla.
    Changing the iperf buffer length does make the expected difference to the time sequence graph (much more vertical sections), but the actual throughput is unchanged.
    So we have a final question through all of this: Where is this limitation creeping in? If we simply have user-space software not written to take advantage of Long Fat Networks, can anything be done in the OS to improve the situation?
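    Not an answer to the registry/GPO question, but for completeness: the application-level equivalent of iperf's -w flag in Java is to request large socket buffers before the connection is established, which is what allows a scaled window to be negotiated. A hedged sketch; the host, port and 1 MB figure are placeholders taken loosely from the tests above:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Hedged sketch: ask for ~1 MB socket buffers *before* connecting, roughly what
    // iperf -w1M does. The OS may clamp the value; the getters report what was granted.
    public final class BigWindowClient {
        public static void main(String[] args) throws IOException {
            Socket s = new Socket();                   // unconnected, so the options apply to the handshake
            s.setSendBufferSize(1 << 20);              // upload direction (client -> server)
            s.setReceiveBufferSize(1 << 20);           // influences the window advertised in the SYN
            s.connect(new InetSocketAddress("1.2.3.4", 5001));
            System.out.println("granted send buffer: " + s.getSendBufferSize() + " bytes");
            s.close();
        }
    }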

    Hi,
    Thanks for posting in Microsoft TechNet forums.
    I will try to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Kate Li
    TechNet Community Support

  • MSI K8N Neo Platinum (MS-7030) Performance on 3000+ (socket 754)

    I want to thank many guys for their help on various posts on these forums. I found out many useful tips on my newly built system but seems I have missed something somewhere...
    The problem is that my system seems to perform a bit low. On the following I will provide my system configuration as it is generated by 'cpuz' program:
    CPU-Z Report
    CPU-Z version 1.28.6.
    CPU(s)   
    Number of CPUs: 1
    Name                               : AMD Athlon 64 3000+
    Code Name                       : NewCastle
    Specification                     : AMD Athlon(tm) 64 Processor 3000+
    Family / Model / Stepping   : F C 0
    Extended Family / Model     : F C
    Brand ID                           : 4
    Package                           : Socket 754
    Core Stepping                   : DH7-CG
    Technology                      : 0.13 µ
    Supported Instructions Sets: MMX, Extended MMX, 3DNow!, Extended 3DNow!, SSE, SSE2, x86-64
    CPU Clock Speed               : 2009.8 MHz
    Clock multiplier                  : x 10.0
    HTT Bus Frequency           : 201.0 MHz
    L1 Data Cache                  : 64 KBytes, 2-way set associative, 64 Bytes line size
    L1 Instruction Cache          : 64 KBytes, 2-way set associative, 64 Bytes line size
    L2 Cache                         : 512 KBytes, 16-way set associative, 64 Bytes line size
    L2 Speed                         : 2009.8 MHz (Full)
    L2 Location                      : On Chip
    L2 Data Prefetch Logic      : yes
    L2 Bus Width                   : 128 bits
    Mainboard and chipset   
    Motherboard manufacturer: -- (Doesn't say MSI) 
    Motherboard model          : MS-7030, 
    BIOS vendor                    : Phoenix Technologies, LTD
    BIOS revision                   : 6.00 PG
    BIOS release date            : 08/26/2004
    Chipset                           : nVidia nForce3 250 rev. A1
    Southbridge                    : nVidia nForce3 MCP rev. A2
    Sensor chip                     : Winbond W83627THF
    Graphic Interface AGP
    AGP Status                     : enabled, rev. 3.0
    AGP Data Transfer Rate    : 8x
    AGP Max Rate                 : -1x
    AGP Side Band Addressing: supported, enabled
    AGP Aperture Size           : 128 MBytes
    Memory   
    DRAM Type                : DDR-SDRAM
    DRAM Size                  : 512 MBytes
    DRAM Frequency         : 201.0 MHz
    FSB:DRAM                   : CPU/10
    CAS# Latency             : 2.5 clocks
    RAS# to CAS#            : 3 clocks
    RAS# Precharge          : 3 clocks
    Cycle Time (TRAS)      : 8 clocks
    Bank Cycle Time (TRC) : 11 clocks
    DRAM Idle Timer          : 16 clocks
    # of memory modules  : 1
    Module 0                    : DDR-SDRAM PC3200 - 512 MBytes
    Software   
    Windows version : Microsoft Windows XP Professional Service Pack 2 (Build 2600) 
    DirectX version    : 9.0c
    Now that the cpuz report is complete, let me inform you about my HD and Graphics Card:
    Hard Drive   : Maxtor DiamondMax 9 Plus 80 GB @ 133MHz
    Graphics Card: XFX NVIDIA GeForce 5200 with 256 MB of memory
    So here is my first question, which is relevant to all the above:
    ? What seems to be the problem such that my CPU clock is limited to 2GHz? I've tried many different things in the BIOS but unfortunately my clock multiplier cannot be more than 10x. Is this a problem with my mainboard? Should I try to flash the BIOS?
    Some more thoughts:
    If I try to increase the FSB Frequency, then I get the expected speedups on my programs as long as the change is small. For example, if I set the FSB Frequency to 210 MHz I actually encounter a 5% (about) speedup in my performance. The problem is that at execution time, this FSB frequency changes (actually increases), and that can be observed from CoreCenter. So as a result, I can not change the frequency that much, since the system hangs if it exceeds 240 MHz FSB at execution time (I have reached 239 MHz at most without stability problems at execution time) - this value was obtained (if I remember correctly) by setting the FSB to 215 MHz, when my next step to 220 MHz caused a system crash soon afterwards...
    Can you please help me so that I can have a 3000+ performance?
    I know for sure that this is a problematic system setup, since my pc behaves more or less on execution times like one of a friend of mine which uses a plain AMD Athlon XP @ 2400 MHz. Needless to say, If I try to make a comparison of the current setup with a Pentium Processor, my system is equivalent to that of a Pentium @ 1,4 GHz (about) which is really irritating.
    Thank you in advance for all your help on that one.
    Test Beds: I use some programs I've written in order to have a comparative analysis of my CPU. Hopefully, when this problem is fixed I will provide you with some useful feedback. This is the least I can do for your help.
    Did you think that was it all about? Wrong! I have one more question:
    ? I have a Microsoft Intelli Wheel Mouse which doesn't power off whenever I shut down my system. Why does this happen? Is there a way of making that PS/2 mouse power off whenever my system shuts down?
    Thank you in advance for your help on all the above,
     - Dimis -

    Quote from: Supershanks on 16-May-05, 22:50:21
    the memory settings in cpu-z looked like the timings used by corsair memory modules, which is why i suggested the memory voltage of 2.7v, as a lot of people have problems from running memory @ auto voltage (2.5v). It's a common oversight.
    Very impressive must admit i pulled a homer, eyes started glazing over towards the end of the theory   
    Have tried running your hurdles cubic perm17.in in a command prompt window but it doesn't work for me sorry.
    you might like to try Super Pi which performs a similar function to your program.  Unfortunately the link seems to be broken at the minute Super Pi, Post Your Times
    You might find it here Kanada Laboratory home page - See FTP Link If you can get it run it & hopefully you can then find some comparisons
    luck
    I want to thank you all guys for your feedback. Especially you Shanks for your links. At least these showed me that things were not as bad as I initially thought of on my machine! 
    So I had more guidance on the investigation of my initial problem (hurdles). I 've updated the page which now consists of new information. Perhaps you would all like to have a look again and observe something reaaaaallly strange. I would like to hear your explanations since I am only speculating ...
    For your convenience I provide you the link once again:
    AMD Processors - Comperative Analysis
    Finally, I would like to remind you that if anyone has a solution to the problem I have with my mouse (refer to my initial post), I would be happy if you could share the solution with me! 
    Regards,
     - Dimis -

  • SocketException during reads - JVM_recv in socket input stream read

    I am getting a SocketException when a Java applet talks to our
    WebLogic 7.0 server. The catch is that it only occurs at one site
    (that has very high T1 utilization, although latency is only ~60 ms)
    Our setup is such that the calls hit an Alteon load balancer, which
    then sends the request out to one of 4 IIS clustered servers, where it
    then is sent to one of 2 WL clustered servers. I figured latency
    would be the cause, but on IIS and on WL, the timeouts are set to
    several hundred seconds, so I am not quite seeing where the connection
    is being reset. To be honest, I really don't know if it is WL that is
    killing the connection, as nothing abnormal shows up in the WL log. I
    have seen similar problems in this group, though, although the stack
    traces never follow the same path mine does. I do have the following
    call stack from the Java plug-in console, though. Any ideas would be
    greatly appreciated.
    java.net.SocketException: Connection reset by peer: JVM_recv in socket input stream read
         at java.net.SocketInputStream.socketRead0(Native Method)
         at java.net.SocketInputStream.read(Unknown Source)
         at java.io.BufferedInputStream.fill(Unknown Source)
         at java.io.BufferedInputStream.read1(Unknown Source)
         at java.io.BufferedInputStream.read(Unknown Source)
         at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
         at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
         at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
         at sun.plugin.net.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
         at sun.net.www.protocol.http.HttpURLConnection.getHeaderFields(Unknown Source)
         at sun.plugin.net.protocol.http.HttpURLConnection.checkCookieHeader(Unknown Source)
         at sun.plugin.net.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
         at org.xxxx.abstracts.Controller.sendRequest(Controller.java:39)
         at org.xxxx.data.DataMediator.getDataNode(DataMediator.java:46)
         at sun.applet.AppletPanel.run(Unknown Source)
         at java.lang.Thread.run(Unknown Source)
    Also, here is my code, although I can't see anything on the client
    side that seems off:
    public Object sendRequest(Object request, URL receiver) throws Exception {
        Object response = null;
        URLConnection con = null;
        ObjectOutputStream out = null;
        ObjectInputStream in = null;
        try {
            con = receiver.openConnection();
            con.setDoInput(true);
            con.setDoOutput(true);
            con.setUseCaches(false);
            con.setDefaultUseCaches(false);
            con.setAllowUserInteraction(false);
            out = new ObjectOutputStream(con.getOutputStream());
            out.writeObject(request);
            out.flush();
            out.close();
            in = new ObjectInputStream(con.getInputStream());
            response = in.readObject();
            in.close();
        } catch (ClassCastException e) {
            if (out != null) {
                out.close();
            }
            if (in != null) {
                in.close();
            }
        } catch (Exception e) {
            if (out != null) {
                out.close();
            }
            if (in != null) {
                in.close();
            }
            throw e;
        }
        return response;
    }

    There is a known bug on earlier 1.3.1 releases with sockets on Windows 2k
    and XP. I don't remember all the details.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Keith Patrick" <[email protected]> wrote in message
    news:[email protected]...
    I'm getting the exception on the client, which is an XP machine, while
    the server is Win2K. I can't recall which, but either the applet or
    the server runs 1.3x while the other runs 1.4. I discounted that
    factor, though, as the problem only occurs on one site, which on all
    others it works fine.
    "Cameron Purdy" <[email protected]> wrote in message
    news:<[email protected]>...
    Exception is in the applet or on the server?
    Would one of those by any chance be running on W2K with JDK 131_01 or older?
    >>
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Keith Patrick" <[email protected]> wrote in message
    news:[email protected]...
    I am getting a SocketException when a Java applet talks to our
    WebLogic 7.0 server. The catch is that it only occurs at one site
    (that has very high T1 utilization, although latency is only ~60 ms)
    Our setup is such that the calls hit an Alteon load balancer, which
    then sends the request out to one of 4 IIS clustered servers, where it
    then is sent to one of 2 WL clustered servers. I figured latency
    would be the cause, but on IIS and on WL, the timeouts are set to
    several hundred seconds, so I am not quite seeing where the connection
    is being reset. To be honest, I really don't know if it is WL that is
    killing the connection, as nothing abnormal shows up in the WL log. I
    have seen similar problems in this group, though, although the stack
    traces never follow the same path mine does. I do have the following
    call stack from the Java plug-in console, though. Any ideas would be
    greatly appreciated.
    java.net.SocketException: Connection reset by peer: JVM_recv in socket
    input stream read
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(Unknown Source)
    at java.io.BufferedInputStream.fill(Unknown Source)
    at java.io.BufferedInputStream.read1(Unknown Source)
    at java.io.BufferedInputStream.read(Unknown Source)
    at sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
    at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
    at sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown
    Source)
    at
    sun.plugin.net.protocol.http.HttpURLConnection.getInputStream(Unknown
    Source)
    at sun.net.www.protocol.http.HttpURLConnection.getHeaderFields(Unknown
    Source)
    atsun.plugin.net.protocol.http.HttpURLConnection.checkCookieHeader(Unknown
    Source)
    atsun.plugin.net.protocol.http.HttpURLConnection.getInputStream(Unknown
    Source)
    at org.xxxx.abstracts.Controller.sendRequest(Controller.java:39)
    at org.xxxx.data.DataMediator.getDataNode(DataMediator.java:46)
    at sun.applet.AppletPanel.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    Also, here is my code, although I can't see anything on the client
    side that seems off:
    public Object sendRequest( Object request, URL receiver ) throws
    Exception{
    Object response = null;
    URLConnection con = null;
    ObjectOutputStream out = null;
    ObjectInputStream in = null;
    try {
    con = receiver.openConnection();
    con.setDoInput(true);
    con.setDoOutput(true);
    con.setUseCaches(false);
    con.setDefaultUseCaches(false);
    con.setAllowUserInteraction(false);
    out = new ObjectOutputStream(con.getOutputStream());
    out.writeObject(request);
    out.flush();
    out.close();
    in = new ObjectInputStream(con.getInputStream());
    response = in.readObject();
    in.close();
    } catch (ClassCastException e) {
    if( out != null ){
    out.close();
    if( in != null ){
    in.close();
    } catch (Exception e) {
    if( out != null ){
    out.close();
    if( in != null ){
    in.close();
    throw e;
    return response;

  • Bytes read from Socket buffer

    Hi,
    I'm writing a proxy application that relays data from a server to a client that requested the data. One thing I have observed is that when I have read about 32136 bytes of data, my reads from my buffered input stream fetch only one byte at a time. Not only is my read function now reading only one byte at a time, but it takes over a minute to fetch 83 bytes of data. And the delay gets worse the longer I run.
    In an effort to stress test my app, I deliberately set the send and receive buffers of the client socket to a small value (128 bytes). I can understand that filling up the buffer could slow down the reading, but what is odd is that long after the client has digested all the data, the Input stream reads never seem to go back to their original speed.
    Is this an OS thing? A JVM thing? I'm running JDK 1.3.1 on an Intel box with Red Hat Linux 6.2. It seems to me that there must be some way of getting the socket buffer cleared on the receive side so that I can get something that resembles a normal data flow. Oh, and I tried toggling the TCP no delay flag to no avail.
    Any suggestions?
    -hugh

    However, even with a large socket buffer, there is always the chance that the receiving app will run slowly due to bad programming, network latency, etc (we're not the authors of the clients who will connect), meaning even with the maximum buffer size, we could fill the socket buffer. I suppose, though, if it is the sender app that is getting fooled, then it's a matter for the authors of the sending application.
    True, but the same thing will happen if the client connects directly to the server without your proxy, so why worry? General tips for writing proxies: use as large a buffer as possible, run separate reading and writing threads in your proxy, and propagate an EOF by doing shutdownOutput in the opposite direction. When you have done this in both directions you can close the socket, not before.
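    A minimal sketch of the relay pattern described above - one copying thread per direction, with EOF propagated via shutdownOutput (class name and buffer size are illustrative):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    // Hedged sketch of one direction of a proxy relay: copy bytes until EOF, then
    // forward the EOF with shutdownOutput() on the destination. A real proxy runs one
    // of these per direction and closes the sockets only after both directions finish.
    public final class RelayThread extends Thread {
        private final Socket from;
        private final Socket to;

        RelayThread(Socket from, Socket to) {
            this.from = from;
            this.to = to;
        }

        public void run() {
            byte[] buf = new byte[64 * 1024];       // as large a buffer as is reasonable
            try {
                InputStream in = from.getInputStream();
                OutputStream out = to.getOutputStream();
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                to.shutdownOutput();                // propagate the EOF without closing the socket
            } catch (IOException e) {
                // a real proxy would log and tear down both directions here
            }
        }
    }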

  • DRAM Clock Settings and SDram Cas latency

     : 8o Just got my KT3 ULTRA 2 Socket A Motherboard. When I turn on the computer, in the POST test it says:
    DRAM Clock = 266MHz
    SDRAM Cas Latency = 2
    I checked the cmos and the latency is set at 2 , but what about the DRAM Clock? Please help for I am a newbie. Thanks

    If your CPU is Athlon XP 2700+/2800+, the FSB is 166Mhz, the DRAM frequency is HCLK (166Mhz).
    If your CPU is Athlon XP 2600+ and below, FSB = 133Mhz, then HCLK = 133Mhz, HCLK+33 = 166Mhz
    If your CPU is older Athlon or Duron, FSB = 100Mhz, then HCLK = 100Mhz, HCLK+33 = 133Mhz, HCLK+66 = 166Mhz.

  • TCP/IP Communication Latency too big

    I have a simple application sending 500-byte messages to a server and receiving a response of a similar size (for testing purposes we do not do any manipulation of the message; it is a simple receive and reply). Both apps are Java, but I somehow see a round trip of 1.7 msec, which seems to be quite high for what it is doing.
    One machine is Linux Red Hat ver. 4, and the other is a Sun Opteron box with Solaris 10. JDK 1.6 is used.
    Does anybody have any way to improve the latency at all?
    By the way, the messages are already sent and received with 'NO TCP DELAY' set on.
    Thank you very much for your advice.
    H

    Test programs below; around 0.7 ms for one way is what I get too. What you have there is your network speed and your computers' ability to process interrupts.
    You may want to look into ways to bunch up several requests into one network packet. (Yeah, TCP/IP is not a packet protocol, but if you need to get the last bit of performance you may need to consider stuff like packet issues and NIC interrupts.) Do another test first: stream a few megabytes one way in large write sizes; that's your baseline upper limit for what you can transfer by bunching up requests and replies. In olden days this test would be done using FTP, but I guess FTP servers are getting less common for security reasons.
    import java.net.*;
    import java.io.*;

    public class TimePingPongServer {
        static final int SIZE = 500;

        public static void main(String args[]) throws Exception {
            ServerSocket serverSocket = new ServerSocket(6666);
            while (true) {
                System.out.println("waiting for connections");
                Socket socket = serverSocket.accept();
                System.out.println("got connection");
                OutputStream out = socket.getOutputStream();
                InputStream in = socket.getInputStream();
                int size = TimePingPongServer.SIZE;
                byte buf[] = new byte[size];
                while (true) {
                    if (!readFully(in, buf, 0, TimePingPongServer.SIZE)) {
                        System.out.println("got EOF");
                        break;
                    }
                    out.write(buf, 0, size);
                }
                socket.close();
            }
        }

        public static boolean readFully(InputStream in, byte buf[], int pos, int len)
                throws IOException {
            int got_total = 0;
            while (got_total < len) {
                int got = in.read(buf, pos + got_total, len - got_total);
                if (got == -1) {
                    if (got_total == 0)
                        return false;
                    throw new EOFException("readFully: end of file; expected " +
                                           len + " bytes, got only " + got_total);
                }
                got_total += got;
            }
            return true;
        }
    }

    import java.net.*;
    import java.io.*;

    public class TimePingPongClient {
        public static void main(String args[]) throws Exception {
            System.out.println("connecting");
            Socket socket = new Socket(args[0], 6666);
            socket.setTcpNoDelay(true);
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();
            System.out.println("connected");
            int packets = 0;
            int modulo = 10000;
            long start = System.currentTimeMillis();
            byte buf[] = new byte[TimePingPongServer.SIZE];
            while (true) {
                out.write(buf);
                if (!TimePingPongServer.readFully(in, buf, 0, TimePingPongServer.SIZE)) {
                    System.out.println("got EOF");
                    break;
                }
                if (++packets % modulo == 0) {
                    long end = System.currentTimeMillis();
                    long time = end - start;
                    System.out.println(modulo + " pingpongs in " + time + " ms; " +
                                       (2.0 * time / modulo) + " ms/message");
                    if (packets % (modulo * 20) == 0) {
                        socket.close();
                        break;
                    }
                    start = end;
                }
            }
        }
    }
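    To illustrate the "bunch up several requests into one network packet" suggestion above, a hedged sketch (the class name, batch threshold and framing are illustrative only):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Hedged sketch: queue small messages and hand the kernel a single write() so
    // several of them can travel in one TCP segment.
    public final class MessageBatcher {
        private final ByteArrayOutputStream pending = new ByteArrayOutputStream();
        private final OutputStream out;
        private final int flushThreshold;

        MessageBatcher(OutputStream out, int flushThreshold) {
            this.out = out;
            this.flushThreshold = flushThreshold;
        }

        void queue(byte[] message) throws IOException {
            pending.write(message);                 // append to the in-memory batch
            if (pending.size() >= flushThreshold) {
                flush();
            }
        }

        void flush() throws IOException {
            if (pending.size() > 0) {
                out.write(pending.toByteArray());   // one write, ideally one segment
                out.flush();
                pending.reset();
            }
        }
    }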

  • Using EJB vs Socket : expensive satellite line

    We are migrating a bank system from Clipper to Java (J2EE). One of the requisites is to save bandwidth over the satellite line. Some developers came up with the idea of making an object in the bank store that talks with the central server through sockets, and in the central server a thread is created to handle that call and call the session bean. Does anyone know how big (in bytes) an EJB call is (not the JNDI lookup nor the home finding, just a method call to a session bean)? How many bytes are added to the parameter data? Could it be a good idea to change from EJB to sockets over the satellite? Another point is that the satellite has a 1.22 sec delay. Should messages be used? And I ask the same question for messages: how big would it be (in bytes) to send a message using JMS?

    Greetings,
    > We are migrating a bank system from Clipper to Java (J2EE). One of the requirements is to save bandwidth over the satellite line. Some developers came up with an idea to make an object in the bank store that talks with the central server through sockets, and in the central server a thread is created to handle that call and call the session bean.

    Ultimately, this developer is just "reinventing the wheel". All synchronous network technologies communicate over sockets. This is the job of the object stubs through which EJBs (as well as other component models: CORBA, DCE, DCOM, etc.) communicate. In essence, this developer is suggesting communication client and server "objects" to take the place of a communication "stub" and "skeleton" pair. Only, in this case it would really only add to the complexity of the issue by adding an additional communication framework (aka "point of failure") - the over-satellite client/server pair - in front of the framework that must still be used - the EJB stub/skeleton pair (additionally, EJB skeletons are optional and may not even be used by your particular vendor). Furthermore, this solution will also require the creation of a new protocol to be communicated over the satellite (an additional maintenance piece; aka "added complexity"). This solution may save some up-front bandwidth (largely depending on the implemented protocol), but the increased latency and maintenance complexity in your application may simply negate it (increased possibility for transmission repeats, session failures, protocol errors, etc.).

    > Anyone know how big (in bytes) an EJB call is (not the JNDI lookup nor home finding), just a method call to a session bean? How many bytes are added to the parameter data?

    An EJB sits atop a communication layer which may be implemented using a vendor's protocol of choice. That may be JRMP (RMI), IIOP (CORBA), or proprietary (e.g. WebLogic's 'T3'). The answer to this question, therefore, depends upon the (packet size of) the actual protocol in use.

    > Could it be a good idea to change from EJB to sockets through the satellite?

    Depends... the added complexity of this approach will add to the TCO of your application in other ways. The real question is "which is more - maintenance cost, or bandwidth cost?" Some things to keep in mind:
    * An effective socket implementation requires: invention of a stable and robust protocol; proper state management; a scalable threading model; and let's not forget - its own full lifecycle of DDTM: design, development, testing, and maintenance.
    * Current component models - EJB most certainly included :) - already have, and have undergone, the previous.

    > Another point is that the satellite has a 1.22 sec delay.

    Adjust your server's timeout parameters.

    > Should messages be used instead?

    Depends on whether your application needs synchronous or asynchronous communication, and if asynchronous, 1-way or 2-way. If synchronous (response must follow request), then "socket-based" communication is required - whether it's through EJB or direct. If asynchronous 1-way (response not required), then JMS or low-level datagram-based communication may be used; both may also be used for 2-way (anytime response), but both, of course, also have their own sets of pros and cons depending on your application, following many of the same arguments as above...

    > And the same question for messages: how big would it be (in bytes) to send a message using JMS?

    As with EJB, this depends on the protocol implementor.
    Regards,
    Tony "Vee Schade" Cook

  • ADSL Socket V1.0 and Filters

    I've been experiencing speed issues / cut-outs for a few weeks. This was temporarily fixed after a "line fault" was diagnosed. After a week or so, I'm now getting an intermittent service again and was told by the online help that I needed to fit a filter.
    When my ADSL main socket was fitted some years ago, the engineer hardwired the only telephone extension in the house to the back of the socket, with the Home Hub plugged into the left socket on the faceplate. I've read in the forums that if a filter is fitted, I risk "double filtering" and messing up my phone operation. Who's correct, the engineer or the online assistant?
    (A friend of mine in the village is also experiencing similar issues; could be coincidence).

    I'm still experiencing intermittent problems. I gave up with online chat and spoke to Tech Help, who went through the same old spiel about my getting 1+ Mb, so they're fulfilling their contract, and the line test showing no issues over the last 14 days.
    What is actually happening is that my Sonos cuts out and won't even play radio, and web pages are slow to download. So, I unplugged all ancillaries from the hub (and the phone, just in case), hard-wired the laptop and repeatedly ran the BT Wholesale test, which gave me a download speed of between 2.36 and 3.38 over 11 tries. U/L was 0.0 to 0.14 but mostly 0.0-something. Ping was consistently 0.0. I ran the iPlayer speed checker, which gave me a download of 0.73 Mb and streaming speeds varying from 0 to 0.68 Mb. The BT.com tester wouldn't even run, suggesting the line was too slow or there were web page issues. I did this using Google Chrome, as the BT guy suggested it was something to do with my browser, though he made no comment on how my Sonos, desktop, smart TV, laptop and iPod could all share the same issue. I normally use IE.
    Today is all fine and dandy. Just ran tests:
    BT Wholesale D/L 3.39, U/L 0.38, however Ping latency 47.13 (is this a clue?)
    iplayer D/L 3.3, streaming consistently 3.3
    "BT Help" D/L 3.33, U/L .36
    I've a feeling that I especially experience issues if it's raining and/or windy, but this could just be coincidental.
    Sorry for the length of this post, but I'm getting so frustrated that BT keep fobbing me off, and I'd really like to know if it is actually me and not them!

  • BT turning speed down and latency up ??

    BT turning speed down and latency up ??
    Hi All & Happy new year
    Wanted to see if anyone else has seen the same issues as I have.
    Right, 4 ISPs in the last 18 months:
    1. Zen Internet - perfect but expensive; 7 Mb download speed.
    2. Newcall (resold Tiscali, i.e. TalkTalk) - awful, ever-degrading service, speed and latency, plus "locking/freezing" where I had to reboot the router to reinitialise the connection to the exchange.
    3. Idnet - perfect, 7 Mb again; the issues disappeared the moment I switched.
    4. BT - same as TalkTalk, the same old nightmare, and BT are not helping, denying any issues (speed down to 3 Mb, latency very high, rebooting the router 3 to 5 times a day).
    In the above sequence nothing changed in terms of hardware or line. I did testing with the faceplate off and 3 different routers - all with the same results.
    Changed to BT to save money, but it's been a bad move.
    I can only conclude that TalkTalk and BT actually restrict access and speed. If a third party can deliver superior service on all-BT kit, something is seriously off - it does not seem likely to be accidental.
    Any thoughts, guys?
    (Currently trying to get the contract cancelled as the service is not fit for purpose: click and wait 2 minutes, or click and nothing happens; reboot the router and it's OK again for a little while, but speed is way down.)
    How do I get out of the contract and get back to a competent ISP ?
    Any thoughts or contacts gratefully received.
    (Apologies if broke any rules - first post).

    Sorry to hear that you're having issues with your BT Retail services.
    You will either be on the ADSLMax (up to 8 Mbps) service or, if your exchange has been BTw 21CN upgraded, you may be on the ADSL2/2+ (up to 12/20 Mbps) service, depending on your line quality and length.
    However, these "up to" speeds may not be the speeds you will get, as ADSL broadband connections are very dependent on the distance from your property to the exchange and the quality of your line. The further away you are from your local exchange, the slower your broadband speed will be.
    To enable the community to help you please see the advice below:
    Please see Keith's help guide here: Helping forum members to help you, it will go through some checks that are needed for us to help you.
    A summary of the checks are:
    1a) Is your router/Home Hub connected via a BT NTE5 master socket, an ADSL filtered master socket or an extension socket? Please bear in mind that extension cables and extension sockets can reduce the broadband's performance. If you have an old LJU master socket then please say so.
    1b) Have you tried the test socket, if you have one? Bear in mind that lots of manual disconnections/turning off the router/Home Hub will cause you more issues, as the DLM will either reduce your sync rate, increase your noise margin or put you into a banded profile.
    2) Can you please run a BT speed test (including IP Profile) at http://speedtest.btwholesale.com (not the beta version)? [Best done with a wired, Ethernet connection.] After the Quick Test is done you need to click "Further Diagnostics" to get the IP Profile.
    3) Is there any noise on your line? Dial 17070, option 2 (the quiet line test) from a landline phone. It should be silent, though a slight hum is normal on a cordless phone.
    4) Please post your ADSL line statistics.
    ADSL Line Statistic Help:
     If you have a BT Home Hub like the one below...
     Then:
     1) Go to http://192.168.1.254 or http://bthomehub.home
     2) click Settings
     3) Click Advanced Settings
     4) Click Broadband
     5) Click Connection (sometimes called ADSL)
    The direct Address is http://bthomehub.home/index.cgi?active_page=9116 (for bthomehub3.A firmware ending in 1.3)
    or http://bthomehub.home/index.cgi?active_page=9118 (for bthomehub3.A firmware ending in 94.1.11)
    You will need to copy and paste all the ADSL line statistics (including HEC, CRC and FEC errors). You may need to click "More Details".
     If you have a HomeHub 4 then the majority of the ADSL stats shown in the previous Hubs will not be there.
    For HH4 users: go to the hub manager, select Troubleshooting, then Logs, and look for 2 entries together which will show the connection speed and noise margin from when your HH4 last synced with the exchange.
    There are more useful links on Keith's website here: If you have an ADSL connection, please select this link
    Don't have a BT Homehub/Voyager?
    • http://192.168.0.1 for a Netgear router - look for ADSL statistics, with information like noise margin, line attenuation and connection speed
    • http://192.168.2.1 for a Belkin router - look for ADSL statistics, with information like noise margin, line attenuation and data rate
    I'm no expert, so please correct me if I'm wrong

  • [Solved] Darkplaces video latency

    In DarkPlaces, I have a strange problem where the video is delayed. I can hear when I shoot, move around the menu, etc. immediately, but there is a good part of a second before the video catches up. Apart from that, the video runs fine with a reasonable framerate, but due to the latency it is unplayable.
    Here is the output from DarkPlaces:
    Game is DarkPlaces-Quake using base gamedir id1
    DarkPlaces-Quake Linux 17:05:19 May 27 2013 - release
    Playing registered version.
    Skeletal animation uses SSE code path
    DPSOFTRAST available (SSE2 instructions detected)
    Failed to init SDL joystick subsystem:
    execing quake.rc
    execing default.cfg
    execing config.cfg
    couldn't exec autoexec.cfg
    Client using an automatically assigned port
    Client opened a socket on address 0.0.0.0:0
    Playing demo demo1.dem.
    Linked against SDL version 1.2.15
    Using SDL library version 1.2.15
    GL_VENDOR: ATI Technologies Inc.
    GL_RENDERER: AMD Radeon(TM) HD 6480G
    GL_VERSION: 4.2.12217 Compatibility Profile Context 12.10.17
    vid.support.arb_multisample 1
    vid.support.gl20shaders 1
    NOTE: requested 1x AA, got 0x AA
    Video Mode: fullscreen 1366x768x32x0.00hz
    S_Startup: initializing sound output format: 48000Hz, 16 bit, 2 channels...
    Wanted audio Specification:
    Channels : 2
    Format : 0x8010
    Frequency : 48000
    Samples : 2048
    Obtained audio specification:
    Channels : 2
    Format : 0x8010
    Frequency : 48000
    Samples : 2048
    Sound format: 48000Hz, 2 channels, 16 bits per sample
    CDAudio_Init: No CD in player.
    Can't get initial CD volume
    CD Audio Initialized
    Host_Mingled: time stepped forward (went from 0.000000 to 1369706662.873157, difference 1369706662.873157)
    Last edited by BennyBolton (2013-05-28 06:34:19)

    BennyBolton wrote: 1366x768x32x0.00hz
    0 hz is wrong, should probably be 60.
