TCP stack tuning

Recently I've been playing with the TCP stack in the kernel and found an option that seems to give the same low-latency response as on Windows (especially if you compare Firefox on Windows and on Linux):
sysctl -w net.ipv4.tcp_low_latency=1
Now TCP connections seem to be established a bit faster.
Do you also have some nice tips on tuning the TCP stack?

I add the following tweaks to my Linux boxes:
echo 2097136 > /proc/sys/net/core/rmem_max
echo 2097136 > /proc/sys/net/core/wmem_max
echo 524284 > /proc/sys/net/core/rmem_default
echo 524284 > /proc/sys/net/core/wmem_default
echo "4096 87380 2097136" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 2097136" > /proc/sys/net/ipv4/tcp_wmem
echo 1 > /proc/sys/net/ipv4/tcp_low_latency
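A quick way to check whether buffer settings like these actually took effect is to request a large buffer from an application and read back what the kernel granted. A minimal Java sketch (the class and method names are mine; note that Linux clamps the request to rmem_max and typically reports back a doubled value to account for bookkeeping overhead):

```java
import java.net.Socket;
import java.net.SocketException;

public class BufferCheck {
    // Ask for a large receive buffer and report what the kernel actually
    // granted. setReceiveBufferSize() is a hint, not a guarantee: the
    // kernel clamps it to net.core.rmem_max.
    static int effectiveReceiveBuffer(int requested) throws SocketException {
        Socket s = new Socket();            // an unconnected socket is enough
        s.setReceiveBufferSize(requested);  // request (may be clamped)
        return s.getReceiveBufferSize();    // what the stack really allocated
    }

    public static void main(String[] args) throws Exception {
        System.out.println("granted: " + effectiveReceiveBuffer(2 * 1024 * 1024));
    }
}
```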
Some information about TCP/IP tuning can be found here:
http://www-didc.lbl.gov/TCP-tuning/linux.html
and
http://ipsysctl-tutorial.frozentux.net/ … ables.html

Similar Messages

  • TCP stack and Frequent GC Calls

    Hi All,
    I have a client/socket application which is giving me a tricky problem.
    Suddenly I find that the load average of my machine increases without any change in client/server activity. On log analysis I find frequent GC calls:
    390319.956: [GC 390319.956: [ParNew: 147583K->225K(172032K), 0.0019560 secs] 239475K->92116K(499712K), 0.0020500 secs]
    390322.215: [GC 390322.215: [ParNew: 147681K->190K(172032K), 0.0018750 secs] 239572K->92083K(499712K), 0.0019590 secs]
    Even when almost all clients are disconnected from my server, I still observe a high load average and frequent GC calls.
    I am confused how this can happen when there is not much activity on the server.
    Is it possible that the VM's TCP stack is filled because of the inability of a few clients to read all responses? Can a full TCP stack send the JVM amok, resulting in continuous GC calls?
    Also, what precautions should I take in a client/server architecture so that this does not happen again, if TCP stack choking is the cause of the problem?
    Is there any mechanism to monitor the TCP stack of the system?
    Any advice would be helpful to me.
    Please shed some light on this issue.
    With regards,
    Kumar Vinit

    I would think Java's TCP/IP support is a relatively shallow wrapper around the underlying OS-provided one. It should not be an issue.
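The wrapper is indeed shallow, but the questioner's suspicion about slow readers is worth understanding: when a client stops reading, the server's socket send buffer fills inside the TCP stack and further writes stall there rather than consuming heap, so by itself this would not trigger GC. A loopback sketch of that backpressure using non-blocking NIO channels (class and method names are mine):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class Backpressure {
    // Returns true once a non-blocking write returns 0, i.e. the kernel
    // socket buffers filled up because the peer never read anything.
    static boolean writerStalls() throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            try (SocketChannel client = SocketChannel.open(server.getLocalAddress());
                 SocketChannel peer = server.accept()) {   // peer never reads
                client.configureBlocking(false);
                ByteBuffer chunk = ByteBuffer.allocate(64 * 1024);
                for (int i = 0; i < 10_000; i++) {
                    chunk.clear();
                    if (client.write(chunk) == 0) {
                        return true;                       // buffers are full
                    }
                }
                return false;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("writer stalled: " + writerStalls());
    }
}
```

With blocking sockets the same situation simply makes write() block, which ties up a thread per stalled client but does not grow the heap.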

  • ASIP dumps TCP stack nightly

    We have been running ASIP 6.3 since 2001, migrating it through several servers. For the past 8 months, it has resided on an early-edition G4 (ID 106, ROM 7.5.1) with 1.5GB RAM and two ATA drives supporting a small school (100 user accounts, rarely > 30 logged on at once). The server provides only AFS services, using both TCP/IP and AppleTalk. The underlying system is OS 9.1.1 (we found it would not install on 9.2). Until 3 weeks ago we'd never had a real problem we hadn't caused ourselves.
    For the last 3 weeks, every morning when we arrive at school the server is hung, requiring a hard restart. There is usually (but not always) an error message: "TCP stack was dumped." We tried turning off TCP/IP service from Server Manager, but that didn't change the problem. This ONLY happens overnight -- otherwise, all services run smoothly. The machine is on a UPS which tests OK, and is set to restart if it loses power.
    We can't tell if this is an ASIP error or an OS 9.1.1 error. We've never seen this error before (between us we have over 2 decades of Appleshare server management experience behind us).
    Ideas?

    Look at this site for getting 9.2.2 and 6.3.3 up on your Server:
    http://os9forever.com
    Click the link on the left labeled ASIP patch. Print out and read the documents. I tried to install the patch, and could not quite get it to work, but in reading the discussion and examining the files in question, I realized a drag-and-drop install of the crucial parts was all that was needed. It worked fine.
    One possibility on dumping the TCP/IP stack may be lurking in the TCP/IP Control Panel. If the "load only when needed" box is checked, when things get very quiet, it may just unload it.
    I am not certain whether that parameter is one of the several AppleTalk/TCP parameters stored in Parameter RAM, but a PRAM reset at a minimum, and replacing the battery at the extreme, may be in order anyway.

  • Corrupted TCP stack?

    Here's the problem.
    I can upload and download email just fine. I have a DSL modem, no router, just the one iMac connected, and I am using Mac OS X 10.4.
    When I try to connect to a web page, I get a message that I am "not connected to the internet".
    I can type the numerical address of Google, for example, into the address line and I connect to Google just fine; type in "Google" and I get the above message.
    My ISP gave me a workaround of typing in the numerical addresses of their DNS servers in the network configuration screen (TCP/IP configuration). However with the DHCP connection I should not have to be providing these. In any case with these addresses entered in the "DNS servers" box, things work fine.
    Their comment was that my TCP stack was probably corrupted. Not being familiar with Macs, the suggestion was that there are probably some preferences that should be trashed, (at least that would probably be the System 9 solution). They view the manual entry of the servers (listed as "optional" on the setup screen) as a temporary fix for a problem in my software.
    Before I go further I thought I'd check here for the voice of expertise and sanity.
    Please don't tell me that I need to reload the system software!
    Thanks for your insights
    Ken

    Where you suggested is where I put the DNS server address that the tech person gave me (on the TCP/IP configuration page).
    Yes, you can write down the ones there in case you want to put them back, or even have all of them there, but the ones I gave you are safer and more reliable than what most ISPs give you.
    So what does flushing the DNS cache do?
    The cache can become stale (outdated IPs) or just plain corrupted. It's there to speed up lookups, and flushing it just means it takes a bit more time to go out on the Internet and look them up again.
    TCP Stack...
    http://en.wikipedia.org/wiki/TCP/IP_model

  • What RFCs are implemented in Solaris's TCP stack?

    I am trying to find out which of the following TCP-related RFCs are implemented in the Solaris TCP stack:
    RFC0793
    RFC1122
    RFC2581
    RFC1323
    RFC2081
    RFC1191
    RFC1981
    RFC2414
    RFC2481
    RFC2884

    Hi Jeny,
    refer this URL:
    http://www.rvs.uni-hannover.de/people/voeckler/tune/EN/tune.html
    Thanks.
    regards,
    senthilkumar
    SUN - DTS

  • Solaris Kernel and TCP/IP Tuning Parameters (Continued)

    This page describes some configuration optimizations for Solaris hosts running ATG Page Serving instances (application servers) that will increase server efficiency.
    Note that these changes are specific to Solaris systems running ATG application servers (+page serving+ instances). Do not use these on a web server or database server. Those systems require entirely different settings.
    h3. Solaris 10 Kernel
    Adjust /etc/system (parameters below) and reboot the system.
    set rlim_fd_cur=4096
    set rlim_fd_max=4096
    set tcp:tcp_conn_hash_size=32768
    set shmsys:shminfo_shmmax=4294967295
    set autoup=900
    set tune_t_fsflushr=1
    h4. Set limits on file descriptors
    {color:blue}set rlim_fd_max = 4096{color}
    {color:blue}set rlim_fd_cur = 4096{color}
    Raise the file-descriptor limits to a maximum of 4096. Note that this tuning option was not mentioned in the "Sun Performance And Tuning" book.
    [http://download.oracle.com/docs/cd/E19082-01/819-2724/chapter2-32/index.html]
    h4. Increase the connection hash table size
    {color:blue}set tcp:tcp_conn_hash_size=8192{color}
    Increase the connection hash table size to make lookups more efficient. The connection hash table size can be set only once, at boot time.
    [http://download.oracle.com/docs/cd/E19455-01/816-0607/chapter4-63/index.html]
    h4. Increase maximum shared memory segment size
    {color:blue}set shmsys:shminfo_shmmax=4294967295{color}
    Increase the maximum size of a System V shared memory segment that can be created from roughly 8MB to 4GB.
    This provides an adequate ceiling; it does not imply that shared memory segments of this size will be created.
    [http://download.oracle.com/docs/cd/E19683-01/816-7137/chapter2-74/index.html]
    h4. Increase memory allocated for dirty pages
    {color:blue}set autoup=900{color}
    Increase the amount of memory examined for dirty pages in each invocation, and the frequency of file system synchronizing operations.
    The value of autoup is also used to control whether a buffer is written out from the free list. Buffers marked with the B_DELWRI flag (which identifies file content pages that have changed) are written out whenever the buffer has been on the list for longer than autoup seconds. Increasing the value of autoup keeps the buffers in memory for a longer time.
    [http://download.oracle.com/docs/cd/E19082-01/819-2724/chapter2-16/index.html]
    h4. Specify the time between fsflush invocations
    Specifies the number of seconds between fsflush invocations.
    {color:blue}set tune_t_fsflushr=1{color}
    [http://download.oracle.com/docs/cd/E19082-01/819-2724/chapter2-105/index.html]
    Again, note that after adjusting any of the preceding kernel parameters you will need to reboot the Solaris server.
    h3. TCP
    ndd -set /dev/tcp tcp_time_wait_interval 60000
    ndd -set /dev/tcp tcp_conn_req_max_q 16384
    ndd -set /dev/tcp tcp_conn_req_max_q0 16384
    ndd -set /dev/tcp tcp_ip_abort_interval 60000
    ndd -set /dev/tcp tcp_keepalive_interval 7200000
    ndd -set /dev/tcp tcp_rexmit_interval_initial 4000
    ndd -set /dev/tcp tcp_rexmit_interval_max 10000
    ndd -set /dev/tcp tcp_rexmit_interval_min 3000
    ndd -set /dev/tcp tcp_smallest_anon_port 32768
    ndd -set /dev/tcp tcp_xmit_hiwat 131072
    ndd -set /dev/tcp tcp_recv_hiwat 131072
    ndd -set /dev/tcp tcp_naglim_def 1
    h4. Tuning the Time Wait Interval and TCP Connection Hash Table Size
    {color:blue}/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 60000{color}
    The tcp_time_wait_interval is how long a connection stays in the TIME_WAIT state after it has been closed (default value 240000 ms, or 4 minutes). With the default setting, a socket will remain in this state for 4 minutes after you have closed the connection. This is normal operating behavior: it ensures that any slow packets still on the network arrive before the socket is completely shut down. As a result, a future program that uses the same socket number won't get confused by receiving packets that were intended for the previous program.
    On a busy web server a large backlog of connections waiting to close can build up, and the kernel becomes inefficient at locating an available TCP data structure. It is therefore recommended to change this value to 60000 ms, or 1 minute.
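System-wide tuning aside, a server process can also cope with TIME_WAIT on its own listening port by setting SO_REUSEADDR before binding, which lets a restarted server rebind while old connections are still draining. A small Java sketch (class and method names are mine; port 0 just picks an ephemeral port for illustration):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReusableListener {
    // Bind a listener that can be restarted immediately even if old
    // connections on the same port are still sitting in TIME_WAIT.
    static ServerSocket bindReusable(int port) throws IOException {
        ServerSocket server = new ServerSocket();  // create unbound
        server.setReuseAddress(true);              // must be set before bind()
        server.bind(new InetSocketAddress(port));  // port 0 = ephemeral port
        return server;
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket s = bindReusable(0)) {
            System.out.println("listening on port " + s.getLocalPort());
        }
    }
}
```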
    h4. Tuning the maximum number of requests per IP address per port
    {color:blue}ndd -set /dev/tcp tcp_conn_req_max_q 16384{color}
    {color:blue}ndd -set /dev/tcp tcp_conn_req_max_q0 16384{color}
    The {color:blue}tcp_conn_req_max_q{color} and {color:blue}tcp_conn_req_max_q0{color} parameters are associated with the maximum number of requests that can be accepted per IP address per port. tcp_conn_req_max_q is the maximum number of incoming connections that can be accepted on a port. tcp_conn_req_max_q0 is the maximum number of “half-open” TCP connections that can exist for a port. The parameters are separated in order to allow the administrator to have a mechanism to block SYN segment denial of service attacks on Solaris.
    The default values are too low for a non-trivial web server, messaging server, or directory server installation, or for any server that expects more than 128 concurrent accepts or 4096 concurrent half-opens. Since the ATG application servers are behind a DMZ firewall, we needn't keep these values low to guard against DoS attacks.
    h4. Tuning the total retransmission timeout value
    {color:blue}ndd -set /dev/tcp tcp_ip_abort_interval 60000{color}
    {color:blue}tcp_ip_abort_interval{color} specifies the default total retransmission timeout value for a TCP connection. For a given TCP connection, if TCP has been retransmitting for tcp_ip_abort_interval period of time and it has not received any acknowledgment from the other endpoint during this period, TCP closes this connection.
    h4. Tuning the Keep Alive interval value
    {color:blue}ndd -set /dev/tcp tcp_keepalive_interval 7200000{color}
    {color:blue}tcp_keepalive_interval{color} sets a probe interval that is first sent out after a TCP connection is idle on a system-wide basis.
    If SO_KEEPALIVE is enabled for a socket, the first keep-alive probe is sent out after a TCP connection is idle for two hours, the default value of the {color:blue}tcp_keepalive_interval{color} parameter. If the peer does not respond to the probe after eight minutes, the TCP connection is aborted.
    The {color:blue}tcp_rexmit_interval_*{color} values set the initial, minimum, and maximum retransmission timeout (RTO) values for TCP connections, in milliseconds.
    h4. Tuning the TCP Window Size
    {color:blue}/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 65535{color}
    {color:blue}/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 65535{color}
    Setting these two parameters controls the transmit buffer and receive window. We are tuning the kernel to set each window to 65535 bytes. If you set it to 65536 bytes (64K bytes) or more with Solaris 2.6, you trigger the TCP window scale option (RFC1323).
    h4. Tuning TCP Slow Start
    {color:blue}/usr/sbin/ndd -set /dev/tcp tcp_slow_start_initial 4{color}
    tcp_slow_start_initial is the initial congestion window limit: the number of segments sent before TCP waits for an acknowledgment.
    h4. Tuning the default bytes to buffer
    {color:blue}ndd -set /dev/tcp tcp_naglim_def 1{color}
    {color:blue}tcp_naglim_def{color} is the default number of bytes to buffer. Each connection has its own copy of this value, which is set to the minimum of the MSS for the connection and the default value. When the application sets the TCP_NODELAY socket option, it changes the connection's copy of this value to 1. The idea behind the Nagle algorithm is to reduce the number of small packets transmitted across the wire by holding back small segments until previously sent data has been acknowledged.
    Changing the value of tcp_naglim_def to 1 will have the same effect (on connections established after the change) as if each application set the TCP_NODELAY option.
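The per-application equivalent of this kernel-wide change is the TCP_NODELAY socket option mentioned above; in Java it is set per socket. A minimal sketch (class and method names are mine):

```java
import java.net.Socket;
import java.net.SocketException;

public class NoDelay {
    // Disable the Nagle algorithm on one socket: small writes go out
    // immediately instead of being coalesced while earlier data is
    // still unacknowledged.
    static Socket withNoDelay(Socket socket) throws SocketException {
        socket.setTcpNoDelay(true);   // per-connection TCP_NODELAY
        return socket;
    }

    public static void main(String[] args) throws Exception {
        Socket s = withNoDelay(new Socket());  // option can be set before connect
        System.out.println("TCP_NODELAY: " + s.getTcpNoDelay());
        s.close();
    }
}
```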
    {note}
    The current value of any of the TCP parameters can be displayed with the {color:blue}ndd -get{color} command. So to retrieve the current setting of the {color:blue}tcp_naglim_def{color} parameter, simply execute the command:\\
    {color:blue}ndd -get /dev/tcp tcp_naglim_def{color}
    {note}
    h3. References
    Solaris Tunable Parameters Reference Manual
    [http://download.oracle.com/docs/cd/E19455-01/816-0607/index.html]
    WebLogic Server Performance and Tuning
    [http://download.oracle.com/docs/cd/E11035_01/wls100/perform/OSTuning.html]

    For example, Socket.setSoTimeout() sets the SO_TIMEOUT option, and I want to know what TCP parameter this option corresponds to in the underlying TCP connection.
    This doesn't correspond to anything in the connection; it is an attribute of the API.
    The same question arises for other options from the SocketOptions class.
    setTcpNoDelay() controls the Nagle algorithm. set{Send,Receive}BufferSize() controls the local socket buffers.
    Most of this is quite adequately described in the Javadoc, actually.
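To make the SO_TIMEOUT point concrete: the option never reaches the wire; it only bounds how long a read() blocks locally before throwing SocketTimeoutException. A loopback sketch (class and method names are mine; the 200 ms value is arbitrary):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeout {
    // SO_TIMEOUT is purely local: the read gives up after the deadline.
    // Nothing about it is negotiated with or sent to the peer.
    static boolean readTimesOut(int millis) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 Socket peer = server.accept()) {       // peer sends nothing
                client.setSoTimeout(millis);
                try {
                    client.getInputStream().read();     // blocks, then times out
                    return false;
                } catch (SocketTimeoutException expected) {
                    return true;
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("timed out: " + readTimesOut(200));
    }
}
```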

  • Broken TCP stack in latest kernel when under heavy load

    I'm running an Arch box with a decent amount of HTTP traffic. After upgrading to the latest kernel I've seen that packets are sent with the wrong source and destination address. This only happens under heavy load (100+ requests per second). tcpdump shows the following:
    18:52:58.512573 IP 0.0.0.0.80 > 0.0.0.0.4316: Flags [FP.], seq 0, ack 1, win 14400, length 0
    18:52:58.512600 IP 0.0.0.0.80 > 0.0.0.0.56546: Flags [FP.], seq 0, ack 1, win 14400, length 0
    18:52:58.512621 IP 0.0.0.0.80 > 0.0.0.0.4535: Flags [FP.], seq 0, ack 1, win 14600, length 0
    18:52:58.512641 IP 0.0.0.0.80 > 0.0.0.0.3528: Flags [FP.], seq 0, ack 1, win 14600, length 0
    18:52:58.512662 IP 0.0.0.0.80 > 0.0.0.0.4509: Flags [FP.], seq 0, ack 1, win 14400, length 0
    18:52:58.512682 IP 0.0.0.0.80 > 0.0.0.0.65040: Flags [FP.], seq 0, ack 1, win 14600, length 0
    18:52:58.512702 IP 0.0.0.0.80 > 0.0.0.0.2455: Flags [FP.], seq 0, ack 1, win 10240, length 0
    18:52:58.512722 IP 0.0.0.0.80 > 0.0.0.0.16545: Flags [FP.], seq 0:268, ack 1, win 15008, length 268
    18:52:58.519258 IP 0.0.0.0.80 > 0.0.0.0.29802: Flags [FP.], seq 0:268, ack 1, win 980, options [nop,nop,TS val 745514 ecr 1317559555], length 268
    18:52:58.565907 IP 0.0.0.0.80 > 0.0.0.0.32376: Flags [FP.], seq 0, ack 1, win 14400, length 0
    18:52:58.619241 IP 0.0.0.0.80 > 0.0.0.0.50493: Flags [FP.], seq 0:268, ack 1, win 11256, options [nop,nop,TS val 745544 ecr 9539361], length 268
    18:52:58.805927 IP 0.0.0.0.80 > 0.0.0.0.20852: Flags [FP.], seq 3025419976:3025420244, ack 3037671074, win 967, options [nop,nop,TS val 745600 ecr 6445640], length 268
    18:52:58.805953 IP 0.0.0.0.80 > 0.0.0.0.65025: Flags [FP.], seq 1663827778:1663828046, ack 2127675352, win 707, options [nop,nop,TS val 745600 ecr 457812708], length 268
    18:52:58.845918 IP 0.0.0.0.80 > 0.0.0.0.2217: Flags [FP.], seq 0:268, ack 1, win 707, options [nop,nop,TS val 745612 ecr 546643], length 268
    18:52:59.099245 IP 0.0.0.0.80 > 0.0.0.0.5112: Flags [FP.], seq 0:268, ack 1, win 15008, length 268
    18:52:59.152582 IP 0.0.0.0.80 > 0.0.0.0.1175: Flags [FP.], seq 0:268, ack 1, win 15008, length 268
    18:52:59.232612 IP 0.0.0.0.80 > 0.0.0.0.47217: Flags [FP.], seq 684621876:684622144, ack 3544859356, win 11256, length 268
    18:52:59.659258 IP 0.0.0.0.80 > 0.0.0.0.3098: Flags [FP.], seq 2105858244:2105858512, ack 3896053916, win 980, options [nop,nop,TS val 745856 ecr 52041], length 268
    18:52:59.659290 IP 0.0.0.0.80 > 0.0.0.0.3099: Flags [FP.], seq 18772067:18772335, ack 2568646283, win 980, options [nop,nop,TS val 745856 ecr 52041], length 268
    18:52:59.759244 IP 0.0.0.0.80 > 0.0.0.0.18780: Flags [FP.], seq 0:268, ack 1, win 707, options [nop,nop,TS val 745886 ecr 168876], length 268
    18:52:59.845907 IP 0.0.0.0.80 > 0.0.0.0.58449: Flags [FP.], seq 0, ack 1, win 980, options [nop,nop,TS val 745912 ecr 528058426], length 0
    18:52:59.925936 IP 0.0.0.0.80 > 0.0.0.0.65137: Flags [FP.], seq 0:268, ack 1, win 15008, length 268
    18:52:59.979497 IP 0.0.0.0.80 > 0.0.0.0.2920: Flags [FP.], seq 0:268, ack 1, win 980, options [nop,nop,TS val 745952 ecr 18879], length 268
    18:52:59.979527 IP 0.0.0.0.80 > 0.0.0.0.2922: Flags [FP.], seq 0:268, ack 1, win 980, options [nop,nop,TS val 745952 ecr 18879], length 268
    18:52:59.979553 IP 0.0.0.0.80 > 0.0.0.0.2940: Flags [FP.], seq 0:268, ack 1, win 980, options [nop,nop,TS val 745952 ecr 18879], length 268
    Source and destination ports are correctly set. Wireshark shows the correct HTML inside the packets that are returned to 0.0.0.0. The web server log also looks normal; the correct IP address is displayed and logged as a successful request.
    When dropping incoming traffic on port 80 on eth0, everything works as expected (when requesting the server on eth1, which otherwise fails).
    I'm running on "Linux srv 3.0-ARCH #1 SMP PREEMPT Wed Oct 19 12:14:48 UTC 2011 i686" which is the latest kernel in the repos. When booting the fallback image this problem does not exist, all packets are correctly addressed no matter how much load I put on the server.
    Does anyone else have this problem?
    Edit:
    Running lighttpd 1.4.29. No tweaked kernel/TCP parameters whatsoever.
    Last edited by nullvoid (2011-10-29 17:19:57)

    Did a full reinstall of Arch on another machine and the problem still persists. Tried with Apache and Nginx; same behaviour as with Lighttpd. Could anyone else running an Arch box under heavy load check whether there's activity from 0.0.0.0?
    Hint:
    # tcpdump -n host 0.0.0.0
    I'll do a bug report upstream later today.

  • Siemens TCP/IP stack

    Hi,
    I worked all the day to watch TCP packets traveling between my J2ME client (GPRS connection) and my J2EE server (Internet connection).
    I used WireShark (ex Ethereal) to sniff the network.
    I could see through this sniffer that sometimes my client's TCP connections are not terminated (no FIN flag). Afterwards, another connection (SYN flag) is established and everything is fine... and maybe later it happens again...
    In addition, sometimes (again) my client writes to its socket OutputStream and I can't see the TCP packet in my sniffer (so my server waits until a timeout occurs)...
    (see my last post... http://forum.java.sun.com/thread.jspa?threadID=5261959)
    I work on a Siemens TC65. Generally, have you guys seen weird stuff like this with mobile TCP stacks? I mean, is there anything special to know about the TCP protocol on mobile phones?
    I'm really frustrated by this; I don't know if I made a mistake or if a GPRS connection is just not reliable with TCP!
    Thanks...

    AdrienD wrote:
    SIEMENS
    TC65
    REVISION 02.800
    I never heard of this one... We are still working with the 2.000 releases and we are getting prereleases of the rev 3 modules soon. Is this one of those, maybe? If so, you might try a rev 2.000 module.
    What is your client, and what is your server, and where are they located?
    My server is in my private network. I added a NAT rule on my router to forward every TCP connection on port 9000 to my server's private IP address. That's called port forwarding.
    My client is connected through GPRS on its local mobile network to my ISP's router (I guess).
    When I see a TCP packet coming from my client, the source IP address is an Internet IP, so I think it is my ISP's public IP address...
    My server <-------------------> My router (with NAT on TCP port 9000) <------------INTERNET------------> ISP router <----------------------> My client
    Yes, usually the GPRS network is a NATed network, meaning that every unit presents itself to the server with the same IP. The ISP's NAT router manages the TCP connections and therefore knows where it should send the TCP packets. This way, you can't directly connect to any of the GPRS units.
    Other ISPs might actually give every GPRS unit a real Internet IP, making it possible to connect to the units directly. We have our own APN, and the units get an IP directly from our own authentication server and reside in our own network via a VPN connection. This way we have all the freedom we need.

  • Unable to disable TCP keep alive probing

    Hello,
    I work with a mobile phone which seems to have problems if a keep-alive packet arrives close together with a payload packet. Therefore I would like to disable keep-alive probing on my Java 1.5 / Windows XP platform.
    I tried it with:
    Socket socket = serverSocket.accept();
    socket.setKeepAlive(false);
    However, the Java application sends keep-alive probes anyway. Is this a known bug? Are there any workarounds?
    Even when keep-alive is enabled, Java seems to violate the RFC recommendation of a 2-hour interval: if I sniff with Ethereal I can see keep-alive packets at intervals shorter than 2 minutes (sent from the Java application's socket).
    Is this another bug?
    Kind regards
    Hans-Joerg

    (a) Java doesn't send keepalives at all, so it's not 'another bug'. The TCP stack sends them.
    (b) TCP may only do so if the application has specifically requested them, which you haven't.
    (c) Java doesn't turn TCP keepalive on by default.
    (d) TCP may only send a keepalive packet every two hours, unless a global system setting has been changed, which you haven't mentioned.
    So, (e) if you're getting them unasked, and more frequently than two hours, either the TCP stack has a major bug, which isn't likely, or they're not keepalive packets, which is orders of magnitude more likely.
    The phone shouldn't have any problem with real keepalive packets: they're just acknowledgements quoting a sequence number 1 less than the receiver expects, so the receiver replies with the correct sequence number. This shouldn't cause a competently implemented TCP stack any grief whether in a mobile phone or a supercomputer.
    So I would examine your assumption that it's a keepalive packet.
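Point (c) above is easy to verify from Java itself: SO_KEEPALIVE on a fresh socket is off unless the application enables it. A quick check (class and method names are mine):

```java
import java.net.Socket;
import java.net.SocketException;

public class KeepAliveDefault {
    // Returns the SO_KEEPALIVE setting of a fresh socket. Unless the
    // application calls setKeepAlive(true), the stack sends no keepalive
    // probes at all for this connection.
    static boolean keepAliveDefault() throws SocketException {
        Socket s = new Socket();   // unconnected is enough to query options
        return s.getKeepAlive();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("keepalive on by default: " + keepAliveDefault());
    }
}
```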

  • TCP segment Overwrite

    Recently I've been seeing a ton of messages in the IPS Event Viewer stating that TCP Segment Overwrites have been detected. The events are either from the IPS-A to our DMZ or vice versa. It started when we added some servers to the network. Does anybody have any suggestions? We traced the packets; it's not hacker-related. It seems like a server or application may be causing the issue. The problem is isolating it. I would appreciate any feedback you could provide. Thanks!

    TCP normalization
    Through intentional or natural TCP session segmentation, some classes of attacks can be hidden. To make sure policy enforcement can occur with no false positives and false negatives, the state of the two TCP endpoints must be tracked and only the data that is actually processed by the real host endpoints should be passed on. Overlaps in a TCP stream can occur, but are extremely rare except for TCP segment retransmits. Overwrites in the TCP session should not occur. If overwrites do occur, someone is intentionally trying to elude the security policy or the TCP stack implementation is broken. Maintaining full information about the state of both endpoints is not possible unless the sensor acts as a TCP proxy. Instead of the sensor acting as a TCP proxy, the segments will be ordered properly and the normalizer will look for any abnormal packets associated with evasion and attacks.
    http://www.cisco.com/en/US/products/hw/vpndevc/ps4077/products_configuration_guide_chapter09186a00804cf52e.html

  • TCP Segment Overwrite - IDSM2 with IPS5.1(1d)

    Hi,
    I have an IDSM2 running IPS5.1(1d)S220 upgraded recently from 4.x. My network has windows desktops (spanned on multiple subnets) whose default gateway is a Cisco 6500 FWSM module.
    Since I upgraded to IPS 5.x, I am seeing lots and lots of TCP Hijack and TCP Segment Overwrite alarms. The source addresses of these alarms are my Windows PCs; the destination addresses are Windows 2003 servers. Sometimes the destination address is 0.0.0.0 and the ports are empty.
    It is difficult to ignore so many alarms unless there is a technical explanation to see if the placement of FWSM is causing IPS to treat this as a threat.
    Can someone help me to get out of this issue?

    I found this in a previously written thread. I cannot find the thread anymore; all credit goes to the message owner. I hope the reply helps.
    mlhall of - CISCO SYSTEMS wrote:
    Oct 6, 2003, 11:06am PST
    I have several packet traces from several customers that see this alert. There appears to be a bug in the Microsoft TCP stack when connections go stale. What happens is that the last successful segment's last byte is resent with a value of 0xff, after the other end host has already ACKed the sequence from the last segment.
    So for example.
    a->b seq=100 data="ABCDEFG"
    b->a ack=107 no data
    a->b seq=106 data="(0xff)"
    The last packet in the example overwrites the G in the first packet with 0xff. This causes the IDS to fire. We are working on detecting this stack bug in a new version.

  • TCP header (difficult questions)

    So is it the case that I cannot access TCP packet headers with Java? I can't influence sequence numbers, ACK numbers, window size, or control bits like RST and SYN... Of course, destination and source addresses are exceptions.
    The only thing I can affect is the data which one socket sends to another socket?
    If that is the case, then what makes the traffic TCP rather than something else, and how are the retransmissions done? I know the concept of ACKs etc., but how is it managed with Java?
    With UDP it is easier because it is a single datagram.
    If I want to monitor the TCP traffic that my PC sends/receives with my Java client/server program, and increment some counters, for instance to check how many packets have been reordered, how would I do that?
    It would be very enlightening to have decent answers :)

    So is it the case that I cannot access TCP packet headers with Java? I can't influence sequence numbers, ACK numbers, window size, or control bits like RST and SYN... Of course, destination and source addresses are exceptions. The only thing I can affect is the data which one socket sends to another socket?
    With the Java API that is correct.
    If that is the case, then what makes the traffic TCP rather than something else, and how are the retransmissions done?
    The TCP stack. Applications that use TCP (versus raw IP) never do that. It doesn't have anything to do with Java.
    If I want to monitor the TCP traffic that my PC sends/receives with my Java client/server program, and increment some counters, for instance to check how many packets have been reordered, how would I do that?
    For the entire computer? Then I would suggest that you find one of the many tools that have already been built to do exactly that.

  • Database network packet size

    How can one change the database network packet size in Oracle 8i in the tnsnames file?

    Search for "SDU" in the documentation. Don't get carried away with size "tuning": just set SDU to the maximum, 32767. The TCP stack is much more clever than Oracle's developers can explain.

  • Sockd process uses a lot of CPU time

    Hi,
    I'm running the Sun Java System Web Proxy Server version 4.02 on a SunFire V210 dual processor box running Solaris 10 with the default socks5.conf for testing.
    Just browsing a few web pages in Firefox or IE using the SOCKS proxy boosts CPU usage from the sockd process to a staggering 50%, where it stays for several minutes.
    Compared with the old NEC reference SOCKS5 daemon, the Sun version performs really badly.
    The system is pretty standard though I have tuned /etc/system and the tcp stack using recommendations in the proxy administration manual. All Solaris 10 patches are installed.
    prstat output:
       PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
    24519 proxy    6664K 4384K cpu1     0   10   0:11:18  50% sockd/43
    Any ideas what's wrong, or do I have to stick with the old NEC reference daemon?
    Regards
    Kasper Løvschall

    Using:
    truss -dlfo truss.out ./sockd-wdog
    Where things begin to happen:
    5259/1:          11.2858     lwp_park(0xFFBFF290, 0)                    Err#62 ETIME
    5259/1:          12.2958     lwp_park(0xFFBFF290, 0)                    Err#62 ETIME
    5259/1:          13.3058     lwp_park(0xFFBFF290, 0)                    Err#62 ETIME
    5259/1:          14.3158     lwp_park(0xFFBFF290, 0)                    Err#62 ETIME
    5259/2:          15.1858     pollsys(0xFEAEFDC8, 1, 0xFEAEFD58, 0x00000000)     = 0
    5259/1:          15.3258     lwp_park(0xFFBFF290, 0)                    Err#62 ETIME
    5259/2:          pollsys(0xFEAEFDC8, 1, 0xFEAEFD58, 0x00000000) (sleeping...)
    5259/1:          lwp_park(0xFFBFF290, 0)          (sleeping...)
    5259/1:          16.3359     lwp_park(0xFFBFF290, 0)                    Err#62 ETIME
    5259/1:          lwp_park(0xFFBFF290, 0)          (sleeping...)
    5259/1:          17.3459     lwp_park(0xFFBFF290, 0)                    Err#62 ETIME
    5259/2:          18.2382     pollsys(0xFEAEFDC8, 1, 0xFEAEFD58, 0x00000000)     = 1
    5259/2:          18.2385     accept(3, 0xFEAEFEC8, 0xFEAEFE64, SOV_DEFAULT)     = 5
    5259/2:          18.2386     lwp_unpark(43)                         = 0
    5259/43:     18.2386     lwp_park(0x00000000, 0)                    = 0
    5259/2:          18.2388     accept(3, 0xFEAEFEC8, 0xFEAEFE64, SOV_DEFAULT)     Err#11 EAGAIN
    5259/43:     18.2389     getsockname(5, 0xFE5AFEA8, 0xFE5AFDA4, SOV_DEFAULT) = 0
    5259/43:     18.2391     getpeername(5, 0xFE5AFE38, 0xFE5AFDA4, SOV_DEFAULT) = 0
    5259/43:     18.2391     read(5, 0x00063B7E, 1)                    Err#11 EAGAIN
    5259/43:     18.2407     pollsys(0xFE5AFD08, 1, 0xFE5AFCA0, 0x00000000)     = 1
    5259/43:     18.2408     read(5, "04", 1)                    = 1
    5259/43:     18.2409     read(5, "01\0 P BF9 ] c", 7)               = 7
    5259/43:     18.2411     read(5, " k l\0", 255)                    = 3
    5259/43:     18.2412     write(4, " [ 0 1 / M a r / 2 0 0 6".., 88)     = 88
    5259/43:     18.2413     so_socket(PF_INET, SOCK_STREAM, IPPROTO_IP, "", SOV_DEFAULT) = 6
    5259/43:     18.2414     fcntl(6, F_GETFL)                    = 2
    5259/43:     18.2415     fcntl(6, F_SETFL, FWRITE|FNONBLOCK)          = 0
    5259/43:     18.2416     bind(6, 0xFE5AF980, 16, SOV_SOCKBSD)          = 0
    5259/43:     18.2418     connect(6, 0xFE5AF910, 16, SOV_DEFAULT)          Err#150 EINPROGRESS
    5259/43:     18.2970     pollsys(0xFE5AF798, 1, 0xFE5AF730, 0x00000000)     = 1
    5259/43:     18.2971     getsockopt(6, SOL_SOCKET, SO_ERROR, 0xFE5AF6D0, 0xFE5AF6D4, SOV_DEFAULT) = 0
    5259/43:     18.2972     getsockname(6, 0xFE5AF980, 0xFE5AF8A4, SOV_DEFAULT) = 0
    5259/43:     18.2973     write(5, "\0 Z918782E1 511", 8)               = 8
    5259/43:     18.2974     lwp_unpark(3)                         = 0
    5259/3:          18.2974     lwp_park(0x00000000, 0)                    = 0
    5259/3:          18.2977     brk(0x00064850)                         = 0
    5259/3:          18.2977     brk(0x00078850)                         = 0
    5259/3:          18.2981     pollsys(0xFEACF458, 50, 0xFEACF3E8, 0x00000000)     = 0
    5259/3:          18.2982     pollsys(0xFEACF458, 50, 0xFEACF3E8, 0x00000000)     = 0
    5259/3:          18.2983     pollsys(0xFEACF458, 50, 0xFEACF3E8, 0x00000000)     = 0
    Then loads of pollsys(0xFEACF458, 50, 0xFEACF3E8, 0x00000000) = 0 until I kill the daemon - it seems like these are what's killing the server?
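When one syscall dominates a truss log like this, a quick tally makes the busy loop obvious. A rough sketch of the idea (the sample lines below stand in for the real truss.out):

```python
import re
from collections import Counter

# Illustrative truss-style lines standing in for the real truss.out
sample = """\
5259/3:  18.2981  pollsys(0xFEACF458, 50, 0xFEACF3E8, 0x00000000) = 0
5259/3:  18.2982  pollsys(0xFEACF458, 50, 0xFEACF3E8, 0x00000000) = 0
5259/43: 18.2408  read(5, "04", 1) = 1
"""

def tally_syscalls(lines):
    """Count occurrences of each syscall name in truss output."""
    counts = Counter()
    for line in lines:
        # First lowercase identifier followed by '(' is the syscall name
        m = re.search(r'([a-z_][a-z0-9_]*)\(', line)
        if m:
            counts[m.group(1)] += 1
    return counts

counts = tally_syscalls(sample.splitlines())
for name, n in counts.most_common():
    print(f"{n:6d}  {name}")
```

Run it against the full truss.out; if pollsys returning 0 (no ready descriptors) dwarfs everything else, the daemon is spinning on poll rather than sleeping.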
    Thanks,
    Kasper

  • Windows 10 Volume Replica vs StarWind

    Hi,
    I know Server 10 isn't finished yet, but I am interested in the storage replica feature, which I believe does synchronous replication. I was just wondering whether someone else has more experience testing this feature and could explain some of the major differences between Microsoft's replication and StarWind replication. Do we need extra hardware for the MS way, anything special? Can it be run on a 2-node cluster? If it does the same job, what reasons would we have for purchasing a third-party product? Can the MS implementation be a virtual SAN, competing against the likes of StarWind and DataCore?
    I would be interested to know the technical details of the differences, and why one is better than the other.
    Cheers
    Steve

    Steve,
    few disclaimers to start from...
    DISCLAIMER1: I do work for StarWind Software (heck, I started the company with one more guy years ago and still own a noticeable part of it), so while I'm doing my best to be honest, you cannot believe me 100% :) Making a long story short: NEVER trust anything and double-check everything a vendor says to you. At least we're publishing step-by-step guides or "cooking books", and you're welcome to use these logs to run your experiments. There's no guarantee that something that did not work for us would not work for you. And vice versa :)
    DISCLAIMER2: Windows Server 10 is half-baked; it's not even beta, so A LOT of things don't work as expected, performance sucks badly enough to affect functionality (see more below), a lot of things are simply locked, and MSFT is not sure whether it will deliver everything promised with GA or not. So it's very premature to build your company storage strategy on what you see now. Also, a lot of us are under NDA and prefer not to tell anything. I'm also under NDA, so the only thing I can tell is - stay tuned, there are more interesting things coming that you're going to fall in love with :)
    Now Storage Replica Vs. StarWind...
    1) Technical. Storage Replica is mini-filter based logical volume synchronous replication, very similar to what SteelEye DataKeeper and maybe Double-Take do. StarWind Virtual SAN is exactly what the name says - basically SAN firmware running on the Windows platform, brought one level up to run on a hypervisor host, partially as a user-land service and partially as a kernel-mode driver. Storage Replica copies data blocks coming to one volume over to another volume, while StarWind presents a distributed, high-performance, multi-path LUN. I/O is "bound" to the local node to avoid network transactions, and the TCP stack is bypassed to lower latency. So the LUN only "looks" like iSCSI when actually it's not (something similar to an SMB3 SMB Direct connection that starts life as TCP and then turns RDMA; StarWind starts as TCP and then does DMA on the local node). With Storage Replica there's a component sitting above the volume and "branching out" writes (if you know DRBD or HAST - very similar). So "by design" Storage Replica and Virtual SAN are TOTALLY different.
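The "branching out writes" idea behind this style of synchronous replication can be sketched in a few lines. This is purely illustrative toy logic, not how either product is actually implemented:

```python
class SyncReplicatedVolume:
    """Toy model of synchronous block replication: a write is only
    acknowledged after both the local and the remote copy have it."""

    def __init__(self, local, remote):
        self.local = local    # dict: block number -> data
        self.remote = remote  # stands in for the partner node's volume

    def write(self, block, data):
        # Branch the write to both legs...
        self.local[block] = data
        self.remote[block] = data   # in reality: a network round-trip
        # ...and only then acknowledge to the caller.
        return True

    def read(self, block):
        # Reads are served locally ("I/O bound to the local node")
        return self.local[block]

local, remote = {}, {}
vol = SyncReplicatedVolume(local, remote)
vol.write(0, b"hello")
assert local[0] == remote[0] == b"hello"
```

The key property is that the ack waits for the remote leg, which is what makes the replication synchronous and why write latency tracks the inter-node link.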
    2) Positioning. Storage Replica is Disaster Recovery software with very high uptime (nearly zero downtime in some scenarios, see below). StarWind is a Business Continuity solution with an optional DR component (async replication), so it does 99.99% and 99.9999% uptime. While it's possible to achieve SOME of the scenarios with Storage Replica (we did a SoFS cluster and an HA VM), that's not really its goal. 
    3) Features. StarWind does "spoofing", placing huge amounts of distributed write-back cache (RAM-based) on every node; Storage Replica, on the other hand, uses LOG disks (that's why SSD is preferred). StarWind is "all-active" while SR is "active-passive" by design, so I would not expect high IOPS from SR even with GA code. StarWind does other cool things like "flash-friendly" in-line 4KB deduplication (MSFT can only do a 32KB-block off-line one, and it does steal IOPS from the storage array, as data is written twice and one extra read is required by the optimization process; on heavily loaded storage you either have less IOPS or no dedupe, as the optimization process never kicks in...). StarWind can use RAM not only for write-back cache (AFAIK even with W10, MSFT does not do any read-write CSV cache and cannot use non-flash as a write-back cache <-- double check this) but also for in-memory storage with the upcoming VDI accelerator project. StarWind does a log-structured file system, so there's no data, only LOG, while with SR the LOG is re-read and decoded to be put on primary storage. Making a long story short: StarWind kills random 4KB I/Os and eliminates the I/O blender effect, something MSFT cannot do (LSFS is not a panacea - there are scenarios that don't work well with it...). StarWind also offloads snapshots to cheap nodes, saving primary flash for hot data, so-called "inter-node tiering"; NetApp & Nimble can do that, while MSFT moves hot and cold data between flash and spindle on the same node only.
    Etc etc etc
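The "random 4KB writes become sequential appends" claim is the essence of any log-structured store. A toy sketch of the idea (not StarWind's actual LSFS):

```python
class LogStructuredStore:
    """Toy log-structured store: every write, however random its logical
    address, becomes a sequential append; an index maps addresses to
    log positions, so reads stay O(1)."""

    def __init__(self):
        self.log = []     # sequential, append-only log of (addr, data)
        self.index = {}   # logical address -> position in the log

    def write(self, addr, data):
        self.index[addr] = len(self.log)  # latest version wins
        self.log.append((addr, data))     # append, never overwrite in place

    def read(self, addr):
        return self.log[self.index[addr]][1]

store = LogStructuredStore()
for addr in (7, 3, 7, 100):              # scattered "random" writes
    store.write(addr, f"v{addr}-{len(store.log)}")
# Four random logical writes landed as four sequential log records
assert len(store.log) == 4
assert store.read(7) == "v7-2"           # latest write to address 7
```

The trade-off Anton alludes to is also visible here: stale records (like the first write to address 7) accumulate in the log until a cleaner reclaims them, which is why LSFS "is not a panacea" for every workload.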
    So... We see SR as a very basic but very important step in the right direction. If you find that SR is "good enough" for you - use it. If not - you're welcome to deploy third-party software. MSFT has for years left huge holes in their product strategy (I have the impression quite a lot of their teams are run by engineers and not by the guys who were buying similar solutions for real customers), leaving a lot of space for other guys. Remember: good companies create good products, and excellent companies create INFRASTRUCTURE :) MSFT is definitely a company that allows ISVs to live and grow. So we're very happy with what MSFT does in general, and with Windows 10 "storage" in particular. 
    Now some true value for you. We've experimented to run SR in a different set of scenarios. Some:
    1) Failover file server with 2 nodes and no shared storage. Works, but failover is not transparent (probably because performance sucks; I hope that closer to beta no failover-timeout tuning will be needed to get the 100% transparent failover that CA SMB3 shares provide). Please see this blog:
    Storage Replica: General-Purpose File Server with NO SHARED STORAGE!!
    http://www.starwindsoftware.com/blog/?p=25
    2) Scale-Out File Server with 2 nodes and no shared storage. Works, but with SR's "active-passive" design the whole SoFS idea is a bit compromised... See:
    Storage Replica: Scale-Out File Server with NO SHARED STORAGE!!
    http://www.starwindsoftware.com/blog/?p=42
    3) Hyper-V Cluster with HA VM and 2 nodes only, again no shared storage. ALSO WORKS. You can e-mail me or PM me and I'll give you post draft, not published yet.
    4) Hyper-V Cluster with a guest VM cluster. DOES NOT WORK. Probably because of bugs in Windows 10, as a guest VM cluster on normal shared storage DOES NOT WORK either. Again, not published yet, so e-mail me.
    Hope this helped a bit :)
    Good luck and happy clustering!
    Anton
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
