TCP Tuneable Parameters

Hi,
I'd like to know if there are parameters in Solaris to control the following TCP options; I looked in the "Solaris Tunable Parameters Reference Manual" but didn't find the ones I need, perhaps they're undocumented. I'm mostly interested in parameters that can be set on a kernel-wide basis (using "ndd", for example), not a per-socket basis.
1) Turn on/off the Fast Retransmit - Fast Recovery algorithm.
2) Turn on/off the Nagle algorithm.
3) Set the persist timer value (also referred to as persistence timeout).
4) Turn on/off Karn's algorithm. Is this algorithm supported in Solaris?
5) Turn on/off the Protection Against Wrapped Sequence Numbers (PAWS) algorithm. Is this algorithm supported in Solaris?
Many thanks,
Nick

Hi,
Here's the list of TCP tunables that you can modify using ndd:
# ndd -get /dev/tcp \?
? (read only)
tcp_time_wait_interval (read and write)
tcp_conn_req_max_q (read and write)
tcp_conn_req_max_q0 (read and write)
tcp_conn_req_min (read and write)
tcp_conn_grace_period (read and write)
tcp_cwnd_max (read and write)
tcp_debug (read and write)
tcp_smallest_nonpriv_port (read and write)
tcp_ip_abort_cinterval (read and write)
tcp_ip_abort_linterval (read and write)
tcp_ip_abort_interval (read and write)
tcp_ip_notify_cinterval (read and write)
tcp_ip_notify_interval (read and write)
tcp_ipv4_ttl (read and write)
tcp_keepalive_interval (read and write)
tcp_maxpsz_multiplier (read and write)
tcp_mss_def_ipv4 (read and write)
tcp_mss_max_ipv4 (read and write)
tcp_mss_min (read and write)
tcp_naglim_def (read and write)
tcp_rexmit_interval_initial (read and write)
tcp_rexmit_interval_max (read and write)
tcp_rexmit_interval_min (read and write)
tcp_deferred_ack_interval (read and write)
tcp_snd_lowat_fraction (read and write)
tcp_sth_rcv_hiwat (read and write)
tcp_sth_rcv_lowat (read and write)
tcp_dupack_fast_retransmit (read and write)
tcp_ignore_path_mtu (read and write)
tcp_rcv_push_wait (read and write)
tcp_smallest_anon_port (read and write)
tcp_largest_anon_port (read and write)
tcp_xmit_hiwat (read and write)
tcp_xmit_lowat (read and write)
tcp_recv_hiwat (read and write)
tcp_recv_hiwat_minmss (read and write)
tcp_fin_wait_2_flush_interval (read and write)
tcp_co_min (read and write)
tcp_max_buf (read and write)
tcp_strong_iss (read and write)
tcp_rtt_updates (read and write)
tcp_wscale_always (read and write)
tcp_tstamp_always (read and write)
tcp_tstamp_if_wscale (read and write)
tcp_rexmit_interval_extra (read and write)
tcp_deferred_acks_max (read and write)
tcp_slow_start_after_idle (read and write)
tcp_slow_start_initial (read and write)
tcp_co_timer_interval (read and write)
tcp_sack_permitted (read and write)
tcp_trace (read and write)
tcp_compression_enabled (read and write)
tcp_ipv6_hoplimit (read and write)
tcp_mss_def_ipv6 (read and write)
tcp_mss_max_ipv6 (read and write)
tcp_rev_src_routes (read and write)
tcp_wroff_xtra (read and write)
tcp_extra_priv_ports (read only)
tcp_extra_priv_ports_add (write only)
tcp_extra_priv_ports_del (write only)
tcp_status (read only)
tcp_bind_hash (read only)
tcp_listen_hash (read only)
tcp_conn_hash (read only)
tcp_acceptor_hash (read only)
tcp_host_param (read and write)
tcp_time_wait_stats (read only)
tcp_host_param_ipv6 (read and write)
tcp_1948_phrase (write only)
tcp_close_wait_interval(obsoleted- use tcp_time_wait_interval) (no read or write)
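A few of these map directly onto your questions: tcp_naglim_def is the Nagle coalescing limit in bytes (setting it to 1 effectively turns Nagle off), tcp_dupack_fast_retransmit is the number of duplicate ACKs that trigger fast retransmit, and tcp_tstamp_always controls whether timestamps (which PAWS relies on) are negotiated. As a rough sketch only (the values shown are examples, not recommendations):
# effectively disable Nagle by dropping the coalescing limit to 1 byte
ndd -set /dev/tcp tcp_naglim_def 1
# number of duplicate ACKs before fast retransmit kicks in (3 is the usual default)
ndd -set /dev/tcp tcp_dupack_fast_retransmit 3
# always try to negotiate TCP timestamps (needed for PAWS)
ndd -set /dev/tcp tcp_tstamp_always 1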
Regards,
--Sun/DTS

Similar Messages

  • Can SCCM be used to set TCP/IP parameters?

We have an internal DNS appliance (OpenDNS) and all of our dynamic clients use it as a resolver simply enough using DHCP. However, all of our servers have static configs that point to our domain controllers rather than the OpenDNS appliance. This leaves us blind as to any potential malware requests coming from our servers, since the reports will only include identity information if the appliance is used for the DNS request. So we get a lot of N/A in the identity column of reports, and I can neither confirm nor deny whether the source is a server.
    I need to change the DNS setting of all servers to the OpenDNS appliance. I have a powershell script that replaces any DNS entry pointing at a domain controller with the OpenDNS appliance. Now I just need to figure out how to deploy this. It has been suggested we use either SCCM or GPO. A computer startup script would be simple enough; the problem is that servers essentially never reboot. And I'm not sure I want to use a logon script for this. So that leaves SCCM as a possibility. What is the best method to deploy and launch a powershell script using SCCM? I can't seem to come up with a solution that I like so far.

    I did create and deploy a package, but it fails and I'm not sure why. I will paste some log output below. I also created a baseline for this that successfully determines compliance, but then the remediation does not happen. I can't figure out why, and am
    so far proving unlucky in determining where to look for clues.
    Here is a failed software deployment for this:
    Successfully prepared command line "C:\Windows\ccmcache\h\openDNSConfiguration.exe" execmgr 11/13/2014 3:09:16 PM
    2076 (0x081C)
    Command line = "C:\Windows\ccmcache\h\openDNSConfiguration.exe", Working Directory = C:\Windows\ccmcache\h\ execmgr 11/13/2014 3:09:16 PM 2076 (0x081C)
    Running "C:\Windows\ccmcache\h\openDNSConfiguration.exe" with 32bitLauncher execmgr 11/13/2014 3:09:16 PM 2076 (0x081C)
    Created Process for the passed command line execmgr 11/13/2014 3:09:16 PM 2076 (0x081C)
    Raising event:
    [SMS_CodePage(437), SMS_LocaleID(1033)]
    instance of SoftDistProgramStartedEvent
    AdvertisementId = "CHR20051";
    ClientID = "GUID:887F5ED7-298B-4CA6-AD2F-E2401532538D";
    CommandLine = "\"C:\\Windows\\ccmcache\\h\\openDNSConfiguration.exe\"";
    DateTime = "20141113200916.881000+000";
    MachineName = "SUPTEST-201201";
    PackageName = "CHR000B5";
    ProcessID = 1932;
    ProgramName = "opendnsscript";
    SiteCode = "CHR";
    ThreadID = 2076;
    UserContext = "NT AUTHORITY\\SYSTEM";
    WorkingDirectory = "C:\\Windows\\ccmcache\\h\\";
    execmgr 11/13/2014 3:09:16 PM 2076 (0x081C)
    Raised Program Started Event for Ad:CHR20051, Package:CHR000B5, Program: opendnsscript execmgr 11/13/2014 3:09:16 PM 2076 (0x081C)
    Raising client SDK event for class CCM_Program, instance CCM_Program.PackageID="CHR000B5",ProgramID="opendnsscript", actionType 1l, value NULL, user NULL, session 4294967295l, level 0l, verbosity 30l execmgr 11/13/2014 3:09:16 PM 2076 (0x081C)
    Raising client SDK event for class CCM_Program, instance CCM_Program.PackageID="CHR000B5",ProgramID="opendnsscript", actionType 1l, value , user NULL, session 4294967295l, level 0l, verbosity 30l execmgr 11/13/2014 3:09:16 PM 2076 (0x081C)
    MTC task with id {AFBDC445-8174-4716-A132-95F1FEB4CD0B}, changed state from 4 to 5 execmgr 11/13/2014 3:09:16 PM 2076 (0x081C)
    Program exit code -2146232576 execmgr 11/13/2014 3:09:16 PM 3320 (0x0CF8)
    Looking for MIF file to get program status execmgr 11/13/2014 3:09:16 PM 3320 (0x0CF8)
    Script for Package:CHR000B5, Program: opendnsscript failed with exit code 2148734720 execmgr 11/13/2014 3:09:16 PM 3320 (0x0CF8)
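    For reference, a common way to launch a PowerShell script directly from an SCCM program, rather than wrapping it in an .exe, is a command line along these lines (the .ps1 name here is just a placeholder for whatever the script is actually called):
    powershell.exe -ExecutionPolicy Bypass -NoProfile -File .\openDNSConfiguration.ps1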

  • How to extract the TCP Parameters?

    How do I extract the TCP/IP parameters set on a Solaris 8 system?

    To get the names of the available TCP kernel parameters, run the following:
    ndd -get /dev/tcp \?
    To get the value of a tcp kernel parameter:
    ndd -get /dev/tcp <parameter name>
    such as
    ndd -get /dev/tcp tcp_time_wait_interval
    For the meaning of the different parameters, consult the Solaris Tunable Parameters Reference Manual (look on docs.sun.com).
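    To dump every readable parameter together with its current value in one go, a small shell loop works (just a sketch; write-only parameters simply produce an empty value):
    for p in `ndd /dev/tcp \? | awk '{print $1}' | grep -v '^?'`
    do
        v=`ndd /dev/tcp $p 2>/dev/null`
        echo "$p = $v"
    done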

  • Windows TCP Socket Buffer Hitting Plateau Too Early

    Note: This is a repost of a ServerFault Question edited over the course of a few days, originally here: http://serverfault.com/questions/608060/windows-tcp-window-scaling-hitting-plateau-too-early
    Scenario: We have a number of Windows clients regularly uploading large files (FTP/SVN/HTTP PUT/SCP) to Linux servers that are ~100-160ms away. We have 1Gbit/s synchronous bandwidth at the office and the servers are either AWS instances or physically hosted
    in US DCs.
    The initial report was that uploads to a new server instance were much slower than they could be. This bore out in testing and from multiple locations; clients were seeing stable 2-5Mbit/s to the host from their Windows systems.
    I broke out iperf -s on an AWS instance and then ran the following from a Windows client in the office:
    iperf -c 1.2.3.4
    [ 5] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 55185
    [ 5] 0.0-10.0 sec 6.55 MBytes 5.48 Mbits/sec
    iperf -w1M -c 1.2.3.4
    [ 4] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 55239
    [ 4] 0.0-18.3 sec 196 MBytes 89.6 Mbits/sec
    The latter figure can vary significantly on subsequent tests (vagaries of AWS), but is usually between 70 and 130Mbit/s, which is more than enough for our needs. Wiresharking the session, I can see:
    iperf -c: Windows SYN - Window 64kb, Scale 1 - Linux SYN, ACK: Window 14kb, Scale: 9 (*512)
    iperf -c -w1M: Windows SYN - Window 64kb, Scale 1 - Linux SYN, ACK: Window 14kb, Scale: 9
    Clearly the link can sustain this high throughput, but I have to explicitly set the window size to make any use of it, which most real world applications won't let me do. The TCP handshakes use the same starting points in each case, but the forced one scales.
    Conversely, from a Linux client on the same network, a straight iperf -c (using the system default 85kb) gives me:
    [ 5] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 33263
    [ 5] 0.0-10.8 sec 142 MBytes 110 Mbits/sec
    Without any forcing, it scales as expected. This can't be something in the intervening hops or our local switches/routers and seems to affect Windows 7 and 8 clients alike. I've read lots of guides on auto-tuning, but these are typically about disabling scaling altogether to work around terrible home networking kit.
    Can anyone tell me what's happening here and give me a way of fixing it? (Preferably something I can stick in to the registry via GPO.)
    Notes
    The AWS Linux instance in question has the following kernel settings applied in sysctl.conf:
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.core.rmem_default = 1048576
    net.core.wmem_default = 1048576
    net.ipv4.tcp_rmem = 4096 1048576 16777216
    net.ipv4.tcp_wmem = 4096 1048576 16777216
    I've used dd if=/dev/zero | nc redirecting to /dev/null at the server end to rule out iperf and remove any other possible bottlenecks, but the results are much the same. Tests with ncftp (Cygwin, Native Windows, Linux) scale in much the same way as the above iperf tests on their respective platforms.
    First fix attempts.
    Enabling CTCP - This makes no difference; window scaling is identical. (If I understand this correctly, this setting increases the rate at which the congestion window is enlarged rather than the maximum size it can reach)
    Enabling TCP timestamps. - No change here either.
    Disabling Nagle's algorithm - That makes sense, and at least it means I can probably ignore those particular blips in the graph as any indication of the problem.
    pcap files: Zip file available here: https://www.dropbox.com/s/104qdysmk01lnf6/iperf-pcaps-10s-Win%2BLinux-2014-06-30.zip (Anonymised with bittwiste, extracts to ~150MB as there's one from each OS client for comparison)
    Second fix attempts.
    I've enabled ctcp and disabled chimney offloading:
    TCP Global Parameters
    Receive-Side Scaling State : enabled
    Chimney Offload State : disabled
    NetDMA State : enabled
    Direct Cache Acess (DCA) : disabled
    Receive Window Auto-Tuning Level : normal
    Add-On Congestion Control Provider : ctcp
    ECN Capability : disabled
    RFC 1323 Timestamps : enabled
    Initial RTO : 3000
    Non Sack Rtt Resiliency : disabled
    But sadly, no change in the throughput.
    I do have a cause/effect question here, though: The graphs are of the RWIN value set in the server's ACKs to the client. With Windows clients, am I right in thinking that Linux isn't scaling this value beyond that low point because the client's limited CWIN
    prevents even that buffer from being filled? Could there be some other reason that Linux is artificially limiting the RWIN?
    Note: I've tried turning on ECN for the hell of it; but no change, there.
    Third fix attempts.
    No change following disabling heuristics and RWIN autotuning. Have updated the Intel network drivers to the latest (12.10.28.0) with software that exposes functionality tweaks via device manager tabs. The card is an 82579V Chipset on-board NIC - (I'm going to do some more testing from clients with Realtek or other vendors)
    Focusing on the NIC for a moment, I've tried the following (Mostly just ruling out unlikely culprits):
    Increase receive buffers to 2k from 256 and transmit buffers to 2k from 512 (Both now at maximum) - No change
    Disabled all IP/TCP/UDP checksum offloading. - No change.
    Disabled Large Send Offload - Nada.
    Turned off IPv6, QoS scheduling - Nowt.
    Further investigation
    Trying to eliminate the Linux server side, I started up a Server 2012R2 instance and repeated the tests using iperf (Cygwin binary) and NTttcp.
    With iperf, I had to explicitly specify -w1m on both sides before the connection would scale beyond ~5Mbit/s. (Incidentally, this can be checked: the BDP at ~5Mbit/s and 91ms latency is almost precisely 64kb. Spot the limit...)
    The ntttcp binaries showed no such limitation. Using ntttcpr -m 1,0,1.2.3.5 on the server and ntttcp -s -m 1,0,1.2.3.5 -t 10 on the client, I can see much better throughput:
    Copyright Version 5.28
    Network activity progressing...
    Thread Time(s) Throughput(KB/s) Avg B / Compl
    ====== ======= ================ =============
    0 9.990 8155.355 65536.000
    ##### Totals: #####
    Bytes(MEG) realtime(s) Avg Frame Size Throughput(MB/s)
    ================ =========== ============== ================
    79.562500 10.001 1442.556 7.955
    Throughput(Buffers/s) Cycles/Byte Buffers
    ===================== =========== =============
    127.287 308.256 1273.000
    DPCs(count/s) Pkts(num/DPC) Intr(count/s) Pkts(num/intr)
    ============= ============= =============== ==============
    1868.713 0.785 9336.366 0.157
    Packets Sent Packets Received Retransmits Errors Avg. CPU %
    ============ ================ =========== ====== ==========
    57833 14664 0 0 9.476
    8MB/s puts it up at the levels I was getting with explicitly large windows in iperf.
    Oddly, though, 80MB in 1273 buffers = a 64kB buffer again. A further wireshark shows a good, variable RWIN coming back from the server (Scale factor 256) that the client seems to fulfil; so perhaps ntttcp is misreporting the send window.
    Further PCAP files have been provided here: https://www.dropbox.com/s/dtlvy1vi46x75it/iperf%2Bntttcp%2Bftp-pcaps-2014-07-03.zip
    Two more iperfs, both from Windows to the same Linux server as before (1.2.3.4): One with a 128k Socket size and default 64k window (restricts to ~5Mbit/s again) and one with a 1MB send window and default 8kb socket size. (scales higher)
    One ntttcp trace from the same Windows client to a Server 2012R2 EC2 instance (1.2.3.5). Here, the throughput scales well. Note: NTttcp does something odd on port 6001 before it opens the test connection. Not sure what's happening there.
    One FTP data trace, uploading 20MB of /dev/urandom to a near identical Linux host (1.2.3.6) using Cygwin ncftp. Again the limit is there. The pattern is much the same using Windows Filezilla.
    Changing the iperf buffer length does make the expected difference to the time sequence graph (much more vertical sections), but the actual throughput is unchanged.
    So we have a final question through all of this: Where is this limitation creeping in? If we simply have user-space software not written to take advantage of Long Fat Networks, can anything be done in the OS to improve the situation?
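    One thing still on the list to try is the AFD DefaultSendWindow registry value, which is sometimes suggested for applications that never call SO_SNDBUF themselves; whether it helps here is unverified, the size below is only an example, and it takes effect after a reboot:
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters" /v DefaultSendWindow /t REG_DWORD /d 1048576 /f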

    Hi,
    Thanks for posting in Microsoft TechNet forums.
    I will try to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Kate Li
    TechNet Community Support

  • Solaris TCP Tuning

    If you are running Weblogic on Solaris you'll need to change some of the
    default TCP configuration parameters.
    Sun's default TCP settings are not correct for a web server.
    We noticed the server was stalling during certain peak periods. netstat
    showed that many sockets opened to port 80 were in CLOSE_WAIT or
    FIN_WAIT_2.
    The following url is a great resource for tuning Solaris TCP/IP
    http://www.rvs.uni-hannover.de/people/voeckler/tune/EN/tune.html
    Changing the following TCP params solved our stalling problem. Also make
    sure you up the ulimit as stated in the weblogic doc.
    # to display the TCP values.
    ndd /dev/tcp tcp_close_wait_interval tcp_fin_wait_2_flush_interval tcp_keepalive_interval
    # set tcp_close_wait_interval from 2400000 to 60000
    ndd -set /dev/tcp tcp_close_wait_interval 60000
    # set tcp_fin_wait_2_flush_interval from 675000 to 67500
    ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
    #set tcp_keepalive_interval from 7200000 to 300000
    ndd -set /dev/tcp tcp_keepalive_interval 300000
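    Note that ndd settings do not survive a reboot. A common convention is to re-apply them from a small init script at boot (the file name and run level below are just the usual example, adjust to taste):
    cat > /etc/init.d/nddconfig <<'EOF'
    #!/bin/sh
    # re-apply TCP tuning at boot
    ndd -set /dev/tcp tcp_close_wait_interval 60000
    ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
    ndd -set /dev/tcp tcp_keepalive_interval 300000
    EOF
    chmod 744 /etc/init.d/nddconfig
    ln -s /etc/init.d/nddconfig /etc/rc2.d/S70nddconfig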

    Hi Victor,
    We are facing similar problems as you are while running load testing using Load runner simulating a load of 100 concurrent users.
    We have Oracle 8.1.7.4 client running on WebLogic 6.1 SP4 on Solaris 2.6 connecting to Oracle OPS 8.1.7 running on Solaris 2.6. We are getting core dumps while running stress testing using Load Runner for 100 concurrent users. The performance is very good for a while in the beginning of the test with CPU and memory utilization low and all of a sudden we see requests for connections queueing up on the WLS and the server crashes.
    If you have found a solution to your problem, please send us the details. It will also be very helpful if you can send us the fix that was applied.
    You can send your reply to my email at [email protected]
    Regards,
    Raj

  • Windows TCP/IP stack and packet bursts

    Hi all!
    I'm trying to make a server that sends ~30 packets per second (TCP/IP), each loaded with a little data (a long). (Streaming with dataOutputStream.writeLong())
    I would really like to get those 30 Hz signals updating on the clients in a smooth fashion.
    On linux/mac (on those I have tested), I receive the packets one by one. But in Windows, I receive them as bursts: 5/pause/5/pause... and so on. This is really annoying. Does anyone know what the problem might be? I suspect some tcp/ip stack behaviour in Windows...
    I have heard of people streaming at 100Hz, so this might not be a problem? Or should I use UDP datagram packets instead?
    Thank you!

    First, there is no 'problem', as none of the RFCs guarantees the kind of behaviour that you want, so you may be better off reviewing your requirement for feasibility rather than chasing some non-existent 'problem'.
    Having said that, there are all kinds of TCP/IP parameters you can tune via the Windows registry:
    http://technet2.microsoft.com/WindowsServer/en/Library/823ca085-8b46-4870-a83e-8032637a87c81033.mspx
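    If the bursting turns out to be the usual Nagle/delayed-ACK interplay, the two knobs normally pointed at are TCP_NODELAY on the sending socket (Socket.setTcpNoDelay(true) in Java) and the per-interface TcpAckFrequency registry value on the Windows receiver; the interface GUID below is a placeholder, and a reboot is needed afterwards:
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{YOUR-INTERFACE-GUID}" /v TcpAckFrequency /t REG_DWORD /d 1 /f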

  • Server 2008 (R2) TCP/IP Stack sending RST to close short lived sessions

    Hi,
    I'm having an issue with some vendor software, but it appears to be more closely related to the way the TCP/IP stack is handling session shutdown. I'd like to know what this feature is called, any available documentation, and ideally how to disable it.
    Basically, what appears to happen is that Server 2008 sends an RST, ACK to terminate a short-lived connection instead of entering the standard TCP shutdown (using FIN flags). This appears to be an attempt to avoid having short-lived sessions sit in a TIME_WAIT state, as I can see long TCP connections properly being shut down.
    I realize the benefits of what this is trying to accomplish; however, the software in question is making HTTP calls, and the server, being rather basic, is sending HTTP responses without Content-Length or Transfer-Encoding: chunked, which means the only way to tell that the server is done sending content is for the connection to close. However, it appears that the stack is interpreting this type of TCP shutdown as an error and generating annoying alerts within the application that is monitoring the close state.
    Does Windows have a way to disable this stack feature? I've confirmed that chimney offload doesn't appear to be in use, so this is an effect of the Windows stack itself. I don't have control of the software on either end, but I do have a bug open with the vendor;
    I'm more interested in a possible workaround for the short term.
    ** Entire connection lasts ~1 second
    Internet Protocol, Src: 172.25.149.231 (172.25.149.231), Dst: 172.25.147.172 (172.25.147.172)
    Transmission Control Protocol, Src Port: 49740 (49740), Dst Port: 8089 (8089), Seq: 0, Len: 0
    Flags: 0x02 (SYN)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 0, Ack: 1, Len: 0
    Flags: 0x12 (SYN, ACK)
    Internet Protocol, Src: 172.25.149.231 (172.25.149.231), Dst: 172.25.147.172 (172.25.147.172)
    Transmission Control Protocol, Src Port: 49740 (49740), Dst Port: 8089 (8089), Seq: 1, Ack: 1, Len: 0
    Flags: 0x10 (ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 1, Ack: 2921, Len: 0
    Flags: 0x10 (ACK)
    Internet Protocol, Src: 172.25.149.231 (172.25.149.231), Dst: 172.25.147.172 (172.25.147.172)
    Transmission Control Protocol, Src Port: 49740 (49740), Dst Port: 8089 (8089), Seq: 2921, Ack: 1, Len: 1234
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 1, Ack: 4155, Len: 57
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.149.231 (172.25.149.231), Dst: 172.25.147.172 (172.25.147.172)
    Transmission Control Protocol, Src Port: 49740 (49740), Dst Port: 8089 (8089), Seq: 4155, Ack: 58, Len: 0
    Flags: 0x10 (ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 58, Ack: 4155, Len: 1024
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 1082, Ack: 4155, Len: 1460
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.149.231 (172.25.149.231), Dst: 172.25.147.172 (172.25.147.172)
    Transmission Control Protocol, Src Port: 49740 (49740), Dst Port: 8089 (8089), Seq: 4155, Ack: 2542, Len: 0
    Flags: 0x10 (ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 2542, Ack: 4155, Len: 1460
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 4002, Ack: 4155, Len: 1460
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 5462, Ack: 4155, Len: 1460
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 6922, Ack: 4155, Len: 1081
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.149.231 (172.25.149.231), Dst: 172.25.147.172 (172.25.147.172)
    Transmission Control Protocol, Src Port: 49740 (49740), Dst Port: 8089 (8089), Seq: 4155, Ack: 8003, Len: 0
    Flags: 0x10 (ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 8003, Ack: 4155, Len: 1460
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 9463, Ack: 4155, Len: 108
    Flags: 0x18 (PSH, ACK)
    Internet Protocol, Src: 172.25.149.231 (172.25.149.231), Dst: 172.25.147.172 (172.25.147.172)
    Transmission Control Protocol, Src Port: 49740 (49740), Dst Port: 8089 (8089), Seq: 4155, Ack: 9571, Len: 0
    Flags: 0x10 (ACK)
    Internet Protocol, Src: 172.25.147.172 (172.25.147.172), Dst: 172.25.149.231 (172.25.149.231)
    Transmission Control Protocol, Src Port: 8089 (8089), Dst Port: 49740 (49740), Seq: 9571, Ack: 4155, Len: 0
    Flags: 0x14 (RST, ACK)

    Rick, this did not help, as it's the approach I've already taken in testing. Albeit I did disable the chimney offload on the NIC drivers instead of the Windows options, but in the Wireshark captures I'm still seeing the same behavior.
    Humand, I don't think your issue is the same as mine. I believe these RST, ACKs are in response to a NORMAL connection shutdown and are being interpreted as errors by the partner stack. I haven't seen this cause premature TCP shutdown, which would explain your dropped RDP connections.
    TCP Global Parameters
    Receive-Side Scaling State : disabled
    Chimney Offload State : disabled
    NetDMA State : enabled
    Direct Cache Acess (DCA) : disabled
    Receive Window Auto-Tuning Level : disabled
    Add-On Congestion Control Provider : ctcp
    ECN Capability : disabled
    RFC 1323 Timestamps : disabled

  • Perfect wifi reception, IP-parameters correct, but doesn't work

    Hi all,
    In my house I have several computers connecting to the internet through a Belkin N1 wireless router. There usually are 2 macbooks and 1 mac mini (c2d 2.0GHz), all now running Snow Leopard. Since the mini doesn't support N-networking, I've disabled that in the router. The macbooks connect perfectly, no problem there. The mac mini is another story.
    When I bought it it ran Leopard, and contrary to the Macbooks it usually connected with a measly 2 bars in the wifi-indicator. But it worked. Since I upgraded it to Snow Leopard, it has perfect 4 bar reception all of the time, but it doesn't actually work. 90% of the time it is totally isolated from the network, despite showing perfect reception and all tcp/ip parameters configured correctly. If I monitor the connection by pinging the router, I see it works for a few seconds, and then most of the ping packets fail with either a timeout or a 'host is down'. It's not just the router, the mac mini can't reach the macbooks either. It appears that switching airport off and on helps, but mostly just for a few seconds, if I'm lucky it'll work for roughly half an hour.
    Changing from dhcp to manual configuration makes no difference. The connection always works long enough for dhcp to retrieve the network parameters.
    I'm pretty experienced with network stuff, but a system that indicates that all is well while it doesn't actually work, doesn't really give me any starting leads. What can I do?

    Well, iStumbler answered some questions and created some new ones: Wifi reception is not erratic at all, it's constantly terrible, a stable 20%. All that time the menubar indicator happily reports 4 bars. Still, while both indicators show no variation at all, the actual network functionality will always go from perfect to nothing at all, a few seconds after switching on airport. Switching airport off and on again consistently helps for getting a few seconds of connectivity.
    One afternoon, the indicator in the menubar correctly indicated the poor reception. Can you guess what happened? Yep, the network functioned properly for hours.
    So it seems I have two issues to deal with; one is the poor reception that only the mini seems to suffer from (now there are some net interference issues in that corner of the room too, so I have some leads to follow there), and a bug in the Snow Leopard airport driver that at least makes it indicate a totally wrong reception quality, but probably also messes up the actual communication (since it should work with the poor reception, and it did until I upgraded from 10.5 to 10.6). Come to think of it, when installing the Snow Leopard upgrade, I had no problems downloading the huge amounts of online updates. So it might actually be a bug recently introduced.
    The search goes on... unfortunately without having internet radio on the hi-fi system while searching.

  • How can i see traffic being dropped by Firewall?

    Hi All. I have a problem where users on the inside of my network cannot receive emails when they use Outlook and Windows Live to external mail servers. If email is unencrypted (e.g. Hotmail) there are no issues. If however email is encrypted (Gmail on port 465 or Outlook over SSL) then the users can receive but cannot send emails. I have already disabled inspect esmtp and I have removed any outbound access-list. I want to see if there is anything else that could be blocking the traffic. How can I do that?
    My firewall config is attached.
    Marlon

    Hi All. I found a workaround for the problem. I took Jose's advice and looked at it from the endpoint and found that Windows 7 handles TCP windowing differently than previous OSes. I still think there is an issue somewhere, but I am not sure where else to look, so I will work with this for now.
    See note below. Thanks for your help guys.
    Disable the auto tuning
    Check the state or current setting of TCP Auto-Tuning
    1.          Open elevated command prompt with administrator’s privileges.
    2.          Type the following command and press Enter:
    netsh interface tcp show global
    The system will display the following text on screen, where you can check on the Auto-Tuning setting:
    Querying active state…
    TCP Global Parameters
    Receive-Side Scaling State : enabled
    Chimney Offload State : enabled
    Receive Window Auto-Tuning Level : normal
    Add-On Congestion Control Provider : none
    ECN Capability : disabled
    RFC 1323 Timestamps : disabled
    Disable TCP Auto-Tuning
    1.          Open elevated command prompt with administrator’s privileges.
    2.          Type the following command and press Enter:
    netsh interface tcp set global autotuning=disabled
    Enable TCP Auto-Tuning
    1.          Open elevated command prompt with administrator’s privileges.
    2.          Type the following command and press Enter:
    netsh interface tcp set global autotuning=normal
    http://www.mydigitallife.info/disable-tcp-auto-tuning-to-solve-slow-network-cannot-load-web-page-or-download-email-problems-in-vista/
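    If fully disabling auto-tuning turns out to be too blunt, there is also a restricted level and a separate heuristics switch worth trying; these are standard netsh options, though whether they help in this particular scenario is untested here:
    netsh interface tcp set global autotuninglevel=highlyrestricted
    netsh interface tcp set heuristics disabled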

  • HP Color Laserjet 4700 and HP Designjet 5000PS

    I've been working for 4 days trying to get these 2 printers to work in Snow Leopard to no avail.
    I've Reset, Reinstalled several times. The 4700 will never install and Designjet installs, but will never communicate and print.
    I also have a HP Laserjet 4200 that works absolutely fine. Go Figure.
    I've been in contact with Apple and have done everything the Apple tech has said but am now awaiting further communications from him as they are working on it.
    All the printers work fine off of a Leopard Computer that is sharing the printer under Appletalk.
    (Is bonjour just a new name for appletalk????)
    If I ever get it fixed I'll post an update.

    Finally got everything working, and it took forever trying to reconfigure the IP address on the Designjet 5000.
    Here's how to do it:
    Configuring TCP/IP on the HP Designjet 5000, 5500, 800, and 500 series printers
    NOTE: A delay of approximately 30 seconds may be expected between each step.
    1. From the Front Panel, use the arrow buttons to highlight the printer icon and press ENTER.
    2. Press the Up arrow key until I/O Setup appears. Press ENTER.
    3. Press the Down arrow to Card Setup and press ENTER. Highlight Configuration and press ENTER.
    4. From the Front Panel message of Cfg Network=No, press ENTER.
    5. Press the Up arrow key to change to Yes. Press ENTER.
    6. Press the Up arrow key until Cfg TCP/IP=No appears. Press ENTER.
    7. Press the Up arrow key to change to Yes. Press ENTER.
    8. Press the Up arrow until BOOTP=Yes appears. Press ENTER.
    9. Press the Up arrow key to change to No. Press ENTER.
    NOTE: If BOOTP=Yes, the printer is configured to retrieve its TCP/IP parameters over the network from a BOOTP, RARP, or DHCP server. If BOOTP=No, the printer is configured to accept TCP/IP parameters from the Front Panel.
    10. On HP Jetdirect print servers with firmware L.20.04 or greater, press the Up arrow key until DHCP=Yes appears. Press ENTER.
    11. Press the Up arrow key to change to No. Press ENTER.
    12. Press the Up arrow key until IP Byte1=xxx appears.
    13. To change the IP byte, proceed as follows:
    a. Press ENTER.
    b. Press the Up or Down arrow key to change the number.
    c. Press ENTER.
    d. Press the Up arrow key to continue to IP Byte2=xxx.
    e. Press ENTER.
    14. Configure the remaining bytes of the IP address in the same manner.
    15. Configure the subnet mask bytes (SM), syslog server IP address (LG), default gateway (GW), and timeout (TIMEOUT) in the same manner.
    NOTE: When the HP Jetdirect print server is given a syslog server IP address, it can generate syslog messages and send them to that address. If no syslog server is on the network, skip this setting.
    16. The TIMEOUT parameter default is 90 seconds. Up to 3,600 seconds can be configured. If set to 0, the timeout feature of the HP Jetdirect print server is disabled - TCP/IP connections will remain open until closed by the server.
    17. To save this information:
    a. Designjet 5000: Press the BACK button three times, then press the TOP button to save the configuration.
    b. Designjet 500 and 800: Press the BACK button four times and wait until the Ready message is displayed.
    c. Turn the printer off, then turn it on to refresh the new settings.

  • Windows 8.1 Network Share 0x8007003b Error Copying Files

    Hi All,
    I'm having an issue when copying files from a Windows network share. If the file is above 5MB it will start copying and then after a minute or two I will get an error such as:
    Interrupted Action
    An unexpected error is keeping you from copying the file. If you continue to receive this error, you can use the error code to search for help with this problem.
    Error 0x8007003B: An unexpected network error occurred.
    The network share is in a different office and is accessible over a point-to-point VPN, so the connection is not that fast. 
    But before everyone tells me to start diagnosing the router and or VPN please take note of the following:
    My colleague who is running Windows 7 can copy files from the network share fine
    If I run a Windows 7 VM on my Windows 8.1 laptop I can copy the files across fine
    I also know it isn't an issue with this particular file share. I set up a little lab, where I had one VM which had a shared folder and 2 other client VMs one running Windows 7 and one running Windows 8.1. I limited the bandwidth to the two client
    VMs to a similar speed we see across our VPN. The result was that the Windows 8.1 failed with the same error above, whereas the Windows 7 copied fine.
    There is also a noticeable difference visually when copying using Windows 8.1 and Windows 7:
    Windows 7 - The progress bar progresses smoothly giving the current speed and estimated time remaining
    Windows 8.1 - The progress bar progresses in a jumpy fashion. E.g. it shows no progress, then after a minute it jumps and shows 16% progress, then shows no further progress, then jumps by another 18%. There is no estimated time remaining.
    I ran Wireshark to investigate what was going on at the network layer. I see the following behaviour:
    Windows 8.1 - The file is copying fine then all of a sudden there are a lot of TCP dupack packets. After which the Windows 8.1 client sends a RST packet and then the error pops up.
    Windows 7 - The file is copying fine then you also see a load of TCP dupack packets, however no error is displayed and it continues copying fine.
    I'd appreciate any help with this I'm hoping there are some settings I can change to make Windows 8.1 behave more like Windows 7!
    FYI, originally posted here (but was advised to ask on Technet) - https://answers.microsoft.com/en-us/windows/forum/windows8_1-files/windows-81-network-share-0x8007003b-error-copying/300bc9a4-597b-402a-b885-9dd6f6fb51a2
    Thanks,
    Zak

    Hi MeipoXu,
    Sorry it has taken me so long to reply. I thought I had subscribed to receive updates on this thread, but I didn't see anything!
    I ran the commands you gave, but some of them did not work; I got the following error:
    Set global command failed on IPv4 The parameter is incorrect.
    However, output of current settings are below.
    TCP Global Parameters
    Receive-Side Scaling State          : disabled
    Chimney Offload State               : disabled
    NetDMA State                        : disabled
    Direct Cache Access (DCA)           : disabled
    Receive Window Auto-Tuning Level    : normal
    Add-On Congestion Control Provider  : none
    ECN Capability                      : disabled
    RFC 1323 Timestamps                 : disabled
    Initial RTO                         : 3000
    Receive Segment Coalescing State    : disabled
    Non Sack Rtt Resiliency             : disabled
    Max SYN Retransmissions             : 2
    To answer your original questions.
    Do you mean the file below 5 MB will copy fine ?
    5MB is not a hard limit. If the file is small enough it will copy before it errors out. A file of around 5MB or so fails with an error.
    Have you tried to copy the files from another Windows 8.1 machine (this will help us to verify whether the issue is caused by the specific machine)? Have you tried to copy another file from the network share to have a check?
    Yes I've tried from another Windows 8.1 machine and have the same behaviour. It is the same with different files from the network share. And the same with different network shares.
    I am seeing nothing in the event viewer when the error occurs. I do not believe this to be a VPN issue because I can recreate this situation perfectly. As I mentioned in the original post- I
    also know it isn't an issue with this particular file share. I set up a little lab, where I had one VM which had a shared folder and 2 other client VMs one running Windows 7 and one running Windows 8.1. I limited the bandwidth to the two client VMs to a similar
    speed we see across our VPN. The result was that the Windows 8.1 failed with the same error above, whereas the Windows 7 copied fine.
    Basically, I had virtual machines all on the same local network. One virtual machine was hosting a file share with files on it. I created one Windows 7 VM and one Windows 8.1 VM. Using VMWare Workstation you can limit the network bandwidth to the VMs. When
    I did not limit the network bandwidth I could copy all files of all sizes fine. However, when I limit the network bandwidth to a speed of around 30-50KBps, the Windows 8.1 VM fails to copy files with the same error. But Windows 7 VM copies the files fine.
    The point of limiting the bandwidth in this way is to create a similar scenario to that I see when connecting over the VPN.
    This all leads me to believe there is some sort of issue with copying files from fileshares over slow connections using Windows 8.1.
    Cheers,
    Zak

  • Windows 7 Pro on iMac (mid 2011) and network slows down

    We have an office with about 8 mid 2011 21" iMacs running Windows 7 Pro courtesy of Bootcamp.  The machines boot quickly and all programs open quickly.  They each have from 2 - 4 mapped network drives to a 2012 Windows Server 2008 R2 HP GL380 server on-site and are wired with Gig Ethernet HP business class switches).  The users' Documents, Desktop, and a couple other directories are re-directed to the file server using an Active Directory Group Policy Object (GPO).
    A couple of the computers will regularly start to slow down after only 20 or 30 minutes of use.  The rest have occasionally experienced it but it's always independent of others experiencing the slowdown.  The computers with the more regular slowdowns I've also booted into OS 10.7.x and the transfer rates to the same file server are fast and consistent.  BTW, all computers are current on Windows updates and Apple updates.
    I've tried some of the suggestions I found here:  http://www.neowin.net/forum/topic/950538-running-windows-7-via-bootcamp-on-mac-mini-slows-down-over-time/
    1.)
    Disable all properties that have something to do with "Offload" under Advanced in the NIC properties.
    Also disable "Receive Side Scaling".
    And set "Speed and Duplex" to auto.
    2.)
    Turn off the Windows feature "Remote Differential Compression" (uninstall) under "Programs and Features".
    3.)
    Run the following commands in cmd:
    netsh int tcp set global autotuninglevel=disabled
    netsh int tcp set global chimney=disabled
    I ran disabled on every setting I could.
    so it looks like this now:
    C:\Users\tab>netsh int tcp show global
    Querying active state...
    TCP Global Parameters
    Receive-Side Scaling State : disabled
    Chimney Offload State : disabled
    NetDMA State : disabled
    Direct Cache Acess (DCA) : disabled
    Receive Window Auto-Tuning Level : disabled
    Add-On Congestion Control Provider : none
    ECN Capability : disabled
    RFC 1323 Timestamps : disabled
    4.)
    Uncheck the following under your local area connection properties:
    Link-Layer Topology Discovery Mapper I/O Driver.
    Link-Layer Topology Discovery Responder.
    With all of that said, do any of you have any suggestions of what to check?

  • SMB networking issue in 10.5.2 - detailed analysis and cry for help

    I am a computer technician for a medium-sized company. Our setup in the company is mainly Windows-based, using Active Directory and Windows Server 2003 on our file servers, with a fully Gigabit network. We also have about a dozen Macs, mostly Mac Pros but some PowerMac G5s and a few MacBooks here and there.
    In the past few weeks, we've started seeing an issue on some of the computers relating to transferring large files from the Macs to the file server. This generally only applies to files over 1GB in size, but sometimes it can even happen with files of a few hundred megabytes. What is happening is that when the Mac starts up, the first couple of times a user will transfer a file of this size to the server over SMB, it will act normally. Eventually (usually anywhere from the third to the seventh transfer of a file that size) a file transfer will stall out. It doesn't stop completely, but it begins transferring painfully slowly. We're talking maybe 100 or 200KBps, and it never speeds up again. If you cancel out that transfer, any subsequent transfers will do the same thing.
    Here are the things we've determined through almost a week of testing this issue:
    *It doesn't matter what file or what kind of file we're copying, what server we're copying it to, or what folder on the server it is being copied to
    *The issue is relatively random as far as which file or transfer number will trigger it, but it is repeatable in that it inevitably will happen
    *It doesn't appear to be a third-party product, hardware or software, that is causing this. It's happening on Mac Pros, G5s, a MacBook, and a MacBook Pro, none of which have very much overlap in terms of third-party things installed
    *Using clean and basic installs, we've tested under controlled circumstances and found that the problem does not manifest itself in Tiger or even Leopard 10.5.1
    *The problem doesn't seem to be affected by the time of day at all, in that when it happens, it consistently does it whether morning, afternoon, evening, or overnight; subsequently, it does not appear to be affected by cross traffic on the network or peak network usage
    *Using a quick AppleScript, we made a file transfer happen repeatedly over time; on Tiger and Leopard 10.5.1, the transfers were able to run uninterrupted for as much as 12 hours at a time without the problem manifesting itself, but with Leopard 10.5.2, the problem would occur under the same conditions within 10 to 15 minutes
    *We have ruled out any faulty hardware by connecting the test machine directly to our main switch
    *All the PCs on our network, in many different configurations, are performing the same types of transfers just fine
    *Using Ethereal, we can see that when a transfer starts stalling out, you'll start seeing a higher than usual number of duplicate ACKs on the connection
    *We're not really sure what this means, but it is worth noting that doing the same types of transfers under the same conditions using curl never manifested the issue
    *After the problem has manifested itself, rebooting the machine clears up the issue, but it will come back steadily
    I think that's pretty much everything we've figured out, which leads us to believe that this is a problem that started happening in 10.5.2. We've eliminated as many variables as possible, so I feel very comfortable with that conclusion. The issue now, then, is finding a fix. Yes, we could just use 10.5.1, but there are so many good bug fixes in 10.5.2 that we'd like to not miss out on. We have tried several fixes on both our server side and on the Mac side, but nothing has helped.
    I doubt that this is something that will get resolved, but I'm posting it here just hoping to see others having the same issue so maybe it will get noticed. Any suggestions, though, are very welcome.
    Message was edited by: Josh R. Holloway

    There have been some other posts suggesting that Apple might have messed up some default TCP configuration parameters in 10.5.2. It might be instructive to compare the result of:
    sysctl -a | grep tcp
    on otherwise similar 10.5.1 and 10.5.2 systems.
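    A quick way to do that comparison (the file names here are arbitrary) is to capture the output on each machine and diff the two:
    sysctl -a | grep tcp > tcp-10.5.1.txt   # run on the 10.5.1 system
    sysctl -a | grep tcp > tcp-10.5.2.txt   # run on the 10.5.2 system
    diff tcp-10.5.1.txt tcp-10.5.2.txt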

  • How to optimize Mac file sharing speeds over LAN?

    I got a new NetGear R7000, and I find it very fast for LAN transfers wired or wireless between Macs running OS X Mountain Lion or Mavericks. Testing large (multigigabyte) file transfers over AFP, an early 2011 Macbook Pro running Mavericks 10.9.1 connecting to a 2011 Mac Mini running Mountain Lion 10.8.5, Ethernet speeds can hit 6 GB per min (800 Mbps) real world and wireless 2.06 GB / min (273 Mbps), real world performance.
    If I perform a similar test with Snow Leopard on both ends, an early 2011 Macbook Pro running Snow Leopard 10.6.8 connecting to a 2010 Mac Mini also running 10.6.8 has Ethernet transfers topping out at about 1.72 Gigabytes per minute (228 Mbps) and 1.07 GB / min (140 Mbps) for Wifi, real world performance.
    I am wondering if there is anything I can do to speed up the Snow Leopard AFP networking. (There are some applications that run only in Snow Leopard and I would like to have it network efficiently instead of getting rid of it. The Macbook Pro has a partitioned drive so I can boot between Snow Leopard and Mavericks.)
    I know that Apple made some tweaks in file sharing performance in Mountain Lion 10.8.5, and I suspect TCP IP parameters were altered to speed things up dramatically when there were all the complaints about it being slow on Mac Book Airs with 802.11ac. I am curious what Apple did.
    I am wondering if anyone has any tips on how I can improve AFP file sharing performance in Snow Leopard. Are there TCP IP parameters, or plist entries, or other hidden settings I can change? Parameters I can tune in sysctl.conf? I am hoping a real network guru can come on here to advise on some advanced techniques I can use to speed up AFP on Snow Leopard. It is the Macs running Snow Leopard that need to be tuned, not the router, as speeds are very fast sharing between Mountain Lion and Mavericks.
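    For what it's worth, the sysctl.conf entries I have seen suggested elsewhere for this kind of tuning look like the following; the values are only illustrative and I have not verified them on Snow Leopard:
    kern.ipc.maxsockbuf=4194304
    net.inet.tcp.sendspace=1048576
    net.inet.tcp.recvspace=1048576
    net.inet.tcp.delayed_ack=0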
    Also, for any/all Mac OS versions, are there networking parameters that I can tune to speed up transfers that involve a very large number of very small files? Even recent versions of OS X slow down greatly on this.
    Thanks in advance

    Anyone have any ideas? What kind of speed do others see through gigabit Ethernet, with snow leopard or Lion?

  • SMB issue in 10.9.2 Server

    Hey everyone,
    Hopefully I can get some help or at least someone else can recognize the issue at hand. We recently upgraded our Mac Mini Server to 10.9.2 and are having major SMB issues. It's hard to tell specifically, but it seems like when we hit a magic number of around 150~200 users the server shuts down all file sharing. When I look at Activity Monitor, smbd is at around 99% CPU usage. The only remedy is to restart the server.
    Any help would be greatly appreciated. Thank you!
