Jumbo Frame issue

I have a late 2008 iMac, and almost every time I try to use jumbo frames (generally after every OS upgrade, minor and major) I get the same result: it works well for an hour or two, then the LAN gets stuck, and I have to unplug and replug the cable to unstick it.
I use NFS to store the virtual machines on the NAS, and they fly with jumbo frames, 2 or 3 times faster than without.
I tried forcing the speed and the half/full duplex setting, but the result is always the same, like a time bomb!
Why does this happen? Does anyone have any idea?

Hi,
In the end, it turned out that the port-channel interface on the Dell did not have the jumbo MTU set. I did the following to set it:
config
interface port-channel 1
mtu 9216
end
copy run start
At that point, both interfaces in the port channel got shut down for some reason, and the 7120 showed its 2 SAN interfaces as down too.
So, I did a:
no shutdown
When the interfaces and port-channel came back up, I was able to vmkping -d -s 8972 to the target IP address, and from windows a ping -f -l 8972 worked. I am able to scan and see iSCSI targets now.
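For reference, the 8972-byte payload in those commands is the largest that fits a 9000-byte MTU once headers are counted; a quick sketch of the arithmetic (assuming standard IPv4 and ICMP header sizes, with a placeholder for the target address):
9000 (MTU) - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes of ping payload
vmkping -d -s 8972 <target-IP>   # ESXi: -d sets don't-fragment, -s sets payload size
ping -f -l 8972 <target-IP>      # Windows: -f sets don't-fragment, -l sets payload size
If either ping fails at 8972 but works at smaller sizes, some hop in the path is still at a smaller MTU.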

Similar Messages

  • [BUG] Jumbo frames issues after suspend/resume

    I have a 2.66Ghz MacBookPro5,1 which I recently upgraded to Snow Leopard. It's connected to a gigabit LAN and makes use of jumbo frames (MTU 9000).
    Everything was working flawlessly under Leopard, but since the upgrade to 10.6, I've noticed some very strange networking behavior. Essentially, networking goes AWOL as soon as it tries to transfer big data chunks *after a suspend/resume*. Remote sessions (ssh) and file transfers (ftp/afp) are affected.
    AFAICT, small packets work (e.g., the output of "uptime" in an ssh session works just fine), but anything "big" (copying a large file over ftp or afp) hangs the connection. It seems that reducing the jumbo frame max size (using "ifconfig en0 mtu <whatever smaller than the previous setting>") temporarily fixes the issue, until the next hang.
    This problem only happens after suspend/resume. Upon cold boot, everything works just fine, so I'm suspecting a driver issue.
    The only difference between previously "working just fine" situation and the current issue is the upgrade from 10.5 to 10.6. I should mention I'm running in 64bit mode.
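    A sketch of the temporary workaround described above, for anyone who wants to try it (assuming the wired interface is en0 and the previous MTU was 9000; the exact smaller value is arbitrary):
    sudo ifconfig en0 mtu 8000   # drop below the previous jumbo setting
    ifconfig en0                 # verify the new mtu value took effect
    The change does not persist across reboots, which is fine for confirming the driver theory.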
    HTH

    If you have found a verified and reproducible bug then report it here: Feedback. The Discussions are not a bug reporting venue; Apple engineers do not read the Discussions.
    If you want help resolving a problem, that is the purpose of the Discussions, but you need to post a question.

  • Aggregates, VLAN's, Jumbo-Frames and cluster interconnect opinions

    Hi All,
    I'm reviewing my options for a new cluster configuration and would like the opinions of people with more expertise than myself out there.
    What I have in mind as follows:
    2 x X4170 servers with 8 x NIC's in each.
    On each X4170 I was going to configure 2 aggregates with 3 NICs in each aggregate, as follows:
    igb0 device in aggr1
    igb1 device in aggr1
    igb2 device in aggr1
    igb3 stand-alone device for iSCSI network
    e1000g0 device in aggr2
    e1000g1 device in aggr2
    e1000g2 device in aggr2
    e1000g3 stand-alone device for iSCSI network
    Now, on top of these aggregates, I was planning on creating VLAN interfaces which will allow me to connect to our two "public" network segments and to the cluster heartbeat network.
    I was then going to configure the VLANs in an IPMP group for failover. I know there are some questions around that configuration, in the sense that IPMP will not detect a failure if a NIC goes offline in the aggregate, but I could monitor that in a different manner.
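    For illustration, a minimal sketch of that aggregate-plus-VLAN layering using Solaris 10 conventions (the key number 1, VLAN ID 100 and the address are made up; the VLAN interface name encodes VLAN*1000 + key):
    # create aggregate aggr1 over the three igb ports
    dladm create-aggr -d igb0 -d igb1 -d igb2 1
    # plumb VLAN 100 on top of aggr1 (100*1000 + 1 = aggr100001)
    ifconfig aggr100001 plumb 192.168.100.10 netmask 255.255.255.0 up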
    At this point, my questions are:
    [1] Are VLANs, on top of aggregates, supported within Solaris Cluster? I've not seen anything in the documentation to mention that it is, or is not for that matter. I see that VLANs are supported, including support for cluster interconnects over VLANs.
    Now, with the standalone interface I want to enable jumbo frames, but I've noticed that the igb.conf file has a global setting for all NIC ports, whereas I can enable it for a single NIC port in the e1000g.conf kernel driver configuration. My questions are as follows:
    [2] What is the general feeling about mixing MTU sizes on the same LAN/VLAN? I've seen some comments that this is not a good idea, and some say that it doesn't cause a problem.
    [3] If the underlying NICs, igb0-2 (aggr1) for example, have the 9k MTU enabled, I can force the MTU size (1500) for "normal" networks on the VLAN interfaces pointing to my "public" network and cluster interconnect VLAN. Does anyone have experience of this causing any issues?
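    For what it's worth, the per-port setting mentioned above looks roughly like this in /kernel/drv/e1000g.conf (a sketch from memory, so treat the exact values as an assumption; the position in the list selects the driver instance, 0 means the standard 1500-byte frame and 3 the 16K maximum):
    MaxFrameSize=0,0,0,3;
    # e1000g0-e1000g2 stay at standard frames, e1000g3 allows jumbo frames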
    Thanks in advance for all comments/suggestions.

    For 1) the question is really "Do I need to enable jumbo frames if I don't want to use them (on neither the public nor the private network)?" - the answer is no.
    For 2) each cluster needs to have its own separate set of VLANs.
    Greets
    Thorsten

  • Mid 2010 Macbook Pro - Change MTU size kills internet (Jumbo Frames)

    Hi everyone, I'm hoping someone here can enlighten me or help me solve the problem I'm having.
    I am trying to change my MTU size to enable jumbo frames on my 13-inch Mid 2010 MacBook Pro. I recently bought a ReadyNAS Ultra and would like to speed up transfers to the unit.
    My setup is as follows:
    I have my ReadyNAS Ultra 2 and 2010 Macbook Pro (Core 2 Duo) wired via cat6 ethernet to my 5th Generation Apple Airport Extreme. The Airport Extreme is connected via cat5e to my AT&T Uverse Gateway which is set up to allow my Airport to assign DHCP and NAT (gateway is in bridge mode with wireless off).
    Anyway, I have enabled jumbo frames on my ReadyNAS, but when I enable them on my MBP, the setting applies fine and the ethernet disconnects/reconnects like it should, but then my connection drops. I can't see any devices on my LAN and I cannot access any internet websites, although according to the Network pane I am still assigned a valid DHCP address. When I manually try to increase my MTU size, the same thing happens (I tried every size from 9000 down to 1600).
    Could it be my MBP just can't support the increase in MTU size? It leaves the MTU at 1500 when I set it to automatic... and if it doesn't support the increased MTU size, why would it let me custom-change the MTU and even give an option to select "Jumbo Frames (9000)"?
    I appreciate any help in advance!!
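    One way to probe this from Terminal rather than the Network pane (a sketch; en0 and the NAS address are assumptions to fill in):
    sudo ifconfig en0 mtu 1600      # try a modest bump first
    ping -D -s 1572 <NAS-IP>        # 1600 - 20 (IP) - 8 (ICMP) = 1572; -D forbids fragmentation
    networksetup -getMTU en0        # confirm what the OS thinks the MTU is
    If even the small bump kills connectivity, the problem is likely in the driver or a device in the path (e.g. the AirPort Extreme) rather than the 9000-byte setting itself.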

    asdftroy wrote:
    If you did read my post then you would have seen that the option is there, but that is not entirely what my inquiry is about. The option isn't working as intended, and I was wondering if anyone had the same issues as me. Thanks anyways.
    Anyone else?
    The way you responded to someone trying to help you probably means others will be hesitant to try.

  • How do I maximize LAN speeds using Gigabit Ethernet, jumbo frames?

    I move a lot of large files (RAW photos, music and video) around my internal network, and I'm trying to squeeze out the fastest transfer speeds possible. My question has to do both with decisions about hardware and what settings to use once it's all hooked up.
    This is what I have so far:
    -- iMac 3.06GHz, MacBook Pro 2.53GHz
    -- Cisco gigabit smart switch capable of jumbo frames
    -- Buffalo Terastation Duo NAS (network attached storage), also capable of Gbit and jumbo frames
    -- All wired up with either cat6 or cat5e.
    -- The sizes of the files I'm moving would include large #s of files at either 15MB (photos), 7MB (music), 1-2GB (video) and 650MB (also video).
    -- jumbo frames have been enabled in the settings of the Macs, the switch and the Buffalo NAS.
    -- I've played with various settings of simultaneous connections (more of a help with smaller files), no real difference
    -- Network utility shows the ethernet set to Gbit, with no errors or collisions.
    -- have tried both ftp and the Finder's drag and drop
    -- also, whenever I'm doing a major move of data, I kick my family off the network, so there is no other traffic that should be interfering.
    Even with all that, I'm still lucky to get transfer speeds of 15-20MB/sec, but more commonly around 10. The other odd thing I've encountered while trying to up my speeds is that I might start out a transfer at maybe 60MB/sec; it will maintain that for about 30-60 sec and then it appears to ramp itself down, sometimes to as low as 1-5MB/sec. I'm starting to think my network is mocking me.
    I also have a dual band (2.4/5GHz) wireless-n router (not jumbo frame capable), but I'm assuming wired is going to trump wireless? (NOTE: in my tests, I have disabled wireless to force the connection through the ethernet.)
    Can anyone help with suggestions, and/or suggest a strong networking reference book with emphasis on mac? I'm willing to invest in additional equipment within reason.
    Thanks in advance!
    juliana

    I'm going to pick and choose to answer just a few of the items you have listed. Hopefully others will address other items.
    • This setup was getting me speeds as high as 10-15MB/sec, and as low as 5-6MB/sec when I was transferring video files around 1-2 GB in size
    I would think a single large file would get the best sustained transfer rates, as you have less create new file overhead on the destination device. It is disturbing that the large files transfer at a slower rate.
    • Would a RAID0 config get me faster write speeds than RAID1? I have another NAS that can do other RAID configs, which is fastest as far as write times?
    RAID0 (Striped) is generally faster, as the I/O is spread across 2 disks.
    RAID1 is mirrored, so you can not free the buffer until the same data is on BOTH disks. The disks are NOT going to be in rotational sync, so at least one of the disks will have to wait longer for the write sectors to move under the write heads.
    But RAID1 gives you redundancy. RAID0 has no redundancy. And you can NOT switch back and forth between the 2 without reformatting your disks, so if you choose RAID0, you do not get redundancy unless you provide your own via a backup device for your NAS.
    • what is the most efficient transfer protocol? ftp? smb? something else? And am I better off invoking the protocol from the terminal, or is the overhead of an app-based client negligible?
    Test the different transfers using a large file (100's of MB or a GB sized file would be good as a test file).
    I've had good file transfers with AFP file sharing, but not knowing anything about your NAS, I do not know if it supports AFP, and if it does, whether it is a good implementation.
    If your NAS supports ssh, then I would try scp instead of ftp. scp is like using cp only it works over the network.
    If your NAS supports rsync, that would be even better, as it has the ability to copy only files that are either NOT on the destination or have changed, leaving the matching files alone.
    This would help in situations where you cannot copy everything all at once.
    But no matter what you choose, you should measure your performance so you choose something that is good enough.
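    For example, a quick way to compare protocols with a stopwatch built in (a sketch; the host name, user and paths are made up):
    # time a single large-file copy over scp
    time scp bigvideo.mov user@nas.local:/share/test/
    # rsync with -a (preserve attributes) and --progress (live throughput);
    # a second run copies only what changed, which is the point of rsync
    rsync -a --progress ~/Photos/ user@nas.local:/share/Photos/
    Run each a few times and keep notes, since a single run can be skewed by caching.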
    • If a client is fine, does anyone have a suggestion as to best one for speed? Doesn't have to be free -- I don't mind supporting good software.
    Again just test what you have.
    • Whats a good number to allow for simultaneous connections, given the number of files and their size?
    If the bottleneck is the NAS, then adding more I/O that will force the disk heads to move away from the current file being written will just slow things down.
    But try 2 connections and measure your performance. If it gets better, then maybe the NAS is not the bottleneck.
    • What question am I not asking?
    You should try using another system as a test destination device in the network setup to see if it gets better, worse, or the same throughput as the NAS. You need to see about changing things in your setup to isolate where the problem might be.
    Also do not rule out bad ethernet cables, so switch them out as well. For example, there was a time I tried to use Gigabit ethernet, but could only get 100BaseT. I even purchased a new gigabit switch, thinking the 1st was just not up to the task. It turned out I had a cheap ethernet cable that only had 4 wires instead of 8 and was not capable of gigabit speeds. An ethernet cable that has a broken wire or connector could exhibit similar performance issues.
    So change anything and everything in your setup, one item at a time, and use the same test each time so you have an apples-to-apples comparison.

  • Jumbo frame ethernet

    I came across a number of articles relating to jumbo frame gigabit ethernet and integrating it into existing networks with FastEthernet and 1518-byte frame size gigabit ethernet devices. Here's a quote I'd like to discuss:
    "Today, however, applications optimized for large frame sizes can easily be integrated with existing Ethernet LANs without causing interoperability problems. For example, you can partition a logical network in which systems can exchange Jumbo Frames and mark them with IEEE 802.1Q virtual LAN tags. The extended frames will be transparent to the rest of the network.
    Adapters that implement IEEE 802.1Q can support different Ethernet frame sizes for different logical network interfaces. For example, a server could communicate with another server using Jumbo Frames while communicating with clients sitting on another VLAN or IP subnet using standard Ethernet frames - all via the same physical connection."
    The use of VLANs to ease interoperability issues is discussed in numerous articles and papers on jumbo ethernet. However, can someone please explain why Cisco made, and what the implications are of, the design decision on the 3750 switches to allow no flexibility in configuring jumbo frame support? It can't be assigned on a port-by-port basis, nor even on a VLAN basis, but is set system-wide, so on a 24-port 10/100/1000 switch all ports are configured as jumbo regardless of what the connected client devices support.
    Are there any plans to upgrade the IOS to support configuring jumbo frames on a per-VLAN basis? Jumbo frames are an important issue to us, as we can benefit from the performance increases and the improved server CPU utilization. Any thoughts?

    Best practice here is to segment your Jumbo Frame servers on their own VLANs for Jumbo supported systems only.
    As a post here has already mentioned, Path MTU discovery will tell the systems on the Jumbo VLANs to keep the frames under 1500 when talking to a non-Jumbo VLAN.
    http://www.cisco.com/univercd/cc/td/doc/product/lan/cat2970/12225see/scg/swint.htm#wp1154596
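    For reference, the system-wide setting being discussed looks like this on the 3750 (a sketch; the value applies to all gigabit ports and only takes effect after a reload):
    conf t
    system mtu jumbo 9000
    end
    copy run start
    reload
    ! after the reload, verify with:
    show system mtu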
    Please rate all helpful posts.
    Brad

  • *****Jumbo frame on collapsed core****

    Hi Folks,
    I have been thinking about and researching this for a while now, and I have not yet gotten a formal answer... Please read carefully...
    We are a medium size company... At both of our remote sites, we have four 3750G (two of them are 3750X) in a stack. All good there; the nightmare is that we have every single thing (PCs, printers, phones, IP cams, servers running ESXi, SAN (StorageFlex)) connected to the stack. Therefore, the stack switches are acting as core, distribution and access layer at the same time.
    I need to enable jumbo frames to speed up the iSCSI traffic between the SAN and the ESXi hosts, knowing I can only enable jumbo frames globally. Since I have all these devices connected to the stack which don't support jumbo, should I go ahead and enable jumbo on the stack? Will the devices which don't support jumbo frames continue to work? Or, since I know interfaces at 100Mb and less will ignore the jumbo setting, should I set every device for which I don't need jumbo frames to 100Mb?
    Any help and suggestion will be greatly appreciate.
    Thanks,

    A device that doesn't support jumbo, on a port that does, will work fine as long as another device doesn't send it a jumbo frame. If that happens, the device will be unable to process the received jumbo frame. (I.e., a jumbo-enabled switch can allow an MTU mismatch between hosts.)
    (If you're thinking about IP fragmentation, that will only happen across a L3 hop, and it often creates additional performance issues.)
    BTW, on the 3750 series, data transfer performance problems are often caused by the default 3750 buffer allocations. Allowing jumbo doesn't address that. I.e., you might obtain much better data transfer performance via buffer tuning.

  • Jumbo frame stuck at 8165 bytes or larger

    Model: Macbook Pro Retina 15" Mid 2012, OS X 10.9
    NIC: thunderbolt Ethernet
    Issue:
    I enabled jumbo frames in Network > Advanced > Hardware > Manually > MTU 9000,
    and then I tried the ping command in Terminal to verify the functionality.
    The following 192.168.16.13 is another LAN client.
    1.
    XXX$ ping -D -s 8100 192.168.16.13
    PING 192.168.16.13 (192.168.16.13): 8100 data bytes
    8108 bytes from 192.168.16.13: icmp_seq=0 ttl=128 time=1.255 ms
    8108 bytes from 192.168.16.13: icmp_seq=1 ttl=128 time=1.014 ms
    8108 bytes from 192.168.16.13: icmp_seq=2 ttl=128 time=0.953 ms
    ^C
    works fine
    2.
    XXX$ ping -D -s 8200 192.168.16.13
    PING 192.168.16.13 (192.168.16.13): 8200 data bytes
    ping: sendto: Message too long
    ping: sendto: Message too long
    Request timeout for icmp_seq 0
    ^C
    doesn't work!
    So I tried some more pings to find out the critical size on my Mac, and it turned out that when the ping payload is 8165 bytes or larger, the ping fails:
    1.
    XXX$ ping -D -s 8164 192.168.16.13
    PING 192.168.16.13 (192.168.16.13): 8164 data bytes
    8172 bytes from 192.168.16.13: icmp_seq=0 ttl=128 time=0.901 ms
    8172 bytes from 192.168.16.13: icmp_seq=1 ttl=128 time=0.974 ms
    ^C
    --- 192.168.16.13 ping statistics ---
    2 packets transmitted, 2 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 0.901/0.938/0.974/0.036 ms
    XXX$ ping -D -s 8165 192.168.16.13
    PING 192.168.16.13 (192.168.16.13): 8165 data bytes
    ping: sendto: Message too long
    ping: sendto: Message too long
    Request timeout for icmp_seq 0
    ^C
    To figure out whether this issue is caused by my Mac or by the responder (another LAN client), I launched Wireshark to sniff packets.
    Wireshark showed that when the ping size is 8165 or larger, my Mac actually did NOT send out the ICMP packets.
    So now it seems the issue is caused by my Mac, since it didn't send out the packets...
    Does anyone have any related experience?
    Thanks for any kind of feedback in advance!

    Hi rack0 tack0,
    Thanks! And yes, I did read this article before.
    As the article suggests, the max ICMP size supported by Mac OS X is 8184, but I got stuck at 8164.
    20 bytes is not a big issue, but I just want to figure out what's making it wrong.
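    One hedged guess about that 20-byte gap: on Mac OS X the "Message too long" error from a raw ICMP socket is governed by the sysctl net.inet.raw.maxdgram, which defaults to 8192; 8192 minus a 20-byte IP header and an 8-byte ICMP header is exactly 8164, while the article's 8184 figure would be 8192 minus the ICMP header alone. If that is the cause, raising the sysctl should move the ceiling (a sketch, reusing the same LAN client as the target):
    sysctl net.inet.raw.maxdgram              # inspect the current limit
    sudo sysctl -w net.inet.raw.maxdgram=16384
    ping -D -s 8972 192.168.16.13             # retry with the full 9000-MTU payload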
    Thanks again!
    Any other suggestions?

  • Catalyst 3750 and jumbo frames

    We're looking to implement a gigabit segment with a 3750 switch, with the latest Apple iMac G5 clients connected, and an Xserve G5 connected doing link aggregation using a 4-port Small Tree NIC.
    Although the Xserve supports jumbo frames, I believe the iMac NICs DON'T support jumbo frames although the operating system does (the iMac NICs DO support 1000T). Ideally we'd want the 3750 switch to be configured for jumbo frames. The 3750 switch we've chosen has all ports at 10/100/1000T with the SMI, so all ports will have the MTU set at 9000 if we enable jumbo.
    Although the Xserve will be fine, I'm worried about traffic that ingresses from the Xserve and egresses out a 10/100/1000 port to which an iMac is connected, which I believe does not support jumbo frames. What are the issues in terms of connectivity and dropped packets for an iMac G5 connected to a 3750, seeing as the MTU is set globally, all our ports are gigabit, and machines will be connected to these ports that don't support jumbo but are advertised as having 'gigabit capability'?
    Sorry if this sounds like an incoherent rant, but I needed to provide as much info as possible. Help much appreciated.

    Just to add, in comparison HP gigabit switches can do jumbo frames on a per-VLAN and per-port basis; it's a shame the 3750 can't do that.

  • Jumbo frame

    We enabled jumbo frames (MTU 8500) on our FWSM interface; after that, the IP phones stopped working normally (they go to backup mode and some of them keep reloading).
    The IP phones communicate with a CCM which sits behind this FWSM.
    After returning the MTU to 1500, they work properly.
    Has anyone faced this issue?
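    For context, the MTU on the FWSM is set per named interface; a sketch of the change and the revert described above, assuming the interface is named inside:
    FWSM(config)# mtu inside 8500
    ! ...phones misbehave; roll back:
    FWSM(config)# mtu inside 1500
    FWSM(config)# show running-config mtu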

    Hey Lorenzo,
    Please take a look at the following posting.
    https://supportforums.cisco.com/discussion/12029186/nexus-7000-configuring-l2-jumbomtu
    Hope this helps,
    Please remember to rate helpful posts.
    Thanks.

  • LwIP Socket Jumbo Frame

    Running a VC707, the HW design is based on the BIST project.  I updated the TEMAC to have 32k FIFO buffers.  SDK 2014.2.
    For my SDK application I have 2 designs.
     1.  Socket based echo server from XAP1026
     2.  Default echo server example project
    Following XAP1026 for configuring LwIP for jumbo frames, I can get my RAW API design to send jumbo frames (8192-byte packets).
    My socket echo server still only sends 1500-byte packets with the same configuration (the only difference is socket vs. RAW mode).
    Has anyone overcome this issue before? Or does anyone have information on configuring jumbo frames for LwIP in socket mode?
    Found this older post that seems similar yet different, but it is about 3 years old so I'm not sure if it is still accurate:
    http://forums.xilinx.com/t5/Embedded-Development-Tools/Trying-to-send-larger-packets-using-UDP-XAPP1026-UECHO-test/m-p/16237?query.id=22717
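    One hedged thought on the socket-mode result: a TCP sender caps its segment size at the minimum of its own MSS and the MSS the peer advertises in the handshake, so if the host on the other end of the echo test is still at a 1500-byte MTU, segments will be limited to roughly 1460 bytes of payload no matter what tcp_mss is set to locally. A quick back-of-the-envelope check against the observation above:
    effective MSS = min(local tcp_mss = 8060, peer-advertised MSS = 1460) = 1460
    1460 + 20 (IP) + 20 (TCP) = 1500-byte packets, which matches what was seen
    If the peer already has jumbo frames enabled, this guess is wrong and the limit is somewhere in the socket-mode lwIP path instead.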
     

    Xilkernel Configuration
    BEGIN OS
    PARAMETER OS_NAME = xilkernel
    PARAMETER OS_VER = 6.1
    PARAMETER PROC_INSTANCE = microblaze_0
    PARAMETER config_named_sema = true
    PARAMETER config_pthread_mutex = true
    PARAMETER config_sema = true
    PARAMETER config_time = true
    PARAMETER debug_mon = true
    PARAMETER max_bufs = 100
    PARAMETER max_pthreads = 100
    PARAMETER max_readyq = 100
    PARAMETER max_sem = 25
    PARAMETER max_sem_waitq = 100
    PARAMETER pthread_stack_size = 32768
    PARAMETER stdin = axi_uart16550_0
    PARAMETER stdout = axi_uart16550_0
    PARAMETER sysintc_spec = microblaze_0_axi_intc
    PARAMETER systmr_dev = axi_timer_0
    PARAMETER systmr_interval = 1
    PARAMETER verbose = true
    END
    LwIP Configuration
    BEGIN LIBRARY
    PARAMETER LIBRARY_NAME = lwip140
    PARAMETER LIBRARY_VER = 2.1
    PARAMETER PROC_INSTANCE = microblaze_0
    PARAMETER api_mode = SOCKET_API
    PARAMETER ip_frag_max_mtu = 9000
    PARAMETER mem_size = 524288
    PARAMETER pbuf_pool_bufsize = 9700
    PARAMETER tcp_mss = 8060
    PARAMETER tcp_snd_buf = 32768
    PARAMETER temac_use_jumbo_frames = true
    END
     

  • Airport Utility fails with Jumbo Frames enabled.

    I have an iMac running 10.6.4 and Airport Utility version 5.5.1 (551.19). A few weeks ago I enabled jumbo frames for some wired gigabit testing and forgot to change back to 1500 MTU. When I tried checking an Airport setting with the utility, it discovers the Airport Express (non-GigE) but cannot retrieve the configuration. I manage the Airport via ethernet. Changing back to 1500 MTU fixes the issue. Is this expected? The odd thing is that printing works fine to the printer attached to the Airport, even at 9000 MTU.


  • Linksys SE2800 and jumbo frames

    Does the Linksys SE2800 gigabit 8-port switch support jumbo frames? Anyone have this switch? Any issues? Looking to replace a Netgear gigabit switch that likes to forget that it has gigabit machines connected to it.

    Hi Michael,
    Actually, I had a chat with a colleague at Linksys regarding your question, but he referred me to a datasheet, which left me with the question I started with. The technician said yes, it supported jumbo frames, but he could show me nothing in black and white.
    Why not look at the Cisco Small Business unmanaged product, the SG100D-08? As the datasheet suggests, it offers:
    Peace of mind:
    All Cisco 100 Series switches are protected for the life of the product by the Cisco Limited Lifetime Hardware Warranty
    Also, even though it is an unmanaged product, this series supports such features as:
    1. Green Energy—Efficient Technology
    The Cisco SG 100D-08 switch supports Green Energy-efficient
    Technology. It can enter sleep mode, turn off unused ports, and adjust
    power as needed. This increases energy efficiency to help businesses use
    less power and save money.
    2. Jumbo Frame Support
    The Cisco SG 100D-08 switch supports frames up to 9,000 bytes called
    jumbo frames. Jumbo Frame support improves network throughput and
    reduces CPU utilization during large file transfers, such as multimedia files,
    by allowing larger payloads in each packet.
    regards Dave

  • Dual nic NAS and Jumbo Frame

    I am posting this on the server area because I doubt I am going to get an answer anywhere else.
    I have a linux based NAS running netatalk and avahi (AFP server and Bonjour) with two NICs, and I have a brand new Mac Pro with two NICs. What I want to do is run a crossover cable between the NAS and the Mac Pro in addition to both being plugged into the normal network. The normal network would have a 1500-byte MTU so my internet performance and all of the various vintages of print servers work OK. The dedicated network would have jumbo frames. As we get more Mac Pros, we would add a switch and more machines to this secondary jumbo frame network.
    That in theory should work fine (I have done it with other operating systems). My quandary is how to get the Mac to always connect to the NAS via the jumbo NIC and not through the other NIC. The Mac learns of the server via Bonjour, so how do I tell it to prefer the "appearance" of the server on the jumbo NIC vs. the appearance on the normal network? I know with WINS or DNS I can override the name resolution with an LMHOSTS or hosts file entry; can I do the same with Bonjour?
    Thanks for any help or any pointers in the right direction!

    I think you are misguided in your assumption that I am not intimately familiar with TCP and don't know what I am talking about.
    TCP does not "negotiate" MSS; it advertises the MSS of each side to the remote in the 3-way handshake. It is perfectly acceptable to have asymmetric MSS values. TCP does NOT NEGOTIATE a common MSS size. On a LAN, this will result in functional communication. UDP, however, does not have such mechanisms and will fail.
    TCP will also not function properly in the scenario of my local workstation with jumbos enabled communicating with a distant endpoint that also has jumbos enabled, across a transit network that does not support the maximum MSS used by one of the end stations. For giggles, let's say the far end is FDDI and has a 4k frame size, while our transit does not support frame sizes larger than the "natural" frame size of 576 bytes. We will use a 4k frame size from me to the remote and a 9k from the remote to me. If the remote sends to me, it can use its full 4k MSS because it's less than my MSS. In the reverse, my workstation would send 4k frames back to the FDDI station. Successful communication would then depend on path MTU discovery and intermediary routers sending ICMP type 3 code 4 messages to signal back to our end stations to reduce our MSS (assuming the DF bit is set on our traffic or the transit router is incapable of fragmentation).
    This is perhaps a bit of a flippant example in that nobody would be running FDDI or Token Ring anymore, but random entities on the internet will run jumbo frames and perhaps some other L2 technology we aren't familiar with.
    Did you ever deal with someone on a token ring segment trying to hit 3Com's web site when it was on FDDI or token ring? I have on several occasions. I also see this with VPNs all the time. Cisco's genius recommendation is to reduce the MSS on your server, as some of their products don't support PMTU. I have had a Cisco <-> Juniper VPN where transfers worked in one direction only, because the Juniper would silently strip the DF bit from the packet and fragment it, while the Cisco router (38xx) wouldn't do the same in the reverse direction. I also went through **** with the Nortel Contivity VPN devices while they sorted out what to do with the whole MTU negotiation issue.
    I have spent many hours of my life poring over sniffer captures because of mismatched MTUs. Let's not forget the old days of FDDI backbones with ethernet segments bridged across them and FDDI-attached servers... mismatched buffers... no thanks.
    I therefore don't want to waste my time troubleshooting some bizarre networking issue when there is a perfectly valid way of solving the problem for absolutely minimal expense. I am moving large files here (certainly large enough to get well out of TCP slow start); we easily saturate the full gig link for minutes at a time, and a saturated gigabit link at standard frame size is inefficient due to the inter-packet gap, which is locked at 96 bit times for ethernet, plus the 40 bytes of TCP/IP header and whatever application-layer header is prepended per packet. Cutting the number of TCP/IP headers and (probably more importantly, since most decent NICs do checksum offload these days) application layer headers also reduces load on both client and server.
    On large sequential bulk data transfers, jumbo frames effectively increase performance and reduce overhead. Period. I have implemented it from the early days of Alteon hardware in Sun servers through Juniper EX products last week. Every iSCSI implementation I run into is jumbo frame based for those exact reasons.
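    A rough worked version of that overhead claim (assuming standard Ethernet framing: 38 bytes of header, FCS, preamble and inter-packet gap per frame, plus the 40-byte TCP/IP header inside):
    1500-byte MTU: 1460 payload / 1538 on the wire ≈ 94.9% efficient
    9000-byte MTU: 8960 payload / 9038 on the wire ≈ 99.1% efficient
    and roughly 6x fewer frames, so 6x fewer per-packet headers for both ends to process.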
    That being said, I don't need to restrict anything. All I want to do is override Bonjour/mDNS for this particular host such that the Pro always communicates over the jumbo segment. This is easily accomplished in Windows with an LMHOSTS entry or in a unix environment with a hosts file entry. Is there some way to override Bonjour from the client side? I'm OK even with statically defining the services presented by Bonjour on this host.
    I am also willing to force all Bonjour requests through a DNS server; however, Apple doesn't have any decent documentation on how this is accomplished in an enterprise environment.
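    A minimal sketch of the hosts-file half of that idea on the Mac side (the address, alias and share name are made up; note that .local names are resolved via mDNS, so using a non-.local alias avoids fighting Bonjour entirely):
    # /etc/hosts on the Mac Pro: pin an alias to the NAS's jumbo-segment address
    10.0.0.2    nas-jumbo
    # then mount by that alias instead of the Bonjour-browsed name (Finder: Cmd-K)
    open 'afp://nas-jumbo/share'
    Machines discovered in the Finder sidebar would still come via Bonjour, so the habit change is connecting by the alias.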

  • T61 and Intel 82566mm Jumbo Frame Capable?

    I'm trying to achieve near-100MB/sec performance with my NAS but cannot seem to get past 15MB/sec with 25MB/sec bursts on my gigabit LAN. Do I need to enable jumbo frames for this to work? If so, is the Intel 82566 gigabit NIC even capable of jumbo frames, or is this an OS-dependent issue? In addition, does my switch also need jumbo frame support, or is it built into gigabit? And lastly, how would this affect my 802.11n clients connecting to my server - are jumbo frames something 802.11n clients are capable of?
    T61_Wide | Model No. 7662 - CTO
    Core 2 Duo T7250 | 2GB OCZ DDR2-800
    82566MM Gigabit | 4965AGN Centrino Pro

    Check the BIOS configuration and see if INTERNAL LAN is enabled or disabled. I had this happen when my wireless and LAN were not working; I fixed it in the BIOS.
    T7600, T60p - 2GB - 2.33GHZ - 100GB
