TCP Segmentation Offload and Jumbo Frames.

I have OpenSolaris running as a VMware guest with a VMXNET3 network card.
Can I enable TCP segmentation offload and jumbo frames on it?
#:> ifconfig vmxnet3s mtu 5000
ifconfig: setifmtu: SIOCSLIFMTU: vmxnet3s0: Invalid argument
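For what it's worth, an "Invalid argument" from SIOCSLIFMTU usually means the link itself is still capped at 1500, so the MTU has to be raised at the link/driver layer before ifconfig will accept a larger value. A hedged sketch for a Crossbow-era OpenSolaris build (whether the mtu link property is writable depends on the driver; 9000 is only an example):
#:> dladm show-linkprop -p mtu vmxnet3s0      # check the current and allowed MTU for the link
#:> dladm set-linkprop -p mtu=9000 vmxnet3s0  # raise the link MTU first, if the driver allows it
#:> ifconfig vmxnet3s0 mtu 9000               # then set the interface MTU
If the driver still refuses, the cap normally has to be lifted in its .conf file under /kernel/drv before the link property can be changed.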

Similar Messages

  • TCP segmentation offload (TSO) and vmxnet3/1000v - bug?

    NOTE: My knowledge of ESX and the 1000v does not run deep so I do not have a thorough understanding of the relationship / integration between those two components.  If anything I'm saying here is out-of-line I apologize in advance.
    Yesterday a report came in that an IIS application on a staging server in our test environment was not working (in Internet Explorer it returned "Page cannot be displayed").  The IIS server sits behind an F5 load balancer.  Both the F5 and the IIS server are VM guests on a VMware ESX host.  Both the IIS server and the F5 had recently been moved to a new environment in which the version of 1000v changed and the vnic driver changed (from E1000 to vmxnet3) and this appeared to be the first time that anybody noticed an issue.
    After some digging we noticed something peculiar.  The problem only manifested when the IIS server and the F5 were on the same physical host.  If they were not co-resident everything worked just fine.  After reviewing packet captures we figured out that the "Page cannot be displayed" was not because the content wasn't making it from the IIS server to the client, but rather because the content was gzip compressed and something happened in transit between the IIS server and the client that corrupted the payload, making the gzip impossible to decompress.  As a matter of fact, at no time was IP connectivity ever an issue.  We could RDP to the IIS server and SSH/HTTP into the F5 without any issues.
    I started digging in a little deeper with Wireshark (details of which are included as an attached PDF).  It turns out that a bug??? involving TCP segmentation offload (TSO) was causing the payload of the communication to become corrupted.  What I'm still trying to figure out is who is responsible for the bug?  Is it VMware or Cisco?  I'm leaning towards Cisco and the 1000v and this is why.
    Referring to the attached PDF, TEST #2 (hosts co-resident) and TEST #3 (hosts not co-resident) show packet captures taken from the IIS box, the 1000v and the F5.  Figure 6 shows that the contents of the gzip'd payload can be deciphered by Wireshark as it leaves the IIS box.  Figure 8 shows capture data from the 1000v's perspective (spanning rx/tx on the F5 veth port).  It's still good at this point.  However, figure 10 shows capture data taken on the F5.  At some point between leaving the egress port on the 1000v and entering the F5 the payload becomes corrupt and can no longer be decompressed.  There is no mention that the TCP checksum failed.  In my mind the only way that the data could be corrupt without a TCP checksum failure is if the corruption occurred during the segmentation of the packet.  However, if it was due to the guest OS-level vnic driver then why did it still look good to the 1000v egress towards the F5?
    The most curious aspect of this whole thing is the behavior I described earlier related to onbox vs. offbox.  This problem only occurs when the traffic is switched in memory.  Refer to figures 11-16 for capture data that shows the very same test when the F5 and IIS are not co-resident.  Is the 1000v (or vnic) savvy enough to skip TSO in software and allow the physical NIC to do TSO if it knows that the traffic is going to have to leave the host and go onto the physical wire?  That's the only way I can make sense of this difference in behavior.
    In any case, here are all of the guest OS-level settings related to offload of any type (along with the defaults) and the one we had to change, noted below, to get this to work with the vmxnet3 NIC:
    IPv4 Checksum Offload: Rx & Tx Enabled
    IPv4 TSO Offload: From Enabled to Disabled
    Large Send Offload V2 (IPv4): Enabled
    Offload IP Options: Enabled
    Offload TCP Options: Enabled
    TCP Checksum Offload (IPv4): Rx & Tx Enabled
    UDP Checksum Offload (IPv4): Rx & Tx Enabled
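    For anyone who hits the same thing, the equivalent command-line toggles look roughly like this (a hedged sketch; the adapter name "Ethernet0" and interface "eth0" are placeholders, and the Windows cmdlet only exists on Server 2012/Windows 8 and later, so on older guests the same "IPv4 TSO Offload" entry is changed on the adapter's Advanced tab in Device Manager):
    # Linux guest: turn TCP segmentation offload off on the vmxnet3 interface
    ethtool -K eth0 tso off
    ethtool -k eth0 | grep tcp-segmentation-offload    # verify it now reports "off"
    # Newer Windows guest (PowerShell): flip the same vmxnet3 advanced property
    Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "IPv4 TSO Offload" -DisplayValue "Disabled"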

    Hi sdavids5670
    Did you ever find a proper fix to your issue? Was updating the N1Kv a solution?
    I have exactly the same symptoms with a N1Kv [4.2(1)SV2(1.1a)], F5 (ASM) and vmxnet3 guests, however I'm seeing failures for all RDP (Win2K8R2) and large SSH packets (Red Hat); the rest of my traffic appears fine. This only occurs when the F5 resides on the same VEM or VMware host, as you have seen. My packet captures are similar.
    My workaround is twofold: first, create rules to isolate the F5 onto hosts where guests are not utilising it, and second, disable TCP offloading (I use IPv4 only). Neither of these is a solution.
    I have not tried a non-F5 trunk (i.e., perhaps a CSR 1000v) to replicate this without the F5.
    I suspected that the onbox/offbox issue was something specific about the logic of the VEM installed on the host (that's how I justified it to myself) rather than VEM-to-VEM traffic. It appears that only vEth-to-vEth traffic on the same VEM is the issue. Also, I can only replicate this when one of the vEth ports is a trunk. I have yet to be able to replicate this if both are access port-groups (vEths).
    I have yet to log a TAC case, as I wanted to perform testing to exclude the F5 first.
    Thought that I would just ask....
    Cheers
    David

  • Dual nic NAS and Jumbo Frame

    I am posting this on the server area because I doubt I am going to get an answer anywhere else.
    I have a Linux-based NAS running netatalk and avahi (AFP server and Bonjour) with two NICs, and I have a brand new Mac Pro with two NICs. What I want to do is run a crossover cable between the NAS and the Mac Pro in addition to both being plugged into the normal network. The normal network would have a 1500-byte MTU so my internet performance and all of the various vintages of print servers work OK. The dedicated network would have jumbo frames. As we get more Mac Pros, we would add a switch and more machines to this secondary jumbo frame network.
    That in theory should work fine (I have done it with other operating systems). My quandary is how to get the Mac to always connect to the NAS via the jumbo NIC and not through the other NIC. The Mac learns of the server via Bonjour, so how do I tell it to prefer the "appearance" of the server on the jumbo NIC vs the appearance on the normal network? I know with WINS or DNS I can override the name resolution with an LMHOSTS or hosts file entry; can I do the same with Bonjour?
    Thanks for any help or any pointers in the right direction!

    I think you are misguided in your assumption that I am not intimately familiar with TCP and don't know what I am talking about.
    TCP does not "negotiate" MSS; each side advertises its own MSS to the remote in the three-way handshake. It is perfectly acceptable to have asymmetric MSS values. TCP does NOT NEGOTIATE a common MSS size. On a LAN, this will result in functional communication. UDP, however, has no such mechanism and will fail.
    TCP will also not function properly in the scenario of my local workstation with jumbos enabled communicating with a distant endpoint that also has a large MTU, across a transit network that does not support the maximum segment size used by the end stations. For giggles, let's say the far end is FDDI with a 4k frame size and the transit does not support frames larger than the "natural" size of 576 bytes. I advertise roughly a 9k MSS and the remote advertises roughly 4k, so both directions end up using 4k segments: the remote can use its full 4k because that is less than my advertised MSS, and my workstation sends 4k segments back because that is the largest the remote advertised. Successful communication would then depend on path MTU discovery and intermediary routers sending ICMP type 3 code 4 messages to signal our end stations to reduce their segment size (assuming the DF bit is set on our traffic; otherwise the transit routers have to fragment).
    This is perhaps a bit of a flippant example in that nobody runs FDDI or Token Ring anymore, but random entities on the internet will run jumbo frames or perhaps some other L2 technology we aren't familiar with.
    Did you ever deal with someone on a Token Ring segment trying to hit 3Com's web site back when it sat on FDDI or Token Ring? I have, on several occasions. I also see this with VPNs all the time. Cisco's genius recommendation is to reduce the MSS on your server, since some of their products don't support PMTU. I have had a Cisco <-> Juniper VPN where transfers worked one way because the Juniper would silently strip the DF bit from the packet and fragment it, and the Cisco router (38xx) wouldn't do the same in the reverse direction. I also went through **** with the Nortel Contivity VPN devices while they sorted out what to do with the whole MTU negotiation issue.
    I have spent many hours of my life poring over sniffer captures because of mismatched MTUs. Let's not forget the old days of FDDI backbones with Ethernet segments bridged across them and FDDI-attached servers... mismatched buffers... no thanks.
    I therefore don't want to waste my time troubleshooting some bizarre networking issue when there is a perfectly valid way of solving the problem for absolutely minimal expense. I am moving large files here (certainly large enough to get well out of TCP slow start), we easily saturate the full gig link for minutes at a time, and a saturated gigabit link at standard frame size is inefficient because of the interpacket gap, which is locked at 96 bit times for Ethernet, and the 40 bytes of TCP/IP header plus whatever application-layer header is prepended per packet. Cutting the number of TCP/IP headers and (probably more importantly, since most decent NICs do checksum offload these days) application-layer headers also reduces load on both client and server.
    On large sequential bulk data transfers jumbo frame effectively increases performance and reduces overhead. Period. I have implemented it from the early days of Alteon hardware in Sun servers through Juniper EX products last week. Every iSCSI implementation I run into is jumbo frame based for those exact reasons.
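    To put rough numbers on that overhead argument, here is a back-of-the-envelope sketch counting 12 bytes of interframe gap, 8 of preamble, 18 of Ethernet framing and 40 of TCP/IP headers per frame:
    echo "scale=3; 1460 / (1460 + 78)" | bc    # ~0.949: TCP payload efficiency at a 1500-byte MTU
    echo "scale=3; 8960 / (8960 + 78)" | bc    # ~0.991: TCP payload efficiency at a 9000-byte MTU
    The per-frame saving looks small, but at gigabit line rate it is the difference between roughly 81,000 and 13,800 frames (and their headers and interrupts) per second.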
    That being said, I don't need to restrict anything. All I want to do is override Bonjour/mDNS for this particular host such that the Pro always communicates over the jumbo segment. This is easily accomplished in Windows with an LMHOSTS entry or in a Unix environment with a hosts file entry. Is there some way to override Bonjour from the client side? I'm OK with even statically defining the services presented by Bonjour on this host.
    I am also willing to force all bonjour requests through a DNS server, however Apple doesn't have any decent documentation on how this is accomplished in an enterprise environment.
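    (In the meantime, a low-tech workaround in the spirit of the hosts-file idea, with a made-up address and share name: bring the dedicated NIC up with jumbo frames and mount the share by its jumbo-segment IP so Bonjour never gets a vote.)
    sudo ifconfig en1 mtu 9000           # the Mac Pro's dedicated NIC on the jumbo segment, if the NIC supports it
    open "afp://192.168.50.10/Share"     # connect straight to the NAS's jumbo-segment address, bypassing Bonjour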

  • Catalyst 3750 and jumbo frames

    We're looking to implement a gigabit segment with a 3750 switch, with the latest Apple iMac G5 clients connected and an Xserve G5 doing link aggregation using a 4-port SmallTree NIC.
    Although the Xserve supports jumbo frames, I believe the iMac NICs DON'T support jumbo frames although the operating system does (the iMac NICs DO support 1000T). Ideally we'd want the 3750 switch to be configured for jumbo frames. The 3750 switch we've chosen has all 10/100/1000T ports with the SMI, so all ports will have the MTU set at 9000 if we enable jumbo.
    Although the Xserve will be fine, I'm worried about traffic that ingresses from the Xserve and egresses out to a 10/100/1000 port to which an iMac is connected, which I believe does not support jumbo frames. What are the issues in terms of connectivity and dropped packets for an iMac G5 connected to a 3750, seeing as the MTU is set globally, all our ports are gigabit, and machines will be connected to these ports that don't support jumbo but are advertised as having 'gigabit capability'?
    Sorry if this sounds like an incoherent rant, but I needed to provide as much info as possible. Help much appreciated.

    Just to add: in comparison, HP gigabit switches can do jumbo frames on a per-VLAN and per-port basis; it's a shame the 3750 can't do that.
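    For reference, on the 3750 the jumbo MTU really is a single global knob that only takes effect after a reload, rather than something you can set per port or per VLAN; a hedged sketch:
    Switch(config)# system mtu jumbo 9000
    Switch(config)# end
    Switch# show system mtu
    (the setting applies to all Gigabit ports after the next reload; "show system mtu" shows what is currently in effect)
    Hosts that stay at a 1500-byte MTU generally keep working alongside it, since a standard host never generates oversized frames and TCP's MSS advertisement keeps jumbo-capable senders from sending it frames it can't receive.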

  • FTTH connection proper MTU Size and Jumbo frames

    I've recently moved to an ISP that provides a 4 Mbps connection through FTTH (single OFC). There is an EPON ONU on my premises, from which an RJ-45 LAN cable is connected to my Intel DH67CL1 board based PC. The manual says the NIC is a gigabit Ethernet card. I tried setting an MTU of 8996 and I can ping and browse fine. But I'm totally in the dark about whether this value is optimal and will work flawlessly for browsing sites. How do I find and set the proper MTU for a fibre network like this? Is the value correct?
    I tried decreasing the MTU value like this:
    ifconfig eth0 mtu 8997
    SIOCSIFMTU: Invalid argument
    then,
    ifconfig eth0 mtu 8996
    ^^^ No error message; it seems to accept it.
    BTW, from the Arch wiki I saw that the driver module used by the NIC (e1000e, which is used here) has some bug reports filed with respect to jumbo frames. Am I doing things correctly? Earlier the MTU was at the default 1500. Please guide. Thank you.
    Some drivers will prevent lower C-states
    Some kernel drivers, like e1000e will prevent the CPU from entering C-states under C3 with non-standard MTU sizes by design. See bugzilla #77361 for comments by the developers.
    https://wiki.archlinux.org/index.php/Ju … mbo_frames
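    (A common way to sanity-check the largest MTU a path actually carries is to probe with the DF bit set and step the payload size; the host and sizes below are only placeholders.)
    ping -M do -s 1472 www.example.com    # 1472 bytes of payload + 28 bytes of headers = a 1500-byte packet
    ping -M do -s 8968 192.168.1.1        # a jumbo-sized probe toward a LAN host (8968 + 28 = 8996)
    sudo ip link set dev eth0 mtu 1500    # then pin the interface to the largest size that works end to end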

    Yeah, I actually talked to support and they told me the same thing. Just another example of misleading information from Linksys, as here is what the manual and the help page say:
    MTU
    MTU is the Maximum Transmission Unit. It specifies the largest packet size permitted for Internet transmission. Select Manual if you want to manually enter the largest packet size that will be transmitted. The recommended size, entered in the Size field, is 1500. You should leave this value in the 1200 to 1500 range. To have the Router select the best MTU for your Internet connection, keep the default setting, Auto.
    Nowhere in that description does it say that 1500 is the maximum.
    Because this is also a gigabit switch, one would expect that jumbo frame support is not out of the realm of possibility. As a point of reference, any other $50 (or less) gigabit switch supports this, but that's what I get for expecting too much from Linksys.
    thanks for the info.

  • Linksys SE2800 and jumbo frames

    Does the Linksys SE2800 gigabit 8 port switch support jumbo frames?  Anyone have this switch?  Any issues?  Looking to replace a netgear gigabit switch that likes to forget that it has gigabit machines connected to it.

    Hi Michael,
    I actually had a chat with a colleague at Linksys regarding your question, but he referred me to a datasheet, which left me with the question I started with. The technician said yes, it supported jumbo frames, but he could post me nothing in black and white...
    Why not look at the Cisco Small Business unmanaged product, the SG100D-08? As the datasheet suggests, it offers:
    Peace of mind:
    All Cisco 100 Series switches are protected for the life of the product by the Cisco Limited Lifetime Hardware Warranty
    Also, even though it is an unmanaged product, this series supports such features as:
    1. Green Energy-Efficient Technology
    The Cisco SG 100D-08 switch supports Green Energy-Efficient Technology. It can enter sleep mode, turn off unused ports, and adjust power as needed. This increases energy efficiency to help businesses use less power and save money.
    2. Jumbo Frame Support
    The Cisco SG 100D-08 switch supports frames up to 9,000 bytes, called jumbo frames. Jumbo frame support improves network throughput and reduces CPU utilization during large file transfers, such as multimedia files, by allowing larger payloads in each packet.
    Regards, Dave

  • SRW2048 and Jumbo frames

    I have a Cisco SRW2048 (firmware 1.2.2d, boot 1.0.0.05) on which I have enabled jumbo frames, yet when I attempt, from a Windows 2003 server, to ping the mgmt IP of this switch with a payload of 1472 bytes or larger (ping -f -l 1472 w.x.y.z), I get a response of "Packet needs to be fragmented".  The server is connected to a Netgear gig switch (GS724AT) that has jumbo frames enabled and working, as I can talk to other devices using jumbo frames.  Attempts to access the SRW2048 switch fail on packets larger than 1472.  I have also attempted a Windows 7 box using jumbo frames with no success.  The Windows 7 box works when connected to the Netgear switch, but when I move the connection back to the SRW2048, jumbo frames on the Windows box stop working.
    Any help or suggestions would be appreciated.
    Thanks.
    David
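    (For reference, the Windows ping -l size excludes the 28 bytes of IP/ICMP headers, so a hedged end-to-end jumbo test between two jumbo-enabled hosts looks roughly like the following; the address is a placeholder.)
    ping -f -l 1472 192.168.1.20
    (1472 bytes of payload + 28 bytes of headers = a 1500-byte packet, which should work everywhere)
    ping -f -l 8972 192.168.1.20
    (8972 + 28 = 9000 bytes, which should only get through if every device in the path has jumbo frames enabled; note that a switch's management interface may stay at 1500 even when its data ports forward jumbo frames, so a host-to-host test is more telling than pinging the mgmt IP)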

    On the SF-300 and SG-300, for jumbo frames to work, the Enable Jumbo checkbox must be unchecked. Either I don't understand Cisco logic or it's a bug in the GUI.

  • How do I maximize LAN speeds using Gigabit Ethernet, jumbo frames?

    I move a lot of large files (RAW photos, music and video) around my internal network, and I'm trying to squeeze out the fastest transfer speeds possible. My question has to do both with decisions about hardware and what settings to use once it's all hooked up.
    This is what I have so far:
    -- imac 3.06GHz, macbook pro 2.53GHz
    -- Cisco gigabit smart switch capable of jumbo frames
    -- Buffalo Terastation Duo NAS (network attached storage), also capable of Gbit and jumbo frames
    -- All wired up with either Cat 6 or Cat 5e.
    -- The sizes of the files I'm moving would include large #s of files at either 15MB (photos), 7MB (music), 1-2GB (video) and 650MB (also video).
    -- jumbo frames have been enabled in the settings of the macs, the switch and the buffalo HD.
    -- I've played with various settings of simultaneous connections (more of a help with smaller files), no real difference
    -- Network utility shows the ethernet set to Gbit, with no errors or collisions.
    -- have tried both FTP and the Finder's drag and drop
    -- also, whenever I'm doing a major move of data, I kick my family off the network, so there is no other traffic that should be interfering.
    Even with all that, I'm still lucky to get transfer speeds of 15-20mbps, but more commonly around 10. The other odd thing I've encountered while trying to up my speeds is that I might start out a transfer at maybe 60mbps; it will maintain that for about 30-60 sec and then appears to ramp itself down, sometimes to as low as 1-5mbps. I'm starting to think my network is mocking me.
    I also have a dual band (2.4/5) wireless n router (not jumbo frame capable), but I'm assuming wired is going to trump wireless? (NOTE: in my tests, I have disabled wireless to force the connection through the ethernet).
    Can anyone help with suggestions, and/or suggest a strong networking reference book with emphasis on mac? I'm willing to invest in additional equipment within reason.
    Thanks in advance!
    juliana

    I'm going to pick and choose to answer just a few of the items you have listed. Hopefully others will address other items.
    • This setup was getting me speeds as high as 10-15MB/sec, and as low as 5-6MB/sec when I was transferring video files around 1-2 GB in size
    I would think a single large file would get the best sustained transfer rates, as you have less create new file overhead on the destination device. It is disturbing that the large files transfer at a slower rate.
    • Would a RAID0 config get me faster write speeds than RAID1? I have another NAS that can do other RAID configs, which is fastest as far as write times?
    RAID0 (Striped) is generally faster, as the I/O is spread across 2 disks.
    RAID1 is mirrored, so you can not free the buffer until the same data is on BOTH disks. The disks are NOT going to be in rotational sync, so at least one of the disks will have to wait longer for the write sectors to move under the write heads.
    But RAID1 gives you redundancy; RAID0 has no redundancy. And you can NOT switch back and forth between the two without reformatting your disks, so if you choose RAID0 you do not get redundancy unless you provide your own via a backup device for your NAS.
    • what is the most efficient transfer protocol? ftp? smb? something else? And am I better off invoking the protocol from the terminal, or is the overhead of an app-based client negligible?
    Test the different transfers using a large file (100's of MB or a GB sized file would be good as a test file).
    I've had good file transfers with AFP file sharing, but not knowing anything about your NAS, I do not know if it supports AFP, and if it does, whether it is a good implementation.
    If your NAS supports ssh, then I would try scp instead of ftp. scp is like using cp only it works over the network.
    If your NAS supports rsync, that would be even better, as it has the ability to copy only files that are either NOT yet on the destination or that have changed, leaving the matching files alone.
    This would help in situations where you cannot copy everything all at once.
    But no matter what you choose, you should measure your performance so you choose something that is good enough.
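    (As a concrete sketch of the scp/rsync suggestions above; the hostname and paths are made up.)
    scp ~/Movies/test-clip.mov user@nas.local:/share/scratch/
    (copies one large test file over SSH; time it to get a baseline)
    rsync -av --progress ~/Pictures/RAW/ user@nas.local:/share/photos/
    (copies only new or changed files and prints per-file transfer rates as it goes)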
    • If a client is fine, does anyone have a suggestion as to best one for speed? Doesn't have to be free -- I don't mind supporting good software.
    Again just test what you have.
    • Whats a good number to allow for simultaneous connections, given the number of files and their size?
    If the bottleneck is the NAS, then adding more I/O that will force the disk heads to move away from the current file being written will just slow things down.
    But try 2 connections and measure your performance. If it gets better, then maybe the NAS is not the bottleneck.
    • What question am I not asking?
    You should try using another system as a test destination device in the network setup to see if it gets better, worse, or the same throughput as the NAS. You need to see about changing things in your setup to isolate where the problem might be.
    Also do not rule out bad ethernet cables, so switch them out as well. For example, there was a time I tried to use Gigabit ethernet, but could only get 100BaseT. I even purchased a new gigabit switch, thinking the 1st was just not up to the task. It turned out I had a cheap ethernet cable that only had 4 wires instead of 8 and was not capable of gigabit speeds. An ethernet cable that has a broken wire or connector could exhibit similar performance issues.
    So change anything and everything in your setup, one item at a time, and use the same test each time so you have a like-for-like comparison.

  • Jumbo Frame support

    Hi all
    I have a vendor that wants to run an application called MirrorView to do a bandwidth test for a new application; the only requirement is that I support jumbo frames on my uplinks. I do not currently have my ports configured to support jumbo frames. Are there any benefits or drawbacks to supporting jumbo frames? If so, please post so I can weigh those options before I reconfigure my uplinks.
    TIA, Rodney.
    FYI, I am running a 6500 in hybrid mode in the core and 3550-XX switches at the edge.

    If jumbo frames are enabled only on the uplinks but not all the way between two systems, then the end systems won't take any advantage of them. There are no drawbacks to jumbo frames as such, as far as I know, but there are some pitfalls.
    Jumbo frames are any frames bigger than standard Ethernet frames (1518 bytes of user-visible part). Some platforms implement jumbo frames as big as 9216 bytes (Cat 6500), while others (e.g. Cat 2950) are limited to baby-jumbos of 1530 bytes. So when you enable jumbo frames you must be sure that the jumbo frame size is consistent across all your systems, including the servers and client PCs connected to your network.
    Another pitfall is that enabling jumbo frames on an IP-layer interface will automatically change the IP MTU. If you're running OSPF and jumbo frames are enabled only on some systems connected to a subnet but not on others in the same subnet, OSPF adjacencies will not form until you specify 'ip mtu 1500' on the jumbo-enabled systems. As soon as you do this, the effect of jumbo frames for IP traffic is void (but it might still be necessary for things like MPLS). So be sure that systems on a common subnet have the same MTU.
    The routing problem is easy to detect; a more general problem is that, travelling across an L2-only path, there is no way for switches to send 'ICMP Fragmentation required' if a packet is larger than the next interface MTU. This breaks PMTU Discovery, and since most applications usually send packets at the maximum MTU with the DF bit set, there will be timeouts. So again, a consistent MTU across the whole L2 path is important.
    By the way, if your servers and PCs are not connected via jumbo-enabled links then you are unlikely to see any difference from enabling jumbo frames on the uplinks, because both the 6500 and 3550 Catalysts are capable of wire-speed performance. The only time it makes sense to enable jumbo frames only on the core links is when you need some non-IP headers to encapsulate your max-sized IP packets (MPLS is one such example).
    As for benefits: servers running (very) heavy traffic applications (think of a full-feed USENET server with multiple fast peerings) may benefit from sending a larger portion of data in each packet, so for the same amount of data they need fewer packets. The destination system will have to handle fewer interrupts, and overall performance may increase.
    Hope this helps.
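    (A hedged IOS sketch of the 'ip mtu 1500' caveat described above; the interface name and sizes are placeholders.)
    interface GigabitEthernet1/1
     mtu 9216
     ip mtu 1500
    (mtu 9216 lets the port carry jumbo frames, while ip mtu 1500 keeps routed traffic, and therefore OSPF, at the standard size so adjacencies with 1500-byte neighbours still form)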

  • Aggregates, VLAN's, Jumbo-Frames and cluster interconnect opinions

    Hi All,
    I'm reviewing my options for a new cluster configuration and would like the opinions of people with more expertise than myself out there.
    What I have in mind as follows:
    2 x X4170 servers with 8 x NICs in each.
    On each X4170 I was going to configure two aggregates with 3 NICs in each aggregate, as follows:
    igb0 device in aggr1
    igb1 device in aggr1
    igb2 device in aggr1
    igb3 stand-alone device for iSCSI network
    e1000g0 device in aggr2
    e1000g1 device in aggr2
    e1000g2 device in aggr2
    e1000g3 stand-alone device for iSCSI network
    Now, on top of these aggregates, I was planning on creating VLAN interfaces which will allow me to connect to our two "public" network segments and for the cluster heartbeat network.
    I was then going to configure the VLANs in an IPMP group for failover. I know there are some questions around that configuration, in the sense that IPMP will not detect a failure if a NIC goes offline in the aggregate, but I could monitor that in a different manner.
    At this point, my questions are:
    [1] Are VLANs, on top of aggregates, supported within Solaris Cluster? I've not seen anything in the documentation to say that they are, or are not for that matter. I do see that VLANs are supported, including support for cluster interconnects over VLANs.
    Now, on the standalone interface I want to enable jumbo frames, but I've noticed that the igb.conf file has a global setting for all NIC ports, whereas I can enable it for a single NIC port in the e1000g.conf kernel driver file. My questions are as follows:
    [2] What is the general feeling about mixing MTU sizes on the same LAN/VLAN? I've seen some comments that this is not a good idea, and some say that it doesn't cause a problem.
    [3] If the underlying NICs, igb0-2 (aggr1) for example, have a 9k MTU enabled, I can force the MTU size (1500) for "normal" networks on the VLAN interfaces pointing to my "public" network and the cluster interconnect VLAN. Does anyone have experience of this causing any issues?
    Thanks in advance for all comments/suggestions.
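    (For what it's worth, a hedged sketch of the per-instance e1000g knob mentioned above; the instance values are only examples, and driver .conf changes generally need a reboot to take effect. See e1000g(7D) for the exact meaning of each value.)
    # /kernel/drv/e1000g.conf -- one value per driver instance; 0 keeps the standard
    # 1500-byte frame size, higher values (up to 3) select progressively larger jumbo frames
    MaxFrameSize=0,0,3,3;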

    For 1), the question is really "Do I need to enable jumbo frames if I don't want to use them (on either the public or the private network)?" - the answer is no.
    For 2), each cluster needs to have its own separate set of VLANs.
    Greets
    Thorsten

  • Routers: What Are Jumbo Frames and why do I need them?

    Some routers' specs specifically mention that they handle jumbo frames (with a number like 9K). I have a network with 2 iphones, two ipads, 4 computers, two networked Blu-Ray players, and 3 computers, all of which are operating simultaneously a lot of the time.
    Some other companies seem to be using the fact that they support jumbo frames as part of their selling points. How do they help?
    I asked Cisco Chat support about the RVS4000 and whether it supported them on both the WAN and the LAN. They said not on the WAN. They also said "
    It appears under the L2 Switch tab you can input a Max Frame type.....
    I don't see anything that actually says jumbo frames but I believe you can put in a value.....
    after the device is setup you can navigate to the L2 Switch option and it has a Max Frame value"
    I'm not sure whether this router supports jumbo frames or not. I have a short list of wired gigabit routers that I'm considering for purchase and the RVS4000 is on the list.
    I need to learn more about this topic so any help or pointers to stuff to read would be greatly appreciated.

    Thanks so much for the info. I read virtually all of it. The Jumbo Frames thing sounds very tricky - and possibly detrimental. I'll have to see if Time Warner Roadrunner supports them and at what sizes. Other than for really big file transfers between machines on my network (which I don't do that often) it sounds like jumbo frames isn't going to do much for me.
    It also looks like the RVS4000 is not what I want. The SmallNetBuilder review was a very useful one; although it's 4 yrs old, it's still likely mostly valid.
    I do some gaming at times, and it sounded like adjusting frame sizes until all the devices in the path match can cause unacceptable latency. Now it seems that no matter which gigabit router I choose, I need to be sure I get one where I can disable the jumbo frames process, and maybe enable it when I want to do hard drive backups across the network. Welcome to the gigabit Ethernet world, I guess.
    The RV220W sounds like a nice machine, but is a lot more machine than I think I need for my network. I read a very detailed review of it on Amazon at:  http://www.amazon.com/gp/cdp/member-reviews/A2BBGBR6ARRJQO/ref=cm_pdp_rev_more?ie=UTF8&sort_by=MostRecentReview#R2SCJUQOKY7EN
    It also sounds like it's more complex to set up than I would like to tackle. I'm a retired electrical engineer but definitely not a skilled IT person, so plug and play simplicity is important. I understand just enough to get in trouble.
    Thanks again for the links. Much appreciated.

  • Jumbo Frames within Solaris 10 zones and multiple interfaces...

    We have Jumbo Frames working in the Global Zone, and have the MaxFrameSize=3,3,3 etc...
    We also have our AGGR's built correctly and defined aggr1:1 and aggr1:2
    The problem is that on boot-up, if all the name files (hostname.aggr1 and hostname.aggr1:1) are defined in the /etc directory, then you can't start the zones.
    And if you place the files in the /export/zones/<machinename>/root/etc/ directory, then the interfaces do not start up automatically.
    So if I want all the interfaces in the global zone to be seen by the other zones, and the interfaces to come live when the zones are booted, where do the hostname.interface files live?

    Darren:
    I understand where you're coming from from a technical perspective. But there is a way you could work around it.
    For argument's sake, zones a+b with e1000g0 - e1000g3
    From an implementation perspective, what's to stop you from:
    e1000g0 / e1000g1 shared between all
    e1000g2 plumbed at global, only assigned to zone a.
    e1000g3 plumbed at global, only assigned to zone b.
    You can certainly have an empty interface file (i.e. cp /dev/null /etc/hostname.e1000g2 ; cp /dev/null /etc/hostname.e1000g3). The interface will plumb but have no IP information configured.
    This doesn't give truly exclusive interfaces to either zone, but it operates effectively as though it were.
    Warning: I haven't actually tested this, but I see no reason that it wouldn't work.

  • T61 and Intel 82566mm Jumbo Frame Capable?

    I'm trying to achieve near 100 MB/sec performance with my NAS but cannot seem to get past 15 MB/sec with 25 MB/sec bursts on my gigabit LAN.  Do I need to enable jumbo frames for this to work?  If so, is the Intel 82566 gigabit NIC even capable of jumbo frames, or is this an OS-dependent issue?  In addition, does my switch also need to have jumbo frame support, or is it built into gigabit? And lastly, how would this adversely affect my 802.11n clients connecting to my server - are jumbo frames something 802.11n clients are capable of?
    T61_Wide | Model No. 7662 - CTO
    Core 2 Duo T7250 | 2GB OCZ DDR2-800
    82566MM Gigabit | 4965AGN Centrino Pro

    Check the BIOS configuration and see if INTERNAL LAN is enabled or disabled. I had this happen when my wireless and LAN were not working; fixed it in the BIOS.
    T7600, T60p - 2GB - 2.33GHZ - 100GB

  • No jumbo frames between 4500-X and Force10

    Hi,
    We have just installed new Cisco 4500-X 10GbE SFP+ switches.
    We have a Dell blade chassis with Force10 MXL blade switches.
    We are testing/preparing the environment before our new Dell Compellent hybrid arrays arrive, but we face a problem with jumbo frames between the Cisco 4500-X and the Dell Force10.
    When I do a ping -l from the blade server to a host connected to the Cisco 4500-X, anything bigger than 1472 bytes seems to get dropped.
    When performing a ping between servers connected to the 4500-X I can use jumbo frame sizes like 9000 without issues
    When performing a ping between servers in the blade chassis I can use jumbo frame sizes like 9000 without issues
    Now, we also have a pair of Cisco 3750-X switches that are currently being used for a Dell EqualLogic 1GbE SAN, and they are connected to the same Force10 switches as the 4500-X. I have created the exact same LACP port-channels between the Force10 and the 3750-X as between the Force10 and the 4500-X. Jumbo packets to the 3750-X work fine, but for the 4500-X they do not.
    I have tried connecting the switches simply at access-port level, creating a VLAN trunk, and creating a port-channel with trunking, but none of these seem to work; in every situation packets above 1472 are dropped... at least they don't arrive at the destination.
    On every port of the 4500-X the MTU is set to 9198, the VLAN each port is in is set to 9198, and the port-channel is set to 9198.
    The system MTU can only be set up to 1550, so that is at the default of 1500, although I tried setting it to 1550 and that didn't make any difference, of course.
    The 4500-X is running version 03.04.04 SG / 15.0(1sr) SG10
    The Force10 is running 9.6.0.0
    Any help is appreciated.
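    (A hedged sketch of the checks worth running on the 4500-X side; the interface numbers are placeholders.)
    Switch# show interfaces TenGigabitEthernet1/1 | include MTU
    Switch# show interfaces Port-channel1 | include MTU
    (the "MTU 9198 bytes" line should show up on the physical members and on the port-channel alike; a lower value on any of them, or on the Force10 side of the LACP bundle, can be enough to drop 9000-byte pings)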

    Sorry for the delay. I have bought the 3064-32T; it seemed more cost-effective, as the transceivers are so expensive, and it has more ports too.

  • Jumbo frame support with BGP MSS size

    Hi All,
    I am working at a small SP. We are going to enable jumbo frame support from end to end. Our core segment has an MPLS cloud and can already support packet sizes over 9000. Our core segment routers run pure BGP with our edge/access segment routers, and when I enable jumbo frame support at the interface level I can still see that the BGP MSS is 1260. So my question: do I need to increase the BGP MSS between our core routers and edge routers for transit traffic across our SP cloud?
    many thanks!
    Eric

    Hi Harold,
    I have another question about the MSS for IPv4 and IPv6 BGP sessions. If the physical link MTU is 1500 (shown as 1514 on the ASR 9K platform), why is the IPv4 BGP MSS 1240 and the IPv6 BGP MSS 1220? I understand the IPv4 header is 20 bytes and the TCP header is 20 bytes, but that doesn't match these MSS numbers, so I am sure I am misunderstanding some value in between. Could you please let me know how we get 1240 and 1220?
    many thanks!
    Eric
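    (Not an ASR 9K-specific answer, but the arithmetic that usually explains numbers like these is MSS = base MTU - L3 header - TCP header. 1240 and 1220 differ by exactly the 20 bytes that separate an IPv4 header from an IPv6 header, which is what you would see if both sessions were sized from a 1280-byte base rather than from the 1500-byte link MTU.)
    echo $((1500 - 20 - 20))   # 1460: the IPv4 MSS you would expect from a 1500-byte IP MTU
    echo $((1280 - 20 - 20))   # 1240: the IPv4 MSS if the session is sized to the 1280-byte IPv6 minimum MTU
    echo $((1280 - 40 - 20))   # 1220: the IPv6 MSS from the same 1280-byte base (the IPv6 header is 40 bytes)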
