On Mon, Sep 19, 2016 at 02:28:56PM -0700, Lyndon Nerenberg wrote:

> > Everything on physical Ethernet has support for it, including the LAN
> > interface of the firewall, and talks to it just fine over a single
> > interface with jumbo frames enabled.
>
> Well, before you get too carried away, try this:
>
> 1) Run a ttcp test between a pair of local hosts using the existing
> jumbo frames (pick two that you expect high-volume traffic between).
>
> 2) Run the same test, but with the default MTU.
>
> If you don't see a very visible difference in throughput (e.g. >15%),
> it's not worth the hassle.
>
> Just as a datapoint, we're running 10-gigE off some low-end Supermicro
> boxes with 10.3-RELEASE. Using the default MTU we're getting > 750 MB/s
> TCP throughput. I can't believe that you won't be able to fully saturate
> a 1 Gb/s link running the default MTU on anything with more oomph than a
> dual-core 32-bit Atom.
>
> IOW, don't micro-optimize. Life's too short ...

You may be surprised, but jumbo frames can degrade performance for hosts
that are not directly connected, i.e. with multiple switches between the
hosts:

[hostA]=[SW1]=[SW2]=[SW3]=[hostB]

This is because the RTT of such a link is higher for jumbo frames than
for 1500-byte frames across a chain of store-and-forward switches.
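The store-and-forward effect described above is easy to quantify: each
switch must receive an entire frame before forwarding it, so every hop
adds one full frame-serialization time. A minimal sketch, assuming a
1 Gb/s link rate and the three-switch path from the diagram (both are
illustrative assumptions, not figures from the thread):

```python
# Per-hop serialization delay added by store-and-forward switching,
# compared for a standard 1500-byte frame vs a 9000-byte jumbo frame.
# LINK_BPS and the hop count are assumptions for illustration.

LINK_BPS = 1_000_000_000  # 1 Gb/s link rate (assumed)

def serialization_delay_us(frame_bytes: int, hops: int) -> float:
    """One-way delay added by `hops` store-and-forward stages, each of
    which must buffer the whole frame before retransmitting it."""
    return frame_bytes * 8 / LINK_BPS * hops * 1e6  # microseconds

# Three switches between hostA and hostB, as in the diagram.
for size in (1500, 9000):
    print(f"{size:5d} bytes: {serialization_delay_us(size, 3):6.1f} us")
# 1500-byte frames add 36 us across three hops; 9000-byte frames add
# 216 us -- six times more added latency per frame.
```

This extra per-frame latency is real, though (as the reply below argues)
TCP throughput largely amortizes it once the window opens.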
> On Sep 19, 2016, at 3:08 PM, Slawa Olhovchenkov <slw at zxy.spb.ru> wrote:
>
> This is because the RTT of such a link is higher for jumbo frames than
> for 1500-byte frames across a chain of store-and-forward switches.

For TCP, RTT isn't really a factor (in this scenario), as the windowing
and congestion avoidance algorithms will adapt to the actual
bandwidth-delay product of the link, and the delays in each direction
will be symmetrical.

Now, the ACK for a single 9000-octet packet will take longer than that
for a 1500-octet one, but that's because you're sending six times as
many octets before the ACK can be generated. The time to send six
1500-octet packets and receive the ACK for the sixth packet is going to
be comparable to that of receiving the ACK for a single 9000-octet
packet. It's simple arithmetic to calculate the extra protocol header
overhead for 6x1500 vs. 1x9000.

If there *is* a significant difference (beyond the extra protocol header
overhead), it's time to take a very close look at the NICs you are using
in the end hosts. A statistically significant difference would hint at
poor interrupt-handling performance on the part of one or more of the
NICs and their associated device drivers. The intermediate switch
overhead will be a constant (unless the switch backplane becomes
saturated by unrelated traffic).

--lyndon
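The "simple arithmetic" for 6x1500 vs 1x9000 can be sketched as
follows. Header sizes here are assumptions for a plain IPv4 + TCP
segment with no options (40 bytes) plus the standard Ethernet header
and FCS (18 bytes) per frame:

```python
# Compare total on-wire octets and TCP payload for six full 1500-MTU
# packets vs one full 9000-MTU jumbo packet.
# Header sizes are assumptions: IPv4 (20) + TCP (20) with no options,
# Ethernet header (14) + FCS (4) per frame.

IP_TCP_HDR = 40    # IPv4 + TCP headers inside each packet (assumed)
ETH_OVERHEAD = 18  # Ethernet header + FCS per frame (assumed)

def wire_octets(mtu: int, frames: int) -> int:
    """Total octets on the wire for `frames` full-MTU frames."""
    return frames * (mtu + ETH_OVERHEAD)

def payload_octets(mtu: int, frames: int) -> int:
    """TCP payload carried by those frames."""
    return frames * (mtu - IP_TCP_HDR)

print(wire_octets(1500, 6), payload_octets(1500, 6))   # six standard frames
print(wire_octets(9000, 1), payload_octets(9000, 1))   # one jumbo frame
# 6x1500: 9108 octets on the wire carrying 8760 octets of payload
# 1x9000: 9018 octets on the wire carrying 8960 octets of payload
```

So six standard frames cost five extra sets of IP/TCP headers plus five
extra Ethernet header/FCS pairs (290 octets total), roughly 3% of the
transfer: measurable, but far from the >15% threshold suggested above.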