> On 24 Aug 2015, at 02:02, Rick Macklem <rmacklem at uoguelph.ca> wrote:
>
> Daniel Braniss wrote:
>>
>>> On 22 Aug 2015, at 14:59, Rick Macklem <rmacklem at uoguelph.ca> wrote:
>>>
>>> Daniel Braniss wrote:
>>>>
>>>>> On Aug 22, 2015, at 12:46 AM, Rick Macklem <rmacklem at uoguelph.ca> wrote:
>>>>>
>>>>> Yonghyeon PYUN wrote:
>>>>>> On Wed, Aug 19, 2015 at 09:00:35AM -0400, Rick Macklem wrote:
>>>>>>> Hans Petter Selasky wrote:
>>>>>>>> On 08/19/15 09:42, Yonghyeon PYUN wrote:
>>>>>>>>> On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote:
>>>>>>>>>> On 08/18/15 23:54, Rick Macklem wrote:
>>>>>>>>>>> Ouch! Yes, I now see that the code that counts the # of mbufs
>>>>>>>>>>> is before the code that adds the tcp/ip header mbuf.
>>>>>>>>>>>
>>>>>>>>>>> In my opinion, this should be fixed by setting
>>>>>>>>>>> if_hw_tsomaxsegcount to whatever the driver provides - 1. It is
>>>>>>>>>>> not the driver's responsibility to know if a tcp/ip header mbuf
>>>>>>>>>>> will be added, and it is a lot less confusing than expecting the
>>>>>>>>>>> driver author to know to subtract one. (I had mistakenly thought
>>>>>>>>>>> that tcp_output() had added the tcp/ip header mbuf before the
>>>>>>>>>>> loop that counts mbufs in the list. Btw, this tcp/ip header mbuf
>>>>>>>>>>> also has leading space for the MAC layer header.)
>>>>>>>>>>>
>>>>>>>>>>
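A minimal user-space sketch of the ordering described above; the "mbuf"
here is just a toy linked list, not the real sys/mbuf.h structure. It only
illustrates why a chain counted before the tcp/ip header mbuf is prepended
ends up one mbuf longer by the time the driver sees it:

#include <stdio.h>

/* Toy stand-in for an mbuf chain; not the real struct mbuf. */
struct mbuf {
    struct mbuf *m_next;
};

static int
count_chain(const struct mbuf *m)
{
    int n = 0;

    for (; m != NULL; m = m->m_next)
        n++;
    return (n);
}

int
main(void)
{
    struct mbuf data[3] = { { &data[1] }, { &data[2] }, { NULL } };
    struct mbuf hdr = { &data[0] };  /* tcp/ip header prepended afterwards */

    /* The segment-count check sees only the data mbufs... */
    printf("counted against if_hw_tsomaxsegcount: %d\n",
        count_chain(&data[0]));
    /* ...but the chain handed to the driver includes the header mbuf too. */
    printf("mbufs the driver must map: %d\n", count_chain(&hdr));
    return (0);
}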
>>>>>>>>>> Hi Rick,
>>>>>>>>>>
>>>>>>>>>> Your question is good. With the Mellanox hardware we have
>>>>>>>>>> separate so-called inline data space for the TCP/IP headers, so
>>>>>>>>>> if the TCP stack subtracts something, then we would need to add
>>>>>>>>>> something to the limit, because then the scatter gather list is
>>>>>>>>>> only used for the data part.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I think none of the drivers in the tree subtract 1 for
>>>>>>>>> if_hw_tsomaxsegcount. Probably touching the Mellanox driver would
>>>>>>>>> be simpler than fixing all the other drivers in the tree.
>>>>>>>>>
>>>>>>>>>> Maybe it can be controlled by some kind of flag, if all the
>>>>>>>>>> three TSO limits should include the TCP/IP/ethernet headers too.
>>>>>>>>>> I'm pretty sure we want both versions.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hmm, I'm afraid it's already complex. Drivers have to tell almost
>>>>>>>>> the same information to both bus_dma(9) and the network stack.
>>>>>>>>
>>>>>>>> Don't forget that not all drivers in the tree set the TSO limits
>>>>>>>> before if_attach(), so possibly the subtraction of one TSO
>>>>>>>> fragment needs to go into ip_output() ....
>>>>>>>>
>>>>>>> Ok, I realized that some drivers may not know the answers before
>>>>>>> ether_ifattach(), due to the way they are configured/written (I saw
>>>>>>> the use of if_hw_tsomax_update() in the patch).
>>>>>>
>>>>>> I was not able to find an interface that configures TSO parameters
>>>>>> after the if_t conversion. I'm under the impression that
>>>>>> if_hw_tsomax_update() is not designed to be used this way. Probably
>>>>>> we need a better one? (CCed to Gleb).
>>>>>>
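For reference, a minimal sketch of the attach-time pattern under
discussion, using the if_hw_tsomax* ifnet fields named in this thread; the
foo_ names are hypothetical, and a driver that only learns its limits
later would instead have to use something like if_hw_tsomax_update(), as
noted above:

#include <sys/param.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/ethernet.h>
#include <netinet/in.h>
#include <netinet/ip.h>

/* Hypothetical foo(4) fragment, called before ether_ifattach(). */
static void
foo_set_tso_limits(struct ifnet *ifp)
{
    /* Largest TSO payload the hardware accepts in one request. */
    ifp->if_hw_tsomax = IP_MAXPACKET -
        (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN);
    /*
     * Scatter/gather entries the hardware supports.  Whether this count
     * should include the tcp/ip header mbuf is exactly the question
     * being debated; 35 is the figure preferred later in the thread.
     */
    ifp->if_hw_tsomaxsegcount = 35;
    /* Largest single segment the hardware can handle. */
    ifp->if_hw_tsomaxsegsize = PAGE_SIZE;
}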
>>>>>>>
>>>>>>> If it is subtracted as a part of the assignment to
>>>>>>> if_hw_tsomaxsegcount at line #791 in tcp_output(), like the
>>>>>>> following, I don't think it should matter whether the values are
>>>>>>> set before ether_ifattach()?
>>>>>>>     /*
>>>>>>>      * Subtract 1 for the tcp/ip header mbuf that
>>>>>>>      * will be prepended to the mbuf chain in this
>>>>>>>      * function in the code below this block.
>>>>>>>      */
>>>>>>>     if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1;
>>>>>>>
>>>>>>> I don't have a good solution for the case where a driver doesn't
>>>>>>> plan on using the tcp/ip header mbuf provided by tcp_output(),
>>>>>>> except to say the driver can add one to the setting to compensate
>>>>>>> for that (and if it fails to do so, it still works, although
>>>>>>> somewhat suboptimally). When I now read the comment in
>>>>>>> sys/net/if_var.h it is clear what it means, but for some reason I
>>>>>>> didn't read it that way before? (I think it was the part that said
>>>>>>> the driver didn't have to subtract for the headers that confused
>>>>>>> me?) In any case, we need to try and come up with a clear
>>>>>>> definition of what they need to be set to.
>>>>>>>
>>>>>>> I can now think of two ways to deal with this:
>>>>>>> 1 - Leave tcp_output() as is, but provide a macro for the device
>>>>>>>     driver authors to use that sets if_hw_tsomaxsegcount with a
>>>>>>>     flag for "driver uses tcp/ip header mbuf", documenting that
>>>>>>>     this flag should normally be true.
>>>>>>> OR
>>>>>>> 2 - Change tcp_output() as above, noting that this is a workaround
>>>>>>>     for confusion w.r.t. whether or not if_hw_tsomaxsegcount should
>>>>>>>     include the tcp/ip header mbuf, and update the comment in
>>>>>>>     if_var.h to reflect this. Then drivers that don't use the
>>>>>>>     tcp/ip header mbuf can increase their value for
>>>>>>>     if_hw_tsomaxsegcount by 1.
>>>>>>> (The comment should also mention that a value of 35 or greater is
>>>>>>> much preferred to 32 if the hardware will support that.)
>>>>>>>
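A sketch of what option 2 would mean on the driver side, assuming
tcp_output() now unconditionally subtracts one segment for the header mbuf
it prepends; foo_softc, FOO_HW_MAXSEGS and the inline_headers flag are
hypothetical, loosely modelled on the Mellanox inline-data case described
earlier:

#include <sys/param.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>

#define FOO_HW_MAXSEGS  35      /* raw scatter/gather limit of the NIC */

struct foo_softc {
    struct ifnet *ifp;
    int inline_headers;         /* headers sent from separate inline space */
};

static void
foo_set_tsomaxsegcount(struct foo_softc *sc)
{
    /*
     * Under option 2, tcp_output() always reserves one segment for the
     * tcp/ip header mbuf it prepends.  A driver that never maps that
     * mbuf through its scatter/gather list can hand the slot back by
     * adding one to its raw hardware limit.
     */
    if (sc->inline_headers)
        sc->ifp->if_hw_tsomaxsegcount = FOO_HW_MAXSEGS + 1;
    else
        sc->ifp->if_hw_tsomaxsegcount = FOO_HW_MAXSEGS;
}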
>>>>>>
>>>>>> Both work for me. My preference is 2, just because using the tcp/ip
>>>>>> header mbuf is the common case for most drivers.
>>>>> Thanks for this comment. I tend to agree, both for the reason you
>>>>> state and also because the patch is simple enough that it might
>>>>> qualify as an errata for 10.2.
>>>>>
>>>>> I am hoping Daniel Braniss will be able to test the patch and let us
>>>>> know if it improves performance with TSO enabled?
>>>>
>>>> send me the patch and I'll test it ASAP.
>>>> danny
>>>>
>>> Patch is attached. The one for head will also include an update to the
>>> comment in sys/net/if_var.h, but that isn't needed for testing.
>>
>>
>> well, the plot thickens.
>>
>> Yesterday, before running the new kernel, I decided to re-run my test,
>> and to my surprise I was getting good numbers, about 300MB/s with and
>> without TSO.
>>
>> this morning, the numbers were again bad, around 70MB/s, what the ^%$#@!
>>
>> so, after some coffee, I ran some more tests, and some conclusions:
>> using a netapp(*) as the nfs client:
>>  - doing
>>        ifconfig ix0 tso or -tso
>>    does some magic and the numbers are back to normal - for a while
>>
>> using another FreeBSD/zfs as client all is nifty, actually a bit faster
>> than the netapp (not a fair comparison, since the zfs client is not
>> heavily used) and I can't see any degradation.
>>
> I assume you meant "server" and not "client" above.
you are correct.
>
>> btw, this is with the patch applied, but I was seeing similar numbers
>> before the patch.
>>
>> running with tso, initially I get around 300MB/s, but after a while
>> (sorry, I can't be more scientific) it drops down to about half, and
>> finally to a pathetic 70MB/s.
>>
> Ok, so it sounds like tso isn't the issue. (At least it seems the patch,
> which I believe is needed, doesn't cause a regression.)
>
> All I can suggest is:
> - looking at the ix stats (I know nothing about them), but if you post
>   them maybe someone conversant with the chip can help? (Before and after
>   degradation.)
> - if you captured packets for a short period of time when degraded and
>   then after doing "ifconfig", looking at the packet capture in wireshark
>   might give some indication of what changes?
> - For this I'd be focused on the TCP layer (window sizes, etc) and timing
>   of packets.
> --> I don't know if there is a packet capture tool like tcpdump on a
>     Netapp, but that might be better than capturing them on the client,
>     in case tcpdump affects the outcome. However, tcpdump run on the
>     client would be a fallback, I think.
>
> The other thing is the degradation seems to cut the rate by about half
> each time: 300-->150-->70. I have no idea if this helps to explain it.
>
the halving is an optical illusion, it starts degrading slowly.
actually it's bad after reboot; fiddling with the two flags shows the above
'feature'.
one conclusion so far:
ix0 behaves much better without TSO when the server is a NetApp.
BTW, this thread started because next week our main NetApp will be upgraded,
and I wanted to see if there will be any improvement.
> Have fun with it, rick
love your generosity ;-)
cheers, and thanks,
danny
>
>> *: while running the tests I monitored the Netapp, and nothing out of
>> the ordinary there.
>>
>> cheers,
>> danny
>>
>> _______________________________________________
>> freebsd-stable at freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to "freebsd-stable-unsubscribe at freebsd.org"