Hi Slawa,
On 9/22/16 11:53 AM, Slawa Olhovchenkov wrote:
> On Wed, Sep 21, 2016 at 11:25:18PM +0200, Julien Charbon wrote:
>> On 9/21/16 9:51 PM, Slawa Olhovchenkov wrote:
>>> On Wed, Sep 21, 2016 at 09:11:24AM +0200, Julien Charbon wrote:
>>>> You can also use DTrace and lockstat (especially with the
>>>> lockstat -s option):
>>>>
>>>> https://wiki.freebsd.org/DTrace/One-Liners#Kernel_Locks
>>>>
>>>> https://www.freebsd.org/cgi/man.cgi?query=lockstat&manpath=FreeBSD+11.0-RELEASE
>>>>
>>>> But I am less familiar with the DTrace/lockstat tools.
>>>
>>> I am still using the old kernel and got the lockup again.
>>> I tried lockstat (I saved more output); the interesting part may be
>>> the following:
>>>
>>> R/W writer spin on writer: 190019 events in 1.070 seconds (177571 events/sec)
>>>
>>>
>>> -------------------------------------------------------------------------------
>>> Count indv cuml rcnt     nsec Lock   Caller
>>> 140839  74%  74% 0.00   24659 tcpinp tcp_tw_2msl_scan+0xc6
>>>
>>>       nsec ------ Time Distribution ------ count     Stack
>>>       4096 |                               913       tcp_twstart+0xa3
>>>       8192 |@@@@@@@@@@@@                   58191     tcp_do_segment+0x201f
>>>      16384 |@@@@@@                         29594     tcp_input+0xe1c
>>>      32768 |@@@@                           23447     ip_input+0x15f
>>>      65536 |@@@                            16197
>>>     131072 |@                              8674
>>>     262144 |                               3358
>>>     524288 |                               456
>>>    1048576 |                               9
>>>
>>> -------------------------------------------------------------------------------
>>> Count indv cuml rcnt     nsec Lock   Caller
>>> 49180  26% 100% 0.00   15929 tcpinp tcp_tw_2msl_scan+0xc6
>>>
>>>       nsec ------ Time Distribution ------ count     Stack
>>>       4096 |                               157       pfslowtimo+0x54
>>>       8192 |@@@@@@@@@@@@@@@                24796     softclock_call_cc+0x179
>>>      16384 |@@@@@@                         11223     softclock+0x44
>>>      32768 |@@@@                           7426      intr_event_execute_handlers+0x95
>>>      65536 |@@                             3918
>>>     131072 |                               1363
>>>     262144 |                               278
>>>     524288 |                               19
>>>
>>> -------------------------------------------------------------------------------
>>
>> This is interesting: it seems that you have two call paths competing
>> for INP locks here:
>>
>> - pfslowtimo()/tcp_tw_2msl_scan(reuse=0) and
>>
>> - tcp_input()/tcp_twstart()/tcp_tw_2msl_scan(reuse=1)
>
> I think the same.
>
>> These paths can indeed compete for the same INP lock, as both
>> tcp_tw_2msl_scan() calls always start with the first inp found in the
>> twq_2msl list. But in both cases this first inp should be used quickly
>> and its lock released anyway, so that could explain your situation if
>> the TCP stack is doing this all the time. For example:
>>
>> - Let's say that you are completely and constantly running out of
>> tcptw; then all connections transitioning to the TIME_WAIT state
>> compete with the TIME_WAIT timeout scan that tries to free all the
>> expired tcptw. If the stack is doing this all the time, it can appear
>> "live" locked.
>>
>> This is just a hypothesis and, as usual, it might be a red herring.
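  To make the competition concrete, both paths boil down to something
like the sketch below. This is only a simplified illustration written
from memory, not the verbatim netinet/tcp_timewait.c code, and the
helpers tw_expired() and tw_close_and_unlock() are made up for the
example:

/*
 * Simplified sketch of tcp_tw_2msl_scan(), NOT the verbatim FreeBSD
 * source: both the pfslowtimo() path (reuse == 0) and the
 * tcp_twstart() path (reuse == 1) always begin with the first tcptw
 * on the twq_2msl list, hence they contend on the same inp lock.
 */
static void
tcp_tw_2msl_scan_sketch(int reuse)
{
	struct tcptw *tw;

	for (;;) {
		tw = TAILQ_FIRST(&V_twq_2msl);	/* always the list head */
		if (tw == NULL)
			break;
		/*
		 * reuse == 0: stop at the first not-yet-expired entry.
		 * reuse == 1: recycle the head entry unconditionally.
		 */
		if (reuse == 0 && !tw_expired(tw))	/* made-up helper */
			break;
		INP_WLOCK(tw->tw_inpcb);	/* the contended "tcpinp" lock */
		tw_close_and_unlock(tw, reuse);	/* made-up helper */
	}
}

With tcp_twstart() running the scan for every connection entering
TIME_WAIT while pfslowtimo() periodically runs the same scan, both
sides keep hammering the lock of whatever inp is at the head of the
list.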
>> Anyway, could you run:
>>
>> $ vmstat -z | head -2; vmstat -z | grep -E 'tcp|sock'
>
> ITEM         SIZE   LIMIT    USED    FREE       REQ FAIL SLEEP
>
> socket:       864, 4192664,  18604,  25348, 49276158,   0,    0
> tcp_inpcb:    464, 4192664,  34226,  18702, 49250593,   0,    0
> tcpcb:       1040, 4192665,  18424,  18953, 49250593,   0,    0
> tcptw:         88,   16425,  15802,    623, 14526919,   8,    0
> tcpreass:      40,   32800,     15,   2285,   632381,   0,    0
>
> In the normal case tcptw is about 16425/600/900.
>
> And after `sysctl -a | grep tcp` the system got stuck on the serial
> console and I reset it.
>
>> Ideally, run it once when everything is ok, and once when you have
>> the issue, to see the differences (if any).
>>
>> If it appears you are quite low on tcptw, and if you have enough
>> memory, could you try increasing the tcptw limit using sysctl
>
> I think this will not eliminate the hang, it may just make it less frequent.
  You are right, it would just be a big hint that the
tcp_tw_2msl_scan() contention hypothesis is the right one. As I can
see you have plenty of memory on your server, could you try with:

net.inet.tcp.maxtcptw=4192665

and see what happens, just to validate this hypothesis.
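  For example, to apply it at runtime:

# sysctl net.inet.tcp.maxtcptw=4192665

and, to keep it across reboots, add the same line to /etc/sysctl.conf.
As far as I remember, maxtcptw only raises the cap on the tcptw UMA
zone and the zone still grows on demand, so the memory cost stays
bounded by the actual TIME_WAIT load.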
Thanks.
--
Julien