On 22.07.2017 15:00, Konstantin Belousov wrote:
> On Sat, Jul 22, 2017 at 02:40:59PM +0700, Eugene Grosbein wrote:
>> Also, I've always wondered what load pattern one should have
>> to exhibit real kernel stack problems due to KVA fragmentation
>> and KSTACK_PAGES>2 on i386?
> In fact each stack consumes 3 contiguous pages because there is also
> the guard page, which catches double faults.
>
> You need to use the machine, e.g. to run something that creates and
> destroys kernel threads, while doing something that consumes
> kernel_arena KVA. Plain malloc/zmalloc is enough.
Does an i386 box running a PPPoE connection to an ISP (mpd5), plus several
IPsec tunnels, a PPTP tunnel, a WiFi access point, and the "transmission"
torrent client with a 2TB UFS volume over GEOM_CACHE over GEOM_JOURNAL
over USB qualify? There are also ospfd, racoon, sendmail, ssh and several
periodic cron jobs.
> In other words, any non-static load would cause fragmentation preventing
> allocations of the kernel stacks for new threads.
>
>> How can I get the ddb backtrace you asked for? I'm not very familiar
>> with ddb. I have a serial console to such an i386 system.
>
> The bt command for the given thread provides the backtrace. I have no
> idea how you obtained the numbers that you show.
Not sure which kernel thread I should trace... If you just need the name
of a function:
$ objdump -d vm_object.o | grep -B 8 'sub .*0x...,%esp' |less
00003b30 <sysctl_vm_object_list>:
3b30: 55 push %ebp
3b31: 89 e5 mov %esp,%ebp
3b33: 53 push %ebx
3b34: 57 push %edi
3b35: 56 push %esi
3b36: 83 e4 f8 and $0xfffffff8,%esp
3b39: 81 ec 30 05 00 00 sub $0x530,%esp
It uses the stack for a pretty large struct kinfo_vmobject (which
includes char kvo_path[PATH_MAX]) and several other locals.
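The grep one-liner above can be scripted. Here is a minimal sketch (my own
illustration, not from the thread) that parses "objdump -d" text and reports
the largest immediate "sub $N,%esp" stack adjustment per function, which is a
rough proxy for frame size on i386:

```python
import re

# Matches a function label line, e.g. "00003b30 <sysctl_vm_object_list>:"
FUNC_RE = re.compile(r'^[0-9a-f]+ <(?P<name>[^>]+)>:$')
# Matches an immediate stack adjustment, e.g. "sub $0x530,%esp"
SUB_RE = re.compile(r'\bsub\s+\$0x(?P<imm>[0-9a-f]+),%esp')

def stack_frames(disasm):
    """Map function name -> largest 'sub $imm,%esp' seen in its body."""
    frames = {}
    current = None
    for line in disasm.splitlines():
        m = FUNC_RE.match(line.strip())
        if m:
            current = m.group('name')
            continue
        m = SUB_RE.search(line)
        if m and current:
            size = int(m.group('imm'), 16)
            frames[current] = max(frames.get(current, 0), size)
    return frames

# Sample input mimicking the objdump excerpt above.
sample = """\
00003b30 <sysctl_vm_object_list>:
    3b30: 55                    push %ebp
    3b36: 83 e4 f8              and  $0xfffffff8,%esp
    3b39: 81 ec 30 05 00 00     sub  $0x530,%esp
"""
print(stack_frames(sample))  # {'sysctl_vm_object_list': 1328}
```

This only catches static adjustments (alloca and calls are not accounted
for), so it understates true worst-case stack depth, but it is enough to
flag functions like the one above with a 0x530-byte (1328-byte) frame.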