22.07.2017 14:05, Konstantin Belousov wrote:
> On Sat, Jul 22, 2017 at 12:51:01PM +0700, Eugene Grosbein wrote:
>> Also, there is https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=219476
>
> I strongly disagree with the idea of increasing the default kernel
> stack size, it will cause systematic problems for all users instead of
> current state where some workloads are problematic. Finding contig
> KVA ranges for larger stacks on KVA-starved architectures is not going
> to work.
In my experience, increasing the default kernel stack size does work on an i386
system running IPSEC and ZFS with compression, with KVA_PAGES=512 and
KSTACK_PAGES=4. No stack-related problems have been observed with those
parameters. On the contrary, problems quickly arise if one does not increase
the default kernel stack size on such an i386 system. I have run several such
systems for years.
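For reference, a minimal sketch of the kernel configuration lines I mean (the option names are the standard i386 ones; the values are the ones I use on these systems):

```
# i386 kernel config fragment (sketch, values as used on my systems)
options KVA_PAGES=512    # enlarge kernel virtual address space
options KSTACK_PAGES=4   # 4-page kernel stacks instead of the default 2
```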
We have src/UPDATING entries 20121223 and 20150728 stating the same.
Those entries are linked as open issues in the Errata Notes of every release
since 10.2. For how many more releases are we going to keep this "open"?
Also, I have always wondered what load pattern one would need in order to
exhibit real kernel stack problems due to KVA fragmentation with
KSTACK_PAGES>2 on i386.
> The real solution is to move allocations from stack to heap, one by one.
That has not been done since 10.2-RELEASE, and I see it only getting worse.
> You claimed that vm/vm_object.o consumes 1.5K of stack, can you show
> the ddb backtrace of this situation ?
These data were collected by inspecting the machine object code, and only some
of the numbers were verified by hand, so I admit there may be some false
positives.
How can I get the ddb backtrace you asked for? I am not very familiar with ddb.
I have a serial console to such an i386 system.