Yamagi
2021-Mar-17 13:33 UTC
13.0-RC2 / 14-CURRENT: Processes getting stuck in vlruwk state
Hi,

some other users in the ##bsdforen.de IRC channel and I have the problem that processes get stuck in the 'vlruwk' state during Poudriere runs.

For me it's fairly reproducible. The problems begin about 20 to 25 minutes after I've started poudriere. At first only some ccache processes hang in the 'vlruwk' state; after another 2 to 3 minutes nearly everything hangs and the total CPU load drops to about 5%. When I stop poudriere with Ctrl-C it takes another 3 to 5 minutes until the system recovers.

First the setup:
* poudriere runs in a bhyve VM on a zvol. The host is 12.2-RELEASE-p2. The zvol has an 8k block size, and the guest's partitions are aligned to 8k. The guest has only one zpool, created with ashift=13. The VM has 16 E5-2620 cores and 16 gigabytes of RAM assigned to it.
* poudriere is configured with ccache and ALLOW_MAKE_JOBS=yes. Removing either of these options significantly lowers the probability of the problem showing up.

I've tried several git revisions, starting with 14-CURRENT at 54ac6f721efccdba5a09aa9f38be0a1c4ef6cf14, in the hope of finding at least one known-good revision. No luck; even a kernel built from 0932ee9fa0d82b2998993b649f9fa4cc95ba77d6 (Wed Sep 2 19:18:27 2020 +0000) has the problem. The problem isn't reproducible with 12.2-RELEASE.

The kernel stack ('procstat -kk') of a hanging process is:
mi_switch+0x155 sleepq_switch+0x109 sleepq_catch_signals+0x3f1 sleepq_wait_sig+0x9 _sleep+0x2aa kern_wait6+0x482 sys_wait4+0x7d amd64_syscall+0x140 fast_syscall_common+0xf8

The kernel stack of vnlru changes even while the processes are hanging:
* mi_switch+0x155 sleepq_switch+0x109 sleepq_timedwait+0x4b _sleep+0x29b vnlru_proc+0xa05 fork_exit+0x80 fork_trampoline+0xe
* fork_exit+0x80 fork_trampoline+0xe

Since vnlru is accumulating CPU time it looks like it's doing at least something. As an educated guess, I would say that vn_alloc_hard() is waiting a long time, or even forever, to allocate new vnodes.
I can provide more information, I just need to know what.

Regards,
Yamagi

--
Homepage: https://www.yamagi.org
Github: https://github.com/yamagi
GPG: 0x1D502515
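The stuck processes can be spotted without procstat: on FreeBSD, `ps -axo pid,state,mwchan,comm` shows the wait channel of each process. A minimal sketch, run here over a made-up sample capture (PIDs and commands are invented) instead of live output, filtering for the vlruwk channel:

```shell
# On the affected guest you would pipe `ps -axo pid,state,mwchan,comm`
# straight into the awk filter; the heredoc stands in for that output.
cat <<'EOF' > /tmp/ps_sample.txt
  PID STAT MWCHAN  COMMAND
 4711 D    vlruwk  ccache
 4712 R    -       cc
 4713 D    vlruwk  sh
EOF

# Skip the header line, keep rows whose wait channel is vlruwk.
awk 'NR > 1 && $3 == "vlruwk" { print $1, $4 }' /tmp/ps_sample.txt
```

Watching this count climb from a handful of ccache processes to nearly everything matches the timeline described above.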
Mateusz Guzik
2021-Mar-17 14:57 UTC
13.0-RC2 / 14-CURRENT: Processes getting stuck in vlruwk state
Can you reproduce the problem and obtain "sysctl -a" output?

In general, there is a vnode limit which is probably too small. The reclamation mechanism is deficient in that it will eventually inject an arbitrary pause.

On 3/17/21, Yamagi <lists at yamagi.org> wrote:
> [full original report quoted; trimmed here]

--
Mateusz Guzik <mjguzik gmail.com>
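The counters behind the limit Mateusz mentions are kern.maxvnodes (the cap) and vfs.numvnodes (current usage). A minimal sketch of the check, run over invented numbers standing in for live `sysctl kern.maxvnodes vfs.numvnodes` output:

```shell
# Hypothetical capture in sysctl's "name: value" format; on a live
# system, replace the heredoc with the real sysctl invocation.
cat <<'EOF' > /tmp/vnstat.txt
kern.maxvnodes: 349163
vfs.numvnodes: 348990
EOF

awk -F': ' '
  $1 == "kern.maxvnodes" { max = $2 }
  $1 == "vfs.numvnodes"  { cur = $2 }
  END {
    # Flag vnode pressure when usage is within 2% of the limit.
    if (cur > max * 0.98)
      printf "vnode pressure: %d/%d\n", cur, max
  }
' /tmp/vnstat.txt
```

If vfs.numvnodes sits pinned at kern.maxvnodes while builds run, new vnode allocations are stuck behind reclamation, which is consistent with the vlruwk hangs.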
Yamagi
2021-Mar-17 15:48 UTC
13.0-RC2 / 14-CURRENT: Processes getting stuck in vlruwk state
Hi Mateusz,

the sysctl output from about 10 minutes into the problem is attached. In case it gets stripped by Mailman, a copy can be found here:
https://deponie.yamagi.org/temp/sysctl_vlruwk.txt.xz

Regards,
Yamagi

On Wed, 17 Mar 2021 15:57:59 +0100 Mateusz Guzik <mjguzik at gmail.com> wrote:
> [previous messages quoted; trimmed here]

--
Homepage: https://www.yamagi.org
Github: https://github.com/yamagi
GPG: 0x1D502515
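For readers without the attachment, the vnode-related lines are the interesting part of such a dump. A sketch with invented values, assuming the counter names found on recent FreeBSD (kern.maxvnodes, vfs.wantfreevnodes, vfs.freevnodes, vfs.numvnodes, vfs.recycles):

```shell
# Stand-in for a full `sysctl -a` dump; the last line is an unrelated
# counter that the filter should drop.
cat <<'EOF' > /tmp/sysctl_dump.txt
kern.maxvnodes: 349163
vfs.wantfreevnodes: 87290
vfs.freevnodes: 1021
vfs.numvnodes: 348990
vfs.recycles: 4391200
vfs.cache.numneg: 2048
EOF

# Pull out only the vnode limit/usage/reclamation counters.
grep -E '^(kern\.maxvnodes|vfs\.(num|free|wantfree)vnodes|vfs\.recycles):' \
    /tmp/sysctl_dump.txt
```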
Mateusz Guzik
2021-Mar-18 15:44 UTC
13.0-RC2 / 14-CURRENT: Processes getting stuck in vlruwk state
To sum up what happened: Yamagi was kind enough to test several patches, and ultimately the issue was solved by this commit:
https://cgit.freebsd.org/src/commit/?id=e9272225e6bed840b00eef1c817b188c172338ee
The patch has also been merged into releng/13.0.

On 3/17/21, Yamagi <lists at yamagi.org> wrote:
> [full original report quoted; trimmed here]

--
Mateusz Guzik <mjguzik gmail.com>