Eric W. Biederman
2023-May-27 09:49 UTC
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Linus Torvalds <torvalds at linux-foundation.org> writes:

> So I'd really like to finish this. Even if we end up with a hack or
> two in signal handling that we can hopefully fix up later by having
> vhost fix up some of its current assumptions.

The real sticky widget for me is how to handle one of these processes
coredumping.  It really looks like it will result in a reliable hang.

Limiting ourselves to changes that will only affect vhost, all I can
see would be allowing the vhost_worker thread to exit as soon as
get_signal reports the process is exiting.  Then vhost_dev_flush
would need to process the pending work.

Something like this:

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index a92af08e7864..fb5ebc50c553 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -234,14 +234,31 @@ EXPORT_SYMBOL_GPL(vhost_poll_stop);
 void vhost_dev_flush(struct vhost_dev *dev)
 {
 	struct vhost_flush_struct flush;
+	struct vhost_worker *worker = dev->worker;
+	struct llist_node *node, *head;
+
+	if (!worker)
+		return;
+
+	init_completion(&flush.wait_event);
+	vhost_work_init(&flush.work, vhost_flush_work);
 
-	if (dev->worker) {
-		init_completion(&flush.wait_event);
-		vhost_work_init(&flush.work, vhost_flush_work);
+	vhost_work_queue(dev, &flush.work);
 
-		vhost_work_queue(dev, &flush.work);
-		wait_for_completion(&flush.wait_event);
+	/* Either vhost_worker runs the pending work or we do */
+	node = llist_del_all(&worker->work_list);
+	if (node) {
+		node = llist_reverse_order(node);
+		/* make sure flag is seen after deletion */
+		smp_wmb();
+		llist_for_each_entry_safe(work, work_next, node, node) {
+			clear_bit(VHOST_WORK_QUEUED, &work->flags);
+			work->fn(work);
+			cond_resched();
+		}
 	}
+
+	wait_for_completion(&flush.wait_event);
 }
 EXPORT_SYMBOL_GPL(vhost_dev_flush);
 
@@ -338,6 +355,7 @@ static int vhost_worker(void *data)
 	struct vhost_worker *worker = data;
 	struct vhost_work *work, *work_next;
 	struct llist_node *node;
+	struct ksignal ksig;
 
 	for (;;) {
 		/* mb paired w/ kthread_stop */
@@ -348,6 +366,9 @@ static int vhost_worker(void *data)
 			break;
 		}
 
+		if (get_signal(&ksig))
+			break;
+
 		node = llist_del_all(&worker->work_list);
 		if (!node)
 			schedule();
diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
index b7cbd66f889e..613d52f01c07 100644
--- a/kernel/vhost_task.c
+++ b/kernel/vhost_task.c
@@ -47,6 +47,7 @@ void vhost_task_stop(struct vhost_task *vtsk)
 	 * not exiting then reap the task.
 	 */
 	kernel_wait4(pid, NULL, __WCLONE, NULL);
+	put_task_struct(vtsk->task);
 	kfree(vtsk);
 }
 EXPORT_SYMBOL_GPL(vhost_task_stop);
@@ -101,7 +102,7 @@ struct vhost_task *vhost_task_create(int (*fn)(void *), void *arg,
 		return NULL;
 	}
 
-	vtsk->task = tsk;
+	vtsk->task = get_task_struct(tsk);
 	return vtsk;
 }
 EXPORT_SYMBOL_GPL(vhost_task_create);

Eric
Linus Torvalds
2023-May-27 16:12 UTC
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On Sat, May 27, 2023 at 2:49 AM Eric W. Biederman <ebiederm at xmission.com> wrote:
>
> The real sticky widget for me is how to handle one of these processes
> coredumping.  It really looks like it will result in a reliable hang.

Well, if *that* is the main worry, I think that's trivial enough to
deal with.

In particular, we could make the rule just be that user worker threads
simply do not participate in core-dumps.

THAT isn't hard.

All we need to do is

 (a) not count those threads in zap_threads()

 (b) make sure that they don't add themselves to the "dumper" list by
     not calling "coredump_task_exit()"

 (c) not initiate core-dumping themselves.

and I think that's pretty much it.

In fact, that really seems like a good model *regardless*, because
honestly, a PF_IO_WORKER doesn't have valid register state for the core
dump anyway, and anything that would have caused an IO thread to get a
SIGSEGV *should* have caused a kernel oops already.

So the only worry is that the core dump will now happen while an IO
worker is still busy and so it's not "atomic" wrt possible VM changes,
but while that used to be a big problem back in the dark ages when we
didn't get the VM locks for core dumping, that got fixed a few years ago
because it already caused lots of potential issues.

End result: I think the attached patch is probably missing something,
but the approach "FeelsRight(tm)" to me.

Comments?

               Linus

-------------- next part --------------
A non-text attachment was scrubbed...
Name: patch.diff
Type: text/x-patch
Size: 2133 bytes
Desc: not available
URL: <http://lists.linuxfoundation.org/pipermail/virtualization/attachments/20230527/9dcb4dad/attachment-0001.bin>
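The attachment itself is not preserved on this page, only the link
above.  As a rough sketch of the rule being described -- an assumption
about its shape, not the scrubbed patch.diff -- the three core-dump
paths could share one predicate for deciding which tasks to ignore.
PF_USER_WORKER is assumed here as the flag marking vhost-style worker
tasks; PF_IO_WORKER is the io_uring flag mentioned above.

/*
 * Sketch only (assumes <linux/sched.h> for struct task_struct and the
 * PF_* flags; PF_USER_WORKER is an assumed name, not taken from the
 * scrubbed attachment).
 */
static inline bool coredump_skip_task(struct task_struct *t)
{
	/* worker threads have no meaningful user register state to dump */
	return (t->flags & (PF_IO_WORKER | PF_USER_WORKER)) != 0;
}

With such a helper, (a) zap_threads() would not count these tasks, so
the dump never waits for them; (b) coredump_task_exit() would return
immediately for them, so they never add themselves to the dumper list;
and (c) they would never start a core dump themselves.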
Eric W. Biederman
2023-May-30 15:01 UTC
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
"Eric W. Biederman" <ebiederm at xmission.com> writes:> Linus Torvalds <torvalds at linux-foundation.org> writes: > >> So I'd really like to finish this. Even if we end up with a hack or >> two in signal handling that we can hopefully fix up later by having >> vhost fix up some of its current assumptions. > > > The real sticky widget for me is how to handle one of these processes > coredumping. It really looks like it will result in a reliable hang. > > Limiting ourselves to changes that will only affect vhost, all I can > see would be allowing the vhost_worker thread to exit as soon as > get_signal reports the process is exiting. Then vhost_dev_flush > would need to process the pending work. >Oleg recently pointed out that the trickiest case currently appears to be what happens if someone calls exec, in a process using vhost. do_close_on_exec is called after de_thread, and after the mm has changed. Which means that my idea of moving the work from vhost_worker into vhost_dev_flush can't work. At the point that flush is called it has the wrong mm. Which means the flush or cancel of the pending work needs to happen in the vhost thread, we can't assume there is any other thread available to do the work. What makes this all nice is that the vhost code has vhost_dev_check_owner which ensures only one mm can initiate I/O. Which means file descriptor passing is essentially an academic concern. In the case of both process exit, and exec except for a racing on which piece of code shuts down first there should be no more I/O going into the work queues. But it is going to take someone who understands and cares about vhost to figure out how to stop new I/O from going into the work queues and to ensure that on-going work is dealt with. Eric