search for: llist_reverse_ord

Displaying 12 results from an estimated 12 matches for "llist_reverse_ord".

2023 May 23
4
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...);
        }
        continue;
    }
-------------------------------------------------------------------------------
But let me ask a couple of questions. Let's forget this patch, let's look at the current code:

    node = llist_del_all(&worker->work_list);
    if (!node)
        schedule();

    node = llist_reverse_order(node);
    ... process works ...

To me this looks a bit confusing. Shouldn't we do

    if (!node) {
        schedule();
        continue;
    }

just to make the code a bit more clear? If node == NULL then llist_reverse_order() and llist_for_each_entry_safe() will do nothing. But this is minor.

/* mak...
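For context, a minimal sketch of the worker loop with the early continue being suggested; the surrounding loop is reconstructed from the quoted fragments, not copied from any actual patch:

    for (;;) {
        set_current_state(TASK_INTERRUPTIBLE);

        node = llist_del_all(&worker->work_list);
        if (!node) {
            /* no pending works: sleep, then go around again */
            schedule();
            continue;
        }

        /* works were pushed at the head (LIFO); restore queue order */
        node = llist_reverse_order(node);
        llist_for_each_entry_safe(work, work_next, node, node) {
            clear_bit(VHOST_WORK_QUEUED, &work->flags);
            __set_current_state(TASK_RUNNING);
            work->fn(work);
        }
    }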
2016 Apr 26
2
[PATCH 2/2] vhost: lockless enqueuing
...struct vhost_work, node);
> -        list_del_init(&work->node);
> -    } else
> -        work = NULL;
> -    spin_unlock_irq(&dev->work_lock);
>
> -    if (work) {
> +    node = llist_del_all(&dev->work_list);
> +    if (!node)
> +        schedule();
> +
> +    node = llist_reverse_order(node);

Can we avoid llist reverse here?

> +    /* make sure flag is seen after deletion */
> +    smp_wmb();
> +    llist_for_each_entry_safe(work, work_next, node, node) {
> +        clear_bit(VHOST_WORK_QUEUED, &work->flags);
>          __set_current_state(TASK_RUNNING);
>          work->...
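For reference, the enqueue side of the lockless scheme pairs a QUEUED flag with llist_add(); a sketch reconstructed along the lines of the patch (details may differ from the exact merged code):

    void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
    {
        if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
            /*
             * Only add the work to the list if it was not already
             * queued; test_and_set_bit() implies the needed memory
             * barrier before the llist_add().
             */
            llist_add(&work->node, &dev->work_list);
            wake_up_process(dev->worker);
        }
    }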
2023 May 22
1
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 05/22, Mike Christie wrote:
>
> On 5/22/23 7:30 AM, Oleg Nesterov wrote:
>
>> +    /*
>> +     * When we get a SIGKILL our release function will
>> +     * be called. That will stop new IOs from being queued
>> +     * and check for outstanding cmd responses. It will then
>> +     * call vhost_task_stop to tell us to return and exit.
>
2023 May 23
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...----------------------------
> But let me ask a couple of questions.

I share most of these questions.

> Let's forget this patch, let's look at the
> current code:
>
>    node = llist_del_all(&worker->work_list);
>    if (!node)
>        schedule();
>
>    node = llist_reverse_order(node);
>    ... process works ...
>
> To me this looks a bit confusing. Shouldn't we do
>
>    if (!node) {
>        schedule();
>        continue;
>    }
>
> just to make the code a bit more clear? If node == NULL then
> llist_reverse_order() and llist_for_each_entry_sa...
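The observation that a NULL node is harmless follows directly from the implementation: llist_reverse_order() in lib/llist.c is a plain in-place reversal that returns NULL for an empty list (shown here for reference):

    struct llist_node *llist_reverse_order(struct llist_node *head)
    {
        struct llist_node *new_head = NULL;

        /* pop each node off the old list, push it onto the new one */
        while (head) {
            struct llist_node *tmp = head;

            head = head->next;
            tmp->next = new_head;
            new_head = tmp;
        }

        return new_head;    /* NULL in, NULL out */
    }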
2016 Apr 26
2
[PATCH 1/2] vhost: simplify work flushing
We used to implement work flushing by tracking the queued seq, the done seq, and the number of in-progress flushes. This patch simplifies this by implementing work flushing through another kind of vhost work with a completion. This will be used by the lockless enqueuing patch.

Signed-off-by: Jason Wang <jasowang at redhat.com>
---
 drivers/vhost/vhost.c | 53
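The completion-based flush described here boils down to queueing a special work whose handler fires a completion, then waiting on it; because the worker runs works in queue order, everything queued before the flush work has finished by the time it completes. A sketch of the mechanism (the helper name flush_dev_works and its signature are simplifications, not the patch's exact API):

    struct vhost_flush_struct {
        struct vhost_work work;
        struct completion wait_event;
    };

    static void vhost_flush_work(struct vhost_work *work)
    {
        struct vhost_flush_struct *s;

        s = container_of(work, struct vhost_flush_struct, work);
        complete(&s->wait_event);
    }

    static void flush_dev_works(struct vhost_dev *dev)
    {
        struct vhost_flush_struct flush;

        init_completion(&flush.wait_event);
        vhost_work_init(&flush.work, vhost_flush_work);

        vhost_work_queue(dev, &flush.work);
        /* works run in FIFO order, so all earlier works are done now */
        wait_for_completion(&flush.wait_event);
    }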
2016 Apr 26
0
[PATCH 2/2] vhost: lockless enqueuing
...-------------------------
>>  drivers/vhost/vhost.h | 7 ++++---
>>  2 files changed, 29 insertions(+), 30 deletions(-)

[...]

>> -    if (work) {
>> +    node = llist_del_all(&dev->work_list);
>> +    if (!node)
>> +        schedule();
>> +
>> +    node = llist_reverse_order(node);
>
> Can we avoid llist reverse here?

Probably not, this is because:

- we should process the works in exactly the same order as they were queued, otherwise flush won't work
- llist can only add a node to the head of the list.

Thanks
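To see why the reversal is unavoidable, note that llist_add() always pushes at the head, so llist_del_all() hands the consumer the works in LIFO order. A small userspace demonstration of the same mechanics (plain C with hypothetical names, no kernel dependencies):

    #include <stdio.h>

    struct node { int id; struct node *next; };

    static struct node *head;    /* stands in for the llist_head */

    /* llist_add() analogue: always inserts at the head */
    static void push(struct node *n)
    {
        n->next = head;
        head = n;
    }

    /* llist_del_all() analogue: detach the whole list at once */
    static struct node *del_all(void)
    {
        struct node *n = head;

        head = NULL;
        return n;
    }

    /* llist_reverse_order() analogue: in-place reversal */
    static struct node *reverse(struct node *n)
    {
        struct node *rev = NULL;

        while (n) {
            struct node *next = n->next;

            n->next = rev;
            rev = n;
            n = next;
        }
        return rev;
    }

    int main(void)
    {
        struct node a = { .id = 1 }, b = { .id = 2 }, c = { .id = 3 };
        struct node *n;

        push(&a);
        push(&b);
        push(&c);    /* queued as 1, 2, 3; list is now 3 -> 2 -> 1 */

        for (n = reverse(del_all()); n; n = n->next)
            printf("work %d\n", n->id);    /* prints 1, 2, 3 */

        return 0;
    }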
2016 Apr 26
0
[PATCH 2/2] vhost: lockless enqueuing
...work = list_first_entry(&dev->work_list,
-                 struct vhost_work, node);
-        list_del_init(&work->node);
-    } else
-        work = NULL;
-    spin_unlock_irq(&dev->work_lock);

-    if (work) {
+    node = llist_del_all(&dev->work_list);
+    if (!node)
+        schedule();
+
+    node = llist_reverse_order(node);
+    /* make sure flag is seen after deletion */
+    smp_wmb();
+    llist_for_each_entry_safe(work, work_next, node, node) {
+        clear_bit(VHOST_WORK_QUEUED, &work->flags);
         __set_current_state(TASK_RUNNING);
         work->fn(work);
         if (need_resched())
             schedule();
-    } else
-...
2018 Sep 09
0
[PATCH net-next v8 5/7] net: vhost: introduce bitmap for vhost_poll
...EXPORT_SYMBOL_GPL(vhost_poll_queue);
> > @@ -354,6 +363,7 @@ static int vhost_worker(void *data)
> >          if (!node)
> >              schedule();
> >
> > +        bitmap_zero(dev->work_pending, VHOST_DEV_MAX_VQ);
> >          node = llist_reverse_order(node);
> >          /* make sure flag is seen after deletion */
> >          smp_wmb();
> > @@ -420,6 +430,8 @@ void vhost_dev_init(struct vhost_dev *dev,
> >      struct vhost_virtqueue *vq;
> >      int i;
> >
> > +    BUG_ON(nvqs > VHOS...
2023 Jun 01
4
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...rent_state(TASK_INTERRUPTIBLE);
-
-    if (vhost_task_should_stop(worker->vtsk)) {
-        __set_current_state(TASK_RUNNING);
-        break;
-    }
-
-    node = llist_del_all(&worker->work_list);
-    if (!node)
-        schedule();
-
+    node = llist_del_all(&worker->work_list);
+    if (node) {
         node = llist_reverse_order(node);
         /* make sure flag is seen after deletion */
         smp_wmb();
         llist_for_each_entry_safe(work, work_next, node, node) {
             clear_bit(VHOST_WORK_QUEUED, &work->flags);
-            __set_current_state(TASK_RUNNING);
             kcov_remote_start_common(worker->kcov_handle);
             work->fn(work)...
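Pieced together from the hunks above, the work-processing part of the loop after this patch would look roughly as follows; this is a reconstruction from the diff, not the verbatim result (kcov_remote_stop() is added here on the assumption that it pairs with kcov_remote_start_common()):

    node = llist_del_all(&worker->work_list);
    if (node) {
        node = llist_reverse_order(node);
        /* make sure flag is seen after deletion */
        smp_wmb();
        llist_for_each_entry_safe(work, work_next, node, node) {
            clear_bit(VHOST_WORK_QUEUED, &work->flags);
            kcov_remote_start_common(worker->kcov_handle);
            work->fn(work);
            kcov_remote_stop();
            if (need_resched())
                schedule();
        }
    }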
2023 May 22
3
[PATCH 0/3] vhost: Fix freezer/ps regressions
The following patches made over Linus's tree fix the 2 bugs: 1. vhost worker task shows up as a process forked from the parent that did VHOST_SET_OWNER ioctl instead of a process under root/kthreadd. This was causing breaking scripts. 2. vhost_tasks didn't disable or add support for freeze requests. The following patches fix these issues by making the vhost_task task a thread under the
2023 May 22
2
[PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...if (!dead && signal_pending(current)) {
> +        struct ksignal ksig;
> +
> +        dead = get_signal(&ksig);
> +        if (dead)
> +            clear_thread_flag(TIF_SIGPENDING);

Does get_signal actually return true only on SIGKILL then?

> +    }
> + }
>
>      node = llist_reverse_order(node);
>      /* make sure flag is seen after deletion */
> diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
> index 537cbf9a2ade..249a5ece9def 100644
> --- a/include/linux/sched/task.h
> +++ b/include/linux/sched/task.h
> @@ -29,7 +29,7 @@ struct kernel_clone_...
2023 Jun 02
2
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Hi Mike, sorry, but somehow I can't understand this patch... I'll try to read it with a fresh head on the weekend, but for example,

On 06/01, Mike Christie wrote:
>
>  static int vhost_task_fn(void *data)
>  {
>      struct vhost_task *vtsk = data;
> -    int ret;
> +    bool dead = false;
> +
> +    for (;;) {
> +        bool did_work;
> +
> +        /* mb paired w/
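For context, the shape under discussion is a task that keeps running its work function after a fatal signal but drains pending signals exactly once, remembering the result in dead. A sketch of that pattern follows; the VHOST_TASK_FLAGS_STOP flag and the exited completion are assumptions about the surrounding vhost_task plumbing, not quotes from the patch:

    static int vhost_task_fn(void *data)
    {
        struct vhost_task *vtsk = data;
        bool dead = false;

        for (;;) {
            bool did_work;

            if (!dead && signal_pending(current)) {
                struct ksignal ksig;

                /*
                 * get_signal() returns true for a fatal signal;
                 * remember that, but keep servicing works so the
                 * owner can shut the device down cleanly.
                 */
                dead = get_signal(&ksig);
                if (dead)
                    clear_thread_flag(TIF_SIGPENDING);
            }

            set_current_state(TASK_INTERRUPTIBLE);

            /* assumed stop protocol: owner sets the flag, wakes us */
            if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags)) {
                __set_current_state(TASK_RUNNING);
                break;
            }

            did_work = vtsk->fn(vtsk->data);
            if (!did_work)
                schedule();
        }

        complete(&vtsk->exited);
        do_exit(0);
    }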