search for: vhost_flush_work

Displaying 15 results from an estimated 19 matches for "vhost_flush_work".

2023 Mar 28
1
[PATCH v6 04/11] vhost: take worker or vq instead of dev for flushing
...c
@@ -247,6 +247,20 @@ static void vhost_work_queue_on(struct vhost_worker *worker,
 	}
 }

+static void vhost_work_flush_on(struct vhost_worker *worker)
+{
+	struct vhost_flush_struct flush;
+
+	if (!worker)
+		return;
+
+	init_completion(&flush.wait_event);
+	vhost_work_init(&flush.work, vhost_flush_work);
+
+	vhost_work_queue_on(worker, &flush.work);
+	wait_for_completion(&flush.wait_event);
+}
+
 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 {
 	vhost_work_queue_on(dev->worker, work);
@@ -261,15 +275,7 @@ EXPORT_SYMBOL_GPL(vhost_vq_work_queue);
 void vhost_de...
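The hunk above is the flush-by-sentinel pattern: the flusher queues a work item whose only job is to fire a completion, then sleeps on it; because the worker runs items in FIFO order, the wakeup proves that everything queued earlier has already finished. Below is a minimal userspace sketch of the same pattern, with pthreads standing in for the kernel's completion API; every name is illustrative, none of this is vhost code.

    /* flush_demo.c - flush-by-sentinel sketch with pthreads */
    #include <pthread.h>
    #include <stddef.h>

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct work {
            void (*fn)(struct work *);
            struct work *next;
    };

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t queue_kick = PTHREAD_COND_INITIALIZER;
    static struct work *queue_head;

    /* Append to the FIFO and kick the worker (vhost_work_queue_on analog). */
    static void queue_work(struct work *w)
    {
            struct work **p;

            pthread_mutex_lock(&queue_lock);
            for (p = &queue_head; *p; p = &(*p)->next)
                    ;
            w->next = NULL;
            *p = w;
            pthread_cond_signal(&queue_kick);
            pthread_mutex_unlock(&queue_lock);
    }

    /* Worker thread: run works strictly in FIFO order (vhost_worker analog). */
    static void *worker_fn(void *arg)
    {
            (void)arg;
            for (;;) {
                    struct work *w;

                    pthread_mutex_lock(&queue_lock);
                    while (!queue_head)
                            pthread_cond_wait(&queue_kick, &queue_lock);
                    w = queue_head;
                    queue_head = w->next;
                    pthread_mutex_unlock(&queue_lock);

                    w->fn(w);
            }
            return NULL;
    }

    /* The sentinel: completion state plus the embedded work item. */
    struct flush_struct {
            struct work work;
            pthread_mutex_t lock;
            pthread_cond_t cond;
            int done;
    };

    /* Runs last on the worker; wakes the flusher (vhost_flush_work analog). */
    static void flush_fn(struct work *w)
    {
            struct flush_struct *f = container_of(w, struct flush_struct, work);

            pthread_mutex_lock(&f->lock);
            f->done = 1;
            pthread_cond_signal(&f->cond);
            pthread_mutex_unlock(&f->lock);
    }

    /* Queue the sentinel, then sleep: FIFO order guarantees every work
     * queued before it has finished by the time we wake up. */
    static void flush_works(void)
    {
            struct flush_struct f;

            f.work.fn = flush_fn;
            f.done = 0;
            pthread_mutex_init(&f.lock, NULL);
            pthread_cond_init(&f.cond, NULL);

            queue_work(&f.work);

            pthread_mutex_lock(&f.lock);
            while (!f.done)
                    pthread_cond_wait(&f.cond, &f.lock);
            pthread_mutex_unlock(&f.lock);

            pthread_cond_destroy(&f.cond);
            pthread_mutex_destroy(&f.lock);
    }

    int main(void)
    {
            pthread_t thr;

            pthread_create(&thr, NULL, worker_fn, NULL);
            flush_works();   /* returns once the worker has run the sentinel */
            return 0;
    }

Note that the on-stack sentinel is safe here for the same reason it is in the patch: flush_works() cannot return before the worker is done touching it.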
2019 Aug 05
1
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...eason to keep it is performance.
> >
> >
> > Maybe it's time to introduce the config option?
>
>
> Or does it make sense if I post a V3 with:
>
> - introduce config option and disable the optimization by default
>
> - switch from synchronize_rcu() to vhost_flush_work(), but the rest are the
> same
>
> This can give us some breath to decide which way should go for next release?
>
> Thanks

As is, with preempt enabled? Nope I don't think blocking an invalidator on swap IO is ok, so I don't believe this stuff is going into this release at...
2023 May 31
1
[syzbot] [kvm?] [net?] [virt?] general protection fault in vhost_work_queue
...a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -235,7 +235,7 @@ void vhost_dev_flush(struct vhost_dev *dev)
 {
 	struct vhost_flush_struct flush;

-	if (dev->worker) {
+	if (READ_ONCE(dev->worker.vtsk)) {
 		init_completion(&flush.wait_event);
 		vhost_work_init(&flush.work, vhost_flush_work);

@@ -247,7 +247,9 @@ EXPORT_SYMBOL_GPL(vhost_dev_flush);

 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 {
-	if (!dev->worker)
+	struct vhost_task *vtsk = READ_ONCE(dev->worker.vtsk);
+
+	if (!vtsk)
 		return;

 	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work...
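The fix above uses a standard lockless idiom: load the racy pointer once into a local, test the local, and use only the local, so the check and the use cannot observe two different values (and the compiler cannot silently re-read the field in between). A compressed userspace sketch of the same idiom in C11 atomics follows; the struct and helper names are illustrative, not the vhost ones.

    #include <stdatomic.h>
    #include <stdio.h>

    struct task { const char *name; };

    struct dev {
            /* Set when the worker starts, cleared (NULL) at teardown. */
            _Atomic(struct task *) vtsk;
    };

    /* Placeholder for "kick the worker task". */
    static void kick(struct task *t)
    {
            printf("kicking %s\n", t->name);
    }

    /* Buggy shape: two separate loads; vtsk can become NULL between them. */
    static void queue_work_racy(struct dev *d)
    {
            if (!atomic_load(&d->vtsk))
                    return;
            /* ... teardown may clear d->vtsk right here ... */
            kick(atomic_load(&d->vtsk));   /* may dereference NULL */
    }

    /* Fixed shape (what READ_ONCE buys): one load, then only the snapshot. */
    static void queue_work_fixed(struct dev *d)
    {
            struct task *t = atomic_load(&d->vtsk);

            if (!t)
                    return;
            kick(t);   /* t is a stable local snapshot */
    }

    int main(void)
    {
            struct task worker = { "worker" };
            struct dev d;

            atomic_init(&d.vtsk, &worker);
            queue_work_fixed(&d);
            queue_work_racy(&d);   /* happens to work single-threaded */
            return 0;
    }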
2023 Jun 05
1
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
...100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -235,7 +235,7 @@ void vhost_dev_flush(struct vhost_dev *dev)
 {
 	struct vhost_flush_struct flush;

-	if (dev->worker) {
+	if (dev->worker.vtsk) {
 		init_completion(&flush.wait_event);
 		vhost_work_init(&flush.work, vhost_flush_work);

@@ -247,7 +247,7 @@ EXPORT_SYMBOL_GPL(vhost_dev_flush);

 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 {
-	if (!dev->worker)
+	if (!dev->worker.vtsk)
 		return;

 	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
@@ -255,8 +255,8 @@ void vhost_w...
2019 Aug 05
4
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On 2019/8/2 10:27 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
>> On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
>>>> This must be a proper barrier, like a spinlock, mutex, or
>>>> synchronize_rcu.
>>>
>>> I start with synchronize_rcu() but both you and Michael raise some
>>>
2016 Apr 26
2
[PATCH 1/2] vhost: simplify work flushing
...44
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -131,6 +131,19 @@ static void vhost_reset_is_le(struct vhost_virtqueue *vq)
 	vq->is_le = virtio_legacy_is_little_endian();
 }

+struct vhost_flush_struct {
+	struct vhost_work work;
+	struct completion wait_event;
+};
+
+static void vhost_flush_work(struct vhost_work *work)
+{
+	struct vhost_flush_struct *s;
+
+	s = container_of(work, struct vhost_flush_struct, work);
+	complete(&s->wait_event);
+}
+
 static void vhost_poll_func(struct file *file, wait_queue_head_t *wqh,
 			    poll_table *pt)
 {
@@ -158,8 +171,6 @@ void vhost_work_ini...
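vhost_flush_work() above recovers its enclosing vhost_flush_struct from the embedded vhost_work with container_of(), which is plain offset arithmetic. A self-contained userspace illustration of the mechanics, with illustrative names:

    #include <stddef.h>
    #include <stdio.h>

    /* The kernel's container_of, reduced to its offsetof arithmetic. */
    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct work { int queued; };

    struct flush_struct {
            int tag;
            struct work work;   /* embedded member, not a pointer */
    };

    static void flush_cb(struct work *w)
    {
            /* Step back from the embedded member to the containing object. */
            struct flush_struct *s = container_of(w, struct flush_struct, work);

            printf("tag = %d\n", s->tag);
    }

    int main(void)
    {
            struct flush_struct s = { .tag = 42 };

            flush_cb(&s.work);   /* prints "tag = 42" */
            return 0;
    }

Because the work is embedded rather than pointed-to, no allocation is needed and the callback gets back to the completion state for free.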
2019 Aug 05
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...s") >> >> or keep it in. The only reason to keep it is performance. > > > Maybe it's time to introduce the config option? Or does it make sense if I post a V3 with: - introduce config option and disable the optimization by default - switch from synchronize_rcu() to vhost_flush_work(), but the rest are the same This can give us some breath to decide which way should go for next release? Thanks > > >> >> Now as long as all this code is disabled anyway, we can experiment a >> bit. >> >> I personally feel we would be best served by having...
2023 Jun 06
1
[CFT][PATCH v3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
On 6/6/23 7:16 AM, Oleg Nesterov wrote:
> On 06/05, Mike Christie wrote:
>>
>> On 6/5/23 10:10 AM, Oleg Nesterov wrote:
>>> On 06/03, michael.christie at oracle.com wrote:
>>>>
>>>> On 6/2/23 11:15 PM, Eric W. Biederman wrote:
>>>> The problem is that as part of the flush the drivers/vhost/scsi.c code
>>>> will wait for
2019 Aug 02
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...ement.

2) SRCU: a full memory barrier is required on srcu_read_lock(), which still yields little performance improvement.

3) mutex: one possible issue is the need to wait for the page to be swapped in (is this unacceptable?); another is that we would need to hold the vq lock during the range overlap check.

4) using vhost_flush_work() instead of synchronize_rcu(): still needs to wait for swap, but can do the overlap checking without the lock.

>
> And, again, you can't re-invent a spinlock with open coding and get
> something better.

So the question is if waiting for swap is considered to be unsuitable for MMU noti...
2023 Jun 01
1
[syzbot] [kvm?] [net?] [virt?] general protection fault in vhost_work_queue
...drivers/vhost/vhost.c
>@@ -235,7 +235,7 @@ void vhost_dev_flush(struct vhost_dev *dev)
> {
> 	struct vhost_flush_struct flush;
>
>-	if (dev->worker) {
>+	if (READ_ONCE(dev->worker.vtsk)) {
> 		init_completion(&flush.wait_event);
> 		vhost_work_init(&flush.work, vhost_flush_work);
>
>@@ -247,7 +247,9 @@ EXPORT_SYMBOL_GPL(vhost_dev_flush);
>
> void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
> {
>-	if (!dev->worker)
>+	struct vhost_task *vtsk = READ_ONCE(dev->worker.vtsk);
>+
>+	if (!vtsk)
> 		return;
>
> 	if (!...
2023 Jun 06
1
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
...>+++ b/drivers/vhost/vhost.c
>@@ -235,7 +235,7 @@ void vhost_dev_flush(struct vhost_dev *dev)
> {
> 	struct vhost_flush_struct flush;
>
>-	if (dev->worker) {
>+	if (dev->worker.vtsk) {
> 		init_completion(&flush.wait_event);
> 		vhost_work_init(&flush.work, vhost_flush_work);
>
>@@ -247,7 +247,7 @@ EXPORT_SYMBOL_GPL(vhost_dev_flush);
>
> void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
> {
>-	if (!dev->worker)
>+	if (!dev->worker.vtsk)
> 		return;
>
> 	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags...
2023 Jun 06
2
[CFT][PATCH v3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
...his assumes that vhost_task_create() uses CLONE_THREAD

	if (same_thread_group(current, dev->worker->vtsk->task)) {
		... run the pending callbacks ...
		return;
	}

	// this is what we currently have
	init_completion(&flush.wait_event);
	vhost_work_init(&flush.work, vhost_flush_work);
	vhost_work_queue(dev, &flush.work);
	wait_for_completion(&flush.wait_event);
	}
}

?

Mike, I am just trying to understand what exactly vhost_worker() should do.

> We need to add code like I mentioned in that reply because we don't have a
> way to call into the layers b...
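The sketch quoted above is about avoiding self-deadlock: once the vhost task shares the caller's thread group, a flush issued from the worker thread itself must run the pending callbacks inline, because queueing a sentinel and sleeping on it would mean waiting on ourselves forever. A userspace rendering of that guard, assuming a worker that records its thread ID at creation; the names are illustrative, not the kernel's.

    #include <pthread.h>
    #include <stdio.h>

    struct worker {
            pthread_t thread;   /* recorded when the worker is created */
    };

    /* Placeholder: drain callbacks directly on the current thread. */
    static void run_pending(struct worker *w)
    {
            (void)w;
            printf("running pending works inline\n");
    }

    /* Placeholder: the normal queue-a-sentinel-and-wait flush. */
    static void queue_flush_and_wait(struct worker *w)
    {
            (void)w;
            printf("queueing flush sentinel and sleeping\n");
    }

    /* Flush that is safe from any thread, including the worker itself:
     * waiting for ourselves would deadlock, so detect that case and run
     * the pending callbacks inline (same_thread_group() analog). */
    static void flush(struct worker *w)
    {
            if (pthread_equal(pthread_self(), w->thread)) {
                    run_pending(w);
                    return;
            }
            queue_flush_and_wait(w);
    }

    int main(void)
    {
            struct worker w = { .thread = pthread_self() };

            flush(&w);   /* caller == worker here, so this drains inline */
            return 0;
    }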
2019 Aug 01
3
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On Thu, Aug 01, 2019 at 01:02:18PM +0800, Jason Wang wrote:
>
> On 2019/8/1 3:30 AM, Jason Gunthorpe wrote:
> > On Wed, Jul 31, 2019 at 09:28:20PM +0800, Jason Wang wrote:
> > > On 2019/7/31 8:39 PM, Jason Gunthorpe wrote:
> > > > On Wed, Jul 31, 2019 at 04:46:53AM -0400, Jason Wang wrote:
> > > > > We used to use RCU to synchronize MMU notifier with
2023 Jun 02
2
[PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Hi Mike,

sorry, but somehow I can't understand this patch...

I'll try to read it with a fresh head on Weekend, but for example,

On 06/01, Mike Christie wrote:
>
>  static int vhost_task_fn(void *data)
>  {
>  	struct vhost_task *vtsk = data;
> -	int ret;
> +	bool dead = false;
> +
> +	for (;;) {
> +		bool did_work;
> +
> +		/* mb paired w/
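The fragment being questioned restructures the worker loop around a dead flag: after the task is signalled to exit, it stops sleeping but keeps draining queued works until the queue is empty. A control-flow sketch of that shape with stubbed-in environment helpers; none of these names are from the patch itself.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stubs standing in for the real environment. */
    static int works_left = 3;
    static int exit_requested = 1;

    static bool death_signalled(void) { return exit_requested; }

    static bool run_one_work(void)
    {
            if (works_left <= 0)
                    return false;
            printf("ran work, %d left\n", --works_left);
            return true;
    }

    static void sleep_until_kicked(void) { /* would block here */ }

    /* The shape under discussion: after the death signal, stop sleeping
     * but keep draining the queue; exit only once no work remains. */
    static void worker_loop(void)
    {
            bool dead = false;

            for (;;) {
                    bool did_work;

                    if (!dead && death_signalled())
                            dead = true;

                    did_work = run_one_work();
                    if (!did_work) {
                            if (dead)
                                    break;              /* drained, safe to exit */
                            sleep_until_kicked();       /* nothing to do yet */
                    }
            }
    }

    int main(void)
    {
            worker_loop();   /* drains the three stub works, then exits */
            return 0;
    }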
2023 Mar 28
12
[PATCH v6 00/11] vhost: multiple worker support
The following patches were built over linux-next, which contains various vhost patches in mst's tree and the vhost_task patchset in Christian Brauner's tree:

git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git
kernel.user_worker branch:
https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=kernel.user_worker

The latter patchset handles the review comment