Displaying 4 results from an estimated 4 matches for "vhost_work_flush_on".
2023 Mar 28
1
[PATCH v6 04/11] vhost: take worker or vq instead of dev for flushing
...anged, 15 insertions(+), 9 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index cc2628ba9a77..6160aa1cc922 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -247,6 +247,20 @@ static void vhost_work_queue_on(struct vhost_worker *worker,
         }
 }
 
+static void vhost_work_flush_on(struct vhost_worker *worker)
+{
+        struct vhost_flush_struct flush;
+
+        if (!worker)
+                return;
+
+        init_completion(&flush.wait_event);
+        vhost_work_init(&flush.work, vhost_flush_work);
+
+        vhost_work_queue_on(worker, &flush.work);
+        wait_for_completion(&flush.wait_event);
+}
+
 void...
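For context, vhost_work_flush_on() queues a special work item on the worker and sleeps until the worker thread executes it. The completion side looks roughly like the following, paraphrased from the existing vhost_flush_struct/vhost_flush_work in drivers/vhost/vhost.c (shown for illustration; it is not part of this hunk):

        struct vhost_flush_struct {
                struct vhost_work work;
                struct completion wait_event;
        };

        static void vhost_flush_work(struct vhost_work *work)
        {
                struct vhost_flush_struct *s;

                /* The worker runs queued work in order, so by the time this
                 * executes, everything queued before the flush has finished;
                 * complete() then wakes the sleeping flusher. */
                s = container_of(work, struct vhost_flush_struct, work);
                complete(&s->wait_event);
        }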
2023 Mar 28
1
[PATCH v6 11/11] vhost: allow userspace to create workers
...insertions(+), 14 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 1fa5e9a49092..e40699e83c6d 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -271,7 +271,11 @@ EXPORT_SYMBOL_GPL(vhost_vq_work_queue);
 
 void vhost_dev_flush(struct vhost_dev *dev)
 {
-        vhost_work_flush_on(dev->worker);
+        struct vhost_worker *worker;
+        unsigned long i;
+
+        xa_for_each(&dev->worker_xa, i, worker)
+                vhost_work_flush_on(worker);
 }
 EXPORT_SYMBOL_GPL(vhost_dev_flush);
@@ -489,7 +493,6 @@ void vhost_dev_init(struct vhost_dev *dev,
         dev->umem = NULL;
         dev->iotlb = NUL...
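With this patch the device's workers live in an XArray (dev->worker_xa) instead of a single dev->worker pointer, so the flush has to visit every worker. As a minimal sketch of the other side of that structure, a worker could be registered roughly like this (the helper name is hypothetical, not code from the series):

        #include <linux/xarray.h>

        /* Hypothetical helper: store a worker at the next free index so the
         * xa_for_each() loop in vhost_dev_flush() will later visit it. */
        static int vhost_worker_track(struct vhost_dev *dev,
                                      struct vhost_worker *worker)
        {
                u32 id;

                /* xa_alloc() finds an unused index, stores the entry there,
                 * and returns the chosen index through id. */
                return xa_alloc(&dev->worker_xa, &id, worker,
                                xa_limit_32b, GFP_KERNEL);
        }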
2023 Mar 28
12
[PATCH v6 00/11] vhost: multiple worker support
The following patches were built over linux-next, which contains various
vhost patches in mst's tree and the vhost_task patchset in Christian
Brauner's tree:
git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git
kernel.user_worker branch:
https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=kernel.user_worker
The latter patchset handles the review comment
2023 Apr 10
1
[PATCH v6 11/11] vhost: allow userspace to create workers
...> will have made sure the flush has completed when the clear function returns.
> It does that with the device mutex, so when we run __vhost_vq_attach_worker
> it will only see a vq/worker with no flushes in progress.
Ok.
>
> For the general case of whether we can be doing a vhost_dev_flush/vhost_work_flush_on
> and a __vhost_vq_attach_worker at the same time, I thought we are ok as well
> because we currently have to hold the device mutex when we flush, so those can't
> race with ioctl calls to vhost_vq_attach_worker, since we hold the dev mutex
> during those ioctls.
I'm not sure I unders...
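To make the serialization being discussed concrete: both paths take dev->mutex, so an attach can never observe an in-flight flush. A hedged sketch (the two wrapper functions are illustrative; only vhost_dev_flush(), __vhost_vq_attach_worker(), and dev->mutex come from the series):

        /* Illustrative only: the flush side and the attach side both hold
         * dev->mutex, so they serialize against each other. */
        static void flush_path(struct vhost_dev *dev)
        {
                mutex_lock(&dev->mutex);
                vhost_dev_flush(dev);   /* walks worker_xa, flushing each worker */
                mutex_unlock(&dev->mutex);
        }

        static void attach_path(struct vhost_dev *dev, struct vhost_virtqueue *vq,
                                struct vhost_worker *worker)
        {
                mutex_lock(&dev->mutex);
                /* Cannot race flush_path(): any earlier flush has completed,
                 * so the vq/worker seen here has no flush in progress. */
                __vhost_vq_attach_worker(vq, worker);
                mutex_unlock(&dev->mutex);
        }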