This series adds support for the use of user virtual addresses in the
vDPA simulator devices.

The main reason for this change is to lift the pinning of all guest
memory, especially for virtio devices implemented in software.

The next step would be to generalize the code in vdpa-sim to allow the
implementation of in-kernel software devices. Similar to vhost, but
using vDPA so we can reuse the same software stack (e.g. in QEMU) for
both HW and SW devices.

For example, we have never merged vhost-blk, and lately there has been
interest. So it would be nice to do it directly with vDPA to reuse the
same code in the VMM for both HW and SW vDPA block devices.

The main problem (addressed by this series) was the pinning of all
guest memory, which prevented the overcommit of guest memory.

Thanks,
Stefano

Changelog listed in each patch.
v2: https://lore.kernel.org/lkml/20230302113421.174582-1-sgarzare@redhat.com/
RFC v1: https://lore.kernel.org/lkml/20221214163025.103075-1-sgarzare@redhat.com/

Stefano Garzarella (8):
  vdpa: add bind_mm/unbind_mm callbacks
  vhost-vdpa: use bind_mm/unbind_mm device callbacks
  vringh: replace kmap_atomic() with kmap_local_page()
  vringh: support VA with iotlb
  vdpa_sim: make devices agnostic for work management
  vdpa_sim: use kthread worker
  vdpa_sim: replace the spinlock with a mutex to protect the state
  vdpa_sim: add support for user VA

 drivers/vdpa/vdpa_sim/vdpa_sim.h     |  11 +-
 include/linux/vdpa.h                 |  10 ++
 include/linux/vringh.h               |   5 +-
 drivers/vdpa/mlx5/net/mlx5_vnet.c    |   2 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c     | 144 ++++++++++++++++++++-----
 drivers/vdpa/vdpa_sim/vdpa_sim_blk.c |  10 +-
 drivers/vdpa/vdpa_sim/vdpa_sim_net.c |  10 +-
 drivers/vhost/vdpa.c                 |  31 ++++++
 drivers/vhost/vringh.c               | 153 +++++++++++++++++++------
 9 files changed, 301 insertions(+), 75 deletions(-)

-- 
2.39.2
Stefano Garzarella
2023-Mar-21 15:42 UTC
[PATCH v3 1/8] vdpa: add bind_mm/unbind_mm callbacks
These new optional callbacks are used to bind/unbind the device to a
specific address space so the vDPA framework can use VA when these
callbacks are implemented.

Suggested-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---

Notes:
    v2:
    - removed `struct task_struct *owner` param (unused for now, maybe
      useful to support cgroups) [Jason]
    - add unbind_mm callback [Jason]

 include/linux/vdpa.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 43f59ef10cc9..369c21394284 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -290,6 +290,14 @@ struct vdpa_map_file {
  *				@vdev: vdpa device
  *				@idx: virtqueue index
  *				Returns pointer to structure device or error (NULL)
+ * @bind_mm:			Bind the device to a specific address space
+ *				so the vDPA framework can use VA when this
+ *				callback is implemented. (optional)
+ *				@vdev: vdpa device
+ *				@mm: address space to bind
+ * @unbind_mm:			Unbind the device from the address space
+ *				bound using the bind_mm callback. (optional)
+ *				@vdev: vdpa device
  * @free:			Free resources that belongs to vDPA (optional)
  *				@vdev: vdpa device
  */
@@ -351,6 +359,8 @@ struct vdpa_config_ops {
 	int (*set_group_asid)(struct vdpa_device *vdev, unsigned int group,
 			      unsigned int asid);
 	struct device *(*get_vq_dma_dev)(struct vdpa_device *vdev, u16 idx);
+	int (*bind_mm)(struct vdpa_device *vdev, struct mm_struct *mm);
+	void (*unbind_mm)(struct vdpa_device *vdev);
 
 	/* Free device resources */
 	void (*free)(struct vdpa_device *vdev);
-- 
2.39.2
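For illustration, a driver implementing these optional callbacks could
look roughly like the sketch below. The my_vdpa type, its mm field, and
both function names are invented here and are not part of this patch
(patch 8 adds the real vdpa_sim implementation); note that mmgrab() and
mmdrop() only keep the mm_struct itself alive, they do not pin its
memory.

#include <linux/sched/mm.h>	/* mmgrab()/mmdrop() */
#include <linux/vdpa.h>

/* Hypothetical device state, for illustration only. */
struct my_vdpa {
	struct vdpa_device vdpa;
	struct mm_struct *mm;
};

static int my_vdpa_bind_mm(struct vdpa_device *vdev, struct mm_struct *mm)
{
	struct my_vdpa *dev = container_of(vdev, struct my_vdpa, vdpa);

	mmgrab(mm);		/* hold the mm_struct (not its memory) */
	dev->mm = mm;		/* used later by the data path */
	return 0;
}

static void my_vdpa_unbind_mm(struct vdpa_device *vdev)
{
	struct my_vdpa *dev = container_of(vdev, struct my_vdpa, vdpa);

	if (dev->mm) {
		mmdrop(dev->mm);
		dev->mm = NULL;
	}
}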
Stefano Garzarella
2023-Mar-21 15:42 UTC
[PATCH v3 2/8] vhost-vdpa: use bind_mm/unbind_mm device callbacks
When the user calls the VHOST_SET_OWNER ioctl and the vDPA device has
`use_va` set to true, let's call the bind_mm callback. In this way we
can bind the device to the user address space and directly use the
user VA.

The unbind_mm callback is called during the release after stopping
the device.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---

Notes:
    v3:
    - added `case VHOST_SET_OWNER` in vhost_vdpa_unlocked_ioctl() [Jason]
    v2:
    - call the new unbind_mm callback during the release [Jason]
    - avoid calling the bind_mm callback after the reset, since the
      device is not detaching it now during the reset

 drivers/vhost/vdpa.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 7be9d9d8f01c..20250c3418b2 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -219,6 +219,28 @@ static int vhost_vdpa_reset(struct vhost_vdpa *v)
 	return vdpa_reset(vdpa);
 }
 
+static long vhost_vdpa_bind_mm(struct vhost_vdpa *v)
+{
+	struct vdpa_device *vdpa = v->vdpa;
+	const struct vdpa_config_ops *ops = vdpa->config;
+
+	if (!vdpa->use_va || !ops->bind_mm)
+		return 0;
+
+	return ops->bind_mm(vdpa, v->vdev.mm);
+}
+
+static void vhost_vdpa_unbind_mm(struct vhost_vdpa *v)
+{
+	struct vdpa_device *vdpa = v->vdpa;
+	const struct vdpa_config_ops *ops = vdpa->config;
+
+	if (!vdpa->use_va || !ops->unbind_mm)
+		return;
+
+	ops->unbind_mm(vdpa);
+}
+
 static long vhost_vdpa_get_device_id(struct vhost_vdpa *v, u8 __user *argp)
 {
 	struct vdpa_device *vdpa = v->vdpa;
@@ -709,6 +731,14 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 	case VHOST_VDPA_RESUME:
 		r = vhost_vdpa_resume(v);
 		break;
+	case VHOST_SET_OWNER:
+		r = vhost_dev_set_owner(d);
+		if (r)
+			break;
+		r = vhost_vdpa_bind_mm(v);
+		if (r)
+			vhost_dev_reset_owner(d, NULL);
+		break;
 	default:
 		r = vhost_dev_ioctl(&v->vdev, cmd, argp);
 		if (r == -ENOIOCTLCMD)
@@ -1287,6 +1317,7 @@ static int vhost_vdpa_release(struct inode *inode, struct file *filep)
 	vhost_vdpa_clean_irq(v);
 	vhost_vdpa_reset(v);
 	vhost_dev_stop(&v->vdev);
+	vhost_vdpa_unbind_mm(v);
 	vhost_vdpa_config_put(v);
 	vhost_vdpa_cleanup(v);
 	mutex_unlock(&d->mutex);
-- 
2.39.2
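From the userspace side nothing changes in the required call order; a
minimal hypothetical sketch of how a VMM would trigger the new path
(the device node name is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	int fd = open("/dev/vhost-vdpa-0", O_RDWR);	/* example node */

	if (fd < 0)
		return 1;

	/* With this patch, VHOST_SET_OWNER also binds the device to the
	 * caller's mm when use_va is set and bind_mm is implemented. */
	if (ioctl(fd, VHOST_SET_OWNER, NULL)) {
		perror("VHOST_SET_OWNER");
		return 1;
	}

	/* ... VHOST_SET_FEATURES, IOTLB updates, vq setup, etc. ... */
	return 0;
}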
Stefano Garzarella
2023-Mar-21 15:42 UTC
[PATCH v3 3/8] vringh: replace kmap_atomic() with kmap_local_page()
kmap_atomic() is deprecated in favor of kmap_local_page() since commit
f3ba3c710ac5 ("mm/highmem: Provide kmap_local*").

With kmap_local_page() the mappings are per thread, CPU local, can take
page-faults, and can be called from any context (including interrupts).
Furthermore, the tasks can be preempted and, when they are scheduled to
run again, the kernel virtual addresses are restored and still valid.

kmap_atomic() is implemented like a kmap_local_page() which also
disables page-faults and preemption (the latter only for !PREEMPT_RT
kernels, otherwise it only disables migration).

The code within the mappings/un-mappings in getu16_iotlb() and
putu16_iotlb() doesn't depend on the above-mentioned side effects of
kmap_atomic(), so a mere replacement of the old API with the new one is
all that is required (i.e., there is no need to explicitly add calls
to pagefault_disable() and/or preempt_disable()).

This commit reuses a "boilerplate" commit message from Fabio, who has
already done this change in several places.

Cc: "Fabio M. De Francesco" <fmdefrancesco@gmail.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---

Notes:
    v3:
    - credited Fabio for the commit message
    - added reference to the commit that deprecated kmap_atomic() [Jason]
    v2:
    - added this patch since checkpatch.pl complained about deprecation
      of kmap_atomic() touched by next patch

 drivers/vhost/vringh.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index a1e27da54481..0ba3ef809e48 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -1220,10 +1220,10 @@ static inline int getu16_iotlb(const struct vringh *vrh,
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_atomic(iov.bv_page);
+	kaddr = kmap_local_page(iov.bv_page);
 	from = kaddr + iov.bv_offset;
 	*val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 
 	return 0;
 }
@@ -1241,10 +1241,10 @@ static inline int putu16_iotlb(const struct vringh *vrh,
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_atomic(iov.bv_page);
+	kaddr = kmap_local_page(iov.bv_page);
 	to = kaddr + iov.bv_offset;
 	WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 
 	return 0;
 }
-- 
2.39.2
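As a standalone sketch of the pattern being applied (the helper name
read_virtio16_mapped is invented for illustration; it mirrors the
access done in getu16_iotlb() above):

#include <linux/highmem.h>	/* kmap_local_page()/kunmap_local() */
#include <linux/bvec.h>
#include <linux/virtio_types.h>

/* Hypothetical helper: map a highmem page, read a __virtio16, unmap. */
static __virtio16 read_virtio16_mapped(const struct bio_vec *iov)
{
	void *kaddr = kmap_local_page(iov->bv_page);	/* was kmap_atomic() */
	__virtio16 val = READ_ONCE(*(__virtio16 *)(kaddr + iov->bv_offset));

	kunmap_local(kaddr);				/* was kunmap_atomic() */
	return val;
}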
Stefano Garzarella
2023-Mar-21 15:42 UTC
[PATCH v3 4/8] vringh: support VA with iotlb
vDPA supports the possibility to use user VA in the iotlb messages.
So, let's add support for user VA in vringh to use it in the vDPA
simulators.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---

Notes:
    v3:
    - refactored avoiding code duplication [Eugenio]
    v2:
    - replace kmap_atomic() with kmap_local_page() [see previous patch]
    - fix cast warnings when building with W=1 C=1

 include/linux/vringh.h            |   5 +-
 drivers/vdpa/mlx5/net/mlx5_vnet.c |   2 +-
 drivers/vdpa/vdpa_sim/vdpa_sim.c  |   4 +-
 drivers/vhost/vringh.c            | 153 +++++++++++++++++++++++-------
 4 files changed, 127 insertions(+), 37 deletions(-)

diff --git a/include/linux/vringh.h b/include/linux/vringh.h
index 1991a02c6431..d39b9f2dcba0 100644
--- a/include/linux/vringh.h
+++ b/include/linux/vringh.h
@@ -32,6 +32,9 @@ struct vringh {
 	/* Can we get away with weak barriers? */
 	bool weak_barriers;
 
+	/* Use user's VA */
+	bool use_va;
+
 	/* Last available index we saw (ie. where we're up to). */
 	u16 last_avail_idx;
 
@@ -279,7 +282,7 @@ void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb,
 		      spinlock_t *iotlb_lock);
 
 int vringh_init_iotlb(struct vringh *vrh, u64 features,
-		      unsigned int num, bool weak_barriers,
+		      unsigned int num, bool weak_barriers, bool use_va,
 		      struct vring_desc *desc,
 		      struct vring_avail *avail,
 		      struct vring_used *used);
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 520646ae7fa0..dfd0e000217b 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2537,7 +2537,7 @@ static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev)
 
 	if (mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ))
 		err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features,
-					MLX5_CVQ_MAX_ENT, false,
+					MLX5_CVQ_MAX_ENT, false, false,
 					(struct vring_desc *)(uintptr_t)cvq->desc_addr,
 					(struct vring_avail *)(uintptr_t)cvq->driver_addr,
 					(struct vring_used *)(uintptr_t)cvq->device_addr);
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index eea23c630f7c..47cdf2a1f5b8 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -60,7 +60,7 @@ static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx)
 	struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
 	uint16_t last_avail_idx = vq->vring.last_avail_idx;
 
-	vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, true,
+	vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, true, false,
 			  (struct vring_desc *)(uintptr_t)vq->desc_addr,
 			  (struct vring_avail *)
 			  (uintptr_t)vq->driver_addr,
@@ -92,7 +92,7 @@ static void vdpasim_vq_reset(struct vdpasim *vdpasim,
 	vq->cb = NULL;
 	vq->private = NULL;
 	vringh_init_iotlb(&vq->vring, vdpasim->dev_attr.supported_features,
-			  VDPASIM_QUEUE_MAX, false, NULL, NULL, NULL);
+			  VDPASIM_QUEUE_MAX, false, false, NULL, NULL, NULL);
 
 	vq->vring.notify = NULL;
 }
diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index 0ba3ef809e48..72c88519329a 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -1094,10 +1094,18 @@ EXPORT_SYMBOL(vringh_need_notify_kern);
 
 #if IS_REACHABLE(CONFIG_VHOST_IOTLB)
 
+struct iotlb_vec {
+	union {
+		struct iovec *iovec;
+		struct bio_vec *bvec;
+	} iov;
+	size_t count;
+	bool is_iovec;
+};
+
 static int iotlb_translate(const struct vringh *vrh,
 			   u64 addr, u64 len, u64 *translated,
-			   struct bio_vec iov[],
-			   int iov_size, u32 perm)
+			   struct iotlb_vec *ivec, u32 perm)
 {
 	struct vhost_iotlb_map *map;
 	struct vhost_iotlb *iotlb = vrh->iotlb;
@@ -1107,9 +1115,9 @@ static int iotlb_translate(const struct vringh *vrh,
 	spin_lock(vrh->iotlb_lock);
 
 	while (len > s) {
-		u64 size, pa, pfn;
+		u64 size;
 
-		if (unlikely(ret >= iov_size)) {
+		if (unlikely(ret >= ivec->count)) {
 			ret = -ENOBUFS;
 			break;
 		}
@@ -1124,10 +1132,22 @@ static int iotlb_translate(const struct vringh *vrh,
 		}
 
 		size = map->size - addr + map->start;
-		pa = map->addr + addr - map->start;
-		pfn = pa >> PAGE_SHIFT;
-		bvec_set_page(&iov[ret], pfn_to_page(pfn), min(len - s, size),
-			      pa & (PAGE_SIZE - 1));
+		if (ivec->is_iovec) {
+			struct iovec *iovec = ivec->iov.iovec;
+
+			iovec[ret].iov_len = min(len - s, size);
+			iovec[ret].iov_base = (void __user *)(unsigned long)
+					      (map->addr + addr - map->start);
+		} else {
+			u64 pa = map->addr + addr - map->start;
+			u64 pfn = pa >> PAGE_SHIFT;
+			struct bio_vec *bvec = ivec->iov.bvec;
+
+			bvec_set_page(&bvec[ret], pfn_to_page(pfn),
+				      min(len - s, size),
+				      pa & (PAGE_SIZE - 1));
+		}
+
 		s += size;
 		addr += size;
 		++ret;
@@ -1141,26 +1161,42 @@ static int iotlb_translate(const struct vringh *vrh,
 	return ret;
 }
 
+#define IOTLB_IOV_SIZE 16
+
 static inline int copy_from_iotlb(const struct vringh *vrh, void *dst,
 				  void *src, size_t len)
 {
+	struct iotlb_vec ivec;
+	union {
+		struct iovec iovec[IOTLB_IOV_SIZE];
+		struct bio_vec bvec[IOTLB_IOV_SIZE];
+	} iov;
 	u64 total_translated = 0;
 
+	ivec.iov.iovec = iov.iovec;
+	ivec.count = IOTLB_IOV_SIZE;
+	ivec.is_iovec = vrh->use_va;
+
 	while (total_translated < len) {
-		struct bio_vec iov[16];
 		struct iov_iter iter;
 		u64 translated;
 		int ret;
 
 		ret = iotlb_translate(vrh, (u64)(uintptr_t)src,
 				      len - total_translated, &translated,
-				      iov, ARRAY_SIZE(iov), VHOST_MAP_RO);
+				      &ivec, VHOST_MAP_RO);
 		if (ret == -ENOBUFS)
-			ret = ARRAY_SIZE(iov);
+			ret = IOTLB_IOV_SIZE;
 		else if (ret < 0)
 			return ret;
 
-		iov_iter_bvec(&iter, ITER_SOURCE, iov, ret, translated);
+		if (ivec.is_iovec) {
+			iov_iter_init(&iter, ITER_SOURCE, ivec.iov.iovec, ret,
+				      translated);
+		} else {
+			iov_iter_bvec(&iter, ITER_SOURCE, ivec.iov.bvec, ret,
+				      translated);
+		}
 
 		ret = copy_from_iter(dst, translated, &iter);
 		if (ret < 0)
@@ -1177,23 +1213,37 @@ static inline int copy_from_iotlb(const struct vringh *vrh, void *dst,
 static inline int copy_to_iotlb(const struct vringh *vrh, void *dst,
 				void *src, size_t len)
 {
+	struct iotlb_vec ivec;
+	union {
+		struct iovec iovec[IOTLB_IOV_SIZE];
+		struct bio_vec bvec[IOTLB_IOV_SIZE];
+	} iov;
 	u64 total_translated = 0;
 
+	ivec.iov.iovec = iov.iovec;
+	ivec.count = IOTLB_IOV_SIZE;
+	ivec.is_iovec = vrh->use_va;
+
 	while (total_translated < len) {
-		struct bio_vec iov[16];
 		struct iov_iter iter;
 		u64 translated;
 		int ret;
 
 		ret = iotlb_translate(vrh, (u64)(uintptr_t)dst,
 				      len - total_translated, &translated,
-				      iov, ARRAY_SIZE(iov), VHOST_MAP_WO);
+				      &ivec, VHOST_MAP_WO);
 		if (ret == -ENOBUFS)
-			ret = ARRAY_SIZE(iov);
+			ret = IOTLB_IOV_SIZE;
 		else if (ret < 0)
 			return ret;
 
-		iov_iter_bvec(&iter, ITER_DEST, iov, ret, translated);
+		if (ivec.is_iovec) {
+			iov_iter_init(&iter, ITER_DEST, ivec.iov.iovec, ret,
+				      translated);
+		} else {
+			iov_iter_bvec(&iter, ITER_DEST, ivec.iov.bvec, ret,
+				      translated);
+		}
 
 		ret = copy_to_iter(src, translated, &iter);
 		if (ret < 0)
@@ -1210,20 +1260,37 @@ static inline int copy_to_iotlb(const struct vringh *vrh, void *dst,
 static inline int getu16_iotlb(const struct vringh *vrh,
 			       u16 *val, const __virtio16 *p)
 {
-	struct bio_vec iov;
-	void *kaddr, *from;
+	struct iotlb_vec ivec;
+	union {
+		struct iovec iovec[1];
+		struct bio_vec bvec[1];
+	} iov;
+	__virtio16 tmp;
 	int ret;
 
+	ivec.iov.iovec = iov.iovec;
+	ivec.count = 1;
+	ivec.is_iovec = vrh->use_va;
+
 	/* Atomic read is needed for getu16 */
-	ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL,
-			      &iov, 1, VHOST_MAP_RO);
+	ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p),
+			      NULL, &ivec, VHOST_MAP_RO);
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_local_page(iov.bv_page);
-	from = kaddr + iov.bv_offset;
-	*val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
-	kunmap_local(kaddr);
+	if (ivec.is_iovec) {
+		ret = __get_user(tmp, (__virtio16 __user *)ivec.iov.iovec[0].iov_base);
+		if (ret)
+			return ret;
+	} else {
+		void *kaddr = kmap_local_page(ivec.iov.bvec[0].bv_page);
+		void *from = kaddr + ivec.iov.bvec[0].bv_offset;
+
+		tmp = READ_ONCE(*(__virtio16 *)from);
+		kunmap_local(kaddr);
+	}
+
+	*val = vringh16_to_cpu(vrh, tmp);
 
 	return 0;
 }
@@ -1231,20 +1298,37 @@ static inline int getu16_iotlb(const struct vringh *vrh,
 static inline int putu16_iotlb(const struct vringh *vrh,
 			       __virtio16 *p, u16 val)
 {
-	struct bio_vec iov;
-	void *kaddr, *to;
+	struct iotlb_vec ivec;
+	union {
+		struct iovec iovec;
+		struct bio_vec bvec;
+	} iov;
+	__virtio16 tmp;
 	int ret;
 
+	ivec.iov.iovec = &iov.iovec;
+	ivec.count = 1;
+	ivec.is_iovec = vrh->use_va;
+
 	/* Atomic write is needed for putu16 */
-	ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL,
-			      &iov, 1, VHOST_MAP_WO);
+	ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p),
+			      NULL, &ivec, VHOST_MAP_WO);
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_local_page(iov.bv_page);
-	to = kaddr + iov.bv_offset;
-	WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
-	kunmap_local(kaddr);
+	tmp = cpu_to_vringh16(vrh, val);
+
+	if (ivec.is_iovec) {
+		ret = __put_user(tmp, (__virtio16 __user *)ivec.iov.iovec[0].iov_base);
+		if (ret)
+			return ret;
+	} else {
+		void *kaddr = kmap_local_page(ivec.iov.bvec[0].bv_page);
+		void *to = kaddr + ivec.iov.bvec[0].bv_offset;
+
+		WRITE_ONCE(*(__virtio16 *)to, tmp);
+		kunmap_local(kaddr);
+	}
 
 	return 0;
 }
@@ -1306,6 +1390,7 @@ static inline int putused_iotlb(const struct vringh *vrh,
  * @features: the feature bits for this ring.
  * @num: the number of elements.
  * @weak_barriers: true if we only need memory barriers, not I/O.
+ * @use_va: true if IOTLB contains user VA
  * @desc: the userpace descriptor pointer.
  * @avail: the userpace avail pointer.
  * @used: the userpace used pointer.
@@ -1313,11 +1398,13 @@ static inline int putused_iotlb(const struct vringh *vrh,
  * Returns an error if num is invalid.
  */
 int vringh_init_iotlb(struct vringh *vrh, u64 features,
-		      unsigned int num, bool weak_barriers,
+		      unsigned int num, bool weak_barriers, bool use_va,
 		      struct vring_desc *desc,
 		      struct vring_avail *avail,
 		      struct vring_used *used)
 {
+	vrh->use_va = use_va;
+
 	return vringh_init_kern(vrh, features, num, weak_barriers,
 				desc, avail, used);
 }
-- 
2.39.2
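For reference, a consumer whose IOTLB is filled with user VAs would now
initialize the ring as in the minimal sketch below; the vq fields and
the surrounding function are assumed (borrowed from the vdpa_sim
virtqueue setup shown above), and only the new use_va argument differs
from the kernel-PA case:

/* Sketch: vq->desc_addr etc. are assumed to already hold user VAs. */
err = vringh_init_iotlb(&vq->vring, features, num,
			true,	/* weak_barriers */
			true,	/* use_va: IOTLB contains user VA */
			(struct vring_desc *)(uintptr_t)vq->desc_addr,
			(struct vring_avail *)(uintptr_t)vq->driver_addr,
			(struct vring_used *)(uintptr_t)vq->device_addr);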
Stefano Garzarella
2023-Mar-21 15:48 UTC
[PATCH v3 5/8] vdpa_sim: make devices agnostic for work management
Let's move work management inside the vdpa_sim core. This way we can
easily change how work is managed, without having to change the
devices each time.

Acked-by: Eugenio Pérez Martin <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
 drivers/vdpa/vdpa_sim/vdpa_sim.h     |  3 ++-
 drivers/vdpa/vdpa_sim/vdpa_sim.c     | 17 +++++++++++++++--
 drivers/vdpa/vdpa_sim/vdpa_sim_blk.c |  6 ++----
 drivers/vdpa/vdpa_sim/vdpa_sim_net.c |  6 ++----
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h
index 144858636c10..acee20faaf6a 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.h
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h
@@ -45,7 +45,7 @@ struct vdpasim_dev_attr {
 	u32 ngroups;
 	u32 nas;
 
-	work_func_t work_fn;
+	void (*work_fn)(struct vdpasim *vdpasim);
 	void (*get_config)(struct vdpasim *vdpasim, void *config);
 	void (*set_config)(struct vdpasim *vdpasim, const void *config);
 	int (*get_stats)(struct vdpasim *vdpasim, u16 idx,
@@ -78,6 +78,7 @@ struct vdpasim {
 
 struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *attr,
 			       const struct vdpa_dev_set_config *config);
+void vdpasim_schedule_work(struct vdpasim *vdpasim);
 
 /* TODO: cross-endian support */
 static inline bool vdpasim_is_little_endian(struct vdpasim *vdpasim)
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 47cdf2a1f5b8..f6329900e61a 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -127,6 +127,13 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim)
 static const struct vdpa_config_ops vdpasim_config_ops;
 static const struct vdpa_config_ops vdpasim_batch_config_ops;
 
+static void vdpasim_work_fn(struct work_struct *work)
+{
+	struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
+
+	vdpasim->dev_attr.work_fn(vdpasim);
+}
+
 struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 			       const struct vdpa_dev_set_config *config)
 {
@@ -163,7 +170,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 
 	vdpasim = vdpa_to_sim(vdpa);
 	vdpasim->dev_attr = *dev_attr;
-	INIT_WORK(&vdpasim->work, dev_attr->work_fn);
+	INIT_WORK(&vdpasim->work, vdpasim_work_fn);
 	spin_lock_init(&vdpasim->lock);
 	spin_lock_init(&vdpasim->iommu_lock);
 
@@ -214,6 +221,12 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 }
 EXPORT_SYMBOL_GPL(vdpasim_create);
 
+void vdpasim_schedule_work(struct vdpasim *vdpasim)
+{
+	schedule_work(&vdpasim->work);
+}
+EXPORT_SYMBOL_GPL(vdpasim_schedule_work);
+
 static int vdpasim_set_vq_address(struct vdpa_device *vdpa, u16 idx,
 				  u64 desc_area, u64 driver_area,
 				  u64 device_area)
@@ -248,7 +261,7 @@ static void vdpasim_kick_vq(struct vdpa_device *vdpa, u16 idx)
 	}
 
 	if (vq->ready)
-		schedule_work(&vdpasim->work);
+		vdpasim_schedule_work(vdpasim);
 }
 
 static void vdpasim_set_vq_cb(struct vdpa_device *vdpa, u16 idx,
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
index 5117959bed8a..eb4897c8541e 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
@@ -11,7 +11,6 @@
 #include <linux/module.h>
 #include <linux/device.h>
 #include <linux/kernel.h>
-#include <linux/sched.h>
 #include <linux/blkdev.h>
 #include <linux/vringh.h>
 #include <linux/vdpa.h>
@@ -286,9 +285,8 @@ static bool vdpasim_blk_handle_req(struct vdpasim *vdpasim,
 	return handled;
 }
 
-static void vdpasim_blk_work(struct work_struct *work)
+static void vdpasim_blk_work(struct vdpasim *vdpasim)
 {
-	struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
 	bool reschedule = false;
 	int i;
 
@@ -326,7 +324,7 @@ static void vdpasim_blk_work(struct work_struct *work)
 	spin_unlock(&vdpasim->lock);
 
 	if (reschedule)
-		schedule_work(&vdpasim->work);
+		vdpasim_schedule_work(vdpasim);
 }
 
 static void vdpasim_blk_get_config(struct vdpasim *vdpasim, void *config)
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
index 862f405362de..e61a9ecbfafe 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
@@ -11,7 +11,6 @@
 #include <linux/module.h>
 #include <linux/device.h>
 #include <linux/kernel.h>
-#include <linux/sched.h>
 #include <linux/etherdevice.h>
 #include <linux/vringh.h>
 #include <linux/vdpa.h>
@@ -192,9 +191,8 @@ static void vdpasim_handle_cvq(struct vdpasim *vdpasim)
 	u64_stats_update_end(&net->cq_stats.syncp);
 }
 
-static void vdpasim_net_work(struct work_struct *work)
+static void vdpasim_net_work(struct vdpasim *vdpasim)
 {
-	struct vdpasim *vdpasim = container_of(work, struct vdpasim, work);
 	struct vdpasim_virtqueue *txq = &vdpasim->vqs[1];
 	struct vdpasim_virtqueue *rxq = &vdpasim->vqs[0];
 	struct vdpasim_net *net = sim_to_net(vdpasim);
@@ -260,7 +258,7 @@ static void vdpasim_net_work(struct vdpasim *vdpasim)
 		vdpasim_net_complete(rxq, write);
 
 		if (tx_pkts > 4) {
-			schedule_work(&vdpasim->work);
+			vdpasim_schedule_work(vdpasim);
 			goto out;
 		}
 	}
-- 
2.39.2
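With this in place, a new simulator device would only provide a work
function and use the exported helper to (re)schedule itself; a
hypothetical sketch (my_sim_work, my_dev_attr, and the more_to_do
condition are invented for illustration):

/* Hypothetical device built on the new interface. */
static void my_sim_work(struct vdpasim *vdpasim)
{
	bool more_to_do = false;

	/* ... process the virtqueues, possibly setting more_to_do ... */

	if (more_to_do)
		vdpasim_schedule_work(vdpasim);	/* the core owns the work item */
}

static struct vdpasim_dev_attr my_dev_attr = {
	.work_fn = my_sim_work,	/* invoked via the core's vdpasim_work_fn() */
};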