Si-Wei Liu
2023-Oct-19 01:10 UTC
[PATCH v3 0/5] vdpa: decouple reset of iotlb mapping from device reset
In order to reduce needlessly high setup and teardown cost of iotlb
mapping during live migration, it's crucial to decouple the vhost-vdpa
iotlb abstraction from the virtio device life cycle, i.e. iotlb mappings
should be left intact across virtio device reset [1]. For it to work,
the on-chip IOMMU parent device could implement a separate .reset_map()
operation callback to restore 1:1 DMA mapping without having to resort
to the .reset() callback, the latter of which is mainly used to reset
virtio device state. This new .reset_map() callback will be invoked only
before the vhost-vdpa driver is to be removed and detached from the vdpa
bus, such that other vdpa bus drivers, e.g. virtio-vdpa, can start with
1:1 DMA mapping when they are attached. For context, those on-chip IOMMU
parent devices create the 1:1 DMA mapping at vdpa device creation, and
they would implicitly destroy the 1:1 mapping when the first .set_map or
.dma_map callback is invoked.

This patchset is rebased on top of the latest vhost tree.

[1] Reducing vdpa migration downtime because of memory pin / maps
https://www.mail-archive.com/qemu-devel at nongnu.org/msg953755.html

---
v3:
- add .reset_map support to vdpa_sim
- introduce module parameter to provide bug-for-bug compatibility with
  older userspace

v2:
- improved commit message to clarify the intended scope of .reset_map API
- improved commit messages to clarify no breakage on older userspace

v1:
- rewrote commit messages to include more detailed description and
  background
- reword to vendor specific IOMMU implementation from on-chip IOMMU
- include parent device backend features to persistent iotlb precondition
- reimplement mlx5_vdpa patch on top of descriptor group series

RFC v3:
- fix missing return due to merge error in patch #4

RFC v2:
- rebased on top of the "[PATCH RFC v2 0/3] vdpa: dedicated descriptor
  table group" series:
  https://lore.kernel.org/virtualization/1694248959-13369-1-git-send-email-si-wei.liu at oracle.com/

---
Si-Wei Liu (5):
  vdpa: introduce .reset_map operation callback
  vhost-vdpa: reset vendor specific mapping to initial state in .release
  vhost-vdpa: introduce IOTLB_PERSIST backend feature bit
  vdpa/mlx5: implement .reset_map driver op
  vdpa_sim: implement .reset_map support

 drivers/vdpa/mlx5/core/mlx5_vdpa.h |  1 +
 drivers/vdpa/mlx5/core/mr.c        | 17 +++++++++
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 26 ++++++++++++--
 drivers/vdpa/vdpa_sim/vdpa_sim.c   | 58 ++++++++++++++++++++++++------
 drivers/vhost/vdpa.c               | 31 ++++++++++++++++
 include/linux/vdpa.h               | 10 ++++++
 include/uapi/linux/vhost_types.h   |  2 ++
 7 files changed, 132 insertions(+), 13 deletions(-)

--
2.39.3
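Put together, the intended life cycle of the parent device's mapping
state looks roughly like this (a summary sketch of the description
above, not taken verbatim from the patches):

  vdpa device creation        -> parent sets up 1:1 DMA mapping
  first .set_map / .dma_map   -> 1:1 mapping implicitly destroyed,
                                 user mappings take over
  virtio device .reset        -> mappings left intact (this series)
  vhost-vdpa driver detach    -> .reset_map() restores the 1:1 mapping
                                 for the next bus driver, e.g. virtio-vdpa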
Si-Wei Liu
2023-Oct-19 01:10 UTC
[PATCH v3 1/5] vdpa: introduce .reset_map operation callback
A device-specific IOMMU parent driver that wishes to see mapping
decoupled from the virtio or vdpa device life cycle (device reset) can
use this callback to restore memory mapping in the device IOMMU to the
initial or default state. The reset of mapping is done on a per address
space basis.

The reason why a separate .reset_map op is introduced is that it allows
a simple on-chip IOMMU model without exposing too much device
implementation detail to the upper vdpa layer. The .dma_map/unmap or
.set_map driver API is meant to be used to manipulate the IOTLB
mappings, and has been abstracted in a way similar to how a real IOMMU
device maps or unmaps pages for certain memory ranges. However, apart
from this there also exist other mapping needs, in which case 1:1
passthrough mapping has to be used by other users (read virtio-vdpa).
To ease parent/vendor driver implementation and to avoid abusing DMA
ops in an unexpected way, these on-chip IOMMU devices can start with
1:1 passthrough mapping mode initially at the time of creation. Then
the .reset_map op can be used to switch the iotlb back to this initial
state without having to expose a complex two-dimensional IOMMU device
model.

The .reset_map op is not a MUST for every parent that implements the
.dma_map or .set_map API, because there could be software vDPA devices
(with use_va=true) that don't have to pin kernel memory, so they don't
care much about high mapping cost during device reset. Such software
devices may also have implemented their own DMA ops, so they don't have
to use .reset_map to achieve a simple IOMMU device model for 1:1
passthrough mapping, either.

Signed-off-by: Si-Wei Liu <si-wei.liu at oracle.com>
Acked-by: Eugenio Pérez <eperezma at redhat.com>
Acked-by: Jason Wang <jasowang at redhat.com>
---
 include/linux/vdpa.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index d376309b99cf..26ae6ae1eac3 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -327,6 +327,15 @@ struct vdpa_map_file {
  *				@iova: iova to be unmapped
  *				@size: size of the area
  *				Returns integer: success (0) or error (< 0)
+ * @reset_map:			Reset device memory mapping to the default
+ *				state (optional)
+ *				Needed for devices that are using device
+ *				specific DMA translation and prefer mapping
+ *				to be decoupled from the virtio life cycle,
+ *				i.e. device .reset op does not reset mapping
+ *				@vdev: vdpa device
+ *				@asid: address space identifier
+ *				Returns integer: success (0) or error (< 0)
  * @get_vq_dma_dev:		Get the dma device for a specific
  *				virtqueue (optional)
  *				@vdev: vdpa device
@@ -405,6 +414,7 @@ struct vdpa_config_ops {
 			  u64 iova, u64 size, u64 pa, u32 perm, void *opaque);
 	int (*dma_unmap)(struct vdpa_device *vdev, unsigned int asid,
 			 u64 iova, u64 size);
+	int (*reset_map)(struct vdpa_device *vdev, unsigned int asid);
 	int (*set_group_asid)(struct vdpa_device *vdev, unsigned int group,
 			      unsigned int asid);
 	struct device *(*get_vq_dma_dev)(struct vdpa_device *vdev, u16 idx);

--
2.39.3
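To make the new op's contract concrete, here is a minimal sketch of how
a hypothetical parent driver might wire it up; the my_* names are
invented for illustration and are not part of this series:

static int my_vdpa_reset_map(struct vdpa_device *vdev, unsigned int asid)
{
	struct my_vdpa_dev *dev = vdpa_to_my_dev(vdev);

	if (asid >= dev->nas)
		return -EINVAL;

	/* Drop whatever the user mapped via .set_map/.dma_map ... */
	my_iommu_clear(dev, asid);

	/* ... and restore the initial 1:1 passthrough (PA == IOVA) state */
	return my_iommu_map_identity(dev, asid);
}

static const struct vdpa_config_ops my_vdpa_ops = {
	/* .set_map or .dma_map/.dma_unmap as before */
	.reset_map = my_vdpa_reset_map,
};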
Si-Wei Liu
2023-Oct-19 01:10 UTC
[PATCH v3 2/5] vhost-vdpa: reset vendor specific mapping to initial state in .release
Devices with on-chip IOMMU or vendor specific IOTLB implementation may
need to restore iotlb mapping to the initial or default state using the
.reset_map op, as it's desirable for some parent devices to manipulate
mappings solely on their own, independent of virtio device state. For
instance, device reset does not cause the mapping to go away on such an
IOTLB model in need of persistent mapping. Before vhost-vdpa goes away,
give them a chance to reset the iotlb back to the initial state in
vhost_vdpa_cleanup().

Signed-off-by: Si-Wei Liu <si-wei.liu at oracle.com>
Acked-by: Eugenio Pérez <eperezma at redhat.com>
---
 drivers/vhost/vdpa.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 851535f57b95..a3f8160c9807 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -131,6 +131,15 @@ static struct vhost_vdpa_as *vhost_vdpa_find_alloc_as(struct vhost_vdpa *v,
 	return vhost_vdpa_alloc_as(v, asid);
 }
 
+static void vhost_vdpa_reset_map(struct vhost_vdpa *v, u32 asid)
+{
+	struct vdpa_device *vdpa = v->vdpa;
+	const struct vdpa_config_ops *ops = vdpa->config;
+
+	if (ops->reset_map)
+		ops->reset_map(vdpa, asid);
+}
+
 static int vhost_vdpa_remove_as(struct vhost_vdpa *v, u32 asid)
 {
 	struct vhost_vdpa_as *as = asid_to_as(v, asid);
@@ -140,6 +149,13 @@ static int vhost_vdpa_remove_as(struct vhost_vdpa *v, u32 asid)
 
 	hlist_del(&as->hash_link);
 	vhost_vdpa_iotlb_unmap(v, &as->iotlb, 0ULL, 0ULL - 1, asid);
+	/*
+	 * Devices with vendor specific IOMMU may need to restore
+	 * iotlb to the initial or default state, which is not done
+	 * through device reset, as the IOTLB mapping manipulation
+	 * could be decoupled from the virtio device life cycle.
+	 */
+	vhost_vdpa_reset_map(v, asid);
 	kfree(as);
 
 	return 0;

--
2.39.3
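For reference, the resulting teardown order is roughly as follows (a
simplified sketch based on the hunks above; the release-to-cleanup step
is existing vhost-vdpa behavior, not introduced by this patch):

  vhost_vdpa_release()
    -> vhost_vdpa_cleanup()
         -> vhost_vdpa_remove_as(v, asid)       /* for each ASID */
              -> vhost_vdpa_iotlb_unmap(...)    /* drop vhost-side iotlb */
              -> vhost_vdpa_reset_map(v, asid)  /* parent restores default map */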
Si-Wei Liu
2023-Oct-19 01:10 UTC
[PATCH v3 3/5] vhost-vdpa: introduce IOTLB_PERSIST backend feature bit
Userspace needs this feature flag to distinguish whether the vhost-vdpa
iotlb in the kernel can be trusted to persist IOTLB mappings across
vDPA reset. Without it, userspace has no way to tell whether it's
running on an older kernel, which could silently drop all iotlb
mappings across vDPA reset, especially with a broken parent driver
implementation of the .reset driver op. The broken driver may
incorrectly drop all mappings of its own as part of .reset, which
inadvertently ends up with corrupted mapping state between vhost-vdpa
userspace and the kernel.

As a workaround, to make the mapping behaviour predictable across
reset, userspace has to proactively remove all mappings before vDPA
reset, and then restore all the mappings afterwards. This workaround is
applied unconditionally on top of all parent drivers today, due to the
parent driver implementation issue and the lack of any means to
differentiate. It has been in QEMU since day one, when the
corresponding vhost-vdpa userspace backend came into the world.

There are three cases where the backend may claim this feature bit:

- parent device that has to work with the platform IOMMU
- parent device with on-chip IOMMU that has the expected .reset_map
  support in the driver
- parent device with a vendor specific IOMMU implementation that
  already has persistent IOTLB mapping and has to specifically declare
  this backend feature

The reason why .reset_map is one of the preconditions for persistent
iotlb is that without it, vhost-vdpa can't switch the iotlb back to the
initial state later on, especially for the on-chip IOMMU case, which
starts with identity mapping at device creation. virtio-vdpa requires
the on-chip IOMMU to perform 1:1 passthrough translation from PA to
IOVA as-is to begin with, and .reset_map is the only means to turn the
iotlb back to identity mapping mode after vhost-vdpa is gone.

The difference in behavior did not matter as QEMU unmaps all the memory
when unregistering the memory listener at vhost_vdpa_dev_start(started
= false), but the backend acknowledging this feature flag allows QEMU
to make sure it is safe to skip this unmap & map during a vhost stop &
start cycle. In that sense, this feature flag is actually a signal for
userspace to know that the driver bug has been solved. Not offering it
indicates that userspace cannot trust the kernel to retain the maps.

Signed-off-by: Si-Wei Liu <si-wei.liu at oracle.com>
Acked-by: Eugenio Pérez <eperezma at redhat.com>
---
 drivers/vhost/vdpa.c             | 15 +++++++++++++++
 include/uapi/linux/vhost_types.h |  2 ++
 2 files changed, 17 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index a3f8160c9807..9202986a7d81 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -438,6 +438,15 @@ static u64 vhost_vdpa_get_backend_features(const struct vhost_vdpa *v)
 	return ops->get_backend_features(vdpa);
 }
 
+static bool vhost_vdpa_has_persistent_map(const struct vhost_vdpa *v)
+{
+	struct vdpa_device *vdpa = v->vdpa;
+	const struct vdpa_config_ops *ops = vdpa->config;
+
+	return (!ops->set_map && !ops->dma_map) || ops->reset_map ||
+	       vhost_vdpa_get_backend_features(v) & BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST);
+}
+
 static long vhost_vdpa_set_features(struct vhost_vdpa *v, u64 __user *featurep)
 {
 	struct vdpa_device *vdpa = v->vdpa;
@@ -725,6 +734,7 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 			return -EFAULT;
 		if (features & ~(VHOST_VDPA_BACKEND_FEATURES |
 				 BIT_ULL(VHOST_BACKEND_F_DESC_ASID) |
+				 BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST) |
 				 BIT_ULL(VHOST_BACKEND_F_SUSPEND) |
 				 BIT_ULL(VHOST_BACKEND_F_RESUME) |
 				 BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)))
@@ -741,6 +751,9 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 		if ((features & BIT_ULL(VHOST_BACKEND_F_DESC_ASID)) &&
 		    !vhost_vdpa_has_desc_group(v))
 			return -EOPNOTSUPP;
+		if ((features & BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST)) &&
+		    !vhost_vdpa_has_persistent_map(v))
+			return -EOPNOTSUPP;
 		vhost_set_backend_features(&v->vdev, features);
 		return 0;
 	}
@@ -796,6 +809,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 			features |= BIT_ULL(VHOST_BACKEND_F_RESUME);
 		if (vhost_vdpa_has_desc_group(v))
 			features |= BIT_ULL(VHOST_BACKEND_F_DESC_ASID);
+		if (vhost_vdpa_has_persistent_map(v))
+			features |= BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST);
 		features |= vhost_vdpa_get_backend_features(v);
 		if (copy_to_user(featurep, &features, sizeof(features)))
 			r = -EFAULT;
diff --git a/include/uapi/linux/vhost_types.h b/include/uapi/linux/vhost_types.h
index 18ad6ae7ab5c..d7656908f730 100644
--- a/include/uapi/linux/vhost_types.h
+++ b/include/uapi/linux/vhost_types.h
@@ -190,5 +190,7 @@ struct vhost_vdpa_iova_range {
  * buffers may reside. Requires VHOST_BACKEND_F_IOTLB_ASID.
  */
 #define VHOST_BACKEND_F_DESC_ASID    0x7
+/* IOTLB doesn't flush memory mapping across device reset */
+#define VHOST_BACKEND_F_IOTLB_PERSIST  0x8
 
 #endif

--
2.39.3
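For completeness, here is a rough userspace sketch (not from this
series) of how a VMM could probe and acknowledge the new bit on a
vhost-vdpa fd before relying on persistent mappings. It assumes a
kernel with this series applied; error handling is trimmed, and a real
VMM such as QEMU would preserve its other negotiated feature bits
rather than setting this one alone:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int ack_iotlb_persist(int vdpa_fd)
{
	uint64_t features;

	if (ioctl(vdpa_fd, VHOST_GET_BACKEND_FEATURES, &features) < 0)
		return -1;
	if (!(features & (1ULL << VHOST_BACKEND_F_IOTLB_PERSIST)))
		return 0;	/* old kernel: keep the unmap/remap workaround */

	/* Acknowledge the bit so both sides agree mappings must persist */
	features = 1ULL << VHOST_BACKEND_F_IOTLB_PERSIST;
	if (ioctl(vdpa_fd, VHOST_SET_BACKEND_FEATURES, &features) < 0)
		return -1;
	return 1;	/* safe to skip unmap & map across vDPA reset */
}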
Si-Wei Liu
2023-Oct-19 01:10 UTC
[PATCH v3 4/5] vdpa/mlx5: implement .reset_map driver op

Since commit 6f5312f80183 ("vdpa/mlx5: Add support for running with
virtio_vdpa"), mlx5_vdpa starts with a preallocated 1:1 DMA MR at
device creation time. This 1:1 DMA MR will be implicitly destroyed
when the first .set_map call is invoked, at which point callers like
vhost-vdpa will start to set up custom mappings. When the .reset
callback is invoked, the custom mappings will be cleared and the 1:1
DMA MR will be re-created.

In order to reduce excessive memory mapping cost in live migration, it
is desirable to decouple the vhost-vdpa IOTLB abstraction from the
virtio device life cycle, i.e. mappings can be kept intact across
virtio device reset. Leverage the .reset_map callback, which is meant
to destroy the regular MR (including the cvq mapping) on the given
ASID and recreate the initial DMA mapping. That way, the device .reset
op runs free from having to maintain and clean up memory mappings by
itself.

Add a module parameter, persist_mapping, to cater to older userspace
that may wish to see the mapping cleared during reset.

Co-developed-by: Dragos Tatulea <dtatulea at nvidia.com>
Signed-off-by: Dragos Tatulea <dtatulea at nvidia.com>
Signed-off-by: Si-Wei Liu <si-wei.liu at oracle.com>
---
 drivers/vdpa/mlx5/core/mlx5_vdpa.h |  1 +
 drivers/vdpa/mlx5/core/mr.c        | 17 +++++++++++++++++
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 26 ++++++++++++++++++++++++--
 3 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
index db988ced5a5d..84547d998bcf 100644
--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
@@ -127,6 +127,7 @@ int mlx5_vdpa_update_cvq_iotlb(struct mlx5_vdpa_dev *mvdev,
 			       struct vhost_iotlb *iotlb,
 			       unsigned int asid);
 int mlx5_vdpa_create_dma_mr(struct mlx5_vdpa_dev *mvdev);
+int mlx5_vdpa_reset_mr(struct mlx5_vdpa_dev *mvdev, unsigned int asid);
 
 #define mlx5_vdpa_warn(__dev, format, ...)                                                        \
 	dev_warn((__dev)->mdev->device, "%s:%d:(pid %d) warning: " format, __func__, __LINE__,     \
diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
index 66530e28f327..2197c46e563a 100644
--- a/drivers/vdpa/mlx5/core/mr.c
+++ b/drivers/vdpa/mlx5/core/mr.c
@@ -645,3 +645,20 @@ int mlx5_vdpa_create_dma_mr(struct mlx5_vdpa_dev *mvdev)
 
 	return mlx5_vdpa_update_cvq_iotlb(mvdev, NULL, 0);
 }
+
+int mlx5_vdpa_reset_mr(struct mlx5_vdpa_dev *mvdev, unsigned int asid)
+{
+	if (asid >= MLX5_VDPA_NUM_AS)
+		return -EINVAL;
+
+	mlx5_vdpa_destroy_mr(mvdev, mvdev->mr[asid]);
+
+	if (asid == 0 && MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
+		if (mlx5_vdpa_create_dma_mr(mvdev))
+			mlx5_vdpa_warn(mvdev, "create DMA MR failed\n");
+	} else {
+		mlx5_vdpa_update_cvq_iotlb(mvdev, NULL, asid);
+	}
+
+	return 0;
+}
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index f4516a2d5bb0..e809ccec6048 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -25,6 +25,11 @@ MODULE_AUTHOR("Eli Cohen <eli at mellanox.com>");
 MODULE_DESCRIPTION("Mellanox VDPA driver");
 MODULE_LICENSE("Dual BSD/GPL");
 
+static bool persist_mapping = true;
+module_param(persist_mapping, bool, 0444);
+MODULE_PARM_DESC(persist_mapping,
+		 "Enable/disable persistent mapping across reset: 1 to keep, 0 to clear");
+
 #define VALID_FEATURES_MASK                                                                        \
 	(BIT_ULL(VIRTIO_NET_F_CSUM) | BIT_ULL(VIRTIO_NET_F_GUEST_CSUM) |                           \
 	 BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS) | BIT_ULL(VIRTIO_NET_F_MTU) | BIT_ULL(VIRTIO_NET_F_MAC) |   \
@@ -2888,7 +2893,8 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
 	unregister_link_notifier(ndev);
 	teardown_driver(ndev);
 	clear_vqs_ready(ndev);
-	mlx5_vdpa_destroy_mr_resources(&ndev->mvdev);
+	if (unlikely(!persist_mapping))
+		mlx5_vdpa_destroy_mr_resources(&ndev->mvdev);
 	ndev->mvdev.status = 0;
 	ndev->mvdev.suspended = false;
 	ndev->cur_num_vqs = 0;
@@ -2899,7 +2905,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
 	init_group_to_asid_map(mvdev);
 	++mvdev->generation;
 
-	if (MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
+	if (unlikely(!persist_mapping) && MLX5_CAP_GEN(mvdev->mdev, umem_uid_0)) {
 		if (mlx5_vdpa_create_dma_mr(mvdev))
 			mlx5_vdpa_warn(mvdev, "create MR failed\n");
 	}
@@ -2987,6 +2993,18 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, unsigned int asid,
 	return err;
 }
 
+static int mlx5_vdpa_reset_map(struct vdpa_device *vdev, unsigned int asid)
+{
+	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+	int err;
+
+	down_write(&ndev->reslock);
+	err = mlx5_vdpa_reset_mr(mvdev, asid);
+	up_write(&ndev->reslock);
+	return err;
+}
+
 static struct device *mlx5_get_vq_dma_dev(struct vdpa_device *vdev, u16 idx)
 {
 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
@@ -3255,6 +3273,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
 	.set_config = mlx5_vdpa_set_config,
 	.get_generation = mlx5_vdpa_get_generation,
 	.set_map = mlx5_vdpa_set_map,
+	.reset_map = mlx5_vdpa_reset_map, /* Op disabled on !persist_mapping */
 	.set_group_asid = mlx5_set_group_asid,
 	.get_vq_dma_dev = mlx5_get_vq_dma_dev,
 	.free = mlx5_vdpa_free,
@@ -3630,6 +3649,9 @@ static int mlx5v_probe(struct auxiliary_device *adev,
 	if (!MLX5_CAP_DEV_VDPA_EMULATION(mdev, desc_group_mkey_supported))
 		mgtdev->vdpa_ops.get_vq_desc_group = NULL;
 
+	if (unlikely(!persist_mapping))
+		mgtdev->vdpa_ops.reset_map = NULL;
+
 	err = vdpa_mgmtdev_register(&mgtdev->mgtdev);
 	if (err)
 		goto reg_err;

--
2.39.3
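The new knob defaults to the persistent behaviour. As a hypothetical
usage example, loading the driver with "modprobe mlx5_vdpa
persist_mapping=0" restores the old clear-on-reset behaviour; in that
case the probe path above also hides the .reset_map op, so
vhost_vdpa_has_persistent_map() fails and the IOTLB_PERSIST backend
feature bit is never offered to userspace.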
Si-Wei Liu
2023-Oct-19 01:10 UTC
[PATCH v3 5/5] vdpa_sim: implement .reset_map support

In order to reduce excessive memory mapping cost in live migration and
VM reboot, it is desirable to decouple the vhost-vdpa IOTLB abstraction
from the virtio device life cycle, i.e. mappings can be kept intact
across virtio device reset. Leverage the .reset_map callback, which is
meant to destroy the iotlb on the given ASID and recreate the 1:1
passthrough/identity mapping. To be consistent, the mapping on device
creation is initialized to passthrough/identity with PA 1:1 mapped as
IOVA. With this, the device .reset op doesn't have to maintain and
clean up memory mappings by itself.

Add a module parameter, iotlb_persist, to cater to older userspace that
may wish to see the mapping cleared during reset.

Signed-off-by: Si-Wei Liu <si-wei.liu at oracle.com>
Tested-by: Stefano Garzarella <sgarzare at redhat.com>
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 58 ++++++++++++++++++++++++++------
 1 file changed, 47 insertions(+), 11 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 76d41058add9..74506636375f 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -40,6 +40,10 @@ static bool use_va = true;
 module_param(use_va, bool, 0444);
 MODULE_PARM_DESC(use_va, "Enable/disable the device's ability to use VA");
 
+static bool iotlb_persist = true;
+module_param(iotlb_persist, bool, 0444);
+MODULE_PARM_DESC(iotlb_persist, "Enable/disable persistent iotlb across reset: 1 to keep maps, 0 to clear");
+
 #define VDPASIM_QUEUE_ALIGN PAGE_SIZE
 #define VDPASIM_QUEUE_MAX 256
 #define VDPASIM_VENDOR_ID 0
@@ -151,11 +155,13 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim)
 					 &vdpasim->iommu_lock);
 	}
 
-	for (i = 0; i < vdpasim->dev_attr.nas; i++) {
-		vhost_iotlb_reset(&vdpasim->iommu[i]);
-		vhost_iotlb_add_range(&vdpasim->iommu[i], 0, ULONG_MAX,
-				      0, VHOST_MAP_RW);
-		vdpasim->iommu_pt[i] = true;
+	if (unlikely(!iotlb_persist)) {
+		for (i = 0; i < vdpasim->dev_attr.nas; i++) {
+			vhost_iotlb_reset(&vdpasim->iommu[i]);
+			vhost_iotlb_add_range(&vdpasim->iommu[i], 0, ULONG_MAX,
+					      0, VHOST_MAP_RW);
+			vdpasim->iommu_pt[i] = true;
+		}
 	}
 
 	vdpasim->running = true;
@@ -166,8 +172,8 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim)
 	++vdpasim->generation;
 }
 
-static const struct vdpa_config_ops vdpasim_config_ops;
-static const struct vdpa_config_ops vdpasim_batch_config_ops;
+static struct vdpa_config_ops vdpasim_config_ops;
+static struct vdpa_config_ops vdpasim_batch_config_ops;
 
 static void vdpasim_work_fn(struct kthread_work *work)
 {
@@ -191,7 +197,7 @@ static void vdpasim_work_fn(struct kthread_work *work)
 struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 			       const struct vdpa_dev_set_config *config)
 {
-	const struct vdpa_config_ops *ops;
+	struct vdpa_config_ops *ops;
 	struct vdpa_device *vdpa;
 	struct vdpasim *vdpasim;
 	struct device *dev;
@@ -213,6 +219,9 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 	else
 		ops = &vdpasim_config_ops;
 
+	if (unlikely(!iotlb_persist))
+		ops->reset_map = NULL;
+
 	vdpa = __vdpa_alloc_device(NULL, ops,
 				   dev_attr->ngroups, dev_attr->nas,
 				   dev_attr->alloc_size,
@@ -259,8 +268,14 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 	if (!vdpasim->iommu_pt)
 		goto err_iommu;
 
-	for (i = 0; i < vdpasim->dev_attr.nas; i++)
+	for (i = 0; i < vdpasim->dev_attr.nas; i++) {
 		vhost_iotlb_init(&vdpasim->iommu[i], max_iotlb_entries, 0);
+		if (likely(iotlb_persist)) {
+			vhost_iotlb_add_range(&vdpasim->iommu[i], 0, ULONG_MAX, 0,
+					      VHOST_MAP_RW);
+			vdpasim->iommu_pt[i] = true;
+		}
+	}
 
 	for (i = 0; i < dev_attr->nvqs; i++)
 		vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0],
@@ -637,6 +652,25 @@ static int vdpasim_set_map(struct vdpa_device *vdpa, unsigned int asid,
 	return ret;
 }
 
+static int vdpasim_reset_map(struct vdpa_device *vdpa, unsigned int asid)
+{
+	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
+	if (asid >= vdpasim->dev_attr.nas)
+		return -EINVAL;
+
+	spin_lock(&vdpasim->iommu_lock);
+	if (vdpasim->iommu_pt[asid])
+		goto out;
+	vhost_iotlb_reset(&vdpasim->iommu[asid]);
+	vhost_iotlb_add_range(&vdpasim->iommu[asid], 0, ULONG_MAX,
+			      0, VHOST_MAP_RW);
+	vdpasim->iommu_pt[asid] = true;
+out:
+	spin_unlock(&vdpasim->iommu_lock);
+	return 0;
+}
+
 static int vdpasim_bind_mm(struct vdpa_device *vdpa, struct mm_struct *mm)
 {
 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
@@ -726,7 +760,7 @@ static void vdpasim_free(struct vdpa_device *vdpa)
 	kfree(vdpasim->config);
 }
 
-static const struct vdpa_config_ops vdpasim_config_ops = {
+static struct vdpa_config_ops vdpasim_config_ops = {
 	.set_vq_address = vdpasim_set_vq_address,
 	.set_vq_num = vdpasim_set_vq_num,
 	.kick_vq = vdpasim_kick_vq,
@@ -759,12 +793,13 @@ static const struct vdpa_config_ops vdpasim_config_ops = {
 	.set_group_asid = vdpasim_set_group_asid,
 	.dma_map = vdpasim_dma_map,
 	.dma_unmap = vdpasim_dma_unmap,
+	.reset_map = vdpasim_reset_map,
 	.bind_mm = vdpasim_bind_mm,
 	.unbind_mm = vdpasim_unbind_mm,
 	.free = vdpasim_free,
 };
 
-static const struct vdpa_config_ops vdpasim_batch_config_ops = {
+static struct vdpa_config_ops vdpasim_batch_config_ops = {
 	.set_vq_address = vdpasim_set_vq_address,
 	.set_vq_num = vdpasim_set_vq_num,
 	.kick_vq = vdpasim_kick_vq,
@@ -796,6 +831,7 @@ static const struct vdpa_config_ops vdpasim_batch_config_ops = {
 	.get_iova_range = vdpasim_get_iova_range,
 	.set_group_asid = vdpasim_set_group_asid,
 	.set_map = vdpasim_set_map,
+	.reset_map = vdpasim_reset_map,
 	.bind_mm = vdpasim_bind_mm,
 	.unbind_mm = vdpasim_unbind_mm,
 	.free = vdpasim_free,

--
2.39.3
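The simulator's compatibility knob mirrors the mlx5 one; as a
hypothetical usage example, loading the core module with "modprobe
vdpa_sim iotlb_persist=0" restores the old clear-on-reset behaviour and
hides the .reset_map op from the vdpa core. Note that
vdpasim_reset_map() above short-circuits when iommu_pt[asid] is already
set, so resetting an ASID that is still in passthrough mode is a no-op.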