Jean-Philippe Brucker
2023-Sep-04 15:26 UTC
[PATCH 1/2] iommu/virtio: Make use of ops->iotlb_sync_map
Hi Niklas,

Thanks for following up with these patches.

On Fri, Aug 25, 2023 at 05:21:25PM +0200, Niklas Schnelle wrote:
> Pull out the sync operation from viommu_map_pages() by implementing
> ops->iotlb_sync_map. This allows the common IOMMU code to map multiple
> elements of an sg with a single sync (see iommu_map_sg()). Furthermore,
> it is also a requirement for IOMMU_CAP_DEFERRED_FLUSH.
>
> Link: https://lore.kernel.org/lkml/20230726111433.1105665-1-schnelle@linux.ibm.com/
> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
> ---
>  drivers/iommu/virtio-iommu.c | 15 ++++++++++++++-
>  1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 3551ed057774..fb73dec5b953 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -843,7 +843,7 @@ static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
>  		.flags = cpu_to_le32(flags),
>  	};
>
> -	ret = viommu_send_req_sync(vdomain->viommu, &map, sizeof(map));
> +	ret = viommu_add_req(vdomain->viommu, &map, sizeof(map));
>  	if (ret) {
>  		viommu_del_mappings(vdomain, iova, end);
>  		return ret;
> @@ -909,9 +909,21 @@ static void viommu_iotlb_sync(struct iommu_domain *domain,
>  {
>  	struct viommu_domain *vdomain = to_viommu_domain(domain);
>
> +	if (!vdomain->nr_endpoints)
> +		return;

I was wondering about these nr_endpoints checks, which seemed
unnecessary: if map()/unmap() were called with no attached endpoints,
then no requests were added to the queue, and viommu_sync_req() below is
a nop. But at least viommu_iotlb_sync_map() and viommu_flush_iotlb_all()
need to handle being called before the domain is finalized (for example
by iommu_create_device_direct_mappings()). In that case vdomain->viommu
is NULL, so if you add a NULL check in viommu_sync_req() then you should
be able to drop the nr_endpoints checks in both patches; see the sketch
after the quoted patch.

Thanks,
Jean

>  	viommu_sync_req(vdomain->viommu);
>  }
>
> +static int viommu_iotlb_sync_map(struct iommu_domain *domain,
> +				 unsigned long iova, size_t size)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	if (!vdomain->nr_endpoints)
> +		return 0;
> +	return viommu_sync_req(vdomain->viommu);
> +}
> +
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
>  	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> @@ -1058,6 +1070,7 @@ static struct iommu_ops viommu_ops = {
>  		.unmap_pages	= viommu_unmap_pages,
>  		.iova_to_phys	= viommu_iova_to_phys,
>  		.iotlb_sync	= viommu_iotlb_sync,
> +		.iotlb_sync_map	= viommu_iotlb_sync_map,
>  		.free		= viommu_domain_free,
>  	}
> };
>
> --
> 2.39.2
>
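For illustration, here is a rough sketch of what the suggestion above
could look like. This is not a patch from the thread: the unchanged body
of viommu_sync_req() is paraphrased from the driver as it looked around
this time, and the exact placement and shape of the NULL check are an
assumption.

/*
 * Sketch: tolerate a not-yet-finalized domain (vdomain->viommu == NULL)
 * in viommu_sync_req() itself, so the callers no longer need their
 * nr_endpoints checks.
 */
static int viommu_sync_req(struct viommu_dev *viommu)
{
	int ret;
	unsigned long flags;

	/*
	 * A sync may be requested before the domain is finalized, for
	 * example via iommu_create_device_direct_mappings(). Nothing
	 * has been queued in that case, so there is nothing to sync.
	 */
	if (!viommu)
		return 0;

	spin_lock_irqsave(&viommu->request_lock, flags);
	ret = __viommu_sync_req(viommu);
	if (ret)
		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
	spin_unlock_irqrestore(&viommu->request_lock, flags);

	return ret;
}

With that in place, the new viommu_iotlb_sync_map() from this patch
would reduce to:

static int viommu_iotlb_sync_map(struct iommu_domain *domain,
				 unsigned long iova, size_t size)
{
	struct viommu_domain *vdomain = to_viommu_domain(domain);

	return viommu_sync_req(vdomain->viommu);
}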