Jean-Philippe Brucker
2023-Sep-04 15:34 UTC
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
On Fri, Aug 25, 2023 at 05:21:26PM +0200, Niklas Schnelle wrote:
> Add ops->flush_iotlb_all operation to enable virtio-iommu for the
> dma-iommu deferred flush scheme. This results inn a significant increase

in

> in performance in exchange for a window in which devices can still
> access previously IOMMU mapped memory. To get back to the prior behavior
> iommu.strict=1 may be set on the kernel command line.

Maybe add that it depends on CONFIG_IOMMU_DEFAULT_DMA_{LAZY,STRICT} as
well, because I've seen kernel configs that enable either.

>
> Link: https://lore.kernel.org/lkml/20230802123612.GA6142 at myrica/
> Signed-off-by: Niklas Schnelle <schnelle at linux.ibm.com>
> ---
>  drivers/iommu/virtio-iommu.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index fb73dec5b953..1b7526494490 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -924,6 +924,15 @@ static int viommu_iotlb_sync_map(struct iommu_domain *domain,
>  	return viommu_sync_req(vdomain->viommu);
>  }
>
> +static void viommu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	if (!vdomain->nr_endpoints)
> +		return;

As for patch 1, a NULL check in viommu_sync_req() would allow dropping
this one

Thanks,
Jean

> +	viommu_sync_req(vdomain->viommu);
> +}
> +
>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>  {
>  	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
> @@ -1049,6 +1058,8 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
>  	switch (cap) {
>  	case IOMMU_CAP_CACHE_COHERENCY:
>  		return true;
> +	case IOMMU_CAP_DEFERRED_FLUSH:
> +		return true;
>  	default:
>  		return false;
>  	}
> @@ -1069,6 +1080,7 @@ static struct iommu_ops viommu_ops = {
>  	.map_pages		= viommu_map_pages,
>  	.unmap_pages		= viommu_unmap_pages,
>  	.iova_to_phys		= viommu_iova_to_phys,
> +	.flush_iotlb_all	= viommu_flush_iotlb_all,
>  	.iotlb_sync		= viommu_iotlb_sync,
>  	.iotlb_sync_map		= viommu_iotlb_sync_map,
>  	.free			= viommu_domain_free,
>
> --
> 2.39.2
>
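For illustration, a minimal sketch of the simplification Jean-Philippe suggests: let viommu_sync_req() tolerate a domain that has no endpoints attached yet (at which point vdomain->viommu is still NULL), so callers no longer need their own guard. The body below is paraphrased from drivers/iommu/virtio-iommu.c and the early return is the hypothetical addition; it is not a patch posted in this thread.

/*
 * Sketch only: paraphrased from drivers/iommu/virtio-iommu.c, with the
 * suggested NULL check added so that syncing a domain with no attached
 * endpoints (viommu == NULL) becomes a harmless no-op.
 */
static int viommu_sync_req(struct viommu_dev *viommu)
{
	int ret;
	unsigned long flags;

	/* Suggested addition: no endpoints attached yet, nothing to sync */
	if (!viommu)
		return 0;

	spin_lock_irqsave(&viommu->request_lock, flags);
	ret = __viommu_sync_req(viommu);
	if (ret)
		dev_dbg(viommu->dev, "could not sync requests (%d)\n", ret);
	spin_unlock_irqrestore(&viommu->request_lock, flags);

	return ret;
}

/* The new callback from the patch could then shrink to: */
static void viommu_flush_iotlb_all(struct iommu_domain *domain)
{
	struct viommu_domain *vdomain = to_viommu_domain(domain);

	viommu_sync_req(vdomain->viommu);
}

Centralising the check this way would also cover the similar guard discussed for patch 1's viommu_iotlb_sync_map().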
Robin Murphy
2023-Sep-04 16:33 UTC
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
On 2023-09-04 16:34, Jean-Philippe Brucker wrote:
> On Fri, Aug 25, 2023 at 05:21:26PM +0200, Niklas Schnelle wrote:
>> Add ops->flush_iotlb_all operation to enable virtio-iommu for the
>> dma-iommu deferred flush scheme. This results inn a significant increase
>
> in
>
>> in performance in exchange for a window in which devices can still
>> access previously IOMMU mapped memory. To get back to the prior behavior
>> iommu.strict=1 may be set on the kernel command line.
>
> Maybe add that it depends on CONFIG_IOMMU_DEFAULT_DMA_{LAZY,STRICT} as
> well, because I've seen kernel configs that enable either.

Indeed, I'd be inclined to phrase it in terms of the driver now actually
being able to honour lazy mode when requested (which happens to be the
default on x86), rather than as if it might be some potentially-unexpected
change in behaviour.

Thanks,
Robin.

>> Link: https://lore.kernel.org/lkml/20230802123612.GA6142 at myrica/
>> Signed-off-by: Niklas Schnelle <schnelle at linux.ibm.com>
>> ---
>>  drivers/iommu/virtio-iommu.c | 12 ++++++++++++
>>  1 file changed, 12 insertions(+)
>>
>> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
>> index fb73dec5b953..1b7526494490 100644
>> --- a/drivers/iommu/virtio-iommu.c
>> +++ b/drivers/iommu/virtio-iommu.c
>> @@ -924,6 +924,15 @@ static int viommu_iotlb_sync_map(struct iommu_domain *domain,
>>  	return viommu_sync_req(vdomain->viommu);
>>  }
>>
>> +static void viommu_flush_iotlb_all(struct iommu_domain *domain)
>> +{
>> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
>> +
>> +	if (!vdomain->nr_endpoints)
>> +		return;
>
> As for patch 1, a NULL check in viommu_sync_req() would allow dropping
> this one
>
> Thanks,
> Jean
>
>> +	viommu_sync_req(vdomain->viommu);
>> +}
>> +
>>  static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
>>  {
>>  	struct iommu_resv_region *entry, *new_entry, *msi = NULL;
>> @@ -1049,6 +1058,8 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
>>  	switch (cap) {
>>  	case IOMMU_CAP_CACHE_COHERENCY:
>>  		return true;
>> +	case IOMMU_CAP_DEFERRED_FLUSH:
>> +		return true;
>>  	default:
>>  		return false;
>>  	}
>> @@ -1069,6 +1080,7 @@ static struct iommu_ops viommu_ops = {
>>  	.map_pages		= viommu_map_pages,
>>  	.unmap_pages		= viommu_unmap_pages,
>>  	.iova_to_phys		= viommu_iova_to_phys,
>> +	.flush_iotlb_all	= viommu_flush_iotlb_all,
>>  	.iotlb_sync		= viommu_iotlb_sync,
>>  	.iotlb_sync_map		= viommu_iotlb_sync_map,
>>  	.free			= viommu_domain_free,
>>
>> --
>> 2.39.2
>>
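To make the lazy-vs-strict selection both reviewers describe concrete, here is an illustrative sketch of the decision order. It is pseudocode under assumed names: want_deferred_flush() and iommu_cmdline_strict are hypothetical stand-ins, and the real selection logic in the IOMMU core differs in detail; only ops->capable() and IOMMU_CAP_DEFERRED_FLUSH come from the patch itself.

/*
 * Illustrative sketch only: a simplified model of how the flush mode
 * ends up being chosen, pieced together from the discussion above.
 * iommu_cmdline_strict is a hypothetical stand-in for iommu.strict=
 * parsing; the real logic lives in the IOMMU core.
 */
static int iommu_cmdline_strict = -1;	/* hypothetical: -1 means "not set" */

static bool want_deferred_flush(struct device *dev, const struct iommu_ops *ops)
{
	bool strict;

	/* iommu.strict= on the command line overrides the build-time default */
	if (iommu_cmdline_strict >= 0)
		strict = iommu_cmdline_strict;
	else
		strict = IS_ENABLED(CONFIG_IOMMU_DEFAULT_DMA_STRICT);

	if (strict)
		return false;

	/* Lazy mode also needs the driver to opt in, as viommu_capable() now does */
	return ops->capable && ops->capable(dev, IOMMU_CAP_DEFERRED_FLUSH);
}

Robin's point falls out of the last line: before this patch, virtio-iommu never reported IOMMU_CAP_DEFERRED_FLUSH, so a lazy default (whether from CONFIG_IOMMU_DEFAULT_DMA_LAZY or iommu.strict=0) could not actually take effect for it.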