search for: iotlb_sync

Displaying 20 results from an estimated 53 matches for "iotlb_sync".

2023 Sep 04
1
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
...+++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index fb73dec5b953..1b7526494490 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -924,6 +924,15 @@ static int viommu_iotlb_sync_map(struct iommu_domain *domain,
>  	return viommu_sync_req(vdomain->viommu);
>  }
>
> +static void viommu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	if (!vdomain->nr_endpoints)
> +		retu...
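The search excerpt cuts the quoted hunk off mid-function. A minimal sketch of the complete callback, consistent with the visible lines; everything past "retu..." is an assumption, not quoted text:

```c
static void viommu_flush_iotlb_all(struct iommu_domain *domain)
{
	struct viommu_domain *vdomain = to_viommu_domain(domain);

	/* No endpoints attached yet: nothing is mapped on the device side. */
	if (!vdomain->nr_endpoints)
		return;

	/*
	 * Assumed continuation of the truncated hunk: push all queued
	 * requests to the host and wait for completion.
	 */
	viommu_sync_req(vdomain->viommu);
}
```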
2020 Apr 14
0
[PATCH v2 32/33] iommu: Remove add_device()/remove_device() code-paths
..._for_each_dev(bus, NULL, NULL, add_iommu_group);
+		if (ret)
+			break;
 	}
 
 	return ret;

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index fea1622408ad..dd076366383f 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -223,8 +223,6 @@ struct iommu_iotlb_gather {
  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
  *              queue
  * @iova_to_phys: translate iova to physical address
- * @add_device: add device to iommu grouping
- * @remove_device: remove device from iommu grouping
  * @probe_device: Add device to iommu driver handling
  * @...
2020 Apr 14
0
[PATCH v2 17/33] iommu/arm-smmu: Convert to probe/release_device() call-backs
...m_smmu_detach_dev(master);
-	iommu_group_remove_device(dev);
-	iommu_device_unlink(&smmu->iommu, dev);
 	arm_smmu_disable_pasid(master);
 	kfree(master);
 	iommu_fwspec_free(dev);
@@ -3138,8 +3120,8 @@ static struct iommu_ops arm_smmu_ops = {
 	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
 	.iotlb_sync		= arm_smmu_iotlb_sync,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
-	.add_device		= arm_smmu_add_device,
-	.remove_device		= arm_smmu_remove_device,
+	.probe_device		= arm_smmu_probe_device,
+	.release_device		= arm_smmu_release_device,
 	.device_group		= arm_smmu_device_group,
 	.domain_get_attr	=...
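The conversion pattern is the same across the drivers in this series: group creation and sysfs linking move out of the driver into the IOMMU core, leaving the new callback to do nothing but resolve the per-device IOMMU instance. A rough sketch of that shape; all example_* names are hypothetical stand-ins, not the actual arm-smmu code:

```c
struct example_iommu {
	struct iommu_device iommu;	/* registered via iommu_device_register() */
};

/* Hypothetical driver-specific lookup, standing in for fwspec handling. */
static struct example_iommu *example_iommu_from_fwspec(struct iommu_fwspec *fwspec);

static struct iommu_device *example_probe_device(struct device *dev)
{
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

	if (!fwspec)
		return ERR_PTR(-ENODEV);

	/*
	 * Note what is *not* here anymore: no iommu_group_get_for_dev()
	 * and no iommu_device_link(). After this series the core performs
	 * both around probe_device().
	 */
	return &example_iommu_from_fwspec(fwspec)->iommu;
}
```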
2023 Sep 06
1
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
...ff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> > > > index fb73dec5b953..1b7526494490 100644
> > > > --- a/drivers/iommu/virtio-iommu.c
> > > > +++ b/drivers/iommu/virtio-iommu.c
> > > > @@ -924,6 +924,15 @@ static int viommu_iotlb_sync_map(struct iommu_domain *domain,
> > > >  	return viommu_sync_req(vdomain->viommu);
> > > >  }
> > > >
> > > > +static void viommu_flush_iotlb_all(struct iommu_domain *domain)
> > > > +{
> > > > +	struct viommu_domain *...
2020 Apr 08
1
[RFC PATCH 17/34] iommu/arm-smmu: Store device instead of group in arm_smmu_s2cr
On 2020-04-08 3:37 pm, Joerg Roedel wrote:
> Hi Robin,
>
> thanks for looking into this.
>
> On Wed, Apr 08, 2020 at 01:09:40PM +0100, Robin Murphy wrote:
>> For a hot-pluggable bus where logical devices may share Stream IDs (like
>> fsl-mc), this could happen:
>>
>> create device A
>> iommu_probe_device(A)
>> iommu_device_group(A)
2018 Dec 13
3
[PATCH v6 0/7] Add virtio-iommu driver
...e unmap visible.
>
> I think that will cost significant performance for both, vfio and
> dma-iommu use-cases which both do (vfio at least to some degree),
> deferred flushing.

We already do deferred flush: UNMAP requests are added to the queue by
iommu_unmap(), and then flushed out by iotlb_sync(). So we switch to the
host only on iotlb_sync(), or when the request queue is full.

> I also wonder whether the protocol should implement a
> protocol version handshake and iommu-feature set queries.

With the virtio transport there is a handshake when the device (IOMMU) is
initialized, thr...
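A rough sketch of that queue/sync split, loosely following the virtio-iommu driver; the request layout comes from the virtio-iommu UAPI header, but helper signatures and struct fields here are illustrative rather than exact:

```c
/* unmap: only queue a request on the virtqueue; no switch to the host yet. */
static size_t viommu_unmap_sketch(struct viommu_domain *vdomain,
				  unsigned long iova, size_t size)
{
	struct virtio_iommu_req_unmap unmap = {
		.head.type	= VIRTIO_IOMMU_T_UNMAP,
		.domain		= cpu_to_le32(vdomain->id),
		.virt_start	= cpu_to_le64(iova),
		.virt_end	= cpu_to_le64(iova + size - 1),
	};

	if (viommu_add_req(vdomain->viommu, &unmap, sizeof(unmap)))
		return 0;	/* failed to queue: report nothing unmapped */

	return size;
}

/* sync: one kick flushes every UNMAP queued since the last sync. */
static void viommu_iotlb_sync_sketch(struct viommu_domain *vdomain)
{
	viommu_sync_req(vdomain->viommu);
}
```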
2020 Aug 18
3
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
...}
 
+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+					unsigned long iova, size_t size,
+					struct page *freelist)
+{
+	struct protection_domain *dom = to_pdomain(domain);
+
+	domain_flush_pages(dom, iova, size);
+	domain_flush_complete(dom);
+}
+
 static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
 				 struct iommu_iotlb_gather *gather)
 {
@@ -2675,6 +2686,7 @@ const struct iommu_ops amd_iommu_ops = {
 	.is_attach_deferred = amd_iommu_is_attach_deferred,
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
 	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
+	.flush_iotlb_range =...
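The driver-side callback above only flushes; the point of passing freelist is ordering, since the page-table pages collected during unmap must not be reused until the ranged flush completes. A caller-side sketch, under the assumption that the unmap path hands back the freed table pages (the exact plumbing in the series differs):

```c
static void unmap_and_flush_sketch(struct iommu_domain *domain,
				   unsigned long iova, size_t size,
				   struct page *freelist)
{
	struct iommu_iotlb_gather gather;
	size_t unmapped;

	iommu_iotlb_gather_init(&gather);

	/* Tear down the mappings without a per-page flush. */
	unmapped = iommu_unmap_fast(domain, iova, size, &gather);

	/*
	 * One ranged IOTLB flush for the whole region; only after it
	 * returns is it safe to free the pages on @freelist.
	 */
	domain->ops->flush_iotlb_range(domain, iova, unmapped, freelist);
}
```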
2020 Apr 14
0
[PATCH v2 20/33] iommu/virtio: Convert to probe/release_device() call-backs
...group_remove_device(dev);
-	iommu_device_unlink(&vdev->viommu->iommu, dev);
 	generic_iommu_put_resv_regions(dev, &vdev->resv_regions);
 	kfree(vdev);
 }
 
@@ -960,8 +939,8 @@ static struct iommu_ops viommu_ops = {
 	.unmap			= viommu_unmap,
 	.iova_to_phys		= viommu_iova_to_phys,
 	.iotlb_sync		= viommu_iotlb_sync,
-	.add_device		= viommu_add_device,
-	.remove_device		= viommu_remove_device,
+	.probe_device		= viommu_probe_device,
+	.release_device		= viommu_release_device,
 	.device_group		= viommu_device_group,
 	.get_resv_regions	= viommu_get_resv_regions,
 	.put_resv_regions	= generi...
2020 Apr 14
0
[PATCH v2 22/33] iommu/mediatek: Convert to probe/release_device() call-backs
...&mtk_iommu_ops)
 		return;
 
-	data = dev_iommu_priv_get(dev);
-	iommu_device_unlink(&data->iommu, dev);
-
-	iommu_group_remove_device(dev);
 	iommu_fwspec_free(dev);
 }
 
@@ -526,8 +514,8 @@ static const struct iommu_ops mtk_iommu_ops = {
 	.flush_iotlb_all = mtk_iommu_flush_iotlb_all,
 	.iotlb_sync	= mtk_iommu_iotlb_sync,
 	.iova_to_phys	= mtk_iommu_iova_to_phys,
-	.add_device	= mtk_iommu_add_device,
-	.remove_device	= mtk_iommu_remove_device,
+	.probe_device	= mtk_iommu_probe_device,
+	.release_device	= mtk_iommu_release_device,
 	.device_group	= mtk_iommu_device_group,
 	.of_xlate	= mtk_iom...
2020 Apr 14
0
[PATCH v2 24/33] iommu/qcom: Convert to probe/release_device() call-backs
...m_iommu = to_iommu(dev);
 
 	if (!qcom_iommu)
 		return;
 
-	iommu_device_unlink(&qcom_iommu->iommu, dev);
-	iommu_group_remove_device(dev);
 	iommu_fwspec_free(dev);
 }
 
@@ -619,8 +609,8 @@ static const struct iommu_ops qcom_iommu_ops = {
 	.flush_iotlb_all = qcom_iommu_flush_iotlb_all,
 	.iotlb_sync	= qcom_iommu_iotlb_sync,
 	.iova_to_phys	= qcom_iommu_iova_to_phys,
-	.add_device	= qcom_iommu_add_device,
-	.remove_device	= qcom_iommu_remove_device,
+	.probe_device	= qcom_iommu_probe_device,
+	.release_device	= qcom_iommu_release_device,
 	.device_group	= generic_device_group,
 	.of_xlate	= qco...
2020 Apr 14
0
[PATCH v2 03/33] iommu/amd: Implement iommu_ops->def_domain_type call-back
...drivers/iommu/amd_iommu.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 20cce366e951..73b4f84cf449 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2661,6 +2661,20 @@ static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
 	amd_iommu_flush_iotlb_all(domain);
 }
 
+static int amd_iommu_def_domain_type(struct device *dev)
+{
+	struct iommu_dev_data *dev_data;
+
+	dev_data = get_dev_data(dev);
+	if (!dev_data)
+		return 0;
+
+	if (dev_data->iommu_v2)
+		return IOMMU_DOMAIN_IDENTITY;
+
+...
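The excerpt truncates just before the function's tail. Presumably it falls through to "no preference", letting the core pick the default domain type, while IOMMU_DOMAIN_IDENTITY makes devices with IOMMUv2 support default to an identity-mapped domain. The whole callback would then read roughly as below; the closing lines are an assumption:

```c
static int amd_iommu_def_domain_type(struct device *dev)
{
	struct iommu_dev_data *dev_data;

	dev_data = get_dev_data(dev);
	if (!dev_data)
		return 0;	/* no opinion; the core picks the default */

	/* Devices with IOMMUv2 capabilities get an identity default domain. */
	if (dev_data->iommu_v2)
		return IOMMU_DOMAIN_IDENTITY;

	/* Assumed tail past the truncated excerpt: fall back to "no opinion". */
	return 0;
}
```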
2023 Sep 04
0
[PATCH 1/2] iommu/virtio: Make use of ops->iotlb_sync_map
Hi Niklas,

Thanks for following up with these patches.

On Fri, Aug 25, 2023 at 05:21:25PM +0200, Niklas Schnelle wrote:
> Pull out the sync operation from viommu_map_pages() by implementing
> ops->iotlb_sync_map. This allows the common IOMMU code to map multiple
> elements of an sg with a single sync (see iommu_map_sg()). Furthermore,
> it is also a requirement for IOMMU_CAP_DEFERRED_FLUSH.
>
> Link: https://lore.kernel.org/lkml/20230726111433.1105665-1-schnelle at linux.ibm.com/
> Sign...
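A sketch of what such an iotlb_sync_map callback looks like, consistent with the hunk header quoted earlier in these results: map_pages() only queues MAP requests, and this hook issues the single sync for the whole scatterlist. The body is reconstructed, not quoted:

```c
static int viommu_iotlb_sync_map(struct iommu_domain *domain,
				 unsigned long iova, size_t size)
{
	struct viommu_domain *vdomain = to_viommu_domain(domain);

	/* May run before any endpoint is attached; nothing to sync then. */
	if (!vdomain->nr_endpoints)
		return 0;

	/* One host round-trip covers all MAP requests queued by map_pages(). */
	return viommu_sync_req(vdomain->viommu);
}
```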
2019 Dec 21
0
[PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
...}
 
+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+					unsigned long iova, size_t size,
+					struct page *freelist)
+{
+	struct protection_domain *dom = to_pdomain(domain);
+
+	domain_flush_pages(dom, iova, size);
+	domain_flush_complete(dom);
+}
+
 static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
 				 struct iommu_iotlb_gather *gather)
 {
@@ -2692,6 +2703,7 @@ const struct iommu_ops amd_iommu_ops = {
 	.is_attach_deferred = amd_iommu_is_attach_deferred,
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
 	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
+	.flush_iotlb_range =...
2020 Apr 14
35
[PATCH v2 00/33] iommu: Move iommu_group setup to IOMMU core code
Hi,

here is the second version of this patch-set. The first version with some
more introductory text can be found here:

	https://lore.kernel.org/lkml/20200407183742.4344-1-joro at 8bytes.org/

Changes v1->v2:

	* Rebased to v5.7-rc1
	* Re-wrote the arm-smmu changes as suggested by Robin Murphy
	* Re-worked the Exynos patches to hopefully not break the driver anymore
	* Fixed a missing...
2020 Aug 17
1
[PATCH 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
...}
 
+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+					unsigned long iova, size_t size,
+					struct page *freelist)
+{
+	struct protection_domain *dom = to_pdomain(domain);
+
+	domain_flush_pages(dom, iova, size);
+	domain_flush_complete(dom);
+}
+
 static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
 				 struct iommu_iotlb_gather *gather)
 {
@@ -2675,6 +2686,7 @@ const struct iommu_ops amd_iommu_ops = {
 	.is_attach_deferred = amd_iommu_is_attach_deferred,
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
 	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
+	.flush_iotlb_range =...
2020 Aug 18
0
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
...ct iommu_domain *domain,
> +					unsigned long iova, size_t size,
> +					struct page *freelist)
> +{
> +	struct protection_domain *dom = to_pdomain(domain);
> +
> +	domain_flush_pages(dom, iova, size);
> +	domain_flush_complete(dom);
> +}
> +
>  static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
>  				 struct iommu_iotlb_gather *gather)
>  {
> @@ -2675,6 +2686,7 @@ const struct iommu_ops amd_iommu_ops = {
>  	.is_attach_deferred = amd_iommu_is_attach_deferred,
>  	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
>  	.flush_iotlb_all = amd_iommu_flush...