Tom Murphy
2019-Dec-21 15:03 UTC
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api.

While converting the driver I exposed a bug in the intel i915 driver
which causes a large number of artifacts on the screen of my laptop.
You can see a picture of it here:
https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg

This issue is most likely in the i915 driver and is most likely caused
by the driver not respecting the return value of the
dma_map_ops::map_sg function. You can see the driver ignoring the
return value here:
https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51

Previously this didn't cause issues because the intel map_sg always
returned the same number of elements as the input scatter gather list,
but with the change to the dma-iommu api this is no longer the case. I
wasn't able to track the bug down to a specific line of code,
unfortunately. Could someone from the intel team look at this? I have
been testing on a lenovo x1 carbon 5th generation. Let me know if
there's any more information you need.

To allow my patch set to be tested I have added a patch (patch 8/8) in
this series to disable combining sg segments in the dma-iommu api,
which works around the bug but doesn't fix the actual problem.

As part of this patch series I copied the intel bounce buffer code to
the dma-iommu path. The addition of the bounce buffer code took me by
surprise. I did most of my development on this patch series before the
bounce buffer code was added, so my reimplementation in the dma-iommu
path is very rushed and not properly tested, but I'm running out of
time to work on this patch set.

On top of that I also didn't port over the intel tracing code from this
commit:
https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
so all the work in that commit is now wasted. The code will need to be
removed and reimplemented in the dma-iommu path. I would like to take
the time to do this but I really don't have the time at the moment and
I want to get these changes out before the iommu code changes any more.

Tom Murphy (8):
  iommu/vt-d: clean up 32bit si_domain assignment
  iommu/vt-d: Use default dma_direct_* mapping functions for direct
    mapped devices
  iommu/vt-d: Remove IOVA handling code from non-dma_ops path
  iommu: Handle freelists when using deferred flushing in iommu drivers
  iommu: Add iommu_dma_free_cpu_cached_iovas function
  iommu: allow the dma-iommu api to use bounce buffers
  iommu/vt-d: Convert intel iommu driver to the iommu ops
  DO NOT MERGE: iommu: disable list appending in dma-iommu

 drivers/iommu/Kconfig           |   1 +
 drivers/iommu/amd_iommu.c       |  14 +-
 drivers/iommu/arm-smmu-v3.c     |   3 +-
 drivers/iommu/arm-smmu.c        |   3 +-
 drivers/iommu/dma-iommu.c       | 183 +++++--
 drivers/iommu/exynos-iommu.c    |   3 +-
 drivers/iommu/intel-iommu.c     | 936 ++++----------------------------
 drivers/iommu/iommu.c           |  39 +-
 drivers/iommu/ipmmu-vmsa.c      |   3 +-
 drivers/iommu/msm_iommu.c       |   3 +-
 drivers/iommu/mtk_iommu.c       |   3 +-
 drivers/iommu/mtk_iommu_v1.c    |   3 +-
 drivers/iommu/omap-iommu.c      |   3 +-
 drivers/iommu/qcom_iommu.c      |   3 +-
 drivers/iommu/rockchip-iommu.c  |   3 +-
 drivers/iommu/s390-iommu.c      |   3 +-
 drivers/iommu/tegra-gart.c      |   3 +-
 drivers/iommu/tegra-smmu.c      |   3 +-
 drivers/iommu/virtio-iommu.c    |   3 +-
 drivers/vfio/vfio_iommu_type1.c |   2 +-
 include/linux/dma-iommu.h       |   3 +
 include/linux/intel-iommu.h     |   1 -
 include/linux/iommu.h           |  32 +-
 23 files changed, 345 insertions(+), 908 deletions(-)

-- 
2.20.1
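As background for the i915 issue above: the DMA API allows map_sg to
merge adjacent entries, so it may legally return fewer DMA segments
than the nents it was given, and callers must iterate over the returned
count. A minimal sketch of the expected pattern (hypothetical consumer
code with an assumed use_segment() helper, not the actual i915 fix):

        struct scatterlist *sg;
        int i, nents_mapped;

        nents_mapped = dma_map_sg(dev, sgt->sgl, sgt->nents,
                                  DMA_BIDIRECTIONAL);
        if (!nents_mapped)
                return -ENOMEM;

        /* walk the nents_mapped merged segments the DMA API returned,
         * not the sgt->nents entries originally passed in */
        for_each_sg(sgt->sgl, sg, nents_mapped, i)
                use_segment(sg_dma_address(sg), sg_dma_len(sg));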
Tom Murphy
2019-Dec-21 15:03 UTC
[PATCH 1/8] iommu/vt-d: clean up 32bit si_domain assignment
In the intel iommu driver, devices which only support 32bit DMA can't
be direct mapped. The implementation of this is awkward: currently we
assign such a device a direct mapped domain and then remove that domain
later and replace it with a domain of type IOMMU_DOMAIN_IDENTITY. We
should just assign it a domain of type IOMMU_DOMAIN_IDENTITY from the
beginning rather than needlessly swapping domains.

Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
 drivers/iommu/intel-iommu.c | 88 +++++++++++++------------------------
 1 file changed, 31 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 0c8d81f56a30..c1ea66467918 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3462,46 +3462,9 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 }
 
 /* Check if the dev needs to go through non-identity map and unmap process.*/
-static bool iommu_need_mapping(struct device *dev)
+static bool iommu_no_mapping(struct device *dev)
 {
-        int ret;
-
-        if (iommu_dummy(dev))
-                return false;
-
-        ret = identity_mapping(dev);
-        if (ret) {
-                u64 dma_mask = *dev->dma_mask;
-
-                if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask)
-                        dma_mask = dev->coherent_dma_mask;
-
-                if (dma_mask >= dma_direct_get_required_mask(dev))
-                        return false;
-
-                /*
-                 * 32 bit DMA is removed from si_domain and fall back to
-                 * non-identity mapping.
-                 */
-                dmar_remove_one_dev_info(dev);
-                ret = iommu_request_dma_domain_for_dev(dev);
-                if (ret) {
-                        struct iommu_domain *domain;
-                        struct dmar_domain *dmar_domain;
-
-                        domain = iommu_get_domain_for_dev(dev);
-                        if (domain) {
-                                dmar_domain = to_dmar_domain(domain);
-                                dmar_domain->flags |= DOMAIN_FLAG_LOSE_CHILDREN;
-                        }
-                        dmar_remove_one_dev_info(dev);
-                        get_private_domain_for_dev(dev);
-                }
-
-                dev_info(dev, "32bit DMA uses non-identity mapping\n");
-        }
-
-        return true;
+        return iommu_dummy(dev) || identity_mapping(dev);
 }
 
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
@@ -3568,20 +3531,22 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page,
                                  enum dma_data_direction dir,
                                  unsigned long attrs)
 {
-        if (iommu_need_mapping(dev))
-                return __intel_map_single(dev, page_to_phys(page) + offset,
-                                size, dir, *dev->dma_mask);
-        return dma_direct_map_page(dev, page, offset, size, dir, attrs);
+        if (iommu_no_mapping(dev))
+                return dma_direct_map_page(dev, page, offset, size, dir, attrs);
+
+        return __intel_map_single(dev, page_to_phys(page) + offset, size, dir,
+                        *dev->dma_mask);
 }
 
 static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
                                      size_t size, enum dma_data_direction dir,
                                      unsigned long attrs)
 {
-        if (iommu_need_mapping(dev))
-                return __intel_map_single(dev, phys_addr, size, dir,
-                                *dev->dma_mask);
-        return dma_direct_map_resource(dev, phys_addr, size, dir, attrs);
+        if (iommu_no_mapping(dev))
+                return dma_direct_map_resource(dev, phys_addr, size, dir,
+                                attrs);
+
+        return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
 }
 
 static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
@@ -3632,16 +3597,16 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
                              size_t size, enum dma_data_direction dir,
                              unsigned long attrs)
 {
-        if (iommu_need_mapping(dev))
-                intel_unmap(dev, dev_addr, size);
-        else
+        if (iommu_no_mapping(dev))
                 dma_direct_unmap_page(dev, dev_addr, size, dir, attrs);
+        else
+                intel_unmap(dev, dev_addr, size);
 }
 
 static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-        if (iommu_need_mapping(dev))
+        if (!iommu_no_mapping(dev))
                 intel_unmap(dev, dev_addr, size);
 }
 
@@ -3652,7 +3617,7 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
         struct page *page = NULL;
         int order;
 
-        if (!iommu_need_mapping(dev))
+        if (iommu_no_mapping(dev))
                 return dma_direct_alloc(dev, size, dma_handle, flags, attrs);
 
         size = PAGE_ALIGN(size);
@@ -3688,7 +3653,7 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
         int order;
         struct page *page = virt_to_page(vaddr);
 
-        if (!iommu_need_mapping(dev))
+        if (iommu_no_mapping(dev))
                 return dma_direct_free(dev, size, vaddr, dma_handle, attrs);
 
         size = PAGE_ALIGN(size);
@@ -3708,7 +3673,7 @@ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
         struct scatterlist *sg;
         int i;
 
-        if (!iommu_need_mapping(dev))
+        if (iommu_no_mapping(dev))
                 return dma_direct_unmap_sg(dev, sglist, nelems, dir, attrs);
 
         for_each_sg(sglist, sg, nelems, i) {
@@ -3734,7 +3699,7 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nelems,
         struct intel_iommu *iommu;
 
         BUG_ON(dir == DMA_NONE);
-        if (!iommu_need_mapping(dev))
+        if (iommu_no_mapping(dev))
                 return dma_direct_map_sg(dev, sglist, nelems, dir, attrs);
 
         domain = deferred_attach_domain(dev);
@@ -3782,7 +3747,7 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nelems,
 
 static u64 intel_get_required_mask(struct device *dev)
 {
-        if (!iommu_need_mapping(dev))
+        if (iommu_no_mapping(dev))
                 return dma_direct_get_required_mask(dev);
         return DMA_BIT_MASK(32);
 }
@@ -5618,9 +5583,13 @@ static int intel_iommu_add_device(struct device *dev)
         struct iommu_domain *domain;
         struct intel_iommu *iommu;
         struct iommu_group *group;
+        u64 dma_mask = *dev->dma_mask;
         u8 bus, devfn;
         int ret;
 
+        if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask)
+                dma_mask = dev->coherent_dma_mask;
+
         iommu = device_to_iommu(dev, &bus, &devfn);
         if (!iommu)
                 return -ENODEV;
@@ -5640,7 +5609,12 @@ static int intel_iommu_add_device(struct device *dev)
                 domain = iommu_get_domain_for_dev(dev);
                 dmar_domain = to_dmar_domain(domain);
                 if (domain->type == IOMMU_DOMAIN_DMA) {
-                        if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY) {
+                        /*
+                         * We check dma_mask >= dma_get_required_mask(dev) because
+                         * 32 bit DMA falls back to non-identity mapping.
+                         */
+                        if (device_def_domain_type(dev) == IOMMU_DOMAIN_IDENTITY &&
+                            dma_mask >= dma_get_required_mask(dev)) {
                                 ret = iommu_request_dm_for_dev(dev);
                                 if (ret) {
                                         dmar_remove_one_dev_info(dev);
-- 
2.20.1
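The check this patch moves to device-add time boils down to a mask
comparison. Roughly (a sketch of the logic, not the exact driver code):

        u64 dma_mask = *dev->dma_mask;

        if (dev->coherent_dma_mask && dev->coherent_dma_mask < dma_mask)
                dma_mask = dev->coherent_dma_mask;

        /* a device can only stay direct mapped (identity) if its DMA
         * mask covers every address it may be handed */
        if (dma_mask < dma_get_required_mask(dev))
                ; /* give it a translated IOMMU_DOMAIN_DMA domain instead */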
Tom Murphy
2019-Dec-21 15:03 UTC
[PATCH 2/8] iommu/vt-d: Use default dma_direct_* mapping functions for direct mapped devices
We should only assign intel_dma_ops to devices which will actually use
the iommu, and let the default dma_direct_* fallback functions handle
all other devices. This won't change any behaviour but will just use
the generic implementations for direct mapped devices rather than
intel-specific ones.

Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
 drivers/iommu/intel-iommu.c | 52 +++++--------------------------------
 1 file changed, 6 insertions(+), 46 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index c1ea66467918..64b1a9793daa 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2794,17 +2794,6 @@ static int __init si_domain_init(int hw)
         return 0;
 }
 
-static int identity_mapping(struct device *dev)
-{
-        struct device_domain_info *info;
-
-        info = dev->archdata.iommu;
-        if (info && info != DUMMY_DEVICE_DOMAIN_INFO && info != DEFER_DEVICE_DOMAIN_INFO)
-                return (info->domain == si_domain);
-
-        return 0;
-}
-
 static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
 {
         struct dmar_domain *ndomain;
@@ -3461,12 +3450,6 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
         return domain;
 }
 
-/* Check if the dev needs to go through non-identity map and unmap process.*/
-static bool iommu_no_mapping(struct device *dev)
-{
-        return iommu_dummy(dev) || identity_mapping(dev);
-}
-
 static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
                                      size_t size, int dir, u64 dma_mask)
 {
@@ -3531,9 +3514,6 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page,
                                  enum dma_data_direction dir,
                                  unsigned long attrs)
 {
-        if (iommu_no_mapping(dev))
-                return dma_direct_map_page(dev, page, offset, size, dir, attrs);
-
         return __intel_map_single(dev, page_to_phys(page) + offset, size, dir,
                         *dev->dma_mask);
 }
@@ -3542,10 +3522,6 @@ static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
                                      size_t size, enum dma_data_direction dir,
                                      unsigned long attrs)
 {
-        if (iommu_no_mapping(dev))
-                return dma_direct_map_resource(dev, phys_addr, size, dir,
-                                attrs);
-
         return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
 }
 
@@ -3597,17 +3573,13 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
                              size_t size, enum dma_data_direction dir,
                              unsigned long attrs)
 {
-        if (iommu_no_mapping(dev))
-                dma_direct_unmap_page(dev, dev_addr, size, dir, attrs);
-        else
-                intel_unmap(dev, dev_addr, size);
+        intel_unmap(dev, dev_addr, size);
 }
 
 static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-        if (!iommu_no_mapping(dev))
-                intel_unmap(dev, dev_addr, size);
+        intel_unmap(dev, dev_addr, size);
 }
 
 static void *intel_alloc_coherent(struct device *dev, size_t size,
@@ -3617,9 +3589,6 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
         struct page *page = NULL;
         int order;
 
-        if (iommu_no_mapping(dev))
-                return dma_direct_alloc(dev, size, dma_handle, flags, attrs);
-
         size = PAGE_ALIGN(size);
         order = get_order(size);
 
@@ -3653,9 +3622,6 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
         int order;
         struct page *page = virt_to_page(vaddr);
 
-        if (iommu_no_mapping(dev))
-                return dma_direct_free(dev, size, vaddr, dma_handle, attrs);
-
         size = PAGE_ALIGN(size);
         order = get_order(size);
 
@@ -3673,9 +3639,6 @@ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
         struct scatterlist *sg;
         int i;
 
-        if (iommu_no_mapping(dev))
-                return dma_direct_unmap_sg(dev, sglist, nelems, dir, attrs);
-
         for_each_sg(sglist, sg, nelems, i) {
                 nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg));
         }
@@ -3699,8 +3662,6 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nelems,
         struct intel_iommu *iommu;
 
         BUG_ON(dir == DMA_NONE);
-        if (iommu_no_mapping(dev))
-                return dma_direct_map_sg(dev, sglist, nelems, dir, attrs);
 
         domain = deferred_attach_domain(dev);
         if (!domain)
@@ -3747,8 +3708,6 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nelems,
 
 static u64 intel_get_required_mask(struct device *dev)
 {
-        if (iommu_no_mapping(dev))
-                return dma_direct_get_required_mask(dev);
         return DMA_BIT_MASK(32);
 }
 
@@ -5014,7 +4973,6 @@ int __init intel_iommu_init(void)
         if (!has_untrusted_dev() || intel_no_bounce)
                 swiotlb = 0;
 #endif
-        dma_ops = &intel_dma_ops;
 
         init_iommu_pm_ops();
 
@@ -5623,6 +5581,8 @@ static int intel_iommu_add_device(struct device *dev)
                                 dev_info(dev,
                                          "Device uses a private identity domain.\n");
                         }
+                } else {
+                        dev->dma_ops = &intel_dma_ops;
                 }
         } else {
                 if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) {
@@ -5639,6 +5599,7 @@ static int intel_iommu_add_device(struct device *dev)
                                 dev_info(dev,
                                          "Device uses a private dma domain.\n");
                         }
+                        dev->dma_ops = &intel_dma_ops;
                 }
         }
 
@@ -5665,8 +5626,7 @@ static void intel_iommu_remove_device(struct device *dev)
 
         iommu_device_unlink(&iommu->iommu, dev);
 
-        if (device_needs_bounce(dev))
-                set_dma_ops(dev, NULL);
+        set_dma_ops(dev, NULL);
 }
 
 static void intel_iommu_get_resv_regions(struct device *device,
-- 
2.20.1
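This works because the dma-mapping core already dispatches per device:
when dev->dma_ops is unset, it falls through to the dma_direct_* path.
Simplified from include/linux/dma-mapping.h of this era:

        static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
        {
                if (dev->dma_ops)
                        /* e.g. intel_dma_ops for translated devices */
                        return dev->dma_ops;
                /* NULL here means the core uses dma_direct_* directly */
                return get_arch_dma_ops(dev->bus);
        }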
Tom Murphy
2019-Dec-21 15:03 UTC
[PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path
Remove all IOVA handling code from the non-dma_ops path in the intel
iommu driver.

There's no need for the non-dma_ops path to keep track of IOVAs. The
whole point of the non-dma_ops path is that it allows the IOVAs to be
handled separately. The IOVA handling code removed in this patch is
pointless.

Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
 drivers/iommu/intel-iommu.c | 89 ++++++++++++++-----------------------
 1 file changed, 33 insertions(+), 56 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 64b1a9793daa..8d72ea0fb843 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1908,7 +1908,8 @@ static void domain_exit(struct dmar_domain *domain)
         domain_remove_dev_info(domain);
 
         /* destroy iovas */
-        put_iova_domain(&domain->iovad);
+        if (domain->domain.type == IOMMU_DOMAIN_DMA)
+                put_iova_domain(&domain->iovad);
 
         if (domain->pgd) {
                 struct page *freelist;
@@ -2671,19 +2672,9 @@ static struct dmar_domain *set_domain_for_dev(struct device *dev,
 }
 
 static int iommu_domain_identity_map(struct dmar_domain *domain,
-                                     unsigned long long start,
-                                     unsigned long long end)
+                                     unsigned long first_vpfn,
+                                     unsigned long last_vpfn)
 {
-        unsigned long first_vpfn = start >> VTD_PAGE_SHIFT;
-        unsigned long last_vpfn = end >> VTD_PAGE_SHIFT;
-
-        if (!reserve_iova(&domain->iovad, dma_to_mm_pfn(first_vpfn),
-                          dma_to_mm_pfn(last_vpfn))) {
-                pr_err("Reserving iova failed\n");
-                return -ENOMEM;
-        }
-
-        pr_debug("Mapping reserved region %llx-%llx\n", start, end);
         /*
          * RMRR range might have overlap with physical memory range,
          * clear it first
@@ -2760,7 +2751,8 @@ static int __init si_domain_init(int hw)
 
                 for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
                         ret = iommu_domain_identity_map(si_domain,
-                                        PFN_PHYS(start_pfn), PFN_PHYS(end_pfn));
+                                        mm_to_dma_pfn(start_pfn),
+                                        mm_to_dma_pfn(end_pfn));
                         if (ret)
                                 return ret;
                 }
@@ -4593,58 +4585,37 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
                                        unsigned long val, void *v)
 {
         struct memory_notify *mhp = v;
-        unsigned long long start, end;
-        unsigned long start_vpfn, last_vpfn;
+        unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
+        unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn +
+                        mhp->nr_pages - 1);
 
         switch (val) {
         case MEM_GOING_ONLINE:
-                start = mhp->start_pfn << PAGE_SHIFT;
-                end = ((mhp->start_pfn + mhp->nr_pages) << PAGE_SHIFT) - 1;
-                if (iommu_domain_identity_map(si_domain, start, end)) {
-                        pr_warn("Failed to build identity map for [%llx-%llx]\n",
-                                start, end);
+                if (iommu_domain_identity_map(si_domain, start_vpfn,
+                                        last_vpfn)) {
+                        pr_warn("Failed to build identity map for [%lx-%lx]\n",
+                                start_vpfn, last_vpfn);
                         return NOTIFY_BAD;
                 }
                 break;
 
         case MEM_OFFLINE:
         case MEM_CANCEL_ONLINE:
-                start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
-                last_vpfn = mm_to_dma_pfn(mhp->start_pfn + mhp->nr_pages - 1);
-                while (start_vpfn <= last_vpfn) {
-                        struct iova *iova;
+                {
                         struct dmar_drhd_unit *drhd;
                         struct intel_iommu *iommu;
                         struct page *freelist;
 
-                        iova = find_iova(&si_domain->iovad, start_vpfn);
-                        if (iova == NULL) {
-                                pr_debug("Failed get IOVA for PFN %lx\n",
-                                         start_vpfn);
-                                break;
-                        }
-
-                        iova = split_and_remove_iova(&si_domain->iovad, iova,
-                                                     start_vpfn, last_vpfn);
-                        if (iova == NULL) {
-                                pr_warn("Failed to split IOVA PFN [%lx-%lx]\n",
-                                        start_vpfn, last_vpfn);
-                                return NOTIFY_BAD;
-                        }
-
-                        freelist = domain_unmap(si_domain, iova->pfn_lo,
-                                               iova->pfn_hi);
+                        freelist = domain_unmap(si_domain, start_vpfn,
+                                        last_vpfn);
 
                         rcu_read_lock();
                         for_each_active_iommu(iommu, drhd)
                                 iommu_flush_iotlb_psi(iommu, si_domain,
-                                        iova->pfn_lo, iova_size(iova),
+                                        start_vpfn, mhp->nr_pages,
                                         !freelist, 0);
                         rcu_read_unlock();
                         dma_free_pagelist(freelist);
-
-                        start_vpfn = iova->pfn_hi + 1;
-                        free_iova_mem(iova);
                 }
                 break;
         }
@@ -4672,8 +4643,9 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
 
                 for (did = 0; did < cap_ndoms(iommu->cap); did++) {
                         domain = get_iommu_domain(iommu, (u16)did);
-                        if (!domain)
+                        if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA)
                                 continue;
+
                         free_cpu_cached_iovas(cpu, &domain->iovad);
                 }
         }
@@ -5095,9 +5067,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 {
         int adjust_width;
 
-        init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
-        domain_reserve_special_ranges(domain);
-
         /* calculate AGAW */
         domain->gaw = guest_width;
         adjust_width = guestwidth_to_adjustwidth(guest_width);
@@ -5116,6 +5085,18 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
         return 0;
 }
 
+static void intel_init_iova_domain(struct dmar_domain *dmar_domain)
+{
+        init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
+        copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad);
+
+        if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova,
+                                iova_entry_free)) {
+                pr_warn("iova flush queue initialization failed\n");
+                intel_iommu_strict = 1;
+        }
+}
+
 static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 {
         struct dmar_domain *dmar_domain;
@@ -5136,12 +5117,8 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
                         return NULL;
                 }
 
-                if (type == IOMMU_DOMAIN_DMA &&
-                    init_iova_flush_queue(&dmar_domain->iovad,
-                                          iommu_flush_iova, iova_entry_free)) {
-                        pr_warn("iova flush queue initialization failed\n");
-                        intel_iommu_strict = 1;
-                }
+                if (type == IOMMU_DOMAIN_DMA)
+                        intel_init_iova_domain(dmar_domain);
 
                 domain_update_iommu_cap(dmar_domain);
 
-- 
2.20.1
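The switch from byte addresses to pfns above leans on the driver's
existing pfn helpers; for reference, the mm-to-VT-d pfn conversion is
just a shift, as defined in intel-iommu.c:

        /* VT-d pages are 4KiB (VTD_PAGE_SHIFT == 12); an mm page may
         * be larger, so one mm pfn can span several VT-d pfns */
        static inline unsigned long mm_to_dma_pfn(unsigned long mm_pfn)
        {
                return mm_pfn << (PAGE_SHIFT - VTD_PAGE_SHIFT);
        }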
Tom Murphy
2019-Dec-21 15:03 UTC
[PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
Allow iommu_unmap_fast() to return newly freed page table pages, and
pass the freelist to queue_iova() in the dma-iommu ops path. This is
useful for iommu drivers (in this case the intel iommu driver) which
need to wait for the IOTLB to be flushed before newly freed/unmapped
page table pages can be released. This way we can still batch IOTLB
flushes while handling the freelists.

Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
 drivers/iommu/amd_iommu.c       | 14 ++++++++-
 drivers/iommu/arm-smmu-v3.c     |  3 +-
 drivers/iommu/arm-smmu.c        |  3 +-
 drivers/iommu/dma-iommu.c       | 45 ++++++++++++++++++++++-------
 drivers/iommu/exynos-iommu.c    |  3 +-
 drivers/iommu/intel-iommu.c     | 51 +++++++++++++++++++++------------
 drivers/iommu/iommu.c           | 29 ++++++++++++++-----
 drivers/iommu/ipmmu-vmsa.c      |  3 +-
 drivers/iommu/msm_iommu.c       |  3 +-
 drivers/iommu/mtk_iommu.c       |  3 +-
 drivers/iommu/mtk_iommu_v1.c    |  3 +-
 drivers/iommu/omap-iommu.c      |  3 +-
 drivers/iommu/qcom_iommu.c      |  3 +-
 drivers/iommu/rockchip-iommu.c  |  3 +-
 drivers/iommu/s390-iommu.c      |  3 +-
 drivers/iommu/tegra-gart.c      |  3 +-
 drivers/iommu/tegra-smmu.c      |  3 +-
 drivers/iommu/virtio-iommu.c    |  3 +-
 drivers/vfio/vfio_iommu_type1.c |  2 +-
 include/linux/iommu.h           | 25 ++++++++++++----
 20 files changed, 151 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index bd25674ee4db..e8a4c0842624 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2542,7 +2542,8 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 
 static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
                               size_t page_size,
-                              struct iommu_iotlb_gather *gather)
+                              struct iommu_iotlb_gather *gather,
+                              struct page **freelist)
 {
         struct protection_domain *domain = to_pdomain(dom);
 
@@ -2668,6 +2669,16 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
         spin_unlock_irqrestore(&dom->lock, flags);
 }
 
+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+                                        unsigned long iova, size_t size,
+                                        struct page *freelist)
+{
+        struct protection_domain *dom = to_pdomain(domain);
+
+        domain_flush_pages(dom, iova, size);
+        domain_flush_complete(dom);
+}
+
 static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
                                  struct iommu_iotlb_gather *gather)
 {
@@ -2692,6 +2703,7 @@ const struct iommu_ops amd_iommu_ops = {
         .is_attach_deferred = amd_iommu_is_attach_deferred,
         .pgsize_bitmap = AMD_IOMMU_PGSIZES,
         .flush_iotlb_all = amd_iommu_flush_iotlb_all,
+        .flush_iotlb_range = amd_iommu_flush_iotlb_range,
         .iotlb_sync = amd_iommu_iotlb_sync,
 };
 
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index effe72eb89e7..a27d4bf4492c 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -2459,7 +2459,8 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-                             size_t size, struct iommu_iotlb_gather *gather)
+                             size_t size, struct iommu_iotlb_gather *gather,
+                             struct page **freelist)
 {
         struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
         struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 4f1a350d9529..ea1ab3387a07 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1205,7 +1205,8 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-                             size_t size, struct iommu_iotlb_gather *gather)
+                             size_t size, struct iommu_iotlb_gather *gather,
+                             struct page **freelist)
 {
         struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
         struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 0cc702a70a96..df28facdfb8b 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,19 @@ struct iommu_dma_cookie {
         struct iommu_domain             *fq_domain;
 };
 
+static void iommu_dma_entry_dtor(unsigned long data)
+{
+        struct page *freelist = (struct page *)data;
+
+        while (freelist != NULL) {
+                unsigned long p = (unsigned long)page_address(freelist);
+
+                freelist = freelist->freelist;
+                free_page(p);
+        }
+}
+
+
 static inline size_t cookie_msi_granule(struct iommu_dma_cookie *cookie)
 {
         if (cookie->type == IOMMU_DMA_IOVA_COOKIE)
@@ -345,7 +358,8 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
         if (!cookie->fq_domain && !iommu_domain_get_attr(domain,
                         DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) {
                 cookie->fq_domain = domain;
-                init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all, NULL);
+                init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
+                                iommu_dma_entry_dtor);
         }
 
         if (!dev)
@@ -439,7 +453,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 }
 
 static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
-                dma_addr_t iova, size_t size)
+                dma_addr_t iova, size_t size, struct page *freelist)
 {
         struct iova_domain *iovad = &cookie->iovad;
 
@@ -448,7 +462,8 @@ static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
                 cookie->msi_iova -= size;
         else if (cookie->fq_domain)     /* non-strict mode */
                 queue_iova(iovad, iova_pfn(iovad, iova),
-                                size >> iova_shift(iovad), 0);
+                                size >> iova_shift(iovad),
+                                (unsigned long) freelist);
         else
                 free_iova_fast(iovad, iova_pfn(iovad, iova),
                                 size >> iova_shift(iovad));
@@ -462,18 +477,26 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
         struct iova_domain *iovad = &cookie->iovad;
         size_t iova_off = iova_offset(iovad, dma_addr);
         struct iommu_iotlb_gather iotlb_gather;
+        struct page *freelist = NULL;
         size_t unmapped;
 
         dma_addr -= iova_off;
         size = iova_align(iovad, size + iova_off);
         iommu_iotlb_gather_init(&iotlb_gather);
 
-        unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
+        unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather,
+                        &freelist);
         WARN_ON(unmapped != size);
 
-        if (!cookie->fq_domain)
-                iommu_tlb_sync(domain, &iotlb_gather);
-        iommu_dma_free_iova(cookie, dma_addr, size);
+        if (!cookie->fq_domain) {
+                if (domain->ops->flush_iotlb_range)
+                        domain->ops->flush_iotlb_range(domain, dma_addr, size,
+                                        freelist);
+                else
+                        iommu_tlb_sync(domain, &iotlb_gather);
+        }
+
+        iommu_dma_free_iova(cookie, dma_addr, size, freelist);
 }
 
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
@@ -495,7 +518,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
                 return DMA_MAPPING_ERROR;
 
         if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
-                iommu_dma_free_iova(cookie, iova, size);
+                iommu_dma_free_iova(cookie, iova, size, NULL);
                 return DMA_MAPPING_ERROR;
         }
         return iova + iova_off;
@@ -650,7 +673,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
 out_free_sg:
         sg_free_table(&sgt);
 out_free_iova:
-        iommu_dma_free_iova(cookie, iova, size);
+        iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_pages:
         __iommu_dma_free_pages(pages, count);
         return NULL;
@@ -901,7 +924,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
         return __finalise_sg(dev, sg, nents, iova);
 
 out_free_iova:
-        iommu_dma_free_iova(cookie, iova, iova_len);
+        iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
         __invalidate_sg(sg, nents);
         return 0;
@@ -1194,7 +1217,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
         return msi_page;
 
 out_free_iova:
-        iommu_dma_free_iova(cookie, iova, size);
+        iommu_dma_free_iova(cookie, iova, size, NULL);
 out_free_page:
         kfree(msi_page);
         return NULL;
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 186ff5cc975c..e6456eb8ac4d 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -1129,7 +1129,8 @@ static void exynos_iommu_tlb_invalidate_entry(struct exynos_iommu_domain *domain,
 
 static size_t exynos_iommu_unmap(struct iommu_domain *iommu_domain,
                                  unsigned long l_iova, size_t size,
-                                 struct iommu_iotlb_gather *gather)
+                                 struct iommu_iotlb_gather *gather,
+                                 struct page **freelist)
 {
         struct exynos_iommu_domain *domain = to_exynos_domain(iommu_domain);
         sysmmu_iova_t iova = (sysmmu_iova_t)l_iova;
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 8d72ea0fb843..675ca2aa6e20 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1145,9 +1145,9 @@ static struct page *dma_pte_clear_level(struct dmar_domain *domain, int level,
    pages can only be freed after the IOTLB flush has been done. */
 static struct page *domain_unmap(struct dmar_domain *domain,
                                  unsigned long start_pfn,
-                                 unsigned long last_pfn)
+                                 unsigned long last_pfn,
+                                 struct page *freelist)
 {
-        struct page *freelist;
 
         BUG_ON(!domain_pfn_supported(domain, start_pfn));
         BUG_ON(!domain_pfn_supported(domain, last_pfn));
@@ -1155,7 +1155,8 @@ static struct page *domain_unmap(struct dmar_domain *domain,
 
         /* we don't need lock here; nobody else touches the iova range */
         freelist = dma_pte_clear_level(domain, agaw_to_level(domain->agaw),
-                                       domain->pgd, 0, start_pfn, last_pfn, NULL);
+                                       domain->pgd, 0, start_pfn, last_pfn,
+                                       freelist);
 
         /* free pgd */
         if (start_pfn == 0 && last_pfn == DOMAIN_MAX_PFN(domain->gaw)) {
@@ -1914,7 +1915,8 @@ static void domain_exit(struct dmar_domain *domain)
         if (domain->pgd) {
                 struct page *freelist;
 
-                freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
+                freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw),
+                                NULL);
 
                 dma_free_pagelist(freelist);
         }
@@ -3541,7 +3543,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
         if (dev_is_pci(dev))
                 pdev = to_pci_dev(dev);
 
-        freelist = domain_unmap(domain, start_pfn, last_pfn);
+        freelist = domain_unmap(domain, start_pfn, last_pfn, NULL);
         if (intel_iommu_strict || (pdev && pdev->untrusted) ||
             !has_iova_flush_queue(&domain->iovad)) {
                 iommu_flush_iotlb_psi(iommu, domain, start_pfn,
@@ -4607,7 +4609,7 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
                         struct page *freelist;
 
                         freelist = domain_unmap(si_domain, start_vpfn,
-                                        last_vpfn);
+                                        last_vpfn, NULL);
 
                         rcu_read_lock();
                         for_each_active_iommu(iommu, drhd)
@@ -5412,13 +5414,12 @@ static int intel_iommu_map(struct iommu_domain *domain,
 
 static size_t intel_iommu_unmap(struct iommu_domain *domain,
                                 unsigned long iova, size_t size,
-                                struct iommu_iotlb_gather *gather)
+                                struct iommu_iotlb_gather *gather,
+                                struct page **freelist)
 {
         struct dmar_domain *dmar_domain = to_dmar_domain(domain);
-        struct page *freelist = NULL;
         unsigned long start_pfn, last_pfn;
-        unsigned int npages;
-        int iommu_id, level = 0;
+        int level = 0;
 
         /* Cope with horrid API which requires us to unmap more than the
            size argument if it happens to be a large-page mapping. */
@@ -5432,20 +5433,33 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
         start_pfn = iova >> VTD_PAGE_SHIFT;
         last_pfn = (iova + size - 1) >> VTD_PAGE_SHIFT;
 
-        freelist = domain_unmap(dmar_domain, start_pfn, last_pfn);
+        *freelist = domain_unmap(dmar_domain, start_pfn, last_pfn, *freelist);
 
-        npages = last_pfn - start_pfn + 1;
+        if (dmar_domain->max_addr == iova + size)
+                dmar_domain->max_addr = iova;
+
+        return size;
+}
+
+static void intel_iommu_flush_iotlb_range(struct iommu_domain *domain,
+                unsigned long iova, size_t size,
+                struct page *freelist)
+{
+        struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+        unsigned long start_pfn, last_pfn;
+        unsigned long iova_pfn = IOVA_PFN(iova);
+        unsigned long nrpages;
+        int iommu_id;
+
+        nrpages = aligned_nrpages(iova, size);
+        start_pfn = mm_to_dma_pfn(iova_pfn);
+        last_pfn = start_pfn + nrpages - 1;
 
         for_each_domain_iommu(iommu_id, dmar_domain)
                 iommu_flush_iotlb_psi(g_iommus[iommu_id], dmar_domain,
-                                      start_pfn, npages, !freelist, 0);
+                                      start_pfn, nrpages, !freelist, 0);
 
         dma_free_pagelist(freelist);
-
-        if (dmar_domain->max_addr == iova + size)
-                dmar_domain->max_addr = iova;
-
-        return size;
 }
 
 static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
@@ -5902,6 +5916,7 @@ const struct iommu_ops intel_iommu_ops = {
         .aux_get_pasid  = intel_iommu_aux_get_pasid,
         .map            = intel_iommu_map,
         .unmap          = intel_iommu_unmap,
+        .flush_iotlb_range = intel_iommu_flush_iotlb_range,
         .iova_to_phys   = intel_iommu_iova_to_phys,
         .add_device     = intel_iommu_add_device,
         .remove_device  = intel_iommu_remove_device,
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index db7bfd4f2d20..cec728f40d9c 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -667,7 +667,7 @@ static int iommu_group_create_direct_mappings(struct iommu_group *group,
 
         }
 
-        iommu_flush_tlb_all(domain);
+        iommu_flush_iotlb_all(domain);
 
 out:
         iommu_put_resv_regions(dev, &mappings);
@@ -1961,11 +1961,13 @@ EXPORT_SYMBOL_GPL(iommu_map_atomic);
 
 static size_t __iommu_unmap(struct iommu_domain *domain,
                             unsigned long iova, size_t size,
-                            struct iommu_iotlb_gather *iotlb_gather)
+                            struct iommu_iotlb_gather *iotlb_gather,
+                            struct page **freelist)
 {
         const struct iommu_ops *ops = domain->ops;
         size_t unmapped_page, unmapped = 0;
         unsigned long orig_iova = iova;
+        struct page *freelist_head = NULL;
         unsigned int min_pagesz;
 
         if (unlikely(ops->unmap == NULL ||
@@ -1998,7 +2000,8 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
         while (unmapped < size) {
                 size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
 
-                unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
+                unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather,
+                                &freelist_head);
                 if (!unmapped_page)
                         break;
 
@@ -2009,6 +2012,9 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
                 unmapped += unmapped_page;
         }
 
+        if (freelist)
+                *freelist = freelist_head;
+
         trace_unmap(orig_iova, size, unmapped);
         return unmapped;
 }
@@ -2016,12 +2022,20 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
 size_t iommu_unmap(struct iommu_domain *domain,
                    unsigned long iova, size_t size)
 {
+        const struct iommu_ops *ops = domain->ops;
         struct iommu_iotlb_gather iotlb_gather;
+        struct page *freelist;
         size_t ret;
 
         iommu_iotlb_gather_init(&iotlb_gather);
-        ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
-        iommu_tlb_sync(domain, &iotlb_gather);
+        ret = __iommu_unmap(domain, iova, size, &iotlb_gather,
+                        &freelist);
+
+        if (ops->flush_iotlb_range)
+                ops->flush_iotlb_range(domain, iova, ret,
+                                freelist);
+        else
+                iommu_tlb_sync(domain, &iotlb_gather);
 
         return ret;
 }
@@ -2029,9 +2043,10 @@ EXPORT_SYMBOL_GPL(iommu_unmap);
 
 size_t iommu_unmap_fast(struct iommu_domain *domain,
                         unsigned long iova, size_t size,
-                        struct iommu_iotlb_gather *iotlb_gather)
+                        struct iommu_iotlb_gather *iotlb_gather,
+                        struct page **freelist)
 {
-        return __iommu_unmap(domain, iova, size, iotlb_gather);
+        return __iommu_unmap(domain, iova, size, iotlb_gather, freelist);
 }
 EXPORT_SYMBOL_GPL(iommu_unmap_fast);
 
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index d02edd2751f3..63bbee653859 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -693,7 +693,8 @@ static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
 }
 
 static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
-                          size_t size, struct iommu_iotlb_gather *gather)
+                          size_t size, struct iommu_iotlb_gather *gather,
+                          struct page **freelist)
 {
         struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
 
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index 93f14bca26ee..66d1587ab714 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -518,7 +518,8 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
                               size_t len, struct iommu_iotlb_gather *gather)
-                              size_t len, struct iommu_iotlb_gather *gather)
+                              size_t len, struct iommu_iotlb_gather *gather,
+                              struct page **freelist)
 {
         struct msm_priv *priv = to_msm_priv(domain);
         unsigned long flags;
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 6fc1f5ecf91e..6bd9f39bb259 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -402,7 +402,8 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
 
 static size_t mtk_iommu_unmap(struct iommu_domain *domain,
                               unsigned long iova, size_t size,
-                              struct iommu_iotlb_gather *gather)
+                              struct iommu_iotlb_gather *gather,
+                              struct page **freelist)
 {
         struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 
diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index e93b94ecac45..f94d225c3404 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -325,7 +325,8 @@ static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
 
 static size_t mtk_iommu_unmap(struct iommu_domain *domain,
                               unsigned long iova, size_t size,
-                              struct iommu_iotlb_gather *gather)
+                              struct iommu_iotlb_gather *gather,
+                              struct page **freelist)
 {
         struct mtk_iommu_domain *dom = to_mtk_domain(domain);
         unsigned long flags;
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index be551cc34be4..c5e012267f2b 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -1383,7 +1383,8 @@ static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
 }
 
 static size_t omap_iommu_unmap(struct iommu_domain *domain, unsigned long da,
-                               size_t size, struct iommu_iotlb_gather *gather)
+                               size_t size, struct iommu_iotlb_gather *gather,
+                               struct page **freelist)
 {
         struct omap_iommu_domain *omap_domain = to_omap_domain(domain);
         struct device *dev = omap_domain->dev;
diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
index 52f38292df5b..99ebf34e50be 100644
--- a/drivers/iommu/qcom_iommu.c
+++ b/drivers/iommu/qcom_iommu.c
@@ -440,7 +440,8 @@ static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-                               size_t size, struct iommu_iotlb_gather *gather)
+                               size_t size, struct iommu_iotlb_gather *gather,
+                               struct page **freelist)
 {
         size_t ret;
         unsigned long flags;
diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index b33cdd5aad81..ec16e01c376a 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -795,7 +795,8 @@ static int rk_iommu_map(struct iommu_domain *domain, unsigned long _iova,
 }
 
 static size_t rk_iommu_unmap(struct iommu_domain *domain, unsigned long _iova,
-                             size_t size, struct iommu_iotlb_gather *gather)
+                             size_t size, struct iommu_iotlb_gather *gather,
+                             struct page **freelist)
 {
         struct rk_iommu_domain *rk_domain = to_rk_domain(domain);
         unsigned long flags;
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 1137f3ddcb85..d69d7cf4dee3 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -315,7 +315,8 @@ static phys_addr_t s390_iommu_iova_to_phys(struct iommu_domain *domain,
 
 static size_t s390_iommu_unmap(struct iommu_domain *domain,
                                unsigned long iova, size_t size,
-                               struct iommu_iotlb_gather *gather)
+                               struct iommu_iotlb_gather *gather,
+                               struct page **freelist)
 {
         struct s390_domain *s390_domain = to_s390_domain(domain);
         int flags = ZPCI_PTE_INVALID;
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index 3fb7ba72507d..68e7eee9172f 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -207,7 +207,8 @@ static inline int __gart_iommu_unmap(struct gart_device *gart,
 }
 
 static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-                               size_t bytes, struct iommu_iotlb_gather *gather)
+                               size_t bytes, struct iommu_iotlb_gather *gather,
+                               struct page **freelist)
 {
         struct gart_device *gart = gart_handle;
         int err;
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 63a147b623e6..0c5e5f2c3c7d 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -686,7 +686,8 @@ static int tegra_smmu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t tegra_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
-                               size_t size, struct iommu_iotlb_gather *gather)
+                               size_t size, struct iommu_iotlb_gather *gather,
+                               struct page **freelist)
 {
         struct tegra_smmu_as *as = to_smmu_as(domain);
         dma_addr_t pte_dma;
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 315c7cc4f99d..e74baab27b61 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -750,7 +750,8 @@ static int viommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t viommu_unmap(struct iommu_domain *domain, unsigned long iova,
-                           size_t size, struct iommu_iotlb_gather *gather)
+                           size_t size, struct iommu_iotlb_gather *gather,
+                           struct page **freelist)
 {
         int ret = 0;
         size_t unmapped;
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2ada8e6cdb88..d24ea1181c03 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -685,7 +685,7 @@ static size_t unmap_unpin_fast(struct vfio_domain *domain,
 
         if (entry) {
                 unmapped = iommu_unmap_fast(domain->domain, *iova, len,
-                                            iotlb_gather);
+                                            iotlb_gather, NULL);
 
                 if (!unmapped) {
                         kfree(entry);
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index f2223cbb5fd5..61cac25410b5 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -219,6 +219,7 @@ struct iommu_iotlb_gather {
  * @map: map a physically contiguous memory region to an iommu domain
  * @unmap: unmap a physically contiguous memory region from an iommu domain
  * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
+ * @flush_iotlb_range: Flush given iova range of hardware TLBs for this domain
  * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
  *            queue
@@ -262,8 +263,12 @@ struct iommu_ops {
         int (*map)(struct iommu_domain *domain, unsigned long iova,
                    phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
         size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
-                     size_t size, struct iommu_iotlb_gather *iotlb_gather);
+                     size_t size, struct iommu_iotlb_gather *iotlb_gather,
+                     struct page **freelist);
         void (*flush_iotlb_all)(struct iommu_domain *domain);
+        void (*flush_iotlb_range)(struct iommu_domain *domain,
+                                  unsigned long iova, size_t size,
+                                  struct page *freelist);
         void (*iotlb_sync_map)(struct iommu_domain *domain);
         void (*iotlb_sync)(struct iommu_domain *domain,
                            struct iommu_iotlb_gather *iotlb_gather);
@@ -444,7 +449,8 @@ extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
                           size_t size);
 extern size_t iommu_unmap_fast(struct iommu_domain *domain,
                                unsigned long iova, size_t size,
-                               struct iommu_iotlb_gather *iotlb_gather);
+                               struct iommu_iotlb_gather *iotlb_gather,
+                               struct page **freelist);
 extern size_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
                            struct scatterlist *sg,unsigned int nents, int prot);
 extern size_t iommu_map_sg_atomic(struct iommu_domain *domain,
@@ -518,12 +524,20 @@ extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr);
 extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
                               unsigned long iova, int flags);
 
-static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
+static inline void iommu_flush_iotlb_all(struct iommu_domain *domain)
 {
         if (domain->ops->flush_iotlb_all)
                 domain->ops->flush_iotlb_all(domain);
 }
 
+static inline void flush_iotlb_range(struct iommu_domain *domain,
+                                     unsigned long iova, size_t size,
+                                     struct page *freelist)
+{
+        if (domain->ops->flush_iotlb_range)
+                domain->ops->flush_iotlb_range(domain, iova, size, freelist);
+}
+
 static inline void iommu_tlb_sync(struct iommu_domain *domain,
                                   struct iommu_iotlb_gather *iotlb_gather)
 {
@@ -699,7 +713,8 @@ static inline size_t iommu_unmap(struct iommu_domain *domain,
 
 static inline size_t iommu_unmap_fast(struct iommu_domain *domain,
                                       unsigned long iova, int gfp_order,
-                                      struct iommu_iotlb_gather *iotlb_gather)
+                                      struct iommu_iotlb_gather *iotlb_gather,
+                                      struct page **freelist)
 {
         return 0;
 }
@@ -718,7 +733,7 @@ static inline size_t iommu_map_sg_atomic(struct iommu_domain *domain,
         return 0;
 }
 
-static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
+static inline void iommu_flush_iotlb_all(struct iommu_domain *domain)
 {
 }
 
-- 
2.20.1
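Taken together, the new contract looks like this from a caller's point
of view (a sketch under the assumptions of this patch, not code from
the series): the unmap hands back the detached page-table pages chained
through page->freelist, and they may only be released once the IOTLB
flush has completed.

        struct iommu_iotlb_gather gather;
        struct page *freelist = NULL;

        iommu_iotlb_gather_init(&gather);
        iommu_unmap_fast(domain, iova, size, &gather, &freelist);

        /* flush first, then free the detached page-table pages */
        if (domain->ops->flush_iotlb_range)
                domain->ops->flush_iotlb_range(domain, iova, size, freelist);
        else
                iommu_tlb_sync(domain, &gather);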
Tom Murphy
2019-Dec-21 15:03 UTC
[PATCH 5/8] iommu: Add iommu_dma_free_cpu_cached_iovas function to dma-iommu ops
Add an iommu_dma_free_cpu_cached_iovas() function to allow drivers
which use the dma-iommu ops to free cached cpu iovas.

Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
 drivers/iommu/dma-iommu.c | 9 +++++++++
 include/linux/dma-iommu.h | 3 +++
 2 files changed, 12 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index df28facdfb8b..4eac3cd35443 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -50,6 +50,15 @@ struct iommu_dma_cookie {
         struct iommu_domain             *fq_domain;
 };
 
+void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
+                struct iommu_domain *domain)
+{
+        struct iommu_dma_cookie *cookie = domain->iova_cookie;
+        struct iova_domain *iovad = &cookie->iovad;
+
+        free_cpu_cached_iovas(cpu, iovad);
+}
+
 static void iommu_dma_entry_dtor(unsigned long data)
 {
         struct page *freelist = (struct page *)data;
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index 2112f21f73d8..316d22a4a860 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -37,6 +37,9 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc,
 
 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
 
+void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
+                struct iommu_domain *domain);
+
 #else /* CONFIG_IOMMU_DMA */
 
 struct iommu_domain;
-- 
2.20.1
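A sketch of the intended caller (hypothetical, mirroring how the intel
driver frees its per-cpu IOVA caches from a CPU-hotplug dead callback;
dom here is an assumed dma-iommu managed IOMMU_DOMAIN_DMA domain):

        static int iommu_cpu_dead(unsigned int cpu)
        {
                /* drop the dead cpu's cached IOVA ranges back into the
                 * shared iova rcache so they can be reused elsewhere */
                iommu_dma_free_cpu_cached_iovas(cpu, dom);
                return 0;
        }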
Tom Murphy
2019-Dec-21 15:03 UTC
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
Allow the dma-iommu api to use bounce buffers for untrusted devices.
This is a copy of the intel bounce buffer code.

Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
 drivers/iommu/dma-iommu.c | 93 ++++++++++++++++++++++++++++++++-------
 drivers/iommu/iommu.c     | 10 +++++
 include/linux/iommu.h     |  9 +++-
 3 files changed, 95 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4eac3cd35443..cf778db7d84d 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,9 +20,11 @@
 #include <linux/irq.h>
 #include <linux/mm.h>
 #include <linux/pci.h>
+#include <linux/swiotlb.h>
 #include <linux/scatterlist.h>
 #include <linux/vmalloc.h>
 #include <linux/crash_dump.h>
+#include <linux/dma-direct.h>
 
 struct iommu_dma_msi_page {
         struct list_head        list;
@@ -505,29 +507,89 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
                         iommu_tlb_sync(domain, &iotlb_gather);
         }
 
         iommu_dma_free_iova(cookie, dma_addr, size, freelist);
 }
 
+static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
+                size_t size, enum dma_data_direction dir,
+                unsigned long attrs)
+{
+        struct iommu_domain *domain = iommu_get_dma_domain(dev);
+        struct iommu_dma_cookie *cookie = domain->iova_cookie;
+        struct iova_domain *iovad = &cookie->iovad;
+        size_t iova_off = iova_offset(iovad, dma_addr);
+        size_t aligned_size = iova_align(iovad, size + iova_off);
+        phys_addr_t phys;
+
+        phys = iommu_iova_to_phys(domain, dma_addr);
+        if (WARN_ON(!phys))
+                return;
+
+        __iommu_dma_unmap(dev, dma_addr, size);
+
+#ifdef CONFIG_SWIOTLB
+        if (unlikely(is_swiotlb_buffer(phys)))
+                swiotlb_tbl_unmap_single(dev, phys, size,
+                                aligned_size, dir, attrs);
+#endif
+}
+
 static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
-                size_t size, int prot, dma_addr_t dma_mask)
+                size_t org_size, dma_addr_t dma_mask, bool coherent,
+                enum dma_data_direction dir, unsigned long attrs)
 {
+        int prot = dma_info_to_prot(dir, coherent, attrs);
         struct iommu_domain *domain = iommu_get_dma_domain(dev);
         struct iommu_dma_cookie *cookie = domain->iova_cookie;
         struct iova_domain *iovad = &cookie->iovad;
         size_t iova_off = iova_offset(iovad, phys);
+        size_t aligned_size = iova_align(iovad, org_size + iova_off);
         dma_addr_t iova;
 
         if (unlikely(iommu_dma_deferred_attach(dev, domain)))
                 return DMA_MAPPING_ERROR;
 
-        size = iova_align(iovad, size + iova_off);
+#ifdef CONFIG_SWIOTLB
+        /*
+         * If both the physical buffer start address and size are
+         * page aligned, we don't need to use a bounce page.
+         */
+        if (iommu_needs_bounce_buffer(dev)
+                        && !iova_offset(iovad, phys | org_size)) {
+                phys = swiotlb_tbl_map_single(dev,
+                                __phys_to_dma(dev, io_tlb_start),
+                                phys, org_size, aligned_size, dir, attrs);
+
+                if (phys == DMA_MAPPING_ERROR)
+                        return DMA_MAPPING_ERROR;
+
+                /* Cleanup the padding area. */
+                void *padding_start = phys_to_virt(phys);
+                size_t padding_size = aligned_size;
+
+                if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+                    (dir == DMA_TO_DEVICE ||
+                     dir == DMA_BIDIRECTIONAL)) {
+                        padding_start += org_size;
+                        padding_size -= org_size;
+                }
 
-        iova = iommu_dma_alloc_iova(domain, size, dma_mask, dev);
+                memset(padding_start, 0, padding_size);
+        }
+#endif
+
+        iova = iommu_dma_alloc_iova(domain, aligned_size, dma_mask, dev);
         if (!iova)
                 return DMA_MAPPING_ERROR;
 
-        if (iommu_map_atomic(domain, iova, phys - iova_off, size, prot)) {
-                iommu_dma_free_iova(cookie, iova, size, NULL);
+        if (iommu_map_atomic(domain, iova, phys - iova_off, aligned_size,
+                        prot)) {
+
+                if (unlikely(is_swiotlb_buffer(phys)))
+                        swiotlb_tbl_unmap_single(dev, phys, aligned_size,
+                                        aligned_size, dir, attrs);
+                iommu_dma_free_iova(cookie, iova, aligned_size, NULL);
                 return DMA_MAPPING_ERROR;
         }
         return iova + iova_off;
@@ -761,10 +823,10 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 {
         phys_addr_t phys = page_to_phys(page) + offset;
         bool coherent = dev_is_dma_coherent(dev);
-        int prot = dma_info_to_prot(dir, coherent, attrs);
         dma_addr_t dma_handle;
 
-        dma_handle = __iommu_dma_map(dev, phys, size, prot, dma_get_mask(dev));
+        dma_handle = __iommu_dma_map(dev, phys, size, dma_get_mask(dev),
+                        coherent, dir, attrs);
         if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
             dma_handle != DMA_MAPPING_ERROR)
                 arch_sync_dma_for_device(phys, size, dir);
@@ -776,7 +838,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 {
         if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                 iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
-        __iommu_dma_unmap(dev, dma_handle, size);
+        __iommu_dma_unmap_swiotlb(dev, dma_handle, size, dir, attrs);
 }
 
 /*
@@ -960,21 +1022,20 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 sg = tmp;
         }
         end = sg_dma_address(sg) + sg_dma_len(sg);
-        __iommu_dma_unmap(dev, start, end - start);
+        __iommu_dma_unmap_swiotlb(dev, start, end - start, dir, attrs);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
                 size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-        return __iommu_dma_map(dev, phys, size,
-                        dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
-                        dma_get_mask(dev));
+        return __iommu_dma_map(dev, phys, size, dma_get_mask(dev), false, dir,
+                        attrs);
 }
 
 static void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
                 size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-        __iommu_dma_unmap(dev, handle, size);
+        __iommu_dma_unmap_swiotlb(dev, handle, size, dir, attrs);
 }
 
 static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr)
@@ -1056,7 +1117,6 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
                 dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
 {
         bool coherent = dev_is_dma_coherent(dev);
-        int ioprot = dma_info_to_prot(DMA_BIDIRECTIONAL, coherent, attrs);
         struct page *page = NULL;
         void *cpu_addr;
 
@@ -1074,8 +1134,9 @@ static void *iommu_dma_alloc(struct device *dev, size_t size,
         if (!cpu_addr)
                 return NULL;
 
-        *handle = __iommu_dma_map(dev, page_to_phys(page), size, ioprot,
-                        dev->coherent_dma_mask);
+        *handle = __iommu_dma_map(dev, page_to_phys(page), size,
+                        dev->coherent_dma_mask, coherent, DMA_BIDIRECTIONAL,
+                        attrs);
         if (*handle == DMA_MAPPING_ERROR) {
                 __iommu_dma_free(dev, size, cpu_addr);
                 return NULL;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index cec728f40d9c..e5653cb20c83 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2236,6 +2236,16 @@ void iommu_get_resv_regions(struct device *dev, struct list_head *list)
                 ops->get_resv_regions(dev, list);
 }
 
+int iommu_needs_bounce_buffer(struct device *dev)
+{
+        const struct iommu_ops *ops = dev->bus->iommu_ops;
+
+        if (ops && ops->needs_bounce_buffer)
+                return ops->needs_bounce_buffer(dev);
+
+        return 0;
+}
+
 void iommu_put_resv_regions(struct device *dev, struct list_head *list)
 {
         const struct iommu_ops *ops = dev->bus->iommu_ops;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 61cac25410b5..d377ffa362a7 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -280,6 +280,7 @@ struct iommu_ops {
                                  enum iommu_attr attr, void *data);
         int (*domain_set_attr)(struct iommu_domain *domain,
                                enum iommu_attr attr, void *data);
+        int (*needs_bounce_buffer)(struct device *dev);
 
         /* Request/Free a list of reserved regions for a device */
         void (*get_resv_regions)(struct device *dev, struct list_head *list);
@@ -460,6 +461,7 @@ extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova);
 extern void iommu_set_fault_handler(struct iommu_domain *domain,
                         iommu_fault_handler_t handler, void *token);
 
+extern int iommu_needs_bounce_buffer(struct device *dev);
 extern void iommu_get_resv_regions(struct device *dev, struct list_head *list);
 extern void iommu_put_resv_regions(struct device *dev, struct list_head *list);
 extern int iommu_request_dm_for_dev(struct device *dev);
@@ -530,7 +532,7 @@ static inline void iommu_flush_iotlb_all(struct iommu_domain *domain)
                 domain->ops->flush_iotlb_all(domain);
 }
 
-static inline void flush_iotlb_range(struct iommu_domain *domain,
+static inline void iommu_flush_iotlb_range(struct iommu_domain *domain,
                                      unsigned long iova, size_t size,
                                      struct page *freelist)
 {
@@ -764,6 +766,11 @@ static inline void iommu_set_fault_handler(struct iommu_domain *domain,
 {
 }
 
+static inline int iommu_needs_bounce_buffer(struct device *dev)
+{
+        return 0;
+}
+
 static inline void iommu_get_resv_regions(struct device *dev,
                                         struct list_head *list)
 {
-- 
2.20.1
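The intent of the intel code this copies is that an untrusted device's
buffer is bounced through swiotlb whenever its start or size is not
page aligned, since a sub-page mapping would otherwise expose adjacent
memory through the same IOMMU page. A sketch of that intended check
(assuming 4KiB IOMMU pages; use_bounce_buffer() is a hypothetical
helper standing in for the swiotlb_tbl_map_single() call):

        /* bounce only when the buffer doesn't fill its IOMMU pages
         * exactly; a page-aligned buffer already maps without exposing
         * neighbouring data to the device */
        if (iommu_needs_bounce_buffer(dev) &&
            !IS_ALIGNED(phys | size, PAGE_SIZE))
                use_bounce_buffer();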
Tom Murphy
2019-Dec-21 15:03 UTC
[PATCH 7/8] iommu/vt-d: Convert intel iommu driver to the iommu ops
Convert the intel iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the intel iommu driver.

Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
 drivers/iommu/Kconfig       |   1 +
 drivers/iommu/intel-iommu.c | 742 +++--------------------------------
 include/linux/intel-iommu.h |   1 -
 3 files changed, 55 insertions(+), 689 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 0b9d78a0f3ac..4126bb2794c7 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -188,6 +188,7 @@ config INTEL_IOMMU
 	select NEED_DMA_MAP_STATE
 	select DMAR_TABLE
 	select SWIOTLB
+	select IOMMU_DMA
 	help
 	  DMA remapping (DMAR) devices support enables independent address
 	  translations for Direct Memory Access (DMA) from devices.
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 675ca2aa6e20..e673e1ee9288 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -31,6 +31,7 @@
 #include <linux/io.h>
 #include <linux/iova.h>
 #include <linux/iommu.h>
+#include <linux/dma-iommu.h>
 #include <linux/intel-iommu.h>
 #include <linux/syscore_ops.h>
 #include <linux/tboot.h>
@@ -41,7 +42,6 @@
 #include <linux/dma-direct.h>
 #include <linux/crash_dump.h>
 #include <linux/numa.h>
-#include <linux/swiotlb.h>
 #include <asm/irq_remapping.h>
 #include <asm/cacheflush.h>
 #include <asm/iommu.h>
@@ -380,9 +380,6 @@ EXPORT_SYMBOL_GPL(intel_iommu_gfx_mapped);
 static DEFINE_SPINLOCK(device_domain_lock);
 static LIST_HEAD(device_domain_list);
 
-#define device_needs_bounce(d) (!intel_no_bounce && dev_is_pci(d) && \
-				to_pci_dev(d)->untrusted)
-
 /*
  * Iterate over elements in device_domain_list and call the specified
  * callback @fn against each element.
@@ -1180,13 +1177,6 @@ static void dma_free_pagelist(struct page *freelist)
 	}
 }
 
-static void iova_entry_free(unsigned long data)
-{
-	struct page *freelist = (struct page *)data;
-
-	dma_free_pagelist(freelist);
-}
-
 /* iommu handling */
 static int iommu_alloc_root_entry(struct intel_iommu *iommu)
 {
@@ -1530,16 +1520,14 @@ static inline void __mapping_notify_one(struct intel_iommu *iommu,
 		iommu_flush_write_buffer(iommu);
 }
 
-static void iommu_flush_iova(struct iova_domain *iovad)
+static void intel_flush_iotlb_all(struct iommu_domain *domain)
 {
-	struct dmar_domain *domain;
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
 	int idx;
 
-	domain = container_of(iovad, struct dmar_domain, iovad);
-
-	for_each_domain_iommu(idx, domain) {
+	for_each_domain_iommu(idx, dmar_domain) {
 		struct intel_iommu *iommu = g_iommus[idx];
-		u16 did = domain->iommu_did[iommu->seq_id];
+		u16 did = dmar_domain->iommu_did[iommu->seq_id];
 
 		iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);
 
@@ -1784,53 +1772,6 @@ static int domain_detach_iommu(struct dmar_domain *domain,
 	return count;
 }
 
-static struct iova_domain reserved_iova_list;
-static struct lock_class_key reserved_rbtree_key;
-
-static int dmar_init_reserved_ranges(void)
-{
-	struct pci_dev *pdev = NULL;
-	struct iova *iova;
-	int i;
-
-	init_iova_domain(&reserved_iova_list, VTD_PAGE_SIZE, IOVA_START_PFN);
-
-	lockdep_set_class(&reserved_iova_list.iova_rbtree_lock,
-			  &reserved_rbtree_key);
-
-	/* IOAPIC ranges shouldn't be accessed by DMA */
-	iova = reserve_iova(&reserved_iova_list, IOVA_PFN(IOAPIC_RANGE_START),
-			    IOVA_PFN(IOAPIC_RANGE_END));
-	if (!iova) {
-		pr_err("Reserve IOAPIC range failed\n");
-		return -ENODEV;
-	}
-
-	/* Reserve all PCI MMIO to avoid peer-to-peer access */
-	for_each_pci_dev(pdev) {
-		struct resource *r;
-
-		for (i = 0; i < PCI_NUM_RESOURCES; i++) {
-			r = &pdev->resource[i];
-			if (!r->flags || !(r->flags & IORESOURCE_MEM))
-				continue;
-			iova = reserve_iova(&reserved_iova_list,
-					    IOVA_PFN(r->start),
-					    IOVA_PFN(r->end));
-			if (!iova) {
-				pci_err(pdev, "Reserve iova for %pR failed\n", r);
-				return -ENODEV;
-			}
-		}
-	}
-	return 0;
-}
-
-static void domain_reserve_special_ranges(struct dmar_domain *domain)
-{
-	copy_reserved_iova(&reserved_iova_list, &domain->iovad);
-}
-
 static inline int guestwidth_to_adjustwidth(int gaw)
 {
 	int agaw;
@@ -1850,16 +1791,11 @@ static int domain_init(struct dmar_domain *domain, struct intel_iommu *iommu,
 {
 	int adjust_width, agaw;
 	unsigned long sagaw;
-	int err;
-
-	init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
-
-	err = init_iova_flush_queue(&domain->iovad,
-				    iommu_flush_iova, iova_entry_free);
-	if (err)
-		return err;
+	int ret;
 
-	domain_reserve_special_ranges(domain);
+	ret = iommu_get_dma_cookie(&domain->domain);
+	if (ret)
+		return ret;
 
 	/* calculate AGAW */
 	if (guest_width > cap_mgaw(iommu->cap))
@@ -1910,7 +1846,7 @@ static void domain_exit(struct dmar_domain *domain)
 
 	/* destroy iovas */
 	if (domain->domain.type == IOMMU_DOMAIN_DMA)
-		put_iova_domain(&domain->iovad);
+		iommu_put_dma_cookie(&domain->domain);
 
 	if (domain->pgd) {
 		struct page *freelist;
@@ -2439,20 +2375,6 @@ static struct dmar_domain *find_domain(struct device *dev)
 	return NULL;
 }
 
-static struct dmar_domain *deferred_attach_domain(struct device *dev)
-{
-	if (unlikely(dev->archdata.iommu == DEFER_DEVICE_DOMAIN_INFO)) {
-		struct iommu_domain *domain;
-
-		dev->archdata.iommu = NULL;
-		domain = iommu_get_domain_for_dev(dev);
-		if (domain)
-			intel_iommu_attach_device(domain, dev);
-	}
-
-	return find_domain(dev);
-}
-
 static inline struct device_domain_info *
 dmar_search_domain_by_dev_info(int segment, int bus, int devfn)
 {
@@ -3363,39 +3285,6 @@ static int __init init_dmars(void)
 	return ret;
 }
 
-/* This takes a number of _MM_ pages, not VTD pages */
-static unsigned long intel_alloc_iova(struct device *dev,
-				      struct dmar_domain *domain,
-				      unsigned long nrpages, uint64_t dma_mask)
-{
-	unsigned long iova_pfn;
-
-	/* Restrict dma_mask to the width that the iommu can handle */
-	dma_mask = min_t(uint64_t, DOMAIN_MAX_ADDR(domain->gaw), dma_mask);
-	/* Ensure we reserve the whole size-aligned region */
-	nrpages = __roundup_pow_of_two(nrpages);
-
-	if (!dmar_forcedac && dma_mask > DMA_BIT_MASK(32)) {
-		/*
-		 * First try to allocate an io virtual address in
-		 * DMA_BIT_MASK(32) and if that fails then try allocating
-		 * from higher range
-		 */
-		iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
-					   IOVA_PFN(DMA_BIT_MASK(32)), false);
-		if (iova_pfn)
-			return iova_pfn;
-	}
-	iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
-				   IOVA_PFN(dma_mask), true);
-	if (unlikely(!iova_pfn)) {
-		dev_err(dev, "Allocating %ld-page iova failed", nrpages);
-		return 0;
-	}
-
-	return iova_pfn;
-}
-
 static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 {
 	struct dmar_domain *domain, *tmp;
@@ -3444,528 +3333,6 @@ static struct dmar_domain *get_private_domain_for_dev(struct device *dev)
 	return domain;
 }
 
-static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
-				     size_t size, int dir, u64 dma_mask)
-{
-	struct dmar_domain *domain;
-	phys_addr_t start_paddr;
-	unsigned long iova_pfn;
-	int prot = 0;
-	int ret;
-	struct intel_iommu *iommu;
-	unsigned long paddr_pfn = paddr >> PAGE_SHIFT;
-
-	BUG_ON(dir == DMA_NONE);
-
-	domain = deferred_attach_domain(dev);
-	if (!domain)
-		return DMA_MAPPING_ERROR;
-
-	iommu = domain_get_iommu(domain);
-	size = aligned_nrpages(paddr, size);
-
-	iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size), dma_mask);
-	if (!iova_pfn)
-		goto error;
-
-	/*
-	 * Check if DMAR supports zero-length reads on write only
-	 * mappings..
-	 */
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \
-			!cap_zlr(iommu->cap))
-		prot |= DMA_PTE_READ;
-	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
-		prot |= DMA_PTE_WRITE;
-	/*
-	 * paddr - (paddr + size) might be partial page, we should map the whole
-	 * page.  Note: if two part of one page are separately mapped, we
-	 * might have two guest_addr mapping to the same host paddr, but this
-	 * is not a big problem
-	 */
-	ret = domain_pfn_mapping(domain, mm_to_dma_pfn(iova_pfn),
-				 mm_to_dma_pfn(paddr_pfn), size, prot);
-	if (ret)
-		goto error;
-
-	start_paddr = (phys_addr_t)iova_pfn << PAGE_SHIFT;
-	start_paddr += paddr & ~PAGE_MASK;
-
-	trace_map_single(dev, start_paddr, paddr, size << VTD_PAGE_SHIFT);
-
-	return start_paddr;
-
-error:
-	if (iova_pfn)
-		free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size));
-	dev_err(dev, "Device request: %zx@%llx dir %d --- failed\n",
-		size, (unsigned long long)paddr, dir);
-	return DMA_MAPPING_ERROR;
-}
-
-static dma_addr_t intel_map_page(struct device *dev, struct page *page,
-				 unsigned long offset, size_t size,
-				 enum dma_data_direction dir,
-				 unsigned long attrs)
-{
-	return __intel_map_single(dev, page_to_phys(page) + offset, size, dir,
-				  *dev->dma_mask);
-}
-
-static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
-				     size_t size, enum dma_data_direction dir,
-				     unsigned long attrs)
-{
-	return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
-}
-
-static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
-{
-	struct dmar_domain *domain;
-	unsigned long start_pfn, last_pfn;
-	unsigned long nrpages;
-	unsigned long iova_pfn;
-	struct intel_iommu *iommu;
-	struct page *freelist;
-	struct pci_dev *pdev = NULL;
-
-	domain = find_domain(dev);
-	BUG_ON(!domain);
-
-	iommu = domain_get_iommu(domain);
-
-	iova_pfn = IOVA_PFN(dev_addr);
-
-	nrpages = aligned_nrpages(dev_addr, size);
-	start_pfn = mm_to_dma_pfn(iova_pfn);
-	last_pfn = start_pfn + nrpages - 1;
-
-	if (dev_is_pci(dev))
-		pdev = to_pci_dev(dev);
-
-	freelist = domain_unmap(domain, start_pfn, last_pfn, NULL);
-	if (intel_iommu_strict || (pdev && pdev->untrusted) ||
-			!has_iova_flush_queue(&domain->iovad)) {
-		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
-				      nrpages, !freelist, 0);
-		/* free iova */
-		free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(nrpages));
-		dma_free_pagelist(freelist);
-	} else {
-		queue_iova(&domain->iovad, iova_pfn, nrpages,
-			   (unsigned long)freelist);
-		/*
-		 * queue up the release of the unmap to save the 1/6th of the
-		 * cpu used up by the iotlb flush operation...
-		 */
-	}
-
-	trace_unmap_single(dev, dev_addr, size);
-}
-
-static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
-			     size_t size, enum dma_data_direction dir,
-			     unsigned long attrs)
-{
-	intel_unmap(dev, dev_addr, size);
-}
-
-static void intel_unmap_resource(struct device *dev, dma_addr_t dev_addr,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	intel_unmap(dev, dev_addr, size);
-}
-
-static void *intel_alloc_coherent(struct device *dev, size_t size,
-				  dma_addr_t *dma_handle, gfp_t flags,
-				  unsigned long attrs)
-{
-	struct page *page = NULL;
-	int order;
-
-	size = PAGE_ALIGN(size);
-	order = get_order(size);
-
-	if (gfpflags_allow_blocking(flags)) {
-		unsigned int count = size >> PAGE_SHIFT;
-
-		page = dma_alloc_from_contiguous(dev, count, order,
-						 flags & __GFP_NOWARN);
-	}
-
-	if (!page)
-		page = alloc_pages(flags, order);
-	if (!page)
-		return NULL;
-	memset(page_address(page), 0, size);
-
-	*dma_handle = __intel_map_single(dev, page_to_phys(page), size,
-					 DMA_BIDIRECTIONAL,
-					 dev->coherent_dma_mask);
-	if (*dma_handle != DMA_MAPPING_ERROR)
-		return page_address(page);
-	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-		__free_pages(page, order);
-
-	return NULL;
-}
-
-static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
-				dma_addr_t dma_handle, unsigned long attrs)
-{
-	int order;
-	struct page *page = virt_to_page(vaddr);
-
-	size = PAGE_ALIGN(size);
-	order = get_order(size);
-
-	intel_unmap(dev, dma_handle, size);
-	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
-		__free_pages(page, order);
-}
-
-static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
-			   int nelems, enum dma_data_direction dir,
-			   unsigned long attrs)
-{
-	dma_addr_t startaddr = sg_dma_address(sglist) & PAGE_MASK;
-	unsigned long nrpages = 0;
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nelems, i) {
-		nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg));
-	}
-
-	intel_unmap(dev, startaddr, nrpages << VTD_PAGE_SHIFT);
-
-	trace_unmap_sg(dev, startaddr, nrpages << VTD_PAGE_SHIFT);
-}
-
-static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nelems,
-			enum dma_data_direction dir, unsigned long attrs)
-{
-	int i;
-	struct dmar_domain *domain;
-	size_t size = 0;
-	int prot = 0;
-	unsigned long iova_pfn;
-	int ret;
-	struct scatterlist *sg;
-	unsigned long start_vpfn;
-	struct intel_iommu *iommu;
-
-	BUG_ON(dir == DMA_NONE);
-
-	domain = deferred_attach_domain(dev);
-	if (!domain)
-		return 0;
-
-	iommu = domain_get_iommu(domain);
-
-	for_each_sg(sglist, sg, nelems, i)
-		size += aligned_nrpages(sg->offset, sg->length);
-
-	iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size),
-				*dev->dma_mask);
-	if (!iova_pfn) {
-		sglist->dma_length = 0;
-		return 0;
-	}
-
-	/*
-	 * Check if DMAR supports zero-length reads on write only
-	 * mappings..
-	 */
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL || \
-			!cap_zlr(iommu->cap))
-		prot |= DMA_PTE_READ;
-	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
-		prot |= DMA_PTE_WRITE;
-
-	start_vpfn = mm_to_dma_pfn(iova_pfn);
-
-	ret = domain_sg_mapping(domain, start_vpfn, sglist, size, prot);
-	if (unlikely(ret)) {
-		dma_pte_free_pagetable(domain, start_vpfn,
-				       start_vpfn + size - 1,
-				       agaw_to_level(domain->agaw) + 1);
-		free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size));
-		return 0;
-	}
-
-	trace_map_sg(dev, iova_pfn << PAGE_SHIFT,
-		     sg_phys(sglist), size << VTD_PAGE_SHIFT);
-
-	return nelems;
-}
-
-static u64 intel_get_required_mask(struct device *dev)
-{
-	return DMA_BIT_MASK(32);
-}
-
-static const struct dma_map_ops intel_dma_ops = {
-	.alloc = intel_alloc_coherent,
-	.free = intel_free_coherent,
-	.map_sg = intel_map_sg,
-	.unmap_sg = intel_unmap_sg,
-	.map_page = intel_map_page,
-	.unmap_page = intel_unmap_page,
-	.map_resource = intel_map_resource,
-	.unmap_resource = intel_unmap_resource,
-	.dma_supported = dma_direct_supported,
-	.mmap = dma_common_mmap,
-	.get_sgtable = dma_common_get_sgtable,
-	.get_required_mask = intel_get_required_mask,
-};
-
-static void
-bounce_sync_single(struct device *dev, dma_addr_t addr, size_t size,
-		   enum dma_data_direction dir, enum dma_sync_target target)
-{
-	struct dmar_domain *domain;
-	phys_addr_t tlb_addr;
-
-	domain = find_domain(dev);
-	if (WARN_ON(!domain))
-		return;
-
-	tlb_addr = intel_iommu_iova_to_phys(&domain->domain, addr);
-	if (is_swiotlb_buffer(tlb_addr))
-		swiotlb_tbl_sync_single(dev, tlb_addr, size, dir, target);
-}
-
-static dma_addr_t
-bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
-		  enum dma_data_direction dir, unsigned long attrs,
-		  u64 dma_mask)
-{
-	size_t aligned_size = ALIGN(size, VTD_PAGE_SIZE);
-	struct dmar_domain *domain;
-	struct intel_iommu *iommu;
-	unsigned long iova_pfn;
-	unsigned long nrpages;
-	phys_addr_t tlb_addr;
-	int prot = 0;
-	int ret;
-
-	domain = deferred_attach_domain(dev);
-	if (WARN_ON(dir == DMA_NONE || !domain))
-		return DMA_MAPPING_ERROR;
-
-	iommu = domain_get_iommu(domain);
-	if (WARN_ON(!iommu))
-		return DMA_MAPPING_ERROR;
-
-	nrpages = aligned_nrpages(0, size);
-	iova_pfn = intel_alloc_iova(dev, domain,
-				    dma_to_mm_pfn(nrpages), dma_mask);
-	if (!iova_pfn)
-		return DMA_MAPPING_ERROR;
-
-	/*
-	 * Check if DMAR supports zero-length reads on write only
-	 * mappings..
-	 */
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL ||
-			!cap_zlr(iommu->cap))
-		prot |= DMA_PTE_READ;
-	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
-		prot |= DMA_PTE_WRITE;
-
-	/*
-	 * If both the physical buffer start address and size are
-	 * page aligned, we don't need to use a bounce page.
-	 */
-	if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
-		tlb_addr = swiotlb_tbl_map_single(dev,
-				__phys_to_dma(dev, io_tlb_start),
-				paddr, size, aligned_size, dir, attrs);
-		if (tlb_addr == DMA_MAPPING_ERROR) {
-			goto swiotlb_error;
-		} else {
-			/* Cleanup the padding area. */
-			void *padding_start = phys_to_virt(tlb_addr);
-			size_t padding_size = aligned_size;
-
-			if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-			    (dir == DMA_TO_DEVICE ||
-			     dir == DMA_BIDIRECTIONAL)) {
-				padding_start += size;
-				padding_size -= size;
-			}
-
-			memset(padding_start, 0, padding_size);
-		}
-	} else {
-		tlb_addr = paddr;
-	}
-
-	ret = domain_pfn_mapping(domain, mm_to_dma_pfn(iova_pfn),
-				 tlb_addr >> VTD_PAGE_SHIFT, nrpages, prot);
-	if (ret)
-		goto mapping_error;
-
-	trace_bounce_map_single(dev, iova_pfn << PAGE_SHIFT, paddr, size);
-
-	return (phys_addr_t)iova_pfn << PAGE_SHIFT;
-
-mapping_error:
-	if (is_swiotlb_buffer(tlb_addr))
-		swiotlb_tbl_unmap_single(dev, tlb_addr, size,
-					 aligned_size, dir, attrs);
-swiotlb_error:
-	free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(nrpages));
-	dev_err(dev, "Device bounce map: %zx@%llx dir %d --- failed\n",
-		size, (unsigned long long)paddr, dir);
-
-	return DMA_MAPPING_ERROR;
-}
-
-static void
-bounce_unmap_single(struct device *dev, dma_addr_t dev_addr, size_t size,
-		    enum dma_data_direction dir, unsigned long attrs)
-{
-	size_t aligned_size = ALIGN(size, VTD_PAGE_SIZE);
-	struct dmar_domain *domain;
-	phys_addr_t tlb_addr;
-
-	domain = find_domain(dev);
-	if (WARN_ON(!domain))
-		return;
-
-	tlb_addr = intel_iommu_iova_to_phys(&domain->domain, dev_addr);
-	if (WARN_ON(!tlb_addr))
-		return;
-
-	intel_unmap(dev, dev_addr, size);
-	if (is_swiotlb_buffer(tlb_addr))
-		swiotlb_tbl_unmap_single(dev, tlb_addr, size,
-					 aligned_size, dir, attrs);
-
-	trace_bounce_unmap_single(dev, dev_addr, size);
-}
-
-static dma_addr_t
-bounce_map_page(struct device *dev, struct page *page, unsigned long offset,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	return bounce_map_single(dev, page_to_phys(page) + offset,
-				 size, dir, attrs, *dev->dma_mask);
-}
-
-static dma_addr_t
-bounce_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
-		    enum dma_data_direction dir, unsigned long attrs)
-{
-	return bounce_map_single(dev, phys_addr, size,
-				 dir, attrs, *dev->dma_mask);
-}
-
-static void
-bounce_unmap_page(struct device *dev, dma_addr_t dev_addr, size_t size,
-		  enum dma_data_direction dir, unsigned long attrs)
-{
-	bounce_unmap_single(dev, dev_addr, size, dir, attrs);
-}
-
-static void
-bounce_unmap_resource(struct device *dev, dma_addr_t dev_addr, size_t size,
-		      enum dma_data_direction dir, unsigned long attrs)
-{
-	bounce_unmap_single(dev, dev_addr, size, dir, attrs);
-}
-
-static void
-bounce_unmap_sg(struct device *dev, struct scatterlist *sglist, int nelems,
-		enum dma_data_direction dir, unsigned long attrs)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nelems, i)
-		bounce_unmap_page(dev, sg->dma_address,
-				  sg_dma_len(sg), dir, attrs);
-}
-
-static int
-bounce_map_sg(struct device *dev, struct scatterlist *sglist, int nelems,
-	      enum dma_data_direction dir, unsigned long attrs)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nelems, i) {
-		sg->dma_address = bounce_map_page(dev, sg_page(sg),
-						  sg->offset, sg->length,
-						  dir, attrs);
-		if (sg->dma_address == DMA_MAPPING_ERROR)
-			goto out_unmap;
-		sg_dma_len(sg) = sg->length;
-	}
-
-	return nelems;
-
-out_unmap:
-	bounce_unmap_sg(dev, sglist, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
-	return 0;
-}
-
-static void
-bounce_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
-			   size_t size, enum dma_data_direction dir)
-{
-	bounce_sync_single(dev, addr, size, dir, SYNC_FOR_CPU);
-}
-
-static void
-bounce_sync_single_for_device(struct device *dev, dma_addr_t addr,
-			      size_t size, enum dma_data_direction dir)
-{
-	bounce_sync_single(dev, addr, size, dir, SYNC_FOR_DEVICE);
-}
-
-static void
-bounce_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist,
-		       int nelems, enum dma_data_direction dir)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nelems, i)
-		bounce_sync_single(dev, sg_dma_address(sg),
-				   sg_dma_len(sg), dir, SYNC_FOR_CPU);
-}
-
-static void
-bounce_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
-			  int nelems, enum dma_data_direction dir)
-{
-	struct scatterlist *sg;
-	int i;
-
-	for_each_sg(sglist, sg, nelems, i)
-		bounce_sync_single(dev, sg_dma_address(sg),
-				   sg_dma_len(sg), dir, SYNC_FOR_DEVICE);
-}
-
-static const struct dma_map_ops bounce_dma_ops = {
-	.alloc = intel_alloc_coherent,
-	.free = intel_free_coherent,
-	.map_sg = bounce_map_sg,
-	.unmap_sg = bounce_unmap_sg,
-	.map_page = bounce_map_page,
-	.unmap_page = bounce_unmap_page,
-	.sync_single_for_cpu = bounce_sync_single_for_cpu,
-	.sync_single_for_device = bounce_sync_single_for_device,
-	.sync_sg_for_cpu = bounce_sync_sg_for_cpu,
-	.sync_sg_for_device = bounce_sync_sg_for_device,
-	.map_resource = bounce_map_resource,
-	.unmap_resource = bounce_unmap_resource,
-	.dma_supported = dma_direct_supported,
-};
-
 static inline int iommu_domain_cache_init(void)
 {
 	int ret = 0;
@@ -4648,7 +4015,7 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
 			if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA)
 				continue;
 
-			free_cpu_cached_iovas(cpu, &domain->iovad);
+			iommu_dma_free_cpu_cached_iovas(cpu, &domain->domain);
 		}
 	}
 }
@@ -4917,12 +4284,6 @@ int __init intel_iommu_init(void)
 	if (list_empty(&dmar_atsr_units))
 		pr_info("No ATSR found\n");
 
-	if (dmar_init_reserved_ranges()) {
-		if (force_on)
-			panic("tboot: Failed to reserve iommu ranges\n");
-		goto out_free_reserved_range;
-	}
-
 	if (dmar_map_gfx)
 		intel_iommu_gfx_mapped = 1;
 
@@ -4933,7 +4294,7 @@ int __init intel_iommu_init(void)
 		if (force_on)
 			panic("tboot: Failed to initialize DMARs\n");
 		pr_err("Initialization failed\n");
-		goto out_free_reserved_range;
+		goto out_free_dmar;
 	}
 	up_write(&dmar_global_lock);
 
@@ -4983,8 +4344,6 @@ int __init intel_iommu_init(void)
 
 	return 0;
 
-out_free_reserved_range:
-	put_iova_domain(&reserved_iova_list);
 out_free_dmar:
 	intel_iommu_free_dmars();
 	up_write(&dmar_global_lock);
@@ -5087,18 +4446,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 	return 0;
 }
 
-static void intel_init_iova_domain(struct dmar_domain *dmar_domain)
-{
-	init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
-	copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad);
-
-	if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova,
-				  iova_entry_free)) {
-		pr_warn("iova flush queue initialization failed\n");
-		intel_iommu_strict = 1;
-	}
-}
-
 static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 {
 	struct dmar_domain *dmar_domain;
@@ -5119,8 +4466,9 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 			return NULL;
 		}
 
-		if (type == IOMMU_DOMAIN_DMA)
-			intel_init_iova_domain(dmar_domain);
+		if (type == IOMMU_DOMAIN_DMA &&
+		    iommu_get_dma_cookie(&dmar_domain->domain))
+			return NULL;
 
 		domain_update_iommu_cap(dmar_domain);
 
@@ -5319,6 +4667,9 @@ static int intel_iommu_attach_device(struct iommu_domain *domain,
 {
 	int ret;
 
+	/* Clear deferred attach */
+	dev->archdata.iommu = NULL;
+
 	if (domain->type == IOMMU_DOMAIN_UNMANAGED &&
 	    device_is_rmrr_locked(dev)) {
 		dev_warn(dev, "Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.\n");
@@ -5533,6 +4884,7 @@ static int intel_iommu_add_device(struct device *dev)
 	struct intel_iommu *iommu;
 	struct iommu_group *group;
 	u64 dma_mask = *dev->dma_mask;
+	dma_addr_t base;
 	u8 bus, devfn;
 	int ret;
 
@@ -5555,6 +4907,7 @@ static int intel_iommu_add_device(struct device *dev)
 
 	iommu_group_put(group);
 
+	base = IOVA_START_PFN << VTD_PAGE_SHIFT;
 	domain = iommu_get_domain_for_dev(dev);
 	dmar_domain = to_dmar_domain(domain);
 	if (domain->type == IOMMU_DOMAIN_DMA) {
@@ -5573,7 +4926,9 @@ static int intel_iommu_add_device(struct device *dev)
 					 "Device uses a private identity domain.\n");
 			}
 		} else {
-			dev->dma_ops = &intel_dma_ops;
+			iommu_setup_dma_ops(dev, base,
+					__DOMAIN_MAX_ADDR(dmar_domain->gaw) -
+					base);
 		}
 	} else {
 		if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) {
@@ -5590,15 +4945,12 @@ static int intel_iommu_add_device(struct device *dev)
 				dev_info(dev,
 					 "Device uses a private dma domain.\n");
 			}
 
-			dev->dma_ops = &intel_dma_ops;
+			iommu_setup_dma_ops(dev, base,
+					__DOMAIN_MAX_ADDR(dmar_domain->gaw) -
+					base);
 		}
 	}
 
-	if (device_needs_bounce(dev)) {
-		dev_info(dev, "Use Intel IOMMU bounce page dma_ops\n");
-		set_dma_ops(dev, &bounce_dma_ops);
-	}
-
 	return 0;
 }
 
@@ -5620,6 +4972,31 @@ static void intel_iommu_remove_device(struct device *dev)
 	set_dma_ops(dev, NULL);
 }
 
+static int intel_iommu_domain_get_attr(struct iommu_domain *domain,
+				       enum iommu_attr attr, void *data)
+{
+	switch (domain->type) {
+	case IOMMU_DOMAIN_UNMANAGED:
+		return -ENODEV;
+	case IOMMU_DOMAIN_DMA:
+		switch (attr) {
+		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
+			*(int *)data = !intel_iommu_strict;
+			return 0;
+		default:
+			return -ENODEV;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+}
+
+static int intel_iommu_needs_bounce_buffer(struct device *d)
+{
+	return !intel_no_bounce && dev_is_pci(d) && to_pci_dev(d)->untrusted;
+}
+
 static void intel_iommu_get_resv_regions(struct device *device,
 					 struct list_head *head)
 {
@@ -5737,19 +5114,6 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev)
 	return ret;
 }
 
-static void intel_iommu_apply_resv_region(struct device *dev,
-					  struct iommu_domain *domain,
-					  struct iommu_resv_region *region)
-{
-	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
-	unsigned long start, end;
-
-	start = IOVA_PFN(region->start);
-	end = IOVA_PFN(region->start + region->length - 1);
-
-	WARN_ON_ONCE(!reserve_iova(&dmar_domain->iovad, start, end));
-}
-
 #ifdef CONFIG_INTEL_IOMMU_SVM
 struct intel_iommu *intel_svm_device_to_iommu(struct device *dev)
 {
@@ -5916,13 +5280,15 @@ const struct iommu_ops intel_iommu_ops = {
 	.aux_get_pasid = intel_iommu_aux_get_pasid,
 	.map = intel_iommu_map,
 	.unmap = intel_iommu_unmap,
+	.flush_iotlb_all = intel_flush_iotlb_all,
 	.flush_iotlb_range = intel_iommu_flush_iotlb_range,
 	.iova_to_phys = intel_iommu_iova_to_phys,
 	.add_device = intel_iommu_add_device,
 	.remove_device = intel_iommu_remove_device,
+	.domain_get_attr = intel_iommu_domain_get_attr,
+	.needs_bounce_buffer = intel_iommu_needs_bounce_buffer,
 	.get_resv_regions = intel_iommu_get_resv_regions,
 	.put_resv_regions = intel_iommu_put_resv_regions,
-	.apply_resv_region = intel_iommu_apply_resv_region,
 	.device_group = pci_device_group,
 	.dev_has_feat = intel_iommu_dev_has_feat,
 	.dev_feat_enabled = intel_iommu_dev_feat_enabled,
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 6d8bf4bdf240..d07f14340870 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -489,7 +489,6 @@ struct dmar_domain {
 
 	bool has_iotlb_device;
 	struct list_head devices;	/* all devices' list */
 	struct list_head auxd;		/* link to device's auxiliary list */
-	struct iova_domain iovad;	/* iova's that belong to this domain */
 
 	struct dma_pte	*pgd;		/* virtual address */
 	int		gaw;		/* max guest address width */
-- 
2.20.1
Tom Murphy
2019-Dec-21 15:04 UTC
[PATCH 8/8] DO NOT MERGE: iommu: disable list appending in dma-iommu
Disable combining sg segments in the dma-iommu api.
Combining the sg segments exposes a bug in the intel i915 driver which
causes visual artifacts and the screen to freeze. This is most likely
because of how the i915 handles the returned list. It probably doesn't
respect the returned value specifying the number of elements in the list
and instead depends on the previous behaviour of the intel iommu driver
which would return the same number of elements in the output list as in
the input list.

Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
 drivers/iommu/dma-iommu.c | 38 +++++++-------------------------------
 1 file changed, 7 insertions(+), 31 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index cf778db7d84d..d7547b912c87 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -853,8 +853,7 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
 {
 	struct scatterlist *s, *cur = sg;
 	unsigned long seg_mask = dma_get_seg_boundary(dev);
-	unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev);
-	int i, count = 0;
+	int i;
 
 	for_each_sg(sg, s, nents, i) {
 		/* Restore this segment's original unaligned fields first */
@@ -862,39 +861,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
 		unsigned int s_length = sg_dma_len(s);
 		unsigned int s_iova_len = s->length;
 
+		if (i > 0)
+			cur = sg_next(cur);
+
 		s->offset += s_iova_off;
 		s->length = s_length;
-		sg_dma_address(s) = DMA_MAPPING_ERROR;
-		sg_dma_len(s) = 0;
-
-		/*
-		 * Now fill in the real DMA data. If...
-		 * - there is a valid output segment to append to
-		 * - and this segment starts on an IOVA page boundary
-		 * - but doesn't fall at a segment boundary
-		 * - and wouldn't make the resulting output segment too long
-		 */
-		if (cur_len && !s_iova_off && (dma_addr & seg_mask) &&
-		    (max_len - cur_len >= s_length)) {
-			/* ...then concatenate it with the previous one */
-			cur_len += s_length;
-		} else {
-			/* Otherwise start the next output segment */
-			if (i > 0)
-				cur = sg_next(cur);
-			cur_len = s_length;
-			count++;
-
-			sg_dma_address(cur) = dma_addr + s_iova_off;
-		}
-
-		sg_dma_len(cur) = cur_len;
+		sg_dma_address(cur) = dma_addr + s_iova_off;
+		sg_dma_len(cur) = s_length;
 
 		dma_addr += s_iova_len;
-
-		if (s_length + s_iova_off < s_iova_len)
-			cur_len = 0;
 	}
 
-	return count;
+	return nents;
 }
 
 /*
-- 
2.20.1
Jani Nikula
2019-Dec-23 10:37 UTC
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
On Sat, 21 Dec 2019, Tom Murphy <murphyt7 at tcd.ie> wrote:
> This patchset converts the intel iommu driver to the dma-iommu api.
>
> While converting the driver I exposed a bug in the intel i915 driver
> which causes a huge amount of artifacts on the screen of my
> laptop. You can see a picture of it here:
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>
> This issue is most likely in the i915 driver and is most likely caused
> by the driver not respecting the return value of the
> dma_map_ops::map_sg function. You can see the driver ignoring the
> return value here:
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>
> Previously this didn't cause issues because the intel map_sg always
> returned the same number of elements as the input scatter gather list
> but with the change to this dma-iommu api this is no longer the
> case. I wasn't able to track the bug down to a specific line of code
> unfortunately.
>
> Could someone from the intel team look at this?

Let me get this straight. There is current API that on success always
returns the same number of elements as the input scatter gather
list. You propose to change the API so that this is no longer the case?

A quick check of various dma_map_sg() calls in the kernel seems to
indicate checking for 0 for errors and then ignoring the non-zero return
is a common pattern. Are you sure it's okay to make the change you're
proposing?

Anyway, due to the time of year and all, I'd like to ask you to file a
bug against i915 at [1] so this is not forgotten, and please let's not
merge the changes before this is resolved.


Thanks,
Jani.


[1] https://gitlab.freedesktop.org/drm/intel/issues/new

-- 
Jani Nikula, Intel Open Source Graphics Center
Robin Murphy
2019-Dec-23 11:29 UTC
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
On 2019-12-23 10:37 am, Jani Nikula wrote:
> On Sat, 21 Dec 2019, Tom Murphy <murphyt7 at tcd.ie> wrote:
>> This patchset converts the intel iommu driver to the dma-iommu api.
>>
>> While converting the driver I exposed a bug in the intel i915 driver
>> which causes a huge amount of artifacts on the screen of my
>> laptop. You can see a picture of it here:
>> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>>
>> This issue is most likely in the i915 driver and is most likely caused
>> by the driver not respecting the return value of the
>> dma_map_ops::map_sg function. You can see the driver ignoring the
>> return value here:
>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>
>> Previously this didn't cause issues because the intel map_sg always
>> returned the same number of elements as the input scatter gather list
>> but with the change to this dma-iommu api this is no longer the
>> case. I wasn't able to track the bug down to a specific line of code
>> unfortunately.
>>
>> Could someone from the intel team look at this?
>
> Let me get this straight. There is current API that on success always
> returns the same number of elements as the input scatter gather
> list. You propose to change the API so that this is no longer the case?

No, the API for dma_map_sg() has always been that it may return fewer
DMA segments than nents - see Documentation/DMA-API.txt (and otherwise,
the return value would surely be a simple success/fail condition).
Relying on a particular implementation behaviour has never been strictly
correct, even if it does happen to be a very common behaviour.

> A quick check of various dma_map_sg() calls in the kernel seems to
> indicate checking for 0 for errors and then ignoring the non-zero return
> is a common pattern. Are you sure it's okay to make the change you're
> proposing?

Various code uses tricks like just iterating the mapped list until the
first segment with zero sg_dma_len(). Others may well simply have bugs.

Robin.

> Anyway, due to the time of year and all, I'd like to ask you to file a
> bug against i915 at [1] so this is not forgotten, and please let's not
> merge the changes before this is resolved.
>
>
> Thanks,
> Jani.
>
>
> [1] https://gitlab.freedesktop.org/drm/intel/issues/new
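For reference, the contract described above looks like this on the
consumer side. This is a minimal sketch of the documented dma_map_sg()
pattern, not code taken from i915 or any other in-tree driver;
hw_write_desc() is a hypothetical stand-in for device-specific
descriptor programming, and dev/sgt/dir are assumed context:

	struct scatterlist *sg;
	int i, count;

	count = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
	if (!count)
		return -ENOMEM;	/* mapping failed */

	/*
	 * Only the first 'count' entries carry valid DMA addresses;
	 * the IOMMU path may have coalesced the rest.
	 */
	for_each_sg(sgt->sgl, sg, count, i)
		hw_write_desc(i, sg_dma_address(sg), sg_dma_len(sg));

Note that dma_unmap_sg() still takes the original nents, not the
returned count, per Documentation/DMA-API.txt.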
kbuild test robot
2019-Dec-24 10:20 UTC
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
Hi Tom,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on rockchip/for-next]
[cannot apply to iommu/next tegra/for-next vfio/next linus/master v5.5-rc3 next-20191219]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Tom-Murphy/Convert-the-intel-iommu-driver-to-the-dma-iommu-api/20191224-171249
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip.git for-next
config: x86_64-defconfig (attached as .config)
compiler: gcc-7 (Debian 7.5.0-3) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp at intel.com>

All warnings (new ones prefixed by >>):

   drivers//iommu/dma-iommu.c: In function '__iommu_dma_map':
>> drivers//iommu/dma-iommu.c:568:3: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]
      void *padding_start = phys_to_virt(phys);
      ^~~~

vim +568 drivers//iommu/dma-iommu.c

   537	
   538	static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
   539			size_t org_size, dma_addr_t dma_mask, bool coherent,
   540			enum dma_data_direction dir, unsigned long attrs)
   541	{
   542		int prot = dma_info_to_prot(dir, coherent, attrs);
   543		struct iommu_domain *domain = iommu_get_dma_domain(dev);
   544		struct iommu_dma_cookie *cookie = domain->iova_cookie;
   545		struct iova_domain *iovad = &cookie->iovad;
   546		size_t iova_off = iova_offset(iovad, phys);
   547		size_t aligned_size = iova_align(iovad, org_size + iova_off);
   548		dma_addr_t iova;
   549	
   550		if (unlikely(iommu_dma_deferred_attach(dev, domain)))
   551			return DMA_MAPPING_ERROR;
   552	
   553	#ifdef CONFIG_SWIOTLB
   554		/*
   555		 * If both the physical buffer start address and size are
   556		 * page aligned, we don't need to use a bounce page.
   557		 */
   558		if (iommu_needs_bounce_buffer(dev)
   559				&& !iova_offset(iovad, phys | org_size)) {
   560			phys = swiotlb_tbl_map_single(dev,
   561					__phys_to_dma(dev, io_tlb_start),
   562					phys, org_size, aligned_size, dir, attrs);
   563	
   564			if (phys == DMA_MAPPING_ERROR)
   565				return DMA_MAPPING_ERROR;
   566	
   567			/* Cleanup the padding area. */
 > 568			void *padding_start = phys_to_virt(phys);
   569			size_t padding_size = aligned_size;
   570	
   571			if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
   572			    (dir == DMA_TO_DEVICE ||
   573			     dir == DMA_BIDIRECTIONAL)) {
   574				padding_start += org_size;
   575				padding_size -= org_size;
   576			}
   577	
   578			memset(padding_start, 0, padding_size);
   579		}
   580	#endif
   581	
   582		iova = iommu_dma_alloc_iova(domain, aligned_size, dma_mask, dev);
   583		if (!iova)
   584			return DMA_MAPPING_ERROR;
   585	
   586		if (iommu_map_atomic(domain, iova, phys - iova_off, aligned_size,
   587				prot)) {
   588	
   589			if (unlikely(is_swiotlb_buffer(phys)))
   590				swiotlb_tbl_unmap_single(dev, phys, aligned_size,
   591						aligned_size, dir, attrs);
   592			iommu_dma_free_iova(cookie, iova, aligned_size, NULL);
   593			return DMA_MAPPING_ERROR;
   594		}
   595		return iova + iova_off;
   596	}
   597	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org Intel Corporation
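One minimal way to address the warning above, shown as a sketch only
(it keeps the logic of patch 6/8 unchanged and is not the fix that was
actually posted), is to hoist the declarations above the first
statement of the block so no declaration follows executable code:

	void *padding_start;
	size_t padding_size;

	phys = swiotlb_tbl_map_single(dev, __phys_to_dma(dev, io_tlb_start),
			phys, org_size, aligned_size, dir, attrs);
	if (phys == DMA_MAPPING_ERROR)
		return DMA_MAPPING_ERROR;

	/* Cleanup the padding area. */
	padding_start = phys_to_virt(phys);
	padding_size = aligned_size;

The kernel builds with -Wdeclaration-after-statement, so mid-block
declarations trip this warning even though they are valid C99.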
Robin Murphy
2020-Aug-26 18:14 UTC
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
Hi Tom,

On 2019-12-21 15:03, Tom Murphy wrote:
> This patchset converts the intel iommu driver to the dma-iommu api.
>
> While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here:
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>
> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>
> Previously this didn't cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn't able to track the bug down to a specific line of code unfortunately.
>
> Could someone from the intel team look at this?
>
>
> I have been testing on a lenovo x1 carbon 5th generation. Let me know if there's any more information you need.
>
> To allow my patch set to be tested I have added a patch (patch 8/8) in this series to disable combining sg segments in the dma-iommu api which fixes the bug but it doesn't fix the actual problem.
>
> As part of this patch series I copied the intel bounce buffer code to the dma-iommu path. The addition of the bounce buffer code took me by surprise. I did most of my development on this patch series before the bounce buffer code was added and my reimplementation in the dma-iommu path is very rushed and not properly tested but I'm running out of time to work on this patch set.
>
> On top of that I also didn't port over the intel tracing code from this commit:
> https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
> So all the work in that commit is now wasted. The code will need to be removed and reimplemented in the dma-iommu path. I would like to take the time to do this but I really don't have the time at the moment and I want to get these changes out before the iommu code changes any more.

Further to what we just discussed at LPC, I've realised that tracepoints
are actually something I could do with *right now* for debugging my Arm
DMA ops series, so if I'm going to hack something up anyway I may as
well take responsibility for polishing it into a proper patch as well :)

Robin.

>
> Tom Murphy (8):
>    iommu/vt-d: clean up 32bit si_domain assignment
>    iommu/vt-d: Use default dma_direct_* mapping functions for direct
>      mapped devices
>    iommu/vt-d: Remove IOVA handling code from non-dma_ops path
>    iommu: Handle freelists when using deferred flushing in iommu drivers
>    iommu: Add iommu_dma_free_cpu_cached_iovas function
>    iommu: allow the dma-iommu api to use bounce buffers
>    iommu/vt-d: Convert intel iommu driver to the iommu ops
>    DO NOT MERGE: iommu: disable list appending in dma-iommu
>
>   drivers/iommu/Kconfig           |   1 +
>   drivers/iommu/amd_iommu.c       |  14 +-
>   drivers/iommu/arm-smmu-v3.c     |   3 +-
>   drivers/iommu/arm-smmu.c        |   3 +-
>   drivers/iommu/dma-iommu.c       | 183 +++++--
>   drivers/iommu/exynos-iommu.c    |   3 +-
>   drivers/iommu/intel-iommu.c     | 936 ++++----------------------------
>   drivers/iommu/iommu.c           |  39 +-
>   drivers/iommu/ipmmu-vmsa.c      |   3 +-
>   drivers/iommu/msm_iommu.c       |   3 +-
>   drivers/iommu/mtk_iommu.c       |   3 +-
>   drivers/iommu/mtk_iommu_v1.c    |   3 +-
>   drivers/iommu/omap-iommu.c      |   3 +-
>   drivers/iommu/qcom_iommu.c      |   3 +-
>   drivers/iommu/rockchip-iommu.c  |   3 +-
>   drivers/iommu/s390-iommu.c      |   3 +-
>   drivers/iommu/tegra-gart.c      |   3 +-
>   drivers/iommu/tegra-smmu.c      |   3 +-
>   drivers/iommu/virtio-iommu.c    |   3 +-
>   drivers/vfio/vfio_iommu_type1.c |   2 +-
>   include/linux/dma-iommu.h       |   3 +
>   include/linux/intel-iommu.h     |   1 -
>   include/linux/iommu.h           |  32 +-
>   23 files changed, 345 insertions(+), 908 deletions(-)
>
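For context, a dma-iommu tracepoint of the kind being discussed could
look something like the sketch below. It is modelled loosely on the
intel-iommu trace events that the cover letter mentions; the event
name, header location, and argument list here are assumptions for
illustration, not Robin's actual patch:

	/* e.g. include/trace/events/iommu_dma.h (hypothetical location) */
	#include <linux/tracepoint.h>

	TRACE_EVENT(iommu_dma_map,
		TP_PROTO(struct device *dev, dma_addr_t dev_addr,
			 phys_addr_t phys_addr, size_t size),
		TP_ARGS(dev, dev_addr, phys_addr, size),

		TP_STRUCT__entry(
			__string(dev_name, dev_name(dev))
			__field(dma_addr_t, dev_addr)
			__field(phys_addr_t, phys_addr)
			__field(size_t, size)
		),

		TP_fast_assign(
			__assign_str(dev_name, dev_name(dev));
			__entry->dev_addr = dev_addr;
			__entry->phys_addr = phys_addr;
			__entry->size = size;
		),

		/* Emitted via trace_iommu_dma_map() from the map path */
		TP_printk("dev=%s dma_addr=0x%llx phys_addr=0x%llx size=%zu",
			  __get_str(dev_name),
			  (unsigned long long)__entry->dev_addr,
			  (unsigned long long)__entry->phys_addr,
			  __entry->size)
	);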
Tom Murphy
2020-Aug-26 18:23 UTC
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
That would be great!

On Wed., Aug. 26, 2020, 2:14 p.m. Robin Murphy, <robin.murphy at arm.com>
wrote:

> Hi Tom,
>
> On 2019-12-21 15:03, Tom Murphy wrote:
> > This patchset converts the intel iommu driver to the dma-iommu api.
> >
> > While converting the driver I exposed a bug in the intel i915 driver
> which causes a huge amount of artifacts on the screen of my laptop. You can
> see a picture of it here:
> >
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
> >
> > This issue is most likely in the i915 driver and is most likely caused
> by the driver not respecting the return value of the dma_map_ops::map_sg
> function. You can see the driver ignoring the return value here:
> >
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
> >
> > Previously this didn't cause issues because the intel map_sg always
> returned the same number of elements as the input scatter gather list but
> with the change to this dma-iommu api this is no longer the case. I wasn't
> able to track the bug down to a specific line of code unfortunately.
> >
> > Could someone from the intel team look at this?
> >
> >
> > I have been testing on a lenovo x1 carbon 5th generation. Let me know if
> there's any more information you need.
> >
> > To allow my patch set to be tested I have added a patch (patch 8/8) in
> this series to disable combining sg segments in the dma-iommu api which
> fixes the bug but it doesn't fix the actual problem.
> >
> > As part of this patch series I copied the intel bounce buffer code to
> the dma-iommu path. The addition of the bounce buffer code took me by
> surprise. I did most of my development on this patch series before the
> bounce buffer code was added and my reimplementation in the dma-iommu path
> is very rushed and not properly tested but I'm running out of time to work
> on this patch set.
> >
> > On top of that I also didn't port over the intel tracing code from this
> commit:
> >
> https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
> > So all the work in that commit is now wasted. The code will need to be
> removed and reimplemented in the dma-iommu path. I would like to take the
> time to do this but I really don't have the time at the moment and I want
> to get these changes out before the iommu code changes any more.
>
> Further to what we just discussed at LPC, I've realised that tracepoints
> are actually something I could do with *right now* for debugging my Arm
> DMA ops series, so if I'm going to hack something up anyway I may as
> well take responsibility for polishing it into a proper patch as well :)
>
> Robin.
>
> >
> > Tom Murphy (8):
> >    iommu/vt-d: clean up 32bit si_domain assignment
> >    iommu/vt-d: Use default dma_direct_* mapping functions for direct
> >      mapped devices
> >    iommu/vt-d: Remove IOVA handling code from non-dma_ops path
> >    iommu: Handle freelists when using deferred flushing in iommu drivers
> >    iommu: Add iommu_dma_free_cpu_cached_iovas function
> >    iommu: allow the dma-iommu api to use bounce buffers
> >    iommu/vt-d: Convert intel iommu driver to the iommu ops
> >    DO NOT MERGE: iommu: disable list appending in dma-iommu
> >
> >   drivers/iommu/Kconfig           |   1 +
> >   drivers/iommu/amd_iommu.c       |  14 +-
> >   drivers/iommu/arm-smmu-v3.c     |   3 +-
> >   drivers/iommu/arm-smmu.c        |   3 +-
> >   drivers/iommu/dma-iommu.c       | 183 +++++--
> >   drivers/iommu/exynos-iommu.c    |   3 +-
> >   drivers/iommu/intel-iommu.c     | 936 ++++----------------------------
> >   drivers/iommu/iommu.c           |  39 +-
> >   drivers/iommu/ipmmu-vmsa.c      |   3 +-
> >   drivers/iommu/msm_iommu.c       |   3 +-
> >   drivers/iommu/mtk_iommu.c       |   3 +-
> >   drivers/iommu/mtk_iommu_v1.c    |   3 +-
> >   drivers/iommu/omap-iommu.c      |   3 +-
> >   drivers/iommu/qcom_iommu.c      |   3 +-
> >   drivers/iommu/rockchip-iommu.c  |   3 +-
> >   drivers/iommu/s390-iommu.c      |   3 +-
> >   drivers/iommu/tegra-gart.c      |   3 +-
> >   drivers/iommu/tegra-smmu.c      |   3 +-
> >   drivers/iommu/virtio-iommu.c    |   3 +-
> >   drivers/vfio/vfio_iommu_type1.c |   2 +-
> >   include/linux/dma-iommu.h       |   3 +
> >   include/linux/intel-iommu.h     |   1 -
> >   include/linux/iommu.h           |  32 +-
> >   23 files changed, 345 insertions(+), 908 deletions(-)
> >