search for: flush_iotlb_range

Displaying 8 results from an estimated 8 matches for "flush_iotlb_range".

2020 Aug 18
0
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
On 2020-08-18 07:04, Tom Murphy wrote:
> Add a flush_iotlb_range to allow flushing of an iova range instead of a
> full flush in the dma-iommu path.
>
> Allow the iommu_unmap_fast to return newly freed page table pages and
> pass the freelist to queue_iova in the dma-iommu ops path.
>
> This patch is useful for iommu drivers (in this case th...
2020 Aug 17
1
[PATCH 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
Add a flush_iotlb_range to allow flushing of an iova range instead of a full flush in the dma-iommu path. Allow the iommu_unmap_fast to return newly freed page table pages and pass the freelist to queue_iova in the dma-iommu ops path. This patch is useful for iommu drivers (in this case the intel iommu driver) which ne...
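
As a rough illustration of the flow described above, the sketch below shows how an unmap path might combine iommu_unmap_fast() with either a deferred flush (queue_iova()) or an immediate range flush (iommu_flush_iotlb_range()). It is a minimal sketch, not code from the series: the helper name sketch_unmap_and_flush() is made up, the signatures are simplified, and the freelist handling is only indicated in comments.

/*
 * Minimal sketch only -- not code from the series.  It illustrates the
 * flow described in the patch text: unmap without flushing, then either
 * queue the IOVA range for a deferred flush or flush just that range.
 * The freelist of freed page-table pages is left NULL here; in the
 * series it would come back from iommu_unmap_fast().
 */
#include <linux/iommu.h>
#include <linux/iova.h>

static void sketch_unmap_and_flush(struct iommu_domain *domain,
				   struct iova_domain *iovad,
				   unsigned long iova, size_t size,
				   bool deferred_flush)
{
	struct iommu_iotlb_gather gather;
	struct page *freelist = NULL;	/* assumed to be filled in by the series */

	iommu_iotlb_gather_init(&gather);
	iommu_unmap_fast(domain, iova, size, &gather);

	if (deferred_flush) {
		/* Defer: hand the range (and the freelist) to the flush queue. */
		queue_iova(iovad, iova_pfn(iovad, iova),
			   size >> iova_shift(iovad),
			   (unsigned long)freelist);
	} else {
		/* Flush only the range that was unmapped, not the whole domain. */
		iommu_flush_iotlb_range(domain, iova, size, freelist);
	}
}
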
2020 Aug 18
3
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
Add a flush_iotlb_range to allow flushing of an iova range instead of a full flush in the dma-iommu path. Allow the iommu_unmap_fast to return newly freed page table pages and pass the freelist to queue_iova in the dma-iommu ops path. This patch is useful for iommu drivers (in this case the intel iommu driver) which ne...
2019 Dec 21
0
[PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
...struct iommu_iotlb_gather *gather,
+				    struct page **freelist)
 {
 	struct protection_domain *domain = to_pdomain(dom);
@@ -2668,6 +2669,16 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
 	spin_unlock_irqrestore(&dom->lock, flags);
 }
 
+static void amd_iommu_flush_iotlb_range(struct iommu_domain *domain,
+					unsigned long iova, size_t size,
+					struct page *freelist)
+{
+	struct protection_domain *dom = to_pdomain(domain);
+
+	domain_flush_pages(dom, iova, size);
+	domain_flush_complete(dom);
+}
+
 static void amd_iommu_iotlb_sync(struct iommu_domain *domain,...
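
For context, a callback like amd_iommu_flush_iotlb_range() above would presumably be wired up through the driver's iommu_ops table. The fragment below is only a hedged sketch of that wiring; it assumes the series adds a flush_iotlb_range member to struct iommu_ops, which is implied by the subject line rather than shown in this hunk.

/* Hedged sketch: assumed iommu_ops wiring, not a hunk from the patch. */
static const struct iommu_ops amd_iommu_ops = {
	/* ... existing callbacks ... */
	.flush_iotlb_all	= amd_iommu_flush_iotlb_all,
	.flush_iotlb_range	= amd_iommu_flush_iotlb_range,	/* assumed new op */
	.iotlb_sync		= amd_iommu_iotlb_sync,
	/* ... */
};
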
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api. While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here: https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg This issue is most likely in the i915 driver and is most likely caused by the
2019 Dec 21
0
[PATCH 6/8] iommu: allow the dma-iommu api to use bounce buffers
...extern void iommu_put_resv_regions(struct device *dev, struct list_head *list);
 extern int iommu_request_dm_for_dev(struct device *dev);
@@ -530,7 +532,7 @@ static inline void iommu_flush_iotlb_all(struct iommu_domain *domain)
 	domain->ops->flush_iotlb_all(domain);
 }
 
-static inline void flush_iotlb_range(struct iommu_domain *domain,
+static inline void iommu_flush_iotlb_range(struct iommu_domain *domain,
 				     unsigned long iova, size_t size,
 				     struct page *freelist)
 {
@@ -764,6 +766,11 @@ static inline void iommu_set_fault_handler(struct iommu_domain *domain,
 {
 }
 
+static inline int iommu_nee...
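
The hunk above only captures the rename from flush_iotlb_range() to iommu_flush_iotlb_range(); the excerpt cuts off before the body. As a rough reconstruction (an assumption modelled on the iommu_flush_iotlb_all() wrapper visible in the same hunk, not text from the patch), the wrapper would simply dispatch to the per-driver op:

/* Assumed reconstruction of the wrapper body, mirroring iommu_flush_iotlb_all(). */
static inline void iommu_flush_iotlb_range(struct iommu_domain *domain,
					   unsigned long iova, size_t size,
					   struct page *freelist)
{
	if (domain->ops->flush_iotlb_range)
		domain->ops->flush_iotlb_range(domain, iova, size, freelist);
}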