search for: iotlb_sync_map

Displaying 12 results from an estimated 20 matches for "iotlb_sync_map".

2023 Sep 04
0
[PATCH 1/2] iommu/virtio: Make use of ops->iotlb_sync_map
Hi Niklas,

Thanks for following up with these patches.

On Fri, Aug 25, 2023 at 05:21:25PM +0200, Niklas Schnelle wrote:
> Pull out the sync operation from viommu_map_pages() by implementing
> ops->iotlb_sync_map. This allows the common IOMMU code to map multiple
> elements of an sg with a single sync (see iommu_map_sg()). Furthermore,
> it is also a requirement for IOMMU_CAP_DEFERRED_FLUSH.
>
> Link: https://lore.kernel.org/lkml/20230726111433.1105665-1-schnelle at linux.ibm.com/
> Signed-o...
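What the patch enables, in rough C: once the driver implements ops->iotlb_sync_map, the core can install all the IOPTEs first and issue a single sync for the whole scatterlist. A simplified sketch of the iommu_map_sg() pattern (the real code also merges physically contiguous runs; error handling is omitted):

/* Sketch: map every sg element, then sync once at the end. */
static ssize_t sketch_map_sg(struct iommu_domain *domain, unsigned long iova,
			     struct scatterlist *sgl, unsigned int nents,
			     int prot, gfp_t gfp)
{
	const struct iommu_domain_ops *ops = domain->ops;
	struct scatterlist *sg;
	size_t mapped = 0;
	unsigned int i;
	int ret;

	for_each_sg(sgl, sg, nents, i) {
		/* __iommu_map() installs IOPTEs without syncing. */
		ret = __iommu_map(domain, iova + mapped, sg_phys(sg),
				  sg->length, prot, gfp);
		if (ret)
			return ret;
		mapped += sg->length;
	}

	/* One sync (for virtio-iommu, one hypervisor round trip)
	 * covers everything mapped above. */
	if (ops->iotlb_sync_map)
		ops->iotlb_sync_map(domain, iova, mapped);

	return mapped;
}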
2023 Sep 04
1
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
...+++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index fb73dec5b953..1b7526494490 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -924,6 +924,15 @@ static int viommu_iotlb_sync_map(struct iommu_domain *domain,
>  	return viommu_sync_req(vdomain->viommu);
>  }
>
> +static void viommu_flush_iotlb_all(struct iommu_domain *domain)
> +{
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	if (!vdomain->nr_endpoints)
> +		return;...
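The hunk is truncated mid-function. Going by viommu_iotlb_sync_map() directly above it, the new callback plausibly finishes with the same viommu_sync_req() call; a reconstruction, not the verbatim patch:

static void viommu_flush_iotlb_all(struct iommu_domain *domain)
{
	struct viommu_domain *vdomain = to_viommu_domain(domain);

	/* Nothing is mapped until an endpoint attaches, so there is
	 * nothing to flush. */
	if (!vdomain->nr_endpoints)
		return;
	/* One sync request flushes every unmap queued so far. */
	viommu_sync_req(vdomain->viommu);
}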
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...ain, unsigned long iova,
+	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
 {
 	const struct iommu_domain_ops *ops = domain->ops;
 	int ret;

+	might_sleep_if(gfpflags_allow_blocking(gfp));
+
 	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
 	if (ret == 0 && ops->iotlb_sync_map)
 		ops->iotlb_sync_map(domain, iova, size);
 	return ret;
 }
-
-int iommu_map(struct iommu_domain *domain, unsigned long iova,
-	      phys_addr_t paddr, size_t size, int prot)
-{
-	might_sleep();
-	return _iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
-}
 EXPORT_SYMBOL_GPL(iommu_ma...
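From the caller's side the change looks like this; a hypothetical caller, with domain, iova, and paddr as placeholders:

/* Illustrative only: callers now pass the gfp_t straight into
 * iommu_map() instead of choosing between sleeping and atomic
 * variants; might_sleep_if() fires only for blocking flags. */
static int map_one_page(struct iommu_domain *domain, unsigned long iova,
			phys_addr_t paddr, bool atomic_ctx)
{
	gfp_t gfp = atomic_ctx ? GFP_ATOMIC : GFP_KERNEL;

	return iommu_map(domain, iova, paddr, SZ_4K,
			 IOMMU_READ | IOMMU_WRITE, gfp);
}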
2023 Sep 06
1
[PATCH 2/2] iommu/virtio: Add ops->flush_iotlb_all and enable deferred flush
...ff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> > > > index fb73dec5b953..1b7526494490 100644
> > > > --- a/drivers/iommu/virtio-iommu.c
> > > > +++ b/drivers/iommu/virtio-iommu.c
> > > > @@ -924,6 +924,15 @@ static int viommu_iotlb_sync_map(struct iommu_domain *domain,
> > > >  	return viommu_sync_req(vdomain->viommu);
> > > >  }
> > > >
> > > > +static void viommu_flush_iotlb_all(struct iommu_domain *domain)
> > > > +{
> > > > +	struct viommu_domain *vdom...
2020 Aug 18
3
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
...map a physically contiguous memory region to an iommu domain
 * @unmap: unmap a physically contiguous memory region from an iommu domain
 * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
+ * @flush_iotlb_range: Flush given iova range of hardware TLBs for this domain
 * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
 * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
 *             queue
@@ -248,8 +249,12 @@ struct iommu_ops {
 	int (*map)(struct iommu_domain *domain, unsigned long iova,
 		   phys_addr_t paddr, size_t size,...
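The snippet cuts off before the new member itself, so its exact signature is not shown. A guess at its shape, with the freelist argument inferred from the patch title ("handle freelists when using iommu_unmap_fast") rather than quoted from the patch:

struct iommu_ops {
	/* ... */
	void (*flush_iotlb_all)(struct iommu_domain *domain);
	/* Hypothetical signature: flush only the range just unmapped and
	 * carry the freelist of page-table pages to free afterwards. */
	void (*flush_iotlb_range)(struct iommu_domain *domain,
				  unsigned long iova, size_t size,
				  struct page *freelist);
	/* ... */
};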
2019 Dec 21
0
[PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
...map a physically contiguous memory region to an iommu domain
 * @unmap: unmap a physically contiguous memory region from an iommu domain
 * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
+ * @flush_iotlb_range: Flush given iova range of hardware TLBs for this domain
 * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
 * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
 *             queue
@@ -262,8 +263,12 @@ struct iommu_ops {
 	int (*map)(struct iommu_domain *domain, unsigned long iova,
 		   phys_addr_t paddr, size_t size,...
2020 Aug 17
1
[PATCH 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
...map a physically contiguous memory region to an iommu domain
 * @unmap: unmap a physically contiguous memory region from an iommu domain
 * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
+ * @flush_iotlb_range: Flush given iova range of hardware TLBs for this domain
 * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
 * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
 *             queue
@@ -248,8 +249,12 @@ struct iommu_ops {
 	int (*map)(struct iommu_domain *domain, unsigned long iova,
 		   phys_addr_t paddr, size_t size,...
2020 Aug 18
0
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
...guous memory region to an iommu domain
>  * @unmap: unmap a physically contiguous memory region from an iommu domain
>  * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
> + * @flush_iotlb_range: Flush given iova range of hardware TLBs for this domain
>  * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
>  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
>  *             queue
> @@ -248,8 +249,12 @@ struct iommu_ops {
>  	int (*map)(struct iommu_domain *domain, unsigned long iova,
>  		   ph...
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
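The fix the series applies is the one KVM already uses: allocate with GFP_KERNEL_ACCOUNT (GFP_KERNEL plus __GFP_ACCOUNT) so the memory cgroup charges the allocation to the caller. A minimal sketch of the idiom; the real series threads a gfp_t down to each driver's page-table allocator rather than hardcoding it:

/* Sketch: an IOPTE page charged to the current task's memcg, so a
 * container cannot pin unbounded page-table memory. */
static void *alloc_iopte_page(void)
{
	return (void *)__get_free_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
}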
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api. While converting the driver, I exposed a bug in the intel i915 driver that causes a huge number of artifacts on the screen of my laptop. You can see a picture of it here: https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg This issue is most likely in the i915 driver and is most likely caused by the