Displaying 20 results from an estimated 463 matches for "iovas".
2020 Aug 18
3
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
Add flush_iotlb_range to allow flushing an IOVA range instead of a
full flush in the dma-iommu path.
Allow iommu_unmap_fast to return newly freed page table pages and
pass the freelist to queue_iova in the dma-iommu ops path.
This patch is useful for iommu drivers (in this case the intel iommu
driver) which need to wait for the IOTLB to be flushed before newly
freed/unmapped page table pages can be freed.
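For illustration only, a minimal sketch of what such a callback could look like; the name and signature below are assumptions drawn from the description above, not the actual patch:

    /* Hypothetical iommu_ops member, per the description: flush only the
     * IOTLB entries covering [iova, iova + size) and take ownership of
     * the freed page-table pages chained on the freelist.
     */
    void (*flush_iotlb_range)(struct iommu_domain *domain,
                              unsigned long iova, size_t size,
                              struct page *freelist);

Compared with a full-domain flush, a ranged flush invalidates only what was unmapped, while the freelist defers reuse of the page-table pages until the flush has completed.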
2020 Aug 18
0
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
On 2020-08-18 07:04, Tom Murphy wrote:
> Add flush_iotlb_range to allow flushing an IOVA range instead of a
> full flush in the dma-iommu path.
>
> Allow iommu_unmap_fast to return newly freed page table pages and
> pass the freelist to queue_iova in the dma-iommu ops path.
>
> This patch is useful for iommu drivers (in this case the intel iommu
> driver) which...
2020 Aug 17
1
[PATCH 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
Add flush_iotlb_range to allow flushing an IOVA range instead of a
full flush in the dma-iommu path.
Allow iommu_unmap_fast to return newly freed page table pages and
pass the freelist to queue_iova in the dma-iommu ops path.
This patch is useful for iommu drivers (in this case the intel iommu
driver) which need to wait for the IOTLB to be flushed before newly
freed/unmapped page table pages can be freed.
2019 Dec 21
0
[PATCH 4/8] iommu: Handle freelists when using deferred flushing in iommu drivers
Allow iommu_unmap_fast to return newly freed page table pages and
pass the freelist to queue_iova in the dma-iommu ops path.
This is useful for iommu drivers (in this case the intel iommu driver)
which need to wait for the IOTLB to be flushed before newly
freed/unmapped page table pages can be freed. This way we can still batch
IOTLB free operations and handle the freelists.
Signed-off-by:
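A hedged sketch of the batching pattern this describes; the function signatures here are illustrative, not the series' actual code:

    static void unmap_and_defer(struct iommu_domain *domain,
                                struct iova_domain *iovad,
                                unsigned long iova, size_t size)
    {
            /* Unmap without flushing; collect freed page-table pages. */
            struct page *freelist = NULL;

            iommu_unmap_fast(domain, iova, size, &freelist);

            /* Queue the range for a deferred IOTLB flush. The freelist
             * is released only after that flush completes, so the pages
             * cannot be reused while stale IOTLB entries still exist.
             */
            queue_iova(iovad, iova_pfn(iovad, iova),
                       size >> iova_shift(iovad),
                       (unsigned long)freelist);
    }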
2019 Dec 21
0
[PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path
Remove all IOVA handling code from the non-dma_ops path in the intel
iommu driver.
There's no need for the non-dma_ops path to keep track of IOVAs. The
whole point of the non-dma_ops path is that it allows the IOVAs to be
handled separately. The IOVA handling code removed in this patch is
pointless.
Signed-off-by: Tom Murphy <murphyt7 at tcd.ie>
---
drivers/iommu/intel-iommu.c | 89 ++++++++++++++-----------------------
1 file changed...
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
The internal mechanisms support this, but instead of exposing the gfp to
the caller it is wrapped inside iommu_map() and iommu_map_atomic().
Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT.
Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
---
arch/arm/mm/dma-mapping.c | 11 +++++++----
.../gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 3 ++-
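The calling convention this implies, sketched before/after; the final signature may differ from what is assumed here (gfp_t appended as the last parameter):

    /* Before: allocation context hidden inside the wrappers */
    iommu_map(domain, iova, paddr, size, prot);
    iommu_map_atomic(domain, iova, paddr, size, prot);

    /* After: the caller passes the gfp explicitly, so variants like
     * GFP_KERNEL_ACCOUNT need no new wrapper.
     */
    iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
    iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
    iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL_ACCOUNT);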
2020 Aug 05
2
[PATCH 1/4] vdpa: introduce config op to get valid iova range
On Wed, Jun 17, 2020 at 11:29:44AM +0800, Jason Wang wrote:
> This patch introduces a config op to get the valid iova range from the vDPA
> device.
>
> Signed-off-by: Jason Wang <jasowang at redhat.com>
> ---
> include/linux/vdpa.h | 14 ++++++++++++++
> 1 file changed, 14 insertions(+)
>
> diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
> index
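Reconstructed from the description (the quoted diff is cut off above), the added interface plausibly looks like this:

    /* The device reports the closed range [first, last] of IOVAs it can
     * address; names reconstructed from the patch description.
     */
    struct vdpa_iova_range {
            u64 first;
            u64 last;
    };

    /* New config op alongside the existing vdpa_config_ops entries */
    struct vdpa_iova_range (*get_iova_range)(struct vdpa_device *vdev);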
2020 Jun 17
12
[PATCH 0/4] vDPA: API for reporting IOVA range
Hi All:
This series introduces an API for reporting the IOVA range. This is a must for
userspace to work correctly:
- for the process that uses vhost-vDPA directly to properly allocate
IOVA
- for VM(qemu), when vIOMMU is not enabled, fail early if GPA is out
of range
- for VM(qemu), when vIOMMU is enabled, determine a valid guest
address width
Please review.
Thanks
Jason Wang (4):
vdpa:
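A sketch of the kind of early check this enables in userspace; the helper below is invented for illustration and is not part of the vhost-vDPA uapi:

    /* Fail early when a guest physical address range falls outside
     * what the vDPA device can address. Written to avoid unsigned
     * overflow in gpa + len.
     */
    static bool gpa_in_iova_range(uint64_t gpa, uint64_t len,
                                  uint64_t first, uint64_t last)
    {
            return len && gpa >= first && gpa <= last &&
                   len - 1 <= last - gpa;
    }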
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
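The charging mechanism itself comes down to the allocation flag; a minimal illustration of the pattern the series extends to the IOPTE (page-table) allocations, where table_size is illustrative only:

    /* GFP_KERNEL_ACCOUNT charges the allocation to the memory cgroup of
     * the current task, so a single iommufd user cannot pin unbounded
     * kernel memory.
     */
    void *table = kzalloc(table_size, GFP_KERNEL_ACCOUNT);
    if (!table)
            return -ENOMEM;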
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2015 Dec 31
4
[PATCH RFC] vhost: basic device IOTLB support
This patch tries to implement a device IOTLB for vhost. This could be
used in co-operation with a userspace (qemu) implementation of an
IOMMU for a secure DMA environment in the guest.
The idea is simple. When vhost meets an IOTLB miss, it will request
the assistance of userspace to do the translation; this is done
through:
- Fill the translation request in a preset userspace address (This...
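A rough outline of the miss-and-resolve flow being described; the structure layout and names below are invented for illustration (the RFC's actual format continues in the truncated text above):

    /* Hypothetical translation request placed at the preset address */
    struct iotlb_request {
            __u64 iova;     /* guest address that missed in the IOTLB */
            __u64 size;
            __u64 uaddr;    /* filled in by userspace: the translation */
            __u8  perm;     /* requested access permissions */
    };

    /*
     * On a miss, vhost fills an iotlb_request, kicks userspace (qemu),
     * waits for the translation to be written back, then inserts the
     * entry into its IOTLB and retries the access.
     */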