2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
their own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
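(For context, a minimal sketch of the accounting idiom the cover letter
refers to; the structure and surrounding code are illustrative, not taken
from the series. GFP_KERNEL_ACCOUNT makes the allocator charge the memory
to the current task's memory cgroup:)

    /* Illustrative only: a long-lived, userspace-triggered allocation
     * is charged to the caller's memcg via GFP_KERNEL_ACCOUNT, so a
     * container's memory limit also covers kernel memory pinned on
     * its behalf. */
    struct ioas_entry *entry;

    entry = kzalloc(sizeof(*entry), GFP_KERNEL_ACCOUNT);
    if (!entry)
            return -ENOMEM;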
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
their own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain and these allocations are not tracked.
This series is the first
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
The internal mechanisms support this, but instead of exposing the gfp to
the caller it is wrapped inside iommu_map() and iommu_map_atomic().
Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT.
Signed-off-by: Jason Gunthorpe <jgg at nvidia.com>
---
arch/arm/mm/dma-mapping.c | 11 +++++++----
.../gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 3 ++-
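(A hedged sketch of where this change ends up: callers pass the gfp
themselves rather than having iommu_map()/iommu_map_atomic() choose one.
The prototype below matches what eventually landed upstream, though this
revision of the series may differ in detail:)

    /* gfp is now an explicit argument on the map path: */
    int iommu_map(struct iommu_domain *domain, unsigned long iova,
                  phys_addr_t paddr, size_t size, int prot, gfp_t gfp);

    /* e.g. a mapping whose IOPTE allocations are charged to the
     * caller's memory cgroup: */
    ret = iommu_map(domain, iova, paddr, size,
                    IOMMU_READ | IOMMU_WRITE, GFP_KERNEL_ACCOUNT);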
2019 Jun 13
8
[PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V4:
-Rebase on top of linux-next
-Split the removal of the unnecessary locking in the amd iommu driver into a separate patch
-Refactor the "iommu/dma-iommu: Handle deferred devices" patch and address comments
v3:
-rename dma_limit to dma_mask
-exit
2019 Sep 08
7
[PATCH v6 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api
Convert the AMD iommu driver to the dma-iommu api. Remove the iova
handling and reserve region code from the AMD iommu driver.
Change-log:
V6:
-add more details to the description of patch 001-iommu-amd-Remove-unnecessary-locking-from-AMD-iommu-.patch
-rename handle_deferred_device to iommu_dma_deferred_attach
-fix double tabs in 0003-iommu-dma-iommu-Handle-deferred-devices.patch
V5:
-Rebase on
2020 Aug 18
3
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
Add a flush_iotlb_range to allow flushing of an iova range instead of a
full flush in the dma-iommu path.
Allow iommu_unmap_fast() to return newly freed page table pages and
pass the freelist to queue_iova in the dma-iommu ops path.
This patch is useful for iommu drivers (in this case the intel iommu
driver) which need to wait for the IOTLB to be flushed before newly
free/unmapped page table
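(A hedged reconstruction of the flow the summary describes; the
gather/freelist plumbing below is illustrative and the series' actual
hook names may differ. The point is that page-table pages freed by
iommu_unmap_fast() must not be reused until the IOTLB no longer holds
entries referencing them:)

    struct iommu_iotlb_gather gather;
    size_t unmapped;

    iommu_iotlb_gather_init(&gather);
    unmapped = iommu_unmap_fast(domain, iova, size, &gather);
    /* Flush only [iova, iova + unmapped) rather than the whole
     * domain; the series then hands the freed page-table pages to
     * queue_iova() so they are released only after the flush. */
    iommu_iotlb_sync(domain, &gather);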
2019 Dec 21
13
[PATCH 0/8] Convert the intel iommu driver to the dma-iommu api
This patchset converts the intel iommu driver to the dma-iommu api.
While converting the driver I exposed a bug in the intel i915 driver which causes a large number of artifacts on the screen of my laptop. You can see a picture of it here:
https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
This issue is most likely in the i915 driver and is most likely caused by the
2023 May 15
3
[PATCH v2 0/2] iommu/virtio: Fixes
One fix reported by Akihiko, and another found while going over the
driver.
Jean-Philippe Brucker (2):
iommu/virtio: Detach domain on endpoint release
iommu/virtio: Return size mapped for a detached domain
drivers/iommu/virtio-iommu.c | 57 ++++++++++++++++++++++++++----------
1 file changed, 41 insertions(+), 16 deletions(-)
--
2.40.0
2015 Apr 16
15
[PATCH 0/6] map big page by platform IOMMU
Hi,
Generally the imported buffers which have memory type TTM_PL_TT are
mapped as small pages, probably due to the lack of big page allocation. But a
platform device which also uses memory type TTM_PL_TT, like GK20A, can
*allocate* big pages through the IOMMU hardware inside the SoC. This is an
attempt to map the imported buffers as big pages in the GMMU via the platform
IOMMU. With some preparation work to
2020 Apr 14
35
[PATCH v2 00/33] iommu: Move iommu_group setup to IOMMU core code
Hi,
here is the second version of this patch-set. The first version with
some more introductory text can be found here:
https://lore.kernel.org/lkml/20200407183742.4344-1-joro at 8bytes.org/
Changes v1->v2:
* Rebased to v5.7-rc1
* Re-wrote the arm-smmu changes as suggested by Robin Murphy
* Re-worked the Exynos patches to hopefully not break the
driver anymore
* Fixed a missing
2020 Apr 29
35
[PATCH v3 00/34] iommu: Move iommu_group setup to IOMMU core code
Hi,
here is the third version of this patch-set. Older versions can be found
here:
v1: https://lore.kernel.org/lkml/20200407183742.4344-1-joro at 8bytes.org/
(Has some more introductory text)
v2: https://lore.kernel.org/lkml/20200414131542.25608-1-joro at 8bytes.org/
Changes v2 -> v3:
* Rebased v5.7-rc3
* Added a missing iommu_group_put() as reported by Lu Baolu.
* Added a