search for: pgcount

Displaying 10 results from an estimated 10 matches for "pgcount".

2023 Apr 14 | 2 | [PATCH] iommu/virtio: Detach domain on endpoint release
..._ids; i++) {
+		req.endpoint = cpu_to_le32(fwspec->ids[i]);
+		WARN_ON(viommu_send_req_sync(vdev->viommu, &req, sizeof(req)));
+	}
+	vdev->vdomain = NULL;
+}
+
 static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
 			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
 			    int prot, gfp_t gfp, size_t *mapped)
@@ -990,6 +1012,7 @@ static void viommu_release_device(struct device *dev)
 {
	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);

+	viommu_detach_dev(vdev);
	iommu_put_resv_regions(dev, &vdev->resv_regions);
	kfree(vdev);
 }
--
2.40.0
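
The excerpt only shows the tail of the helper this patch adds. For readability, here is a sketch of what the whole of viommu_detach_dev() plausibly looks like, reconstructed around the lines visible above; the request layout, the dev_iommu_fwspec_get() lookup and the domain-ID field are assumptions filled in for illustration, not the exact code from the patch.

static void viommu_detach_dev(struct viommu_endpoint *vdev)
{
	int i;
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(vdev->dev);	/* vdev->dev assumed */
	struct virtio_iommu_req_detach req = {
		.head.type = VIRTIO_IOMMU_T_DETACH,	/* request type assumed */
	};

	/* Nothing to undo if the endpoint was never attached to a domain. */
	if (!vdev->vdomain)
		return;

	req.domain = cpu_to_le32(vdev->vdomain->id);	/* domain-ID field assumed */

	/* Ask the device to detach every endpoint ID behind this device;
	 * this is the loop visible in the excerpt above. */
	for (i = 0; i < fwspec->num_ids; i++) {
		req.endpoint = cpu_to_le32(fwspec->ids[i]);
		WARN_ON(viommu_send_req_sync(vdev->viommu, &req, sizeof(req)));
	}

	vdev->vdomain = NULL;
}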

2023 May 10 | 1 | [PATCH] iommu/virtio: Detach domain on endpoint release
...of(req)));
> +	}

just a late question: don't you need to decrement vdomain's nr_endpoints?

Thanks
Eric

> +	vdev->vdomain = NULL;
> +}
> +
> static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
> 			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
> 			    int prot, gfp_t gfp, size_t *mapped)
> @@ -990,6 +1012,7 @@ static void viommu_release_device(struct device *dev)
> {
> 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
>
> +	viommu_detach_dev(vdev);
> 	iommu_put_resv_regions(dev, &vdev->resv_regi...
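
If the answer is yes, the follow-up would presumably be a one-line addition on top of the hunk quoted above, along these lines (illustrative only, written against the excerpt rather than taken from an actual reply):

 	for (i = 0; i < fwspec->num_ids; i++) {
 		req.endpoint = cpu_to_le32(fwspec->ids[i]);
 		WARN_ON(viommu_send_req_sync(vdev->viommu, &req, sizeof(req)));
 	}
+	vdev->vdomain->nr_endpoints--;	/* the decrement Eric is asking about */
 	vdev->vdomain = NULL;
 }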

2023 May 15 | 3 | [PATCH v2 0/2] iommu/virtio: Fixes
One fix reported by Akihiko, and another found while going over the driver.

Jean-Philippe Brucker (2):
  iommu/virtio: Detach domain on endpoint release
  iommu/virtio: Return size mapped for a detached domain

 drivers/iommu/virtio-iommu.c | 57 ++++++++++++++++++++++++++----------
 1 file changed, 41 insertions(+), 16 deletions(-)

--
2.40.0
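
The second patch title reads like a fix to viommu_map_pages() for the case where the domain has no endpoint attached yet. A rough guess at its shape, based only on the function signature quoted in the excerpts above and on the patch title; the to_viommu_domain() helper, the nr_endpoints check and the elided steps are assumptions, not the actual hunk:

static int viommu_map_pages(struct iommu_domain *domain, unsigned long iova,
			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
			    int prot, gfp_t gfp, size_t *mapped)
{
	size_t size = pgsize * pgcount;
	struct viommu_domain *vdomain = to_viommu_domain(domain);	/* helper assumed */

	/* ... record the mapping in the domain's internal tree so that it can
	 * be replayed when an endpoint attaches later ... */

	if (!vdomain->nr_endpoints) {
		/* No endpoint attached, so there is no MAP request to send,
		 * but the caller still needs to see how much was mapped. */
		*mapped = size;
		return 0;
	}

	/* ... otherwise send VIRTIO_IOMMU_T_MAP to the device ... */

	*mapped = size;
	return 0;
}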

2023 Jan 06 | 8 | [PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
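
As a side note on the mechanism the cover letter relies on (not a hunk from the series): an allocation made with GFP_KERNEL_ACCOUNT, i.e. GFP_KERNEL | __GFP_ACCOUNT, is charged to the memory cgroup of the task that triggered it, so page-table memory pinned through an iommufd file descriptor stays bounded by that cgroup's limit. The helper name below is made up for the example.

#include <linux/gfp.h>
#include <linux/slab.h>

/* Hypothetical helper: allocate a zeroed page-table chunk and charge it to
 * the calling task's memory cgroup instead of leaving it untracked. */
static void *iopte_table_alloc(size_t size)
{
	/* __GFP_ACCOUNT is what makes the memcg charge happen; plain
	 * GFP_KERNEL would leave the allocation invisible to cgroup limits. */
	return kzalloc(size, GFP_KERNEL_ACCOUNT);
}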

2023 Jan 18 | 10 | [PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first

2023 Jan 23 | 11 | [PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first