Displaying 18 results from an estimated 18 matches for "iommu_cach".
2023 Jan 06 · 3 · [PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...ivers/vfio/vfio_iommu_type1.c
@@ -1480,7 +1480,8 @@ static int vfio_iommu_map(struct vfio_iommu *iommu, dma_addr_t iova,
list_for_each_entry(d, &iommu->domain_list, next) {
ret = iommu_map(d->domain, iova, (phys_addr_t)pfn << PAGE_SHIFT,
- npage << PAGE_SHIFT, prot | IOMMU_CACHE);
+ npage << PAGE_SHIFT, prot | IOMMU_CACHE,
+ GFP_KERNEL);
if (ret)
goto unwind;
@@ -1777,8 +1778,8 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
size = npage << PAGE_SHIFT;
}
- ret = iommu_map(domain->domain, iova, phys,
- size, dma->...
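To make the shape of this change concrete, here is a minimal sketch of a converted caller, assuming only the prototype change visible in the hunks above (the gfp argument is appended after the prot flags); the helper name is illustrative, not taken from the series:

#include <linux/mm.h>
#include <linux/iommu.h>

/* Illustrative caller: map one page into an iommu_domain with the new
 * iommu_map() prototype, passing the allocation context explicitly so
 * the page-table (IOPTE) allocations inside iommu_map() can use it. */
static int example_map_one_page(struct iommu_domain *domain,
				unsigned long iova, phys_addr_t paddr)
{
	return iommu_map(domain, iova, paddr, PAGE_SIZE,
			 IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE,
			 GFP_KERNEL);
}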
2023 Jan 06 · 8 · [PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first
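As background on the charging the cover letter refers to: GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT, and the __GFP_ACCOUNT bit is what makes an allocation count against the calling task's memory cgroup. A small illustrative sketch (not code from the series, names hypothetical):

#include <linux/slab.h>
#include <linux/gfp.h>

struct example_obj {
	void *payload;
};

/* Hypothetical helper: the first allocation is charged to the caller's
 * memory cgroup, the second is not and so is invisible to memcg limits,
 * which is the gap the series closes for IOPTE allocations. */
static struct example_obj *example_alloc(size_t payload_size)
{
	struct example_obj *obj;

	obj = kzalloc(sizeof(*obj), GFP_KERNEL_ACCOUNT);	/* charged */
	if (!obj)
		return NULL;

	obj->payload = kzalloc(payload_size, GFP_KERNEL);	/* not charged */
	if (!obj->payload) {
		kfree(obj);
		return NULL;
	}
	return obj;
}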
2023 Jan 18 · 10 · [PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first
2023 Jan 18 · 4 · [PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
...@@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
if (!iova)
goto out_free_pages;
- if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
+ if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp))
goto out_free_iova;
if (!(ioprot & IOMMU_CACHE)) {
--
2.39.0
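For reference, sg_alloc_table_from_pages() already takes a gfp_t as its final argument, so the hunk above only threads the function's own gfp parameter through instead of hard-coding GFP_KERNEL. A minimal sketch of that pattern (the helper name is illustrative):

#include <linux/scatterlist.h>
#include <linux/gfp.h>

/* Illustrative helper: build an sg_table over an existing page array,
 * using the caller-supplied gfp flags for the table allocation itself. */
static int example_build_sgt(struct sg_table *sgt, struct page **pages,
			     unsigned int count, size_t size, gfp_t gfp)
{
	return sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp);
}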
2023 Jan 23 · 11 · [PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit
the amount of kernel memory an iommufd file descriptor can pin down. The
various internal data structures already use GFP_KERNEL_ACCOUNT to charge
its own memory.
However, one of the biggest consumers of kernel memory is the IOPTEs
stored under the iommu_domain, and these allocations are not tracked.
This series is the first
2023 Jan 20 · 0 · [PATCH v2 04/10] iommu/dma: Use the gfp parameter in __iommu_dma_alloc_noncontiguous()
...ontiguous(struct device *dev,
> if (!iova)
> goto out_free_pages;
>
> - if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, GFP_KERNEL))
> + if (sg_alloc_table_from_pages(sgt, pages, count, 0, size, gfp))
> goto out_free_iova;
>
> if (!(ioprot & IOMMU_CACHE)) {
2015 Feb 20 · 6 · [PATCH v4 0/6] nouveau/gk20a: RAM device removal & IOMMU support
Changes since v3:
- Use a single dma_attr for all DMA-API allocations in instmem instead of one
per allocation
- Use device.info.ram_size instead of pfb->ram to check whether VRAM is present
outside of nvkm
Changes since v2:
- Cleaner changes for ltc
- Fixed typos in gk20a instmem IOMMU comments
Changes since v1:
- Add missing else condition in ltc
- Remove extra flags that slipped into
2015 Feb 11 · 9 · [PATCH v2 0/6] nouveau/gk20a: RAM device removal & IOMMU support
Changes since v1:
- Add missing else condition in ltc
- Remove extra flags that slipped into nouveau_display.c and nv84_fence.c.
Original cover letter:
Patches 1-3 make the presence of a RAM device optional, and remove GK20A's dummy
RAM driver we were using so far. On chips using shared memory, such a device
can confuse the driver into moving objects where there is no need to, and can
trick
2015 Jan 23 · 8 · [PATCH 0/6] nouveau/gk20a: RAM device removal & IOMMU support
A series I have waited too long to submit, and the recent refactoring made
me pay the price of my perfectionism, so here are the features that are at least
completed
Patches 1-3 make the presence of a RAM device optional, and remove GK20A's dummy
RAM driver we were using so far. On chips using shared memory, such a device
can confuse the driver into moving objects where there is no need to,
2015 Feb 17 · 8 · [PATCH v3 0/6] nouveau/gk20a: RAM device removal & IOMMU support
Thanks Ilia for the v2 review! Here is the v3 of this IOMMU support for GK20A
series.
Changes since v2:
- Cleaner changes for ltc
- Fixed typos in gk20a instmem IOMMU comments
Changes since v1:
- Add missing else condition in ltc
- Remove extra flags that slipped into nouveau_display.c and nv84_fence.c.
Original cover letter:
Patches 1-3 make the presence of a RAM device optional, and remove
2020 Sep 24 · 30 · [RFC PATCH 00/24] Control VQ support in vDPA
Hi All:
This series tries to add support for the control virtqueue in vDPA.
The control virtqueue is used by networking devices to accept various
commands from the driver. It is a must for supporting multiqueue and
other configurations.
When used by the vhost-vDPA bus driver for a VM, the control virtqueue
should be shadowed via the userspace VMM (Qemu) instead of being assigned
directly to the guest. This is
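For context on what travels over the control virtqueue, here is a small sketch based on the virtio-net control command definitions in include/uapi/linux/virtio_net.h; it shows the multiqueue command mentioned above and is illustrative background, not code from this series:

#include <linux/virtio_config.h>
#include <linux/virtio_net.h>

/* A control-VQ request is posted as separate buffers: a driver-readable
 * header (class + command), an optional class-specific payload, and a
 * device-writable one-byte ack (VIRTIO_NET_OK / VIRTIO_NET_ERR). A VMM
 * shadowing the control virtqueue parses exactly these structures. */
static void example_fill_mq_cmd(struct virtio_device *vdev,
				struct virtio_net_ctrl_hdr *hdr,
				struct virtio_net_ctrl_mq *mq,
				u16 queue_pairs)
{
	hdr->class = VIRTIO_NET_CTRL_MQ;
	hdr->cmd = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET;
	mq->virtqueue_pairs = cpu_to_virtio16(vdev, queue_pairs);
}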