search for: iommu_read

Displaying 14 results from an estimated 61 matches for "iommu_read".

2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...rivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c @@ -475,7 +475,8 @@ gk20a_instobj_ctor_iommu(struct gk20a_instmem *imem, u32 npages, u32 align, u32 offset = (r->offset + i) << imem->iommu_pgshift; ret = iommu_map(imem->domain, offset, node->dma_addrs[i], - PAGE_SIZE, IOMMU_READ | IOMMU_WRITE); + PAGE_SIZE, IOMMU_READ | IOMMU_WRITE, + GFP_KERNEL); if (ret < 0) { nvkm_error(subdev, "IOMMU mapping failure: %d\n", ret); diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c index 7bd2e65c2a16c5..6ca9f396e55be4 100644 --- a/drivers/g...
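
The new gfp argument lets the caller choose the GFP flags used for the page-table allocations performed inside iommu_map(), rather than having the driver pick them. A minimal sketch of an updated call site, with placeholder domain, iova and paddr values, matching the signature change in the diff above:

    /* Sketch: map one page read/write; page-table memory is now
     * allocated with the caller-supplied GFP_KERNEL. */
    ret = iommu_map(domain, iova, paddr, PAGE_SIZE,
                    IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
    if (ret)
            return ret;
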
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
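
For reference, GFP_KERNEL_ACCOUNT is GFP_KERNEL with __GFP_ACCOUNT set, so the allocation is charged to the current task's memory cgroup. A minimal sketch with a hypothetical structure name:

    /* Hypothetical object; kzalloc() charges it to the caller's
     * memory cgroup because of __GFP_ACCOUNT. */
    struct ioas_entry *entry = kzalloc(sizeof(*entry), GFP_KERNEL_ACCOUNT);
    if (!entry)
            return -ENOMEM;
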
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2017 Oct 09
0
[virtio-dev] [RFC] virtio-iommu version 0.4
...}; + struct virtio_iommu_req_map *req; pr_debug("map %llu 0x%lx -> 0x%llx (%zu)\n", vdomain->id, iova, paddr, size); @@ -564,17 +566,30 @@ static int viommu_map(struct iommu_domain *domain, unsigned long iova, if (!vdomain->attached) return -ENODEV; - if (prot & IOMMU_READ) - req.flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_READ); - - if (prot & IOMMU_WRITE) - req.flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_WRITE); - ret = viommu_tlb_map(vdomain, iova, paddr, size); if (ret) return ret; - ret = viommu_send_req_sync(vdomain->viommu, &req); + req = kzalloc(...
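
This hunk reworks viommu_map() to allocate the request instead of building it on the stack; the protection-flag translation it moves around is the interesting part. Isolated as a sketch, IOMMU-API prot bits become little-endian virtio-iommu request flags:

    u32 flags = 0;

    if (prot & IOMMU_READ)
            flags |= VIRTIO_IOMMU_MAP_F_READ;
    if (prot & IOMMU_WRITE)
            flags |= VIRTIO_IOMMU_MAP_F_WRITE;

    req->flags = cpu_to_le32(flags);
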
2017 Aug 17
0
[PATCH 08/13] drm/nouveau/imem/gk20a: Use synchronized interface of the IOMMU-API
...t;pages[i]); @@ -458,14 +459,15 @@ gk20a_instobj_ctor_iommu(struct gk20a_instmem *imem, u32 npages, u32 align, for (i = 0; i < npages; i++) { u32 offset = (r->offset + i) << imem->iommu_pgshift; - ret = iommu_map(imem->domain, offset, node->dma_addrs[i], - PAGE_SIZE, IOMMU_READ | IOMMU_WRITE); + ret = iommu_map_sync(imem->domain, offset, node->dma_addrs[i], + PAGE_SIZE, IOMMU_READ | IOMMU_WRITE); if (ret < 0) { nvkm_error(subdev, "IOMMU mapping failure: %d\n", ret); while (i-- > 0) { offset -= PAGE_SIZE; - iommu_unmap(im...
2015 Apr 16
2
[PATCH 6/6] mmu: gk20a: implement IOMMU mapping for big pages
...npages; i++, list++) { > + ret = iommu_map(plat->gpu->iommu.domain, > + (node->offset + i) << PAGE_SHIFT, > + *list, > + PAGE_SIZE, > + IOMMU_READ | IOMMU_WRITE); > + > + if (ret < 0) > + return; > + > + nv_trace(mmu, "IOMMU: IOVA=0x%016llx-> IOMMU -> PA=%016llx\n", > + (u64)(node->offset + i) << PAGE_SHIFT, (u64)(*lis...
2015 Apr 16
15
[PATCH 0/6] map big page by platform IOMMU
Hi, Generally the imported buffers which have memory type TTM_PL_TT are mapped as small pages, probably due to the lack of big page allocation. But a platform device which also uses memory type TTM_PL_TT, like GK20A, can *allocate* big pages through the IOMMU hardware inside the SoC. This is an attempt to map the imported buffers as big pages in the GMMU via the platform IOMMU. With some preparation work to
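
The underlying idea is that the SoC IOMMU can remap scattered small-page allocations into a contiguous IOVA range, which the GPU MMU can then cover with big-page PTEs. A hypothetical helper illustrating the alignment requirement (big_page_shift stands in for the hardware's big-page size, which this sketch does not assume):

    /* Hypothetical check: a range is eligible for big-page mapping
     * only if both its IOVA and its size are big-page aligned. */
    static bool can_use_big_pages(u64 iova, u64 size, unsigned int big_page_shift)
    {
            u64 mask = (1ULL << big_page_shift) - 1;

            return !(iova & mask) && !(size & mask);
    }
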
2015 Feb 11
0
[PATCH v2 6/6] instmem/gk20a: add IOMMU support
...full!\n"); + goto free_pages; + } + + /* Map into GPU address space */ + for (i = 0; i < npages; i++) { + struct page *p = node->pages[i]; + u32 offset = (r->offset + i) << priv->iommu_pgshift; + + ret = iommu_map(priv->domain, offset, page_to_phys(p), + PAGE_SIZE, IOMMU_READ | IOMMU_WRITE); + if (ret < 0) { + nv_error(priv, "IOMMU mapping failure: %d\n", ret); + + while (i-- > 0) { + offset -= PAGE_SIZE; + iommu_unmap(priv->domain, offset, PAGE_SIZE); + } + goto release_area; + } + } + + /* Bit 34 tells that an address is to be resolv...
2017 Apr 07
0
[RFC PATCH linux] iommu: Add virtio-iommu driver
...le32(vdomain->id), + .virt_addr = cpu_to_le64(iova), + .phys_addr = cpu_to_le64(paddr), + .size = cpu_to_le64(size), + }; + + pr_debug("map %llu 0x%lx -> 0x%llx (%zu)\n", vdomain->id, iova, + paddr, size); + + if (!vdomain->attached) + return -ENODEV; + + if (prot & IOMMU_READ) + req.flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_READ); + + if (prot & IOMMU_WRITE) + req.flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_WRITE); + + ret = viommu_tlb_map(vdomain, iova, paddr, size); + if (ret) + return ret; + + ret = viommu_send_req_sync(vdomain->viommu, &req); + if (ret) + v...
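
Beyond the flag handling, the hunk shows that every field of the map request is converted to little-endian before the request is sent to the device. A trimmed sketch of the fields visible in the excerpt (the first, truncated field is omitted):

    struct virtio_iommu_req_map req = {
            .virt_addr = cpu_to_le64(iova),
            .phys_addr = cpu_to_le64(paddr),
            .size      = cpu_to_le64(size),
    };
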
2017 Jun 16
1
[virtio-dev] [RFC PATCH linux] iommu: Add virtio-iommu driver
...in->id, iova, > + paddr, size); A query: when I am tracing the above prints, I see the same physical address mapped to two different virtual addresses; do you know why the kernel does this? Thanks -Bharat > + > + if (!vdomain->attached) > + return -ENODEV; > + > + if (prot & IOMMU_READ) > + req.flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_READ); > + > + if (prot & IOMMU_WRITE) > + req.flags |= cpu_to_le32(VIRTIO_IOMMU_MAP_F_WRITE); > + > + ret = viommu_tlb_map(vdomain, iova, paddr, size); > + if (ret) > + return ret; > + > + ret = viommu_send_req_s...
2015 Feb 20
6
[PATCH v4 0/6] nouveau/gk20a: RAM device removal & IOMMU support
Changes since v3: - Use a single dma_attr for all DMA-API allocations in instmem instead of one per allocation - Use device.info.ram_size instead of pfb->ram to check whether VRAM is present outside of nvkm Changes since v2: - Cleaner changes for ltc - Fixed typos in gk20a instmem IOMMU comments Changes since v1: - Add missing else condition in ltc - Remove extra flags that slipped into
2015 Feb 11
9
[PATCH v2 0/6] nouveau/gk20a: RAM device removal & IOMMU support
Changes since v1: - Add missing else condition in ltc - Remove extra flags that slipped into nouveau_display.c and nv84_fence.c. Original cover letter: Patches 1-3 make the presence of a RAM device optional, and remove GK20A's dummy RAM driver we were using so far. On chips using shared memory, such a device can confuse the driver into moving objects where there is no need to, and can trick
2015 Jan 23
8
[PATCH 0/6] nouveau/gk20a: RAM device removal & IOMMU support
A series I have waited too long to submit, and the recent refactoring made me pay the price of my perfectionism, so here are the features that are at least completed. Patches 1-3 make the presence of a RAM device optional, and remove GK20A's dummy RAM driver we were using so far. On chips using shared memory, such a device can confuse the driver into moving objects where there is no need to,