search for: iommu_pgshift

Displaying 20 results from an estimated 21 matches for "iommu_pgshift".

2015 Apr 17
4
[PATCH 2/6] instmem/gk20a: refer to IOMMU physical translation bit
...> index dd0994d9ebfc..69ef5eae3279 100644 > --- a/drm/nouveau/nvkm/subdev/instmem/gk20a.c > +++ b/drm/nouveau/nvkm/subdev/instmem/gk20a.c > @@ -89,6 +89,7 @@ struct gk20a_instmem_priv { > struct nvkm_mm *mm; > struct iommu_domain *domain; > unsigned long iommu_pgshift; > + unsigned long iommu_phys_addr_bit; > > /* Only used by DMA API */ > struct dma_attrs attrs; > @@ -169,8 +170,8 @@ gk20a_instobj_dtor_iommu(struct gk20a_instobj_priv *_node) > r = list_first_entry(&_node->mem->regions, struct nvkm_mm_nod...
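A minimal standalone sketch of the address computation this series enables: once pages sit behind the IOMMU, the GPU-visible address is the region offset scaled by iommu_pgshift, with the platform's physical translation bit set so accesses are routed through the IOMMU. The shift and bit values below are illustrative assumptions, not taken from the patch.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: combine an IOMMU region offset, the IOMMU page shift,
     * and the physical-address translation bit into a GPU-visible address. */
    static uint64_t iommu_gpu_addr(uint64_t region_offset,
                                   unsigned int pgshift,
                                   unsigned int phys_addr_bit)
    {
        return (region_offset << pgshift) | (1ULL << phys_addr_bit);
    }

    int main(void)
    {
        /* e.g. region offset 3, 128 KiB IOMMU pages (shift 17), bit 34 */
        printf("0x%llx\n", (unsigned long long)iommu_gpu_addr(3, 17, 34));
        return 0;
    }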
2015 Feb 11
0
[PATCH v2 6/6] instmem/gk20a: add IOMMU support
...se; + + /* array of base.mem->size pages */ + struct page *pages[]; +}; + struct gk20a_instmem_priv { struct nvkm_instmem base; spinlock_t lock; u64 addr; + + /* Only used if IOMMU is present */ + struct mutex *mm_mutex; + struct nvkm_mm *mm; + struct iommu_domain *domain; + unsigned long iommu_pgshift; }; +/* + * Use PRAMIN to read/write data and avoid coherency issues. + * PRAMIN uses the GPU path and ensures data will always be coherent. + * + * A dynamic mapping based solution would be desirable in the future, but + * the issue remains of how to maintain coherency efficiently. On ARM it is...
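A hedged sketch of the mapping path the snippet sets up: each backing page is mapped into the shared IOMMU domain at an offset carved out of the nvkm_mm allocator and scaled by iommu_pgshift. Names mirror the snippet; locking and error handling are elided.

    #include <linux/iommu.h>
    #include <linux/mm.h>

    /* Sketch only, not the actual patch body. */
    static void map_pages_into_iommu(struct iommu_domain *domain,
                                     unsigned long pgshift,
                                     struct page **pages,
                                     u32 base_offset, u32 npages)
    {
            u32 i;

            /* 2015-era iommu_map(): no gfp argument yet. */
            for (i = 0; i < npages; i++)
                    iommu_map(domain, (base_offset + i) << pgshift,
                              page_to_phys(pages[i]), PAGE_SIZE,
                              IOMMU_READ | IOMMU_WRITE);
    }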
2015 Sep 04
4
[PATCH 0/4] tegra: DMA mask and IOMMU bit fixes
These 4 patches fix two DMA-related issues on Tegra: 1) The bit indicating whether to use an IOMMU was hardcoded; make it a platform property and use it in instmem. 2) The DMA mask was not set for platform devices. Fix this by converting more pci_dma* calls to the DMA API, and use that more generic code to set the DMA mask properly for all platforms. Tested on both x86
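A minimal sketch of fix 2) as described, using the generic DMA API; the call site and the 34-bit width here are assumptions for illustration (the series derives the mask from the platform's IOMMU bit):

    #include <linux/dma-mapping.h>

    static int tegra_gpu_set_dma_mask(struct device *dev)
    {
            /* Set the streaming and coherent DMA masks in one call. */
            return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(34));
    }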
2017 Aug 17
0
[PATCH 08/13] drm/nouveau/imem/gk20a: Use synchronized interface of the IOMMU-API
...rm/nouveau/nvkm/subdev/instmem/gk20a.c @@ -322,8 +322,9 @@ gk20a_instobj_dtor_iommu(struct nvkm_memory *memory) /* Unmap pages from GPU address space and free them */ for (i = 0; i < node->base.mem.size; i++) { - iommu_unmap(imem->domain, - (r->offset + i) << imem->iommu_pgshift, PAGE_SIZE); + iommu_unmap_sync(imem->domain, + (r->offset + i) << imem->iommu_pgshift, + PAGE_SIZE); dma_unmap_page(dev, node->dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL); __free_page(node->pages[i]); @@ -458,14 +459,15 @@ gk20a_instobj_ctor_iommu(str...
2015 Apr 16
15
[PATCH 0/6] map big page by platform IOMMU
Hi, Generally the imported buffers which have memory type TTM_PL_TT are mapped as small pages, probably due to lack of big page allocation. But the platform device which also uses memory type TTM_PL_TT, like GK20A, can *allocate* big pages through the IOMMU hardware inside the SoC. This is an attempt to map the imported buffers as big pages in the GMMU via the platform IOMMU. With some preparation work to
2015 Feb 20
6
[PATCH v4 0/6] nouveau/gk20a: RAM device removal & IOMMU support
Changes since v3: - Use a single dma_attr for all DMA-API allocations in instmem instead of one per allocation - Use device.info.ram_size instead of pfb->ram to check whether VRAM is present outside of nvkm Changes since v2: - Cleaner changes for ltc - Fixed typos in gk20a instmem IOMMU comments Changes since v1: - Add missing else condition in ltc - Remove extra flags that slipped into
2015 Feb 11
9
[PATCH v2 0/6] nouveau/gk20a: RAM device removal & IOMMU support
Changes since v1: - Add missing else condition in ltc - Remove extra flags that slipped into nouveau_display.c and nv84_fence.c. Original cover letter: Patches 1-3 make the presence of a RAM device optional, and remove GK20A's dummy RAM driver we were using so far. On chips using shared memory, such a device can confuse the driver into moving objects where there is no need to, and can trick
2015 Jan 23
8
[PATCH 0/6] nouveau/gk20a: RAM device removal & IOMMU support
A series I have waited too long to submit, and the recent refactoring made me pay the price of my perfectionism, so here are the features that are at least complete. Patches 1-3 make the presence of a RAM device optional, and remove GK20A's dummy RAM driver we were using so far. On chips using shared memory, such a device can confuse the driver into moving objects where there is no need to,
2015 Feb 17
8
[PATCH v3 0/6] nouveau/gk20a: RAM device removal & IOMMU support
Thanks Ilia for the v2 review! Here is the v3 of this IOMMU support for GK20A series. Changes since v2: - Cleaner changes for ltc - Fixed typos in gk20a instmem IOMMU comments Changes since v1: - Add missing else condition in ltc - Remove extra flags that slipped into nouveau_display.c and nv84_fence.c. Original cover letter: Patches 1-3 make the presence of a RAM device optional, and remove
2019 Sep 16
15
[PATCH 00/11] drm/nouveau: Enable GP10B by default
From: Thierry Reding <treding at nvidia.com> Hi, the GPU on Jetson TX2 (GP10B) does not work properly on all devices. Why exactly is not clear, but there are slight differences between the SKUs that were tested. It turns out that the biggest issue is that on some devices (e.g. the one that I have), pulsing the GPU reset twice as is done in the current code (once as part of the power-ungate
2015 Nov 11
2
[PATCH] instmem/gk20a: use DMA API CPU mapping
...trs); + if (!node->base.vaddr) { nvkm_error(subdev, "cannot allocate DMA memory\n"); return -ENOMEM; } @@ -611,18 +591,14 @@ gk20a_instmem_new(struct nvkm_device *device, int index, imem->mm = &tdev->iommu.mm; imem->domain = tdev->iommu.domain; imem->iommu_pgshift = tdev->iommu.pgshift; - imem->cpu_map = gk20a_instobj_cpu_map_iommu; imem->iommu_bit = tdev->func->iommu_bit; nvkm_info(&imem->base.subdev, "using IOMMU\n"); } else { init_dma_attrs(&imem->attrs); - /* We will access the memory through our own...
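For context, a hedged fragment of what "use DMA API CPU mapping" means here: dma_alloc_attrs() already hands back a kernel virtual address, so the driver keeps that instead of building its own CPU mapping. This assumes the 2015-era signature taking struct dma_attrs (modern kernels pass an unsigned long attrs value instead):

    /* Sketch mirroring the snippet above; node/imem fields as in gk20a.c. */
    node->base.vaddr = dma_alloc_attrs(dev, npages << PAGE_SHIFT,
                                       &node->handle, GFP_KERNEL,
                                       &imem->attrs);
    if (!node->base.vaddr) {
            nvkm_error(subdev, "cannot allocate DMA memory\n");
            return -ENOMEM;
    }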
2015 Nov 11
0
[PATCH] instmem/gk20a: use DMA API CPU mapping
...> nvkm_error(subdev, "cannot allocate DMA memory\n"); > return -ENOMEM; > } > @@ -611,18 +591,14 @@ gk20a_instmem_new(struct nvkm_device *device, int index, > imem->mm = &tdev->iommu.mm; > imem->domain = tdev->iommu.domain; > imem->iommu_pgshift = tdev->iommu.pgshift; > - imem->cpu_map = gk20a_instobj_cpu_map_iommu; > imem->iommu_bit = tdev->func->iommu_bit; > > nvkm_info(&imem->base.subdev, "using IOMMU\n"); > } else { > init_dma_attrs(&imem->attrs); > - /* We will...
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
...ecf5a8fbc2a..a4ac94a2ab57fc 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c @@ -475,7 +475,8 @@ gk20a_instobj_ctor_iommu(struct gk20a_instmem *imem, u32 npages, u32 align, u32 offset = (r->offset + i) << imem->iommu_pgshift; ret = iommu_map(imem->domain, offset, node->dma_addrs[i], - PAGE_SIZE, IOMMU_READ | IOMMU_WRITE); + PAGE_SIZE, IOMMU_READ | IOMMU_WRITE, + GFP_KERNEL); if (ret < 0) { nvkm_error(subdev, "IOMMU mapping failure: %d\n", ret); diff --git a/drivers/gpu/drm/tegr...
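Rendered cleanly, the change at the nouveau call site is just the added gfp argument; the updated signature per this series is:

    /* int iommu_map(struct iommu_domain *domain, unsigned long iova,
     *               phys_addr_t paddr, size_t size, int prot, gfp_t gfp); */
    ret = iommu_map(imem->domain, offset, node->dma_addrs[i],
                    PAGE_SIZE, IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);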
2023 Jan 06
8
[PATCH 0/8] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
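For reference, charging an allocation to the caller's memory cgroup is a one-flag change; the series extends the same accounting to the IOPTE (page-table) allocations made under iommu_map(). An illustrative fragment, not taken from the series:

    /* __GFP_ACCOUNT (folded into GFP_KERNEL_ACCOUNT) charges the pages
     * to the allocating task's memory cgroup. */
    table = kzalloc(table_size, GFP_KERNEL_ACCOUNT);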
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first