
Displaying 20 results from an estimated 1000 matches similar to: "[PATCH] instmem/gk20a: use DMA API CPU mapping"

2017 Jan 30 (2 replies)
[PATCH] drm/nouveau: gk20a: Turn instmem lock into mutex
From: Thierry Reding <treding at nvidia.com> The gk20a implementation of instance memory uses vmap()/vunmap() to map memory regions into the kernel's virtual address space. These functions may sleep, so protecting them by a spin lock is not safe. This triggers a warning if the DEBUG_ATOMIC_SLEEP Kconfig option is enabled. Fix this by using a mutex instead. Signed-off-by: Thierry Reding
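The pattern the patch describes fits in a few lines. The sketch below uses made-up names rather than the actual nouveau code; the point is simply that vmap() can sleep, so the lock held around it has to be a sleeping lock such as a mutex:

#include <linux/mutex.h>
#include <linux/vmalloc.h>

/* Illustrative container only; the real gk20a instmem structure differs. */
struct demo_instmem {
	struct mutex lock;	/* was a spinlock_t before the change */
};

static void *demo_instobj_acquire(struct demo_instmem *imem,
				  struct page **pages, unsigned int npages)
{
	void *vaddr;

	mutex_lock(&imem->lock);	/* may sleep, which is allowed here */
	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);	/* may sleep too */
	mutex_unlock(&imem->lock);

	return vaddr;
}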
2015 Nov 09 (2 replies)
[PATCH] instmem/gk20a: fix race conditions
The LRU list used for recycling CPU mappings was handling concurrency very poorly. For instance, if an instobj was acquired twice before being released once, it would end up in the LRU list even though there was still a client accessing it. This patch fixes this by properly counting how many clients are currently using a given instobj. While at it, we also raise errors when inconsistencies are
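A hedged sketch of the rule the patch enforces, with hypothetical names, assuming the caller holds the instmem lock and that lru_node was initialized with INIT_LIST_HEAD() at creation: an object only goes onto the recycling LRU once its last user has released it.

#include <linux/list.h>
#include <linux/bug.h>

struct demo_instobj {
	unsigned int use_count;		/* clients currently holding the mapping */
	struct list_head lru_node;	/* linked only while use_count == 0 */
};

static void demo_instobj_acquire(struct demo_instobj *obj)
{
	if (obj->use_count++ == 0)
		list_del_init(&obj->lru_node);	/* in use again: not recyclable */
}

static void demo_instobj_release(struct demo_instobj *obj, struct list_head *lru)
{
	if (WARN_ON(obj->use_count == 0))	/* inconsistency, as the patch reports */
		return;
	if (--obj->use_count == 0)
		list_add_tail(&obj->lru_node, lru);	/* last user gone: recyclable */
}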
2017 Feb 24 (1 reply)
[PATCH] drm/nouveau: gk20a: Turn instmem lock into mutex
On 02/24/2017 01:20 AM, Thierry Reding wrote: > * PGP Signed by an unknown key > > On Mon, Jan 30, 2017 at 09:03:07PM +0100, Thierry Reding wrote: >> From: Thierry Reding <treding at nvidia.com> >> >> The gk20a implementation of instance memory uses vmap()/vunmap() to map >> memory regions into the kernel's virtual address space. These functions >>
2015 Nov 11 (0 replies)
[PATCH] instmem/gk20a: use DMA API CPU mapping
On 11/11/2015 06:07 PM, Alexandre Courbot wrote: > Commit 69c4938249fb ("drm/nouveau/instmem/gk20a: use direct CPU access") > tried to be smart while using the DMA-API by managing the CPU mappings of > buffers allocated with the DMA-API by itself. In doing so, it relied > on dma_to_phys() which is an architecture-private function not > available everywhere. This broke the
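For context, the portable way to do this is to keep the CPU address the DMA API already hands out instead of deriving one through dma_to_phys(). A minimal sketch with illustrative names:

#include <linux/dma-mapping.h>
#include <linux/errno.h>

struct demo_buffer {
	void *cpu_addr;		/* CPU mapping provided by the DMA API */
	dma_addr_t dma_addr;	/* address the GPU uses */
	size_t size;
};

static int demo_buffer_alloc(struct device *dev, struct demo_buffer *buf, size_t size)
{
	buf->size = size;
	/* dma_alloc_coherent() returns the kernel virtual address alongside
	 * the device address, so no architecture-private helper is needed. */
	buf->cpu_addr = dma_alloc_coherent(dev, size, &buf->dma_addr, GFP_KERNEL);
	return buf->cpu_addr ? 0 : -ENOMEM;
}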
2023 Dec 08 (1 reply)
[PATCH] drm/nouveau: Fixup gk20a instobj hierarchy
From: Thierry Reding <treding at nvidia.com> Commit 12c9b05da918 ("drm/nouveau/imem: support allocations not preserved across suspend") uses container_of() to cast from struct nvkm_memory to struct nvkm_instobj, assuming that all instance objects are derived from struct nvkm_instobj. For the gk20a family that's not the case and they are derived from struct nvkm_memory instead.
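The hazard is generic to container_of(); the sketch below uses made-up structures to show why the cast is only valid when the outer structure is really there:

#include <linux/kernel.h>

/* Illustrative only: container_of() is a plain pointer adjustment, so it
 * silently produces a bogus pointer if the object it is applied to was not
 * actually allocated as part of the assumed outer structure. */
struct demo_memory {
	u32 refcount;
};

struct demo_instobj {
	unsigned long flags;
	struct demo_memory memory;	/* embedded base object */
};

static struct demo_instobj *demo_instobj(struct demo_memory *memory)
{
	/* Only valid for objects known to embed demo_memory inside a
	 * demo_instobj; objects derived directly from demo_memory (as the
	 * gk20a ones are, per the patch) must not be cast this way. */
	return container_of(memory, struct demo_instobj, memory);
}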
2015 Oct 26 (2 replies)
[PATCH] instmem/gk20a: exclusively acquire instobjs
Although I would not have expected this to happen, we seem to run into race conditions if instobjs are accessed concurrently. Use a global lock for safety. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/instmem/gk20a.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/drm/nouveau/nvkm/subdev/instmem/gk20a.c
2023 Dec 14 (1 reply)
[PATCH] drm/nouveau: Fixup gk20a instobj hierarchy
On 08/12/2023 10:46, Thierry Reding wrote: > From: Thierry Reding <treding at nvidia.com> > > Commit 12c9b05da918 ("drm/nouveau/imem: support allocations not > preserved across suspend") uses container_of() to cast from struct > nvkm_memory to struct nvkm_instobj, assuming that all instance objects > are derived from struct nvkm_instobj. For the gk20a family
2015 Nov 11 (0 replies)
[PATCH] instmem/gk20a: fix race conditions
On 11/09/2015 05:37 PM, Alexandre Courbot wrote: > The LRU list used for recycling CPU mappings was handling concurrency > very poorly. For instance, if an instobj was acquired twice before being > released once, it would end up into the LRU list even though there is > still a client accessing it. > > This patch fixes this by properly counting how many clients are > currently
2017 Feb 23 (0 replies)
[PATCH] drm/nouveau: gk20a: Turn instmem lock into mutex
On Mon, Jan 30, 2017 at 09:03:07PM +0100, Thierry Reding wrote: > From: Thierry Reding <treding at nvidia.com> > > The gk20a implementation of instance memory uses vmap()/vunmap() to map > memory regions into the kernel's virtual address space. These functions > may sleep, so protecting them by a spin lock is not safe. This triggers > a warning if the
2015 Nov 04 (0 replies)
[PATCH] instmem/gk20a: exclusively acquire instobjs
On 10/26/2015 03:54 PM, Alexandre Courbot wrote: > Although I would not have expected this to happen, we seem to run into > race conditions if instobjs are accessed concurrently. Use a global lock > for safety. I wouldn't expect this to be an issue either. Before merging such a large hammer of a fix, I'd strongly prefer to see at least a better justification for why this is
2016 Jun 10 (0 replies)
[PATCH v4 14/44] drm/nouveau: dma-mapping: Use unsigned long for dma_attrs
Split out subsystem specific changes for easier reviews. This will be squashed with main commit. Signed-off-by: Krzysztof Kozlowski <k.kozlowski at samsung.com> --- drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
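A minimal sketch of the calling convention after this conversion; the DMA_ATTR_WRITE_COMBINE attribute here is purely illustrative and not taken from the patch:

#include <linux/dma-mapping.h>

/* Attributes become a plain bitmask of DMA_ATTR_* flags rather than a
 * struct dma_attrs pointer. */
static void *demo_alloc_wc(struct device *dev, size_t size, dma_addr_t *dma_addr)
{
	unsigned long attrs = DMA_ATTR_WRITE_COMBINE;	/* example attribute */

	return dma_alloc_attrs(dev, size, dma_addr, GFP_KERNEL, attrs);
}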
2016 Jun 30 (0 replies)
[PATCH v5 14/44] drm/nouveau: dma-mapping: Use unsigned long for dma_attrs
Split out subsystem specific changes for easier reviews. This will be squashed with main commit. Signed-off-by: Krzysztof Kozlowski <k.kozlowski at samsung.com> --- drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
2016 Jul 13 (0 replies)
[PATCH v6 15/46] drm/nouveau: dma-mapping: Use unsigned long for dma_attrs
Split out subsystem specific changes for easier reviews. This will be squashed with main commit. Signed-off-by: Krzysztof Kozlowski <k.kozlowski at samsung.com> [for drm] Acked-by: Daniel Vetter <daniel.vetter at ffwll.ch> --- drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git
2017 Dec 08 (3 replies)
[PATCH] drm/nouveau/imem/nv50: fix incorrect use of refcount API
Commit be55287aa5b ("drm/nouveau/imem/nv50: embed nvkm_instobj directly into nv04_instobj") introduced some new calls to the refcount api to the nv50 mapping code. In one particular instance, it does the following: if (!refcount_inc_not_zero(&iobj->maps)) { ... refcount_inc(&iobj->maps); } i.e., it calls refcount_inc() to increment the
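A hedged sketch of the issue with hypothetical names, assuming the caller holds the object's lock: refcount_t deliberately refuses to go from 0 to 1 via refcount_inc(), since that usually indicates a use-after-free, so the first reference has to be published with refcount_set() instead.

#include <linux/refcount.h>

struct demo_iobj {
	refcount_t maps;
};

/* Returns true if the caller must create the mapping, false if an existing
 * mapping was found and a reference to it was taken. */
static bool demo_iobj_get_map(struct demo_iobj *iobj)
{
	if (refcount_inc_not_zero(&iobj->maps))
		return false;	/* mapping exists, reference taken */

	/* No mapping yet: publish the first reference with refcount_set()
	 * rather than refcount_inc(), which warns when starting from zero. */
	refcount_set(&iobj->maps, 1);
	return true;
}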
2017 Aug 17 (0 replies)
[PATCH 08/13] drm/nouveau/imem/gk20a: Use sychronized interface of the IOMMU-API
From: Joerg Roedel <jroedel at suse.de> The map and unmap functions of the IOMMU-API changed their semantics: They no longer guarantee that the hardware TLBs are synchronized with the page-table updates they made. To make conversion easier, new synchronized functions have been introduced which give these guarantees again until the code is converted to use the new TLB-flush interface of
2016 Jun 10 (1 reply)
[PATCH v4 00/44] dma-mapping: Use unsigned long for dma_attrs
Hi, This is the fourth approach to replacing struct dma_attrs with unsigned long. The main patch (1/44) doing the change is split into many subpatches for easier review (2-42). They should be squashed together when applying. *Important:* The patchset is tested on my ARM platforms and *only* build tested on allyesconfigs: ARM, ARM64, i386, x86_64 and powerpc. Please kindly provide reviews and tests
2017 Dec 18 (1 reply)
[PATCH] drm/nouveau/imem/nv50: fix incorrect use of refcount API
Hey Ard, It seems that Ben already committed a similar patch to his tree (see [0]). I do not know whether he is planning to have it part of a pull request of fixes for 4.15. Best regards, Pierre [0]: https://github.com/skeggsb/nouveau/commit/9068f1df2394f0e4ab2b2a28cac06b462fe0a0aa On 2017-12-18 09:27, Ard Biesheuvel wrote: > On 8 December 2017 at 19:30, Ard Biesheuvel <ard.biesheuvel
2016 Mar 03 (0 replies)
[PATCH] instmem/gk20a: add write barrier when releasing DMA object
When using the DMA-API for instmem, we may obtain a write-combined mapping. For such cases, add a write barrier in gk20a_instobj_release_dma() to make sure that all writes have reached memory at this time. Signed-off-by: Alexandre Courbot <acourbot at nvidia.com> --- drm/nouveau/nvkm/subdev/instmem/gk20a.c | 2 ++ 1 file changed, 2 insertions(+) diff --git
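A sketch of the rationale only, with an illustrative function name (the real patch adds the barrier in gk20a_instobj_release_dma()):

#include <asm/barrier.h>

/* Writes done through a write-combined mapping can still sit in CPU write
 * buffers when the mapping is released, so a write barrier is needed before
 * the GPU is allowed to look at the memory. */
static void demo_instobj_release_wc(void)
{
	/* ... last CPU writes through the write-combined mapping ... */

	wmb();	/* make the writes visible before the GPU reads them */
}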
2015 Sep 04 (4 replies)
[PATCH 0/4] tegra: DMA mask and IOMMU bit fixes
These 4 patches fix two issues that existed on Tegra regarding DMA: 1) The bit indicating whether to use an IOMMU or not was hardcoded; make this a platform property and use it in instmem. 2) The DMA mask was not set for platform devices. Fix this by converting more pci_dma* to the DMA API, and use that more generic code to set the DMA mask properly for all platforms. Tested on both x86
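A minimal sketch of what the DMA-mask part amounts to, assuming a generic platform device; the 34-bit width here is purely illustrative and not taken from the series:

#include <linux/dma-mapping.h>

/* Platform devices do not get a usable DMA mask by default, so it has to be
 * set explicitly before the DMA API is used. */
static int demo_set_dma_mask(struct device *dev)
{
	return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(34));	/* width is an assumption */
}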
2019 Sep 16 (0 replies)
[PATCH 1/2] drm/nouveau: tegra: Fix NULL pointer dereference
From: Thierry Reding <treding at nvidia.com> Fill in BAR2 callbacks for instance memory. There's no BAR2 on Tegra GPUs, but buffers are all in system memory anyway, so just return the plain address. Signed-off-by: Thierry Reding <treding at nvidia.com> --- .../drm/nouveau/nvkm/subdev/instmem/gk20a.c | 30 +++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git
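A hypothetical sketch of the idea (the struct and callback names below are made up, not the actual nvkm interfaces): with no BAR2 aperture on Tegra, the "BAR2 address" of an instance-memory object is simply its system-memory address.

#include <linux/types.h>

struct demo_imem_obj {
	dma_addr_t addr;	/* the buffer lives in system memory */
};

static u64 demo_imem_obj_bar2_addr(struct demo_imem_obj *obj)
{
	return obj->addr;	/* no aperture translation needed */
}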