Displaying 20 results from an estimated 52 matches for "nvkm_memori".
2015 Nov 11 (2) [PATCH] instmem/gk20a: use DMA API CPU mapping
Commit 69c4938249fb ("drm/nouveau/instmem/gk20a: use direct CPU access")
tried to be smart while using the DMA-API by managing the CPU mappings of
buffers allocated with the DMA-API by itself. In doing so, it relied
on dma_to_phys() which is an architecture-private function not
available everywhere. This broke the build on several architectures.
Since there is no reliable and portable
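A rough sketch of the portable approach this patch moves to: keep the kernel virtual address that dma_alloc_coherent() already returns, instead of reconstructing a CPU mapping from dma_to_phys(). The structure and function names below are illustrative, not the actual nouveau code.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

struct example_instobj {
	void *cpu_addr;		/* CPU mapping handed back by the DMA API */
	dma_addr_t dma_addr;	/* device address for the GPU */
	size_t size;
};

static int example_instobj_alloc(struct device *dev,
				 struct example_instobj *obj, size_t size)
{
	obj->size = size;
	/* dma_alloc_coherent() returns a CPU-usable virtual address, so no
	 * architecture-private dma_to_phys() + manual mapping is needed. */
	obj->cpu_addr = dma_alloc_coherent(dev, size, &obj->dma_addr,
					   GFP_KERNEL);
	return obj->cpu_addr ? 0 : -ENOMEM;
}

static void example_instobj_free(struct device *dev,
				 struct example_instobj *obj)
{
	dma_free_coherent(dev, obj->size, obj->cpu_addr, obj->dma_addr);
}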
2015 Nov 11 (0) [PATCH] instmem/gk20a: use DMA API CPU mapping
On 11/11/2015 06:07 PM, Alexandre Courbot wrote:
> Commit 69c4938249fb ("drm/nouveau/instmem/gk20a: use direct CPU access")
> tried to be smart while using the DMA-API by managing the CPU mappings of
> buffers allocated with the DMA-API by itself. In doing so, it relied
> on dma_to_phys() which is an architecture-private function not
> available everywhere. This broke the
2017 Jan 30 (2) [PATCH] drm/nouveau: gk20a: Turn instmem lock into mutex
From: Thierry Reding <treding at nvidia.com>
The gk20a implementation of instance memory uses vmap()/vunmap() to map
memory regions into the kernel's virtual address space. These functions
may sleep, so protecting them by a spin lock is not safe. This triggers
a warning if the DEBUG_ATOMIC_SLEEP Kconfig option is enabled. Fix this
by using a mutex instead.
Signed-off-by: Thierry Reding
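The shape of such a conversion, sketched with illustrative names: vmap()/vunmap() may sleep, which is only legal under a sleeping lock such as a mutex.

#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/vmalloc.h>

struct example_imem {
	struct mutex lock;	/* was: spinlock_t lock; mutex_init() at creation */
};

static void *example_imem_map(struct example_imem *imem, struct page **pages,
			      unsigned int npages)
{
	void *vaddr;

	mutex_lock(&imem->lock);	/* was: spin_lock_irqsave(...) */
	/* vmap() may sleep, so it must not run with a spinlock held. */
	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
	mutex_unlock(&imem->lock);	/* was: spin_unlock_irqrestore(...) */

	return vaddr;
}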
2015 Oct 26 (2) [PATCH] instmem/gk20a: exclusively acquire instobjs
Although I would not have expected this to happen, we seem to run into
race conditions if instobjs are accessed concurrently. Use a global lock
for safety.
Signed-off-by: Alexandre Courbot <acourbot at nvidia.com>
---
drm/nouveau/nvkm/subdev/instmem/gk20a.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drm/nouveau/nvkm/subdev/instmem/gk20a.c
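A minimal sketch of the idea of serializing all instobj accesses through one subdev-wide lock; the names are illustrative, and the actual patch may use a different lock primitive than the mutex shown here.

#include <linux/mutex.h>

/* One lock shared by every instobj, taken around both the acquire and
 * release paths so they can no longer interleave. */
static DEFINE_MUTEX(example_imem_lock);

struct example_instobj {
	void *cpu_map;
};

static void *example_instobj_acquire(struct example_instobj *obj)
{
	void *map;

	mutex_lock(&example_imem_lock);
	map = obj->cpu_map;	/* create or reuse the CPU mapping here */
	mutex_unlock(&example_imem_lock);

	return map;
}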
2017 Feb 24 (1) [PATCH] drm/nouveau: gk20a: Turn instmem lock into mutex
On 02/24/2017 01:20 AM, Thierry Reding wrote:
> * PGP Signed by an unknown key
>
> On Mon, Jan 30, 2017 at 09:03:07PM +0100, Thierry Reding wrote:
>> From: Thierry Reding <treding at nvidia.com>
>>
>> The gk20a implementation of instance memory uses vmap()/vunmap() to map
>> memory regions into the kernel's virtual address space. These functions
>>
2023 Dec 08 (1) [PATCH] drm/nouveau: Fixup gk20a instobj hierarchy
From: Thierry Reding <treding at nvidia.com>
Commit 12c9b05da918 ("drm/nouveau/imem: support allocations not
preserved across suspend") uses container_of() to cast from struct
nvkm_memory to struct nvkm_instobj, assuming that all instance objects
are derived from struct nvkm_instobj. For the gk20a family that's not
the case and they are derived from struct nvkm_memory instead.
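To illustrate why that container_of() cast goes wrong, here is a simplified sketch of the two hierarchies; only the embedding relationship matters, the field layouts are not the real ones.

#include <linux/kernel.h>	/* container_of() */

struct nvkm_memory { int dummy; };

/* "Normal" instance objects embed nvkm_memory via nvkm_instobj ... */
struct nvkm_instobj {
	struct nvkm_memory memory;
	/* ... bookkeeping the suspend/resume code relies on ... */
};

/* ... but, per the excerpt, the gk20a objects embed nvkm_memory directly. */
struct gk20a_instobj {
	struct nvkm_memory memory;
	/* ... gk20a-specific fields ... */
};

static struct nvkm_instobj *to_instobj(struct nvkm_memory *memory)
{
	/* Only valid when 'memory' really lives inside an nvkm_instobj;
	 * applied to a gk20a_instobj it reinterprets unrelated fields. */
	return container_of(memory, struct nvkm_instobj, memory);
}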
2023 Dec 14 (1) [PATCH] drm/nouveau: Fixup gk20a instobj hierarchy
On 08/12/2023 10:46, Thierry Reding wrote:
> From: Thierry Reding <treding at nvidia.com>
>
> Commit 12c9b05da918 ("drm/nouveau/imem: support allocations not
> preserved across suspend") uses container_of() to cast from struct
> nvkm_memory to struct nvkm_instobj, assuming that all instance objects
> are derived from struct nvkm_instobj. For the gk20a family
2017 Feb 23 (0) [PATCH] drm/nouveau: gk20a: Turn instmem lock into mutex
On Mon, Jan 30, 2017 at 09:03:07PM +0100, Thierry Reding wrote:
> From: Thierry Reding <treding at nvidia.com>
>
> The gk20a implementation of instance memory uses vmap()/vunmap() to map
> memory regions into the kernel's virtual address space. These functions
> may sleep, so protecting them by a spin lock is not safe. This triggers
> a warning if the
2015 Nov 09 (2) [PATCH] instmem/gk20a: fix race conditions
The LRU list used for recycling CPU mappings was handling concurrency
very poorly. For instance, if an instobj was acquired twice before being
released once, it would end up in the LRU list even though there is
still a client accessing it.
This patch fixes this by properly counting how many clients are
currently using a given instobj.
While at it, we also raise errors when inconsistencies are
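A sketch of the use-counting scheme the patch describes: an instobj only goes back onto the LRU list once its last user has released it, and inconsistencies are flagged. Names and locking granularity are illustrative.

#include <linux/bug.h>
#include <linux/list.h>
#include <linux/mutex.h>

struct example_instobj {
	struct list_head lru_node;	/* INIT_LIST_HEAD()'d at creation */
	unsigned int use_count;		/* number of active acquirers */
};

static LIST_HEAD(example_lru);
static DEFINE_MUTEX(example_lru_lock);

static void example_instobj_acquire(struct example_instobj *obj)
{
	mutex_lock(&example_lru_lock);
	if (obj->use_count++ == 0)
		/* first user: make sure it is no longer up for recycling */
		list_del_init(&obj->lru_node);
	mutex_unlock(&example_lru_lock);
}

static void example_instobj_release(struct example_instobj *obj)
{
	mutex_lock(&example_lru_lock);
	WARN_ON(obj->use_count == 0);	/* raise an error on inconsistency */
	if (--obj->use_count == 0)
		/* last user gone: only now may the mapping be recycled */
		list_add_tail(&obj->lru_node, &example_lru);
	mutex_unlock(&example_lru_lock);
}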
2015 Nov 04 (0) [PATCH] instmem/gk20a: exclusively acquire instobjs
On 10/26/2015 03:54 PM, Alexandre Courbot wrote:
> Although I would not have expected this to happen, we seem to run into
> race conditions if instobjs are accessed concurrently. Use a global lock
> for safety.
I wouldn't expect this to be an issue either.
Before merging such a large hammer of a fix, I'd strongly prefer to see
at least a better justification for why this is
2019 Sep 16 (0) [PATCH 1/2] drm/nouveau: tegra: Fix NULL pointer dereference
From: Thierry Reding <treding at nvidia.com>
Fill in BAR2 callbacks for instance memory. There's no BAR2 on Tegra
GPUs, but buffers are all in system memory anyway, so just return the
plain address.
Signed-off-by: Thierry Reding <treding at nvidia.com>
---
.../drm/nouveau/nvkm/subdev/instmem/gk20a.c | 30 +++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git
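A sketch of the idea: with no BAR2 aperture on Tegra and the buffer already in system memory, the callback can simply hand back the buffer's plain address. The callback name and signature below are assumptions, not the real nvkm_memory interface.

#include <linux/types.h>

struct example_instobj {
	u64 addr;	/* system-memory address of the instance object */
};

static u64 example_instobj_bar2_addr(struct example_instobj *obj)
{
	/* No BAR2 window on Tegra: return the plain address directly. */
	return obj->addr;
}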
2015 Nov 11 (0) [PATCH] instmem/gk20a: fix race conditions
On 11/09/2015 05:37 PM, Alexandre Courbot wrote:
> The LRU list used for recycling CPU mappings was handling concurrency
> very poorly. For instance, if an instobj was acquired twice before being
> released once, it would end up in the LRU list even though there is
> still a client accessing it.
>
> This patch fixes this by properly counting how many clients are
> currently
2019 Sep 16 (6) [PATCH 0/2] drm/nouveau: Two more fixes
From: Thierry Reding <treding at nvidia.com>
Hi Ben,
I messed up the ordering of patches in my tree a bit, so these two fixes
got separated from the others. I don't consider these particularly
urgent because the crash that the first one fixes only happens on gp10b
which we don't enable by default yet and the second patch fixes a crash
that only happens on module unload (or driver
2019 Sep 16 (9) [PATCH 0/6] drm/nouveau: Preparatory work for GV11B support
From: Thierry Reding <treding at nvidia.com>
Hi Ben,
these are a couple of patches that are in preparation for adding GV11B
support. The fundamental issue that these are trying to solve is that
the GV11B is the first Tegra incarnation of the GPU where the aperture
really matters. All prior generations would accept any of the aperture values.
For dGPUs we usually allocate memory in VRAM, so the default
2017 Dec 08 (3) [PATCH] drm/nouveau/imem/nv50: fix incorrect use of refcount API
Commit be55287aa5b ("drm/nouveau/imem/nv50: embed nvkm_instobj directly
into nv04_instobj") introduced some new calls to the refcount api to
the nv50 mapping code. In one particular instance, it does the
following:
    if (!refcount_inc_not_zero(&iobj->maps)) {
            ...
            refcount_inc(&iobj->maps);
    }
i.e., it calls refcount_inc() to increment the
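The problem with that fallback is that refcount_inc() deliberately WARNs when the count is currently zero, since incrementing from zero normally indicates a use-after-free; a fresh count is established with refcount_set() instead. A sketch, with illustrative names and with the surrounding locking elided:

#include <linux/refcount.h>

struct example_iobj {
	refcount_t maps;
};

static void example_first_map(struct example_iobj *iobj)
{
	if (!refcount_inc_not_zero(&iobj->maps)) {
		/* ... set up the mapping, under the appropriate lock ... */
		refcount_set(&iobj->maps, 1);	/* not refcount_inc() */
	}
}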
2019 Nov 08 (1) [PATCH] RFC: drm/nouveau: Make BAR1 support optional
From: Thierry Reding <treding at nvidia.com>
The purpose of BAR1 is primarily to make memory accesses coherent.
However, some GPUs do not have BAR1 functionality. For example, the
GV11B found on the Xavier SoC is DMA coherent and therefore doesn't
need BAR1.
Implement a variant of FIFO channels that work without a mapping of
instance memory through BAR1.
XXX ensure memory barriers are
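A sketch of what such a fallback path might look like: when no BAR1 window exists, writes go through the regular CPU mapping with an explicit barrier, which is exactly the open question flagged by the XXX above. All names and the structure here are assumptions.

#include <linux/io.h>

struct example_chan {
	void __iomem *bar1_map;	/* BAR1 window, or NULL if unsupported */
	u32 *sysmem_map;	/* plain CPU mapping of the same memory */
};

static void example_chan_write(struct example_chan *chan, unsigned int idx,
			       u32 value)
{
	if (chan->bar1_map) {
		writel(value, chan->bar1_map + idx * 4);
	} else {
		chan->sysmem_map[idx] = value;
		wmb();	/* make the store visible before notifying the GPU */
	}
}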
2016 Mar 01 (1) [PATCH 1/2] fifo/gf100: take runlist target into account
Bits 28:29 of RUNLIST_BASE specify the memory target of the runlist. Set
it to 0x3 (SYS_MEM_NONCOHERENT) if the runlist object resides in system
memory.
Signed-off-by: Alexandre Courbot <acourbot at nvidia.com>
---
drm/nouveau/nvkm/engine/fifo/gf100.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drm/nouveau/nvkm/engine/fifo/gf100.c
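A sketch of building the register value. The bit positions (28:29) and the 0x3 SYS_MEM_NONCOHERENT target come from the patch description; the vidmem target value and the addr >> 12 encoding of the base address are assumptions for illustration.

#include <linux/types.h>

#define EX_RUNLIST_TARGET_VID_MEM		0x0	/* assumed */
#define EX_RUNLIST_TARGET_SYS_MEM_NONCOHERENT	0x3

static u32 example_runlist_base(u64 addr, bool in_sysmem)
{
	u32 target = in_sysmem ? EX_RUNLIST_TARGET_SYS_MEM_NONCOHERENT
			       : EX_RUNLIST_TARGET_VID_MEM;

	/* bits 0:27 carry the (assumed 4 KiB-aligned) base, bits 28:29 the target */
	return ((u32)(addr >> 12) & 0x0fffffff) | (target << 28);
}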
2023 Jul 14 (2) [PATCH] drm/nouveau/fifo:Fix Nineteen occurrences of the gk104.c error: ERROR: : trailing statements should be on next line
Signed-off-by: ZhiHu <huzhi001 at 208suo.com>
---
.../gpu/drm/nouveau/nvkm/engine/fifo/gk104.c | 40 ++++++++++++++-----
1 file changed, 29 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
index d8a4d773a58c..b99e0a7c96bb 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
+++
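For reference, the checkpatch error named in the subject flags statements that share a line with their controlling if; the fix is purely mechanical, as in this generic before/after (not the actual gk104.c hunks).

/* before: ERROR: trailing statements should be on next line */
static int example_bad(int engine)
{
	if (!engine) return -1;
	return 0;
}

/* after: the statement moves to its own, indented line */
static int example_good(int engine)
{
	if (!engine)
		return -1;
	return 0;
}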
2019 Dec 17 (1) [PATCH] drm/nouveau: Add correct turing page kinds
Turing introduced a new simplified page kind scheme, reducing the number of possible page kinds from 256 to 16. It also is the first NVIDIA GPU in which the highest possible page kind value is not reserved as an "invalid" page kind.
To address this, the invalid page kind is made an explicit property of the MMU HAL, and a new table of page kinds is added to the tu102 MMU HAL.
One
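A sketch of the shape this describes: each MMU generation carries its own page-kind table together with an explicit invalid kind, instead of assuming the highest value is invalid. Struct names and values are illustrative.

#include <linux/types.h>

struct example_mmu_kind_map {
	const u8 *kinds;	/* table of valid page kinds */
	u32 nr_kinds;
	u8 invalid;		/* generation-specific "invalid" kind */
};

/* Turing: 16 possible kinds, and the top value is a real kind, so the
 * invalid marker must be stated explicitly (0x07 here is made up). */
static const u8 example_tu102_kinds[16] = { 0x00 /* ... remaining kinds ... */ };

static const struct example_mmu_kind_map example_tu102_kind_map = {
	.kinds	  = example_tu102_kinds,
	.nr_kinds = 16,
	.invalid  = 0x07,	/* illustrative only */
};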
2017 Dec 18 (1) [PATCH] drm/nouveau/imem/nv50: fix incorrect use of refcount API
Hey Ard,
It seems that Ben already committed a similar patch to his tree (see [0]). I do not know whether he plans to include it in a pull request of fixes for 4.15.
Best regards,
Pierre
[0]: https://github.com/skeggsb/nouveau/commit/9068f1df2394f0e4ab2b2a28cac06b462fe0a0aa
On 2017-12-18 — 09:27, Ard Biesheuvel wrote:
> On 8 December 2017 at 19:30, Ard Biesheuvel <ard.biesheuvel