Displaying 20 results from an estimated 900 matches similar to: "[PATCH v4 0/2] drm/nouveau/dmem: Fix Vulnerability and Device Channels configuration"

2024 Oct 08
2
[PATCH v3 0/2] drm/nouveau/dmem: Fix Vulnerability and Device Channels configuration
From: Yonatan Maman <Ymaman at Nvidia.com> This patch series addresses two critical issues in the Nouveau driver related to device channels, error handling, and sensitive data leaks. - Vulnerability in migrate_to_ram: The migrate_to_ram function might return a dirty HIGH_USER page when a copy push command (FW channel) fails, potentially exposing sensitive data and posing a security
2024 Sep 23
1
[PATCH 2/2] nouveau/dmem: Fix memory leak in `migrate_to_ram` upon copy error
A copy push command might fail, causing `migrate_to_ram` to return a dirty HIGH_USER page to the user. This exposes a security vulnerability in the nouveau driver. To prevent leaking stale data from `migrate_to_ram` upon a copy error, allocate a zero page for the destination page. Signed-off-by: Yonatan Maman <Ymaman at Nvidia.com> Signed-off-by: Gal Shalom <GalShalom at Nvidia.com> ---
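The fix described in this excerpt boils down to one rule for the error path: if the copy push command fails, userspace must receive zeroed memory rather than whatever the freshly allocated HIGH_USER page happened to contain. Below is a minimal, illustrative C sketch of that pattern; dmem_copy_or_zero() and try_copy_push() are placeholder names (the real code lives around nouveau_dmem_copy_one() in nouveau_dmem.c), while clear_highpage() is the standard helper from <linux/highmem.h>.

    /* Sketch only, not the literal patch: zero the destination on copy failure. */
    static int dmem_copy_or_zero(struct page *dpage, struct page *spage)
    {
            /* try_copy_push() stands in for issuing the copy push command. */
            if (try_copy_push(dpage, spage)) {
                    /*
                     * The copy engine failed; never hand the uninitialized
                     * HIGH_USER page back to userspace, zero it instead.
                     */
                    clear_highpage(dpage);
            }
            return 0;       /* migration proceeds with safe page contents */
    }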
2024 Sep 23
2
[PATCH 0/2] *** BUG Fix for Nouveau Memory***
This patch series addresses two critical issues in the Nouveau driver related to device channels, error handling, and memory leaks. - Memory leak in migrate_to_ram: the migrate_to_ram function was identified as leaking memory when a copy push command fails. This results in the function returning a dirty HIGH_USER page, which can expose sensitive information and pose a security risk. To mitigate
2024 Oct 15
5
[PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman at Nvidia.com> This patch series aims to enable Peer-to-Peer (P2P) DMA access in GPU-centric applications that utilize RDMA and private device pages. This enhancement is crucial for minimizing data transfer overhead by allowing the GPU to directly expose device private page data to devices such as NICs, eliminating the need to traverse system RAM, which is the
2024 Oct 08
1
[PATCH v3 1/2] nouveau/dmem: Fix privileged error in copy engine channel
From: Yonatan Maman <Ymaman at Nvidia.com> When `nouveau_dmem_copy_one` is called, the following error occurs: [272146.675156] nouveau 0000:06:00.0: fifo: PBDMA9: 00000004 [HCE_PRIV] ch 1 00000300 00003386 This indicates that a copy push command triggered a Host Copy Engine Privileged error on channel 1 (Copy Engine channel). To address this issue, modify the Copy Engine channel to allow
2024 Sep 23
1
[PATCH 1/2] nouveau/dmem: Fix privileged error in copy engine channel
When `nouveau_dmem_copy_one` is called, the following error occurs: [272146.675156] nouveau 0000:06:00.0: fifo: PBDMA9: 00000004 [HCE_PRIV] ch 1 00000300 00003386 This indicates that a copy push command triggered a Host Copy Engine Privileged error on channel 1 (Copy Engine channel). To address this issue, modify the Copy Engine channel to allow privileged push commands. Fixes: 6de125383a5cc
2024 Oct 16
2
[PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages
On 16/10/2024 7:23, Christoph Hellwig wrote: > On Tue, Oct 15, 2024 at 06:23:44PM +0300, Yonatan Maman wrote: >> From: Yonatan Maman <Ymaman at Nvidia.com> >> >> This patch series aims to enable Peer-to-Peer (P2P) DMA access in >> GPU-centric applications that utilize RDMA and private device pages. This >> enhancement is crucial for minimizing data transfer
2024 Oct 18
1
[PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages
On 2024/10/16 17:16, Yonatan Maman wrote: > > > On 16/10/2024 7:23, Christoph Hellwig wrote: >> On Tue, Oct 15, 2024 at 06:23:44PM +0300, Yonatan Maman wrote: >>> From: Yonatan Maman <Ymaman at Nvidia.com> >>> >>> This patch series aims to enable Peer-to-Peer (P2P) DMA access in >>> GPU-centric applications that utilize RDMA and private device
2019 Feb 22
1
[PATCH] drm/nouveau/dmem: Fix a NULL vs IS_ERR() check
The hmm_devmem_add() function doesn't return NULL, it returns error pointers. Fixes: 5be73b690875 ("drm/nouveau/dmem: device memory helpers for SVM") Signed-off-by: Dan Carpenter <dan.carpenter at oracle.com> --- drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c
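For readers unfamiliar with the idiom: hmm_devmem_add() (an API since removed from the kernel) signalled failure with ERR_PTR() values, never NULL, so the caller must test with IS_ERR() and extract the errno with PTR_ERR(), both from <linux/err.h>. The fragment below is an approximation of the corrected check, with placeholder variable and ops names rather than the exact nouveau_dmem.c lines.

    devmem = hmm_devmem_add(&dmem_devmem_ops, device, size);
    if (IS_ERR(devmem))                 /* was wrongly: if (devmem == NULL) */
            return PTR_ERR(devmem);     /* propagate the encoded error code */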
2023 Aug 05
1
[PATCH drm-misc-next] nouveau/dmem: fix copy-paste error in nouveau_dmem_migrate_chunk()
Fix call to nouveau_fence_emit() with wrong channel parameter. Fixes: 7f2a0b50b2b2 ("drm/nouveau: fence: separate fence alloc and emit") Signed-off-by: Danilo Krummrich <dakr at redhat.com> --- drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
2019 Feb 21
1
[PATCH -next] drm/nouveau/dmem: remove set but not used variable 'drm'
Fixes gcc '-Wunused-but-set-variable' warning: drivers/gpu/drm/nouveau/nouveau_dmem.c: In function 'nouveau_dmem_free': drivers/gpu/drm/nouveau/nouveau_dmem.c:103:22: warning: variable 'drm' set but not used [-Wunused-but-set-variable] struct nouveau_drm *drm; ^ Signed-off-by: YueHaibing <yuehaibing at huawei.com> ---
2019 Jun 14
3
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before calling nouveau_dmem_chunk_alloc(). Reacquire the lock before continuing to the next page. Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> --- I found this while testing Jason Gunthorpe's hmm tree but this is independent of those changes. I guess it could go through David Airlie's tree for nouveau
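The bug is the classic drop-and-reacquire pattern: the mutex is released so the (possibly sleeping) chunk allocation can run, but the loop then continues as if the lock were still held. A simplified sketch of the corrected loop, using the identifiers from the description above rather than the exact source:

    mutex_lock(&drm->dmem->mutex);
    while (pages_still_needed) {                        /* placeholder condition */
            /* chunk allocation may sleep, so drop the lock around it */
            mutex_unlock(&drm->dmem->mutex);
            ret = nouveau_dmem_chunk_alloc(drm);
            mutex_lock(&drm->dmem->mutex);              /* the re-lock the patch adds */
            if (ret)
                    break;
    }
    mutex_unlock(&drm->dmem->mutex);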
2024 Mar 06
1
[PATCH v3] nouveau/dmem: handle kcalloc() allocation failure
The kcalloc() in nouveau_dmem_evict_chunk() will return NULL if physical memory has run out. As a result, dereferencing src_pfns, dst_pfns or dma_addrs would be a NULL pointer dereference. Moreover, the GPU is going away; if the kcalloc() fails, we cannot evict all pages mapping a chunk. So this patch adds the __GFP_NOFAIL flag to kcalloc(). Finally, as there is no need to
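The change described amounts to tagging the three allocations with __GFP_NOFAIL so the eviction path, which cannot back out, never sees a NULL array. Roughly as below (the array names follow the description above and may differ slightly from the source):

    src_pfns  = kcalloc(npages, sizeof(*src_pfns),  GFP_KERNEL | __GFP_NOFAIL);
    dst_pfns  = kcalloc(npages, sizeof(*dst_pfns),  GFP_KERNEL | __GFP_NOFAIL);
    dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL);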
2019 Jun 17
34
dev_pagemap related cleanups v2
Hi Dan, Jérôme and Jason, below is a series that cleans up the dev_pagemap interface so that it is more easily usable; this removes the need to wrap it in hmm and thus allows a lot of code to be killed. Note: this series is on top of the rdma/hmm branch plus the dev_pagemap release fix series from Dan that went into 5.2-rc5. Git tree: git://git.infradead.org/users/hch/misc.git
2020 Mar 16
14
ensure device private pages have an owner v2
When acting on device private mappings a driver needs to know if the device (or other entity in case of kvmppc) actually owns this private mapping. This series adds an owner field and converts the migrate_vma code over to check it. I looked into doing the same for hmm_range_fault, but as far as I can tell that code has never been wired up to actually work for device private memory, so instead of
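In current kernels the owner check this series introduced shows up as the pgmap_owner field (plus a select flag) when a driver sets up a migration. A minimal sketch, assuming today's field and flag names and with placeholder variables for everything else:

    struct migrate_vma args = {
            .vma         = vma,
            .start       = start,
            .end         = end,
            .src         = src_pfns,
            .dst         = dst_pfns,
            /* only pick up device-private pages owned by this driver */
            .pgmap_owner = drm->dev,
            .flags       = MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
    };

    if (migrate_vma_setup(&args))
            return -EFAULT;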
2019 Mar 21
3
Nouveau dmem NULL Pointer deref (SVM)
Hi, just for your information and maybe for some help: with 5.1-rc1 and SVM enabled I see the following backtrace [1] when the nouveau card (reverse prime) goes to sleep; for now I have papered over it with [2], which leaves me with userspace hangs. Any pointers where to look for the actual culprit? PS: Card is: nouveau 0000:01:00.0: NVIDIA GP106 (136000a1) Greetings, Tobias [1]: BUG: unable
2019 Jun 26
41
dev_pagemap related cleanups v3
Hi Dan, Jérôme and Jason, below is a series that cleans up the dev_pagemap interface so that it is more easily usable; this removes the need to wrap it in hmm and thus allows a lot of code to be killed. Note: this series is on top of Linux 5.2-rc5 and has some minor conflicts with the hmm tree that are easy to resolve. Diffstat summary: 32 files changed, 361 insertions(+), 1012 deletions(-) Git
2020 Sep 14
5
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
ZONE_DEVICE struct pages have an extra reference count that complicates the code for put_page() and several places in the kernel that need to check the reference count to see that a page is not being used (gup, compaction, migration, etc.). Clean up the code so the reference count doesn't need to be treated specially for ZONE_DEVICE. Signed-off-by: Ralph Campbell <rcampbell at
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted previously [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts but there may be some merge conflicts depending on
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted previously [1]. This version now supports splitting a THP midway in the migration process which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan