search for: nouveau_dmem_pages_alloc

Displaying 10 results from an estimated 10 matches for "nouveau_dmem_pages_alloc".

2019 Jun 14
3
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before calling nouveau_dmem_chunk_alloc(). Reacquire the lock before continuing to the next page. Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> --- I found this while testing Jason Gunthorpe's hmm tree but this is independent of those...
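The fix follows the common "drop the lock, allocate, retake the lock" pattern. Below is a minimal sketch of that pattern, not the verbatim driver code: the chunk-selection step, the loop body, and the exact signature of nouveau_dmem_chunk_alloc() are filled in from the description above, and only the marked mutex_lock() corresponds to what the patch adds.

    static int
    nouveau_dmem_pages_alloc(struct nouveau_drm *drm, unsigned long npages,
                             unsigned long *pages)
    {
            struct nouveau_dmem_chunk *chunk;
            unsigned long c;
            int ret;

            mutex_lock(&drm->dmem->mutex);
            for (c = 0; c < npages;) {
                    /* which chunk list is scanned is simplified here */
                    chunk = list_first_entry_or_null(&drm->dmem->chunk_free,
                                                     struct nouveau_dmem_chunk,
                                                     list);
                    if (chunk == NULL) {
                            /* chunk allocation can sleep, so drop the mutex */
                            mutex_unlock(&drm->dmem->mutex);
                            ret = nouveau_dmem_chunk_alloc(drm);
                            if (ret) {
                                    /* partial-success handling elided; see the
                                     * "return 0" discussion further down */
                                    return ret;
                            }
                            mutex_lock(&drm->dmem->mutex); /* the missing lock */
                            continue;       /* retry with the new chunk */
                    }
                    /* ... hand out pages from this chunk, advancing c ... */
            }
            mutex_unlock(&drm->dmem->mutex);
            return 0;
    }

Without the marked mutex_lock(), every later loop iteration and the final mutex_unlock() run with a mutex that is no longer held.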
2019 Jun 14
1
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
On 6/13/19 5:49 PM, John Hubbard wrote: > On 6/13/19 5:11 PM, Ralph Campbell wrote: >> In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before >> calling nouveau_dmem_chunk_alloc(). >> Reacquire the lock before continuing to the next page. >> >> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> >> --- >> >> I found this while testing Jas...
2020 Apr 21
2
[PATCH] nouveau/hmm: fix nouveau_dmem_chunk allocations
In nouveau_dmem_init(), a number of struct nouveau_dmem_chunk are allocated and put on the dmem->chunk_empty list. Then in nouveau_dmem_pages_alloc(), a nouveau_dmem_chunk is removed from the list and GPU memory is allocated. However, the nouveau_dmem_chunk is never removed from the chunk_empty list nor placed on the chunk_free or chunk_full lists. This results in only one chunk ever being actually used (2MB) and quickly leads to migration to...
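As an illustration of the bookkeeping this report says is missing, moving a chunk between the three lists would look roughly like the sketch below. It depicts the described problem, not the fix that was actually merged; chunk_has_free_pages() is a hypothetical helper and the list member name is assumed.

    /* Take an unused chunk, back it with GPU memory, then record its state on
     * the appropriate list so later allocations can find it again. */
    chunk = list_first_entry_or_null(&drm->dmem->chunk_empty,
                                     struct nouveau_dmem_chunk, list);
    if (chunk) {
            /* ... allocate the chunk's GPU memory ... */
            list_del(&chunk->list);
            if (chunk_has_free_pages(chunk))        /* hypothetical helper */
                    list_add(&chunk->list, &drm->dmem->chunk_free);
            else
                    list_add(&chunk->list, &drm->dmem->chunk_full);
    }

Without that step the allocator keeps coming back to the same 2MB chunk on chunk_empty, which matches the "only one chunk ever being actually used" symptom described above.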
2019 Jun 14
0
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
On 6/13/19 5:11 PM, Ralph Campbell wrote: > In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before > calling nouveau_dmem_chunk_alloc(). > Reacquire the lock before continuing to the next page. > > Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> > --- > > I found this while testing Jason Gunthorpe's hmm tre...
2019 Jun 14
0
[PATCH v2] drm/nouveau/dmem: missing mutex_lock in error path
In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before calling nouveau_dmem_chunk_alloc() as shown when CONFIG_PROVE_LOCKING is enabled: [ 1294.871933] ===================================== [ 1294.876656] WARNING: bad unlock balance detected! [ 1294.881375] 5.2.0-rc3+ #5 Not tainted [ 1294.885048] -----...
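For readers unfamiliar with lockdep: CONFIG_PROVE_LOCKING makes the kernel track lock acquire/release pairs, and "bad unlock balance" means a lock is being released by a context that does not hold it. Reduced to its essentials (a distillation of the problem, not the driver code):

    mutex_lock(&drm->dmem->mutex);
            /* error path: drop the mutex to allocate a chunk ... */
            mutex_unlock(&drm->dmem->mutex);
            /* ... but never retake it before continuing the loop */
    /* end of function: */
    mutex_unlock(&drm->dmem->mutex);   /* unlocking a mutex we no longer hold
                                        * -> "WARNING: bad unlock balance detected!" */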
2019 Jul 26
0
[PATCH AUTOSEL 5.2 85/85] drm/nouveau/dmem: missing mutex_lock in error path
From: Ralph Campbell <rcampbell at nvidia.com> [ Upstream commit d304654bd79332ace9ac46b9a3d8de60acb15da3 ] In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before calling nouveau_dmem_chunk_alloc() as shown when CONFIG_PROVE_LOCKING is enabled: [ 1294.871933] ===================================== [ 1294.876656] WARNING: bad unlock balance detected! [ 1294.881375] 5.2.0-rc3+ #5 Not tainted [ 1294.885048] -----...
2019 Jun 14
0
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
On Thu, Jun 13, 2019 at 05:11:21PM -0700, Ralph Campbell wrote: > In nouveau_dmem_pages_alloc(), the drm->dmem->mutex is unlocked before > calling nouveau_dmem_chunk_alloc(). > Reacquire the lock before continuing to the next page. > > Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> > --- > > I found this while testing Jason Gunthorpe's hmm tre...
2019 Jun 14
0
[PATCH] drm/nouveau/dmem: missing mutex_lock in error path
...k it's good to have it in there. If you look at git log, you'll see that it's common to include the symptoms, including the backtrace. It helps people see if they are hitting the same problem, for one thing. > > As for the "return 0", If you follow the call chain, > nouveau_dmem_pages_alloc() is only ever called for one page so this > currently "works" but I agree it is a bit of a time bomb. There are a > number of other bugs that I can see that need fixing but I think those > should be separate patches. > Yes of course. I called it out for the benefit of the e...
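A hedged illustration of the "time bomb" being discussed, assuming the signature suggested by the function's name (drm, npages, pages[]): if chunk allocation fails after some pages were already handed out and the function still returns 0, a caller asking for more than one page would treat unfilled pages[] entries as valid.

    /* Hypothetical caller; today the real call site only ever asks for one
     * page, which is why the early "return 0" currently "works". */
    ret = nouveau_dmem_pages_alloc(drm, npages, pages);
    if (ret == 0) {
            /* assumes all npages entries of pages[] were filled in, which the
             * partial-failure path does not guarantee */
    }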
2020 Apr 23
0
[PATCH] nouveau/hmm: fix nouveau_dmem_chunk allocations
On Tue, Apr 21, 2020 at 04:11:07PM -0700, Ralph Campbell wrote: > In nouveau_dmem_init(), a number of struct nouveau_dmem_chunk are allocated > and put on the dmem->chunk_empty list. Then in nouveau_dmem_pages_alloc(), > a nouveau_dmem_chunk is removed from the list and GPU memory is allocated. > However, the nouveau_dmem_chunk is never removed from the chunk_empty > list nor placed on the chunk_free or chunk_full lists. This results > in only one chunk ever being actually used (2MB) and quickly le...
2020 Mar 16
6
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
On 3/16/20 10:52 AM, Christoph Hellwig wrote: > No driver has actually properly wired up and supported this feature. > There is various code related to it in nouveau, but as far as I can tell > it never actually got turned on, and the only changes since the initial > commit are global cleanups. This is not actually true. OpenCL 2.x does support SVM with nouveau and device private