search for: nouveau_range_fault

Displaying 20 results from an estimated 62 matches for "nouveau_range_fault".

2019 Jul 23
2
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
On Mon, Jul 22, 2019 at 11:44:24AM +0200, Christoph Hellwig wrote: > Currently nouveau_svm_fault expects nouveau_range_fault to never unlock > mmap_sem, but the latter unlocks it for a random selection of error > codes. Fix this up by always unlocking mmap_sem for non-zero return > values in nouveau_range_fault, and only unlocking it in the caller > for successful returns. > > Signed-off-by: Christoph...
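For readers skimming the thread, here is a minimal sketch of the locking convention that commit message describes, with made-up helper names (do_fault() is a hypothetical stand-in for the HMM fault call; only up_read()/mmap_sem are the real kernel API of that era):

static int fault_range_locked(struct mm_struct *mm, struct hmm_range *range)
{
	int ret = do_fault(range);	/* hypothetical stand-in for the fault call */

	if (ret) {
		/* failure: drop mmap_sem on every non-zero return */
		up_read(&mm->mmap_sem);
		return ret;
	}
	/* success: return with mmap_sem still held; the caller unlocks it */
	return 0;
}

Called with mmap_sem read-locked, the helper either succeeds and leaves the lock to the caller, or fails and guarantees the lock is already dropped, which is exactly the asymmetry the patch enforces in nouveau_range_fault.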
2019 Jul 23
2
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
On Tue, Jul 23, 2019 at 06:30:48PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 23, 2019 at 03:18:28PM +0000, Jason Gunthorpe wrote:
> > Hum..
> >
> > The caller does this:
> >
> > again:
> > 	ret = nouveau_range_fault(&svmm->mirror, &range);
> > 	if (ret == 0) {
> > 		mutex_lock(&svmm->mutex);
> > 		if (!nouveau_range_done(&range)) {
> > 			mutex_unlock(&svmm->mutex);
> > 			goto again;
> >
> > And we can't call nouveau_range_fault(...
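For clarity, this is roughly how that caller loop reads once the fix from this series is applied; a hedged sketch only, not the verbatim nouveau_svm_fault code:

again:
	ret = nouveau_range_fault(&svmm->mirror, &range);
	if (ret)
		return ret;		/* mmap_sem was already dropped by the callee */

	mutex_lock(&svmm->mutex);
	if (!nouveau_range_done(&range)) {
		mutex_unlock(&svmm->mutex);
		goto again;		/* raced with an invalidation, fault again */
	}
	/* success path: program the GPU page tables, then drop svmm->mutex
	 * and finally up_read() the mmap_sem here, in the caller */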
2019 Jul 03
0
[PATCH 3/6] nouveau: remove the block parameter to nouveau_range_fault
.../gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 033a9241a14a..9a9f71e4be29 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -491,8 +491,7 @@ static inline bool nouveau_range_done(struct hmm_range *range)
 }
 
 static int
-nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range,
-		    bool block)
+nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range)
 {
 	long ret;
 
@@ -510,7 +509,7 @@ nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range,
 		return -EAGAIN;
 	}
 
-	ret = hmm_range_fau...
2019 Jul 22
0
[PATCH 3/6] nouveau: remove the block parameter to nouveau_range_fault
.../gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index cde09003c06b..5dd83a46578f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -484,8 +484,7 @@ static inline bool nouveau_range_done(struct hmm_range *range)
 }
 
 static int
-nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range,
-		    bool block)
+nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range)
 {
 	long ret;
 
@@ -503,7 +502,7 @@ nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range,
 		return -EAGAIN;
 	}
 
-	ret = hmm_range_fau...
2019 Aug 06
0
[PATCH 03/15] nouveau: pass struct nouveau_svmm to nouveau_range_fault
...f --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index a74530b5a523..98072fd48cf7 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -485,23 +485,23 @@ nouveau_range_done(struct hmm_range *range)
 }
 
 static int
-nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range)
+nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 {
 	long ret;
 
 	range->default_flags = 0;
 	range->pfn_flags_mask = -1UL;
 
-	ret = hmm_range_register(range, mirror,
+	ret = hmm_range_register(range, &svmm->...
2019 Jul 03
1
[PATCH 4/5] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
On 7/3/19 11:45 AM, Christoph Hellwig wrote: > Currently nouveau_svm_fault expects nouveau_range_fault to never unlock > mmap_sem, but the latter unlocks it for a random selection of error > codes. Fix this up by always unlocking mmap_sem for non-zero return > values in nouveau_range_fault, and only unlocking it in the caller > for successful returns. > > Signed-off-by: Christoph...
2019 Jul 03
0
[PATCH 4/5] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
Currently nouveau_svm_fault expects nouveau_range_fault to never unlock mmap_sem, but the latter unlocks it for a random selection of error codes. Fix this up by always unlocking mmap_sem for non-zero return values in nouveau_range_fault, and only unlocking it in the caller for successful returns. Signed-off-by: Christoph Hellwig <hch at lst.de>...
2019 Jul 22
0
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
Currently nouveau_svm_fault expects nouveau_range_fault to never unlock mmap_sem, but the latter unlocks it for a random selection of error codes. Fix this up by always unlocking mmap_sem for non-zero return values in nouveau_range_fault, and only unlocking it in the caller for successful returns. Signed-off-by: Christoph Hellwig <hch at lst.de>...
2019 Jul 23
0
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
On Tue, Jul 23, 2019 at 03:18:28PM +0000, Jason Gunthorpe wrote:
> Hum..
>
> The caller does this:
>
> again:
> 	ret = nouveau_range_fault(&svmm->mirror, &range);
> 	if (ret == 0) {
> 		mutex_lock(&svmm->mutex);
> 		if (!nouveau_range_done(&range)) {
> 			mutex_unlock(&svmm->mutex);
> 			goto again;
>
> And we can't call nouveau_range_fault() -> hmm_range_fault() without &...
2019 Jul 30
0
[PATCH 03/13] nouveau: pass struct nouveau_svmm to nouveau_range_fault
...f --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index a74530b5a523..b889d5ec4c7e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -485,14 +485,14 @@ nouveau_range_done(struct hmm_range *range)
 }
 
 static int
-nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range)
+nouveau_range_fault(struct nouveau_svmm *svmm, struct hmm_range *range)
 {
 	long ret;
 
 	range->default_flags = 0;
 	range->pfn_flags_mask = -1UL;
 
-	ret = hmm_range_register(range, mirror,
+	ret = hmm_range_register(range, &svmm->...
2019 Jul 22
15
hmm_range_fault related fixes and legacy API removal v2
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and while at it also removes leftover legacy HMM APIs only used by nouveau. The first 4 patches are a bug fix for nouveau, which I suspect should go into this merge window even if the code is marked as staging, just to avoid people copying the breakage. Changes since v1: - don't
2019 Jul 23
0
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
...e wrote: > That reminds me, this code is also leaking hmm_range_unregister() in > the success path, right? No, that is done by hmm_vma_range_done / nouveau_range_done for the success path. > > I think the right way to structure this is to move the goto again and > related into the nouveau_range_fault() so the whole retry algorithm is > sensibly self contained. Then we'd take svmm->mutex inside the helper and let the caller unlock that. Either way it is a bit of a mess, and I'd prefer it if someone who has the hardware would do a grand rewrite of this path eventually. Alternativ...
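A very rough sketch of the restructuring being discussed here, purely illustrative (the helper name is invented; it assumes the svmm->mm and svmm->mutex fields the driver already has): fold the whole fault/validate retry into one helper that returns with svmm->mutex held on success, so the caller only has to unlock.

static int nouveau_range_fault_retry(struct nouveau_svmm *svmm,
				     struct hmm_range *range)
{
	int ret;

	for (;;) {
		/* drops mmap_sem itself on any error, per the earlier fix */
		ret = nouveau_range_fault(&svmm->mirror, range);
		if (ret)
			return ret;

		mutex_lock(&svmm->mutex);
		if (nouveau_range_done(range))
			return 0;	/* success: caller unlocks svmm->mutex */
		mutex_unlock(&svmm->mutex);
	}
}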
2019 Jul 03
10
hmm_range_fault related fixes and legacy API removal v2
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and while at it also removes leftover legacy HMM APIs only used by nouveau. Changes since v1: - don't return the valid state from hmm_range_unregister - additional nouveau cleanups
2019 Jul 23
1
[PATCH 3/6] nouveau: remove the block parameter to nouveau_range_fault
On Mon, Jul 22, 2019 at 11:44:23AM +0200, Christoph Hellwig wrote: > The parameter is always false, so remove it as well as the -EAGAIN > handling that can only happen for the non-blocking case. ? Did the EAGAIN handling get removed in this patch? > Signed-off-by: Christoph Hellwig <hch at lst.de> > drivers/gpu/drm/nouveau/nouveau_svm.c | 7 +++---- > 1 file changed, 3
2019 Jul 30
2
[PATCH 03/13] nouveau: pass struct nouveau_svmm to nouveau_range_fault
On Tue, Jul 30, 2019 at 08:51:53AM +0300, Christoph Hellwig wrote: > This avoids having to abuse the vma field in struct hmm_range to unlock > the mmap_sem. I think the change inside hmm_range_fault got lost on rebase, it is now using: up_read(&range->hmm->mm->mmap_sem); But, yes, let's change it to use svmm->mm and try to keep struct hmm opaque to drivers
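Concretely, the suggestion amounts to something like the following in nouveau_range_fault's error path (illustrative fragment only; svmm->mm is the field Jason refers to above):

	/* on failure, unlock through the driver-owned pointer ... */
	up_read(&svmm->mm->mmap_sem);
	/* ... instead of reaching into HMM internals: */
	/* up_read(&range->hmm->mm->mmap_sem); */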
2019 Jul 30
0
[PATCH 03/13] nouveau: pass struct nouveau_svmm to nouveau_range_fault
On Tue, Jul 30, 2019 at 12:35:59PM +0000, Jason Gunthorpe wrote: > On Tue, Jul 30, 2019 at 08:51:53AM +0300, Christoph Hellwig wrote: > > This avoids having to abuse the vma field in struct hmm_range to unlock > > the mmap_sem. > > I think the change inside hmm_range_fault got lost on rebase, it is > now using: > >
2019 Jul 30
1
[PATCH 03/13] nouveau: pass struct nouveau_svmm to nouveau_range_fault
On Tue, Jul 30, 2019 at 03:10:38PM +0200, Christoph Hellwig wrote: > On Tue, Jul 30, 2019 at 12:35:59PM +0000, Jason Gunthorpe wrote: > > On Tue, Jul 30, 2019 at 08:51:53AM +0300, Christoph Hellwig wrote: > > > This avoids having to abuse the vma field in struct hmm_range to unlock > > > the mmap_sem. > > > > I think the change inside hmm_range_fault got lost
2019 Jul 24
10
hmm_range_fault related fixes and legacy API removal v3
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and while at it also removes leftover legacy HMM APIs only used by nouveau. The first 4 patches are a bug fix for nouveau, which I suspect should go into this merge window even if the code is marked as staging, just to avoid people copying the breakage. Changes since v2: - new patch
2019 Jul 03
8
hmm_range_fault related fixes and legacy API removal
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and while at it also removes leftover legacy HMM APIs only used by nouveau.
2020 Jun 19
0
[PATCH 08/16] nouveau/hmm: fault one page at a time
...uveau_dmem_page_addr(page) |
+			NVIF_VMM_PFNMAP_V0_V |
+			NVIF_VMM_PFNMAP_V0_VRAM;
+	else
+		ioctl_addr[0] = page_to_phys(page) |
+			NVIF_VMM_PFNMAP_V0_V |
+			NVIF_VMM_PFNMAP_V0_HOST;
+	if (range->hmm_pfns[0] & HMM_PFN_WRITE)
+		ioctl_addr[0] |= NVIF_VMM_PFNMAP_V0_W;
 }
 
 static int nouveau_range_fault(struct nouveau_svmm *svmm, struct nouveau_drm *drm,
 			       void *data, u32 size,
-			       unsigned long hmm_pfns[], u64 *ioctl_addr,
+			       u64 *ioctl_addr, unsigned long hmm_flags,
 			       struct svm_notifier *notifier)
 {
 	unsigned long timeout = jiffies + msecs_to_jiffies(HMM_RA...