search for: mmap_sem

Displaying 20 results from an estimated 247 matches for "mmap_sem".

2019 Jul 23
2
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
On Mon, Jul 22, 2019 at 11:44:24AM +0200, Christoph Hellwig wrote: > Currently nouveau_svm_fault expects nouveau_range_fault to never unlock > mmap_sem, but the latter unlocks it for a random selection of error > codes. Fix this up by always unlocking mmap_sem for non-zero return > values in nouveau_range_fault, and only unlocking it in the caller > for successful returns. > > Signed-off-by: Christoph Hellwig <hch at lst.de>...
2008 Nov 05
0
[PATCH] blktap: ensure vma->vm_mm's mmap_sem is being held whenever it is being modified
...2008-11-04/drivers/xen/blktap/blktap.c 2008-11-05 14:27:58.000000000 +0100 @@ -611,9 +611,13 @@ static int blktap_release(struct inode * /* Clear any active mappings and free foreign map table */ if (info->vma) { + struct mm_struct *mm = info->vma->vm_mm; + + down_write(&mm->mmap_sem); zap_page_range( info->vma, info->vma->vm_start, info->vma->vm_end - info->vma->vm_start, NULL); + up_write(&mm->mmap_sem); kfree(info->vma->vm_private_data); @@ -993,12 +997,13 @@ static void fast_flush_area(pending_req_ int tapidx) {...
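The hunk above applies a general rule: another process's mappings may only be torn down while that mm's mmap_sem is held for writing. A minimal sketch of the pattern, assuming the 2008-era zap_page_range() signature quoted in the diff; the struct and function names here are hypothetical stand-ins for the driver's per-device state:

    /* Sketch only: take the owning mm's mmap_sem for writing around any
     * modification of its mappings, as the blktap patch does. */
    static void teardown_user_mapping(struct blktap_info *info)
    {
            struct mm_struct *mm = info->vma->vm_mm;

            down_write(&mm->mmap_sem);      /* exclude concurrent mmap/munmap */
            zap_page_range(info->vma, info->vma->vm_start,
                           info->vma->vm_end - info->vma->vm_start, NULL);
            up_write(&mm->mmap_sem);
    }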
2019 Jul 03
1
[PATCH 4/5] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
On 7/3/19 11:45 AM, Christoph Hellwig wrote: > Currently nouveau_svm_fault expects nouveau_range_fault to never unlock > mmap_sem, but the latter unlocks it for a random selection of error > codes. Fix this up by always unlocking mmap_sem for non-zero return > values in nouveau_range_fault, and only unlocking it in the caller > for successful returns. > > Signed-off-by: Christoph Hellwig <hch at lst.de>...
2019 Oct 15
0
[PATCH hmm 10/15] nouveau: use mmu_notifier directly for invalidate_range_start
...truct drm_device *dev, void *data, .fault_replay = true, }, sizeof(struct gp100_vmm_v0), &cli->svm.vmm); if (ret) - goto done; + goto out_free; - /* Enable HMM mirroring of CPU address-space to VMM. */ - svmm->mm = get_task_mm(current); - down_write(&svmm->mm->mmap_sem); + down_write(&current->mm->mmap_sem); svmm->mirror.ops = &nouveau_svmm; - ret = hmm_mirror_register(&svmm->mirror, svmm->mm); - if (ret == 0) { - cli->svm.svmm = svmm; - cli->svm.cli = cli; - } - up_write(&svmm->mm->mmap_sem); - mmput(svmm->mm); +...
2019 Jul 03
0
[PATCH 4/5] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
Currently nouveau_svm_fault expects nouveau_range_fault to never unlock mmap_sem, but the latter unlocks it for a random selection of error codes. Fix this up by always unlocking mmap_sem for non-zero return values in nouveau_range_fault, and only unlocking it in the caller for successful returns. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/gpu/drm/nouv...
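The resulting contract is easiest to read as a pair: the helper owns the mmap_sem unlock on every non-zero return, and the caller owns it only after success. A condensed sketch of that contract, not the actual nouveau code; do_the_fault() and the explicit mm argument are hypothetical simplifications of the real hmm_range machinery:

    static int nouveau_range_fault(struct mm_struct *mm, struct hmm_range *range)
    {
            int ret = do_the_fault(range);          /* hypothetical stand-in */

            if (ret) {
                    up_read(&mm->mmap_sem);         /* every error drops the lock */
                    return ret;
            }
            return 0;                               /* success: lock still held */
    }

    static int caller(struct mm_struct *mm, struct hmm_range *range)
    {
            int ret;

            down_read(&mm->mmap_sem);
            ret = nouveau_range_fault(mm, range);
            if (ret)
                    return ret;                     /* already unlocked for us */
            /* ... consume the faulted range ... */
            up_read(&mm->mmap_sem);                 /* unlock only on success */
            return 0;
    }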
2019 Jul 22
0
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
Currently nouveau_svm_fault expects nouveau_range_fault to never unlock mmap_sem, but the latter unlocks it for a random selection of error codes. Fix this up by always unlocking mmap_sem for non-zero return values in nouveau_range_fault, and only unlocking it in the caller for successful returns. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/gpu/drm/nouv...
2019 Jul 22
15
hmm_range_fault related fixes and legacy API removal v2
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and while at it also removes leftover legacy HMM APIs only used by nouveau. The first 4 patches are a bug fix for nouveau, which I suspect should go into this merge window even if the code is marked as staging, just to avoid people copying the breakage. Changes since v1: - don...
2019 Jul 03
8
hmm_range_fault related fixes and legacy API removal
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and while at it also removes leftover legacy HMM APIs only used by nouveau.
2019 Jul 03
10
hmm_range_fault related fixes and legacy API removal v2
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and while at it also removes leftover legacy HMM APIs only used by nouveau. Changes since v1: - don't return the valid state from hmm_range_unregister - additional nouveau cleanups
2019 Oct 29
2
[PATCH v2 12/15] drm/amdgpu: Call find_vma under mmap_sem
On 2019-10-28 4:10 p.m., Jason Gunthorpe wrote: > From: Jason Gunthorpe <jgg at mellanox.com> > > find_vma() must be called under the mmap_sem, reorganize this code to > do the vma check after entering the lock. > > Further, fix the unlocked use of struct task_struct's mm, instead use > the mm from hmm_mirror which has an active mm_grab. Also the mm_grab > must be converted to a mm_get before acquiring mmap_sem or calli...
2019 Jul 23
2
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
...== 0) { > > mutex_lock(&svmm->mutex); > > if (!nouveau_range_done(&range)) { > > mutex_unlock(&svmm->mutex); > > goto again; > > > > And we can't call nouveau_range_fault() -> hmm_range_fault() without > > holding the mmap_sem, so we can't allow nouveau_range_fault to unlock > > it. > > Goto again can only happen if nouveau_range_fault was successful, > in which case we did not drop mmap_sem. Oh, right we switch from success = number of pages to success = 0.. However the reason this looks so weird to...
2019 Jul 24
10
hmm_range_fault related fixes and legacy API removal v3
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and while at it also removes leftover legacy HMM APIs only used by nouveau. The first 4 patches are a bug fix for nouveau, which I suspect should go into this merge window even if the code is marked as staging, just to avoid people copying the breakage. Changes since v2: - new...
2019 Jul 23
0
[PATCH 4/6] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
...irror, &range); > if (ret == 0) { > mutex_lock(&svmm->mutex); > if (!nouveau_range_done(&range)) { > mutex_unlock(&svmm->mutex); > goto again; > > And we can't call nouveau_range_fault() -> hmm_range_fault() without > holding the mmap_sem, so we can't allow nouveau_range_fault to unlock > it. Goto again can only happen if nouveau_range_fault was successful, in which case we did not drop mmap_sem. Also: > ret = hmm_range_fault(range, true); > if (ret <= 0) { > if (ret == 0) > ret = -EBUSY; > -...
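Spelled out, the loop being discussed has this shape: mmap_sem is taken once before the first fault, a failing nouveau_range_fault() drops it on the way out, and the goto is only reachable after a successful fault, so the lock is still held on retry. A locking-only sketch, with the range setup and GPU work elided:

    static int svm_fault_one(struct nouveau_svmm *svmm, struct mm_struct *mm,
                             struct hmm_range *range)
    {
            int ret;

            down_read(&mm->mmap_sem);
    again:
            ret = nouveau_range_fault(svmm, range); /* drops mmap_sem on error */
            if (ret)
                    return ret;                     /* lock already released */

            mutex_lock(&svmm->mutex);
            if (!nouveau_range_done(range)) {
                    /* an invalidation raced with the fault; we only get here
                     * on success, so mmap_sem is still held - just retry */
                    mutex_unlock(&svmm->mutex);
                    goto again;
            }
            /* ... program the GPU page tables under svmm->mutex ... */
            mutex_unlock(&svmm->mutex);
            up_read(&mm->mmap_sem);
            return 0;
    }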
2019 Oct 28
0
[PATCH v2 12/15] drm/amdgpu: Call find_vma under mmap_sem
From: Jason Gunthorpe <jgg at mellanox.com> find_vma() must be called under the mmap_sem, reorganize this code to do the vma check after entering the lock. Further, fix the unlocked use of struct task_struct's mm, instead use the mm from hmm_mirror which has an active mm_grab. Also the mm_grab must be converted to a mm_get before acquiring mmap_sem or calling find_vma(). Fixes: 6...
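Two rules are being enforced: the mm, which the mirror only pins with an mmgrab()-style reference, must be upgraded to a real user reference before its mmap_sem can be taken, and find_vma() plus any check of the returned vma must happen inside the lock. A sketch of that ordering; handle_range() and the surrounding driver context are hypothetical:

    static int fault_user_range(struct mm_struct *mm, unsigned long start,
                                unsigned long end)
    {
            struct vm_area_struct *vma;
            int ret;

            if (!mmget_not_zero(mm))        /* mm_grab -> mm_get before locking */
                    return -ESRCH;          /* address space already exited */

            down_read(&mm->mmap_sem);
            vma = find_vma(mm, start);      /* only valid under mmap_sem */
            if (!vma || start < vma->vm_start || end > vma->vm_end)
                    ret = -EFAULT;
            else
                    ret = handle_range(vma, start, end);    /* hypothetical */
            up_read(&mm->mmap_sem);

            mmput(mm);
            return ret;
    }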
2019 Jul 01
0
[PATCH 20/22] mm: move hmm_vma_fault to nouveau
...ags. + */ + range->default_flags = 0; + range->pfn_flags_mask = -1UL; + + ret = hmm_range_register(range, mirror, + range->start, range->end, + PAGE_SHIFT); + if (ret) + return (int)ret; + + if (!hmm_range_wait_until_valid(range, NOUVEAU_RANGE_FAULT_TIMEOUT)) { + /* + * The mmap_sem was taken by driver we release it here and + * returns -EAGAIN which correspond to mmap_sem have been + * drop in the old API. + */ + up_read(&range->vma->vm_mm->mmap_sem); + return -EAGAIN; + } + + ret = hmm_range_fault(range, block); + if (ret <= 0) { + if (ret == -EBUSY...
2019 Jul 03
1
[PATCH 20/22] mm: move hmm_vma_fault to nouveau
...>pfn_flags_mask = -1UL; > + > + ret = hmm_range_register(range, mirror, > + range->start, range->end, > + PAGE_SHIFT); > + if (ret) > + return (int)ret; > + > + if (!hmm_range_wait_until_valid(range, NOUVEAU_RANGE_FAULT_TIMEOUT)) { > + /* > + * The mmap_sem was taken by driver we release it here and > + * returns -EAGAIN which correspond to mmap_sem have been > + * drop in the old API. > + */ > + up_read(&range->vma->vm_mm->mmap_sem); > + return -EAGAIN; > + } > + > + ret = hmm_range_fault(range, block); &g...
2019 Aug 06
0
[PATCH 03/15] nouveau: pass struct nouveau_svmm to nouveau_range_fault
We'll need the nouveau_svmm structure to improve the function soon. For now this allows using the svmm->mm reference to unlock the mmap_sem, and thus the same dereference chain that the caller uses to lock and unlock it. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/gpu/drm/nouveau/nouveau_svm.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c...
2019 Oct 15
0
[PATCH hmm 11/15] nouveau: use mmu_range_notifier instead of hmm_mirror
...psvmm; if (svmm) { - hmm_mirror_unregister(&svmm->mirror); mutex_lock(&svmm->mutex); svmm->vmm = NULL; mutex_unlock(&svmm->mutex); @@ -357,15 +343,10 @@ nouveau_svmm_init(struct drm_device *dev, void *data, goto out_free; down_write(&current->mm->mmap_sem); - svmm->mirror.ops = &nouveau_svmm; - ret = hmm_mirror_register(&svmm->mirror, current->mm); - if (ret) - goto out_mm_unlock; - svmm->notifier.ops = &nouveau_mn_ops; ret = __mmu_notifier_register(&svmm->notifier, current->mm); if (ret) - goto out_hmm_unre...
2019 Jul 22
2
[PATCH 1/6] mm: always return EBUSY for invalid ranges in hmm_range_{fault, snapshot}
On Mon, Jul 22, 2019 at 3:14 PM Christoph Hellwig <hch at lst.de> wrote: > > We should not have two different error codes for the same condition. In > addition this really complicates the code due to the special handling of > EAGAIN that drops the mmap_sem due to the FAULT_FLAG_ALLOW_RETRY logic > in the core vm. > > Signed-off-by: Christoph Hellwig <hch at lst.de> > Reviewed-by: Ralph Campbell <rcampbell at nvidia.com> > Reviewed-by: Felix Kuehling <Felix.Kuehling at amd.com> > --- > Documentation/vm/hmm.rst |...
2019 Jul 22
0
[PATCH 2/6] mm: move hmm_vma_range_done and hmm_vma_fault to nouveau
...ault_flags = 0; + range->pfn_flags_mask = -1UL; + + ret = hmm_range_register(range, mirror, + range->start, range->end, + PAGE_SHIFT); + if (ret) + return (int)ret; + + if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) { + up_read(&range->vma->vm_mm->mmap_sem); + return -EAGAIN; + } + + ret = hmm_range_fault(range, block); + if (ret <= 0) { + if (ret == -EBUSY || !ret) { + up_read(&range->vma->vm_mm->mmap_sem); + ret = -EBUSY; + } else if (ret == -EAGAIN) + ret = -EBUSY; + hmm_range_unregister(range); + return ret; + } + retur...