search for: mmget_not_zero

Displaying 20 results from an estimated 55 matches for "mmget_not_zero".

2023 Mar 24
1
[PATCH v3 8/8] vdpa_sim: add support for user VA
...er's mm has been bound. So, only when the bus supports user VA > (e.g. vhost-vdpa). > > vdpasim_mm_work_fn work is used to serialize the binding to a new > address space when the .bind_mm callback is invoked, and unbinding > when the .unbind_mm callback is invoked. > > Call mmget_not_zero()/kthread_use_mm() inside the worker function > to pin the address space only as long as needed, following the > documentation of mmget() in include/linux/sched/mm.h: > > * Never use this function to pin this address space for an > * unbounded/indefinite amount of time. > ...
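A minimal sketch of the pin-only-while-needed pattern described above (illustrative names, not the actual vdpa_sim code; assumes the work item runs on a kernel thread, as workqueue workers do):

        #include <linux/kthread.h>      /* kthread_use_mm(), kthread_unuse_mm() */
        #include <linux/sched/mm.h>     /* mmget_not_zero(), mmput() */
        #include <linux/workqueue.h>

        struct example_dev {                    /* hypothetical device state */
                struct work_struct work;
                struct mm_struct *mm;           /* set by a .bind_mm-style callback */
        };

        static void example_work_fn(struct work_struct *work)
        {
                struct example_dev *dev = container_of(work, struct example_dev, work);

                if (!mmget_not_zero(dev->mm))   /* owner already exited: nothing to do */
                        return;

                kthread_use_mm(dev->mm);        /* worker can now access user VA */
                /* ... copy to/from the bound address space ... */
                kthread_unuse_mm(dev->mm);

                mmput(dev->mm);                 /* drop the pin promptly, per the mmget() docs */
        }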
2023 Mar 23
2
[PATCH v3 8/8] vdpa_sim: add support for user VA
...So, only when the bus supports user VA >> (e.g. vhost-vdpa). >> >> vdpasim_mm_work_fn work is used to serialize the binding to a new >> address space when the .bind_mm callback is invoked, and unbinding >> when the .unbind_mm callback is invoked. >> >> Call mmget_not_zero()/kthread_use_mm() inside the worker function >> to pin the address space only as long as needed, following the >> documentation of mmget() in include/linux/sched/mm.h: >> >> * Never use this function to pin this address space for an >> * unbounded/indefinite amoun...
2023 Mar 23
1
[PATCH v3 8/8] vdpa_sim: add support for user VA
...er's mm has been bound. So, only when the bus supports user VA > (e.g. vhost-vdpa). > > vdpasim_mm_work_fn work is used to serialize the binding to a new > address space when the .bind_mm callback is invoked, and unbinding > when the .unbind_mm callback is invoked. > > Call mmget_not_zero()/kthread_use_mm() inside the worker function > to pin the address space only as long as needed, following the > documentation of mmget() in include/linux/sched/mm.h: > > * Never use this function to pin this address space for an > * unbounded/indefinite amount of time. I wonder...
2023 Mar 24
1
[PATCH v3 8/8] vdpa_sim: add support for user VA
...vhost-vdpa). >> >> >> >> vdpasim_mm_work_fn work is used to serialize the binding to a new >> >> address space when the .bind_mm callback is invoked, and unbinding >> >> when the .unbind_mm callback is invoked. >> >> >> >> Call mmget_not_zero()/kthread_use_mm() inside the worker function >> >> to pin the address space only as long as needed, following the >> >> documentation of mmget() in include/linux/sched/mm.h: >> >> >> >> * Never use this function to pin this address space for an >...
2020 Mar 19
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...mirror *dmirror = container_of(mni, struct dmirror, notifier); > + struct mm_struct *mm = dmirror->mm; > + > + /* > + * If the process doesn't exist, we don't need to invalidate the > + * device page table since the address space will be torn down. > + */ > + if (!mmget_not_zero(mm)) > + return true; Why? Don't the notifiers provide for this already? mmget_not_zero() is required before calling hmm_range_fault(), though. > +static int dmirror_fault(struct dmirror *dmirror, unsigned long start, > + unsigned long end, bool write) > +{ > + struct mm_st...
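Sketched for reference, the calling convention stated above (illustrative names; modern kernels with mmap_read_lock(), and the mmu_interval_read_begin() retry sequence omitted for brevity):

        #include <linux/hmm.h>          /* hmm_range_fault() */
        #include <linux/mmu_notifier.h>
        #include <linux/sched/mm.h>

        static int example_range_fault(struct mmu_interval_notifier *notifier,
                                       struct hmm_range *range)
        {
                struct mm_struct *mm = notifier->mm;
                int ret;

                if (!mmget_not_zero(mm))        /* required before hmm_range_fault() */
                        return -EFAULT;

                mmap_read_lock(mm);
                ret = hmm_range_fault(range);
                mmap_read_unlock(mm);

                mmput(mm);
                return ret;
        }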
2020 Mar 17
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/17/20 5:59 AM, Christoph Hellwig wrote: > On Tue, Mar 17, 2020 at 09:47:55AM -0300, Jason Gunthorpe wrote: >> I've been using v7 of Ralph's tester and it is working well - it has >> DEVICE_PRIVATE support so I think it can test this flow too. Ralph, are >> you able? >> >> This hunk seems trivial enough to me, can we include it now? > > I can send
2020 Mar 19
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...of(mni, struct dmirror, notifier); >> + struct mm_struct *mm = dmirror->mm; >> + >> + /* >> + * If the process doesn't exist, we don't need to invalidate the >> + * device page table since the address space will be torn down. >> + */ >> + if (!mmget_not_zero(mm)) >> + return true; > > Why? Don't the notifiers provide for this already? > > mmget_not_zero() is required before calling hmm_range_fault(), though. This is a workaround for a problem I don't quite understand. If you change tools/testing/selftests/vm/hmm-tests.c line...
2019 Oct 29
2
[PATCH v2 12/15] drm/amdgpu: Call find_vma under mmap_sem
...start < vma->vm_start)) { > - r = -EFAULT; > - goto out; > - } > - if (unlikely((gtt->userflags & AMDGPU_GEM_USERPTR_ANONONLY) && > - vma->vm_file)) { > - r = -EPERM; > - goto out; > - } > + mm = mirror->hmm->mmu_notifier.mm; > + if (!mmget_not_zero(mm)) /* Happens during process shutdown */ This works because mirror->hmm->mmu_notifier holds an mmgrab reference to the mm? So the MM will not just go away, but if the mmget refcount is 0, it means the mm is marked for destruction and shouldn't be used any more. > + return -ESRC...
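The distinction being confirmed there, as a sketch: mmgrab()/mmdrop() pin the mm_struct allocation itself (mm_count), so the pointer stays valid, while mmget()/mmput() pin the address space (mm_users); mmget_not_zero() promotes the first kind of reference to the second and fails once teardown has begun. Illustrative code, assuming a hypothetical mirror_mm kept valid via mmgrab():

        #include <linux/sched/mm.h>

        static int example_check_mm(struct mm_struct *mirror_mm)
        {
                if (!mmget_not_zero(mirror_mm))
                        return -ESRCH;          /* happens during process shutdown */
                /* mm_users held: safe to walk user mappings here */
                mmput(mirror_mm);
                return 0;
        }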
2020 Mar 20
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...t; > + struct mm_struct *mm = dmirror->mm; > > > + > > > + /* > > > + * If the process doesn't exist, we don't need to invalidate the > > > + * device page table since the address space will be torn down. > > > + */ > > > + if (!mmget_not_zero(mm)) > > > + return true; > > > > Why? Don't the notifiers provide for this already? > > > > mmget_not_zero() is required before calling hmm_range_fault(), though. Oh... This is the invalidate_all path during invalidation. IMHO you should test the invalidation...
2023 Mar 21
3
[PATCH v3 5/8] vdpa_sim: make devices agnostic for work management
Let's move work management inside the vdpa_sim core. This way we can easily change how we manage the works, without having to change the devices each time. Acked-by: Eugenio Pérez Martin <eperezma at redhat.com> Acked-by: Jason Wang <jasowang at redhat.com> Signed-off-by: Stefano Garzarella <sgarzare at redhat.com> --- drivers/vdpa/vdpa_sim/vdpa_sim.h | 3 ++-
2020 Jun 19
0
[PATCH 08/16] nouveau/hmm: fault one page at a time
...limit = start + (ARRAY_SIZE(args.phys) << PAGE_SHIFT); + limit = start + PAGE_SIZE; if (start < svmm->unmanaged.limit) limit = min_t(u64, limit, svmm->unmanaged.start); - SVMM_DBG(svmm, "wndw %016llx-%016llx", start, limit); - mm = svmm->notifier.mm; - if (!mmget_not_zero(mm)) { - nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); - continue; - } - - /* Intersect fault window with the CPU VMA, cancelling - * the fault if the address is invalid. + /* + * Prepare the GPU-side update of all pages within the + * fault window, determining required pa...
2020 Jul 01
0
[PATCH v3 1/5] nouveau/hmm: fault one page at a time
...limit = start + (ARRAY_SIZE(args.phys) << PAGE_SHIFT); + limit = start + PAGE_SIZE; if (start < svmm->unmanaged.limit) limit = min_t(u64, limit, svmm->unmanaged.start); - SVMM_DBG(svmm, "wndw %016llx-%016llx", start, limit); - mm = svmm->notifier.mm; - if (!mmget_not_zero(mm)) { - nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]); - continue; - } - - /* Intersect fault window with the CPU VMA, cancelling - * the fault if the address is invalid. + /* + * Prepare the GPU-side update of all pages within the + * fault window, determining required pa...
2020 Apr 04
0
[PATCH 5/6] kernel: better document the use_mm/unuse_mm API contract
...a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c index dee01c371bf5..92e9b340dbc2 100644 --- a/drivers/gpu/drm/i915/gvt/kvmgt.c +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c @@ -2048,7 +2048,7 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa, if (kthread) { if (!mmget_not_zero(kvm->mm)) return -EFAULT; - use_mm(kvm->mm); + kthread_use_mm(kvm->mm); } idx = srcu_read_lock(&kvm->srcu); @@ -2057,7 +2057,7 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa, srcu_read_unlock(&kvm->srcu, idx); if (kthread) { - unuse_mm(k...
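Reassembled from the hunks above, the converted helper follows roughly this shape (a sketch with the srcu section and error paths trimmed, not the full kvmgt code):

        static int kvmgt_rw_gpa_sketch(struct kvm *kvm, bool kthread)
        {
                if (kthread) {
                        if (!mmget_not_zero(kvm->mm))
                                return -EFAULT;
                        kthread_use_mm(kvm->mm);        /* was: use_mm() */
                }

                /* ... read/write the guest physical address ... */

                if (kthread) {
                        kthread_unuse_mm(kvm->mm);      /* was: unuse_mm() */
                        mmput(kvm->mm);
                }
                return 0;
        }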
2020 Apr 16
0
[PATCH 2/3] kernel: better document the use_mm/unuse_mm API contract
...a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c index ca1dd6e6f395..f2927575b793 100644 --- a/drivers/gpu/drm/i915/gvt/kvmgt.c +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c @@ -2048,7 +2048,7 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa, if (kthread) { if (!mmget_not_zero(kvm->mm)) return -EFAULT; - use_mm(kvm->mm); + kthread_use_mm(kvm->mm); } idx = srcu_read_lock(&kvm->srcu); @@ -2057,7 +2057,7 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa, srcu_read_unlock(&kvm->srcu, idx); if (kthread) { - unuse_mm(k...
2020 May 05
1
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...5,6 @@ The usage pattern is:: > range.start = ...; > range.end = ...; > range.pfns = ...; That should be: range.hmm_pfns = ...; > - range.flags = ...; > - range.values = ...; > - range.pfn_shift = ...; > > if (!mmget_not_zero(interval_sub->notifier.mm)) > return -EFAULT; > @@ -229,15 +226,10 @@ The hmm_range struct has 2 fields, default_flags and pfn_flags_mask, that specif > fault or snapshot policy for the whole range instead of having to set them > for each entry in the pfns array. >...
2019 Oct 28
0
[PATCH v2 12/15] drm/amdgpu: Call find_vma under mmap_sem
...vma = find_vma(mm, start); - if (unlikely(!vma || start < vma->vm_start)) { - r = -EFAULT; - goto out; - } - if (unlikely((gtt->userflags & AMDGPU_GEM_USERPTR_ANONONLY) && - vma->vm_file)) { - r = -EPERM; - goto out; - } + mm = mirror->hmm->mmu_notifier.mm; + if (!mmget_not_zero(mm)) /* Happens during process shutdown */ + return -ESRCH; range = kzalloc(sizeof(*range), GFP_KERNEL); if (unlikely(!range)) { @@ -847,6 +837,17 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages) hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT);...
2020 Apr 04
0
[PATCH 6/6] kernel: set USER_DS in kthread_use_mm
...>mm); mmput(worker->mm); worker->mm = NULL; @@ -420,14 +419,11 @@ static void io_wq_switch_mm(struct io_worker *worker, struct io_wq_work *work) mmput(worker->mm); worker->mm = NULL; } - if (!work->mm) { - set_fs(KERNEL_DS); + if (!work->mm) return; - } + if (mmget_not_zero(work->mm)) { kthread_use_mm(work->mm); - if (!worker->mm) - set_fs(USER_DS); worker->mm = work->mm; /* hang on to this mm */ work->mm = NULL; diff --git a/fs/io_uring.c b/fs/io_uring.c index 367406381044..c332a34e8b34 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @...
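Pieced together from that hunk, the simplified switch looks roughly like this (a sketch; since kthread_use_mm() now performs the USER_DS switch itself, the worker's explicit set_fs() calls go away):

        static void io_wq_switch_mm_sketch(struct io_worker *worker,
                                           struct io_wq_work *work)
        {
                if (worker->mm) {
                        kthread_unuse_mm(worker->mm);
                        mmput(worker->mm);
                        worker->mm = NULL;
                }
                if (!work->mm)
                        return;
                if (mmget_not_zero(work->mm)) {
                        kthread_use_mm(work->mm);
                        worker->mm = work->mm;  /* hang on to this mm */
                        work->mm = NULL;
                }
        }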
2020 Apr 16
0
[PATCH 3/3] kernel: set USER_DS in kthread_use_mm
...>mm); mmput(worker->mm); worker->mm = NULL; @@ -421,14 -420,11 @@ static void io_wq_switch_mm(struct io_worker *worker, struct io_wq_work *work) mmput(worker->mm); worker->mm = NULL; } - if (!work->mm) { - set_fs(KERNEL_DS); + if (!work->mm) return; - } + if (mmget_not_zero(work->mm)) { kthread_use_mm(work->mm); - if (!worker->mm) - set_fs(USER_DS); worker->mm = work->mm; /* hang on to this mm */ work->mm = NULL; diff --git a/fs/io_uring.c b/fs/io_uring.c index 8a8148512da7..40f90b98a18a 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @...
2020 Apr 04
14
improve use_mm / unuse_mm
Hi all, this series improves the use_mm / unuse_mm interface by better documenting the assumptions, and by taking the set_fs manipulations spread over the callers into the core API.