search for: mmu_interval_notifier_insert

Displaying 20 results from an estimated 26 matches for "mmu_interval_notifier_insert".

2019 Nov 13
2
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
> +int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
> +				 struct mm_struct *mm, unsigned long start,
> +				 unsigned long length,
> +				 const struct mmu_interval_notifier_ops *ops);
> +int mmu_interval_notifier_insert_locked(
> +	struct mmu_interval_notifier *mni, struct mm_struct *mm,
> +	un...
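[Editorial note: for orientation, a minimal sketch of how a driver would use the API quoted above. struct my_range, my_invalidate(), my_ops and my_track() are hypothetical names, not part of the patch.]

#include <linux/mmu_notifier.h>
#include <linux/sched.h>

/* Hypothetical driver structure embedding the notifier. */
struct my_range {
	struct mmu_interval_notifier notifier;
	/* driver state for the tracked interval would live here */
};

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	/* Record the new sequence so readers know to retry; a real
	 * driver serializes this against its own lock. */
	mmu_interval_set_seq(mni, cur_seq);
	return true;
}

static const struct mmu_interval_notifier_ops my_ops = {
	.invalidate = my_invalidate,
};

/* Register callbacks for [start, start + length) in the current mm. */
static int my_track(struct my_range *r, unsigned long start,
		    unsigned long length)
{
	return mmu_interval_notifier_insert(&r->notifier, current->mm,
					    start, length, &my_ops);
}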
2019 Nov 12
0
[PATCH v3 06/14] RDMA/hfi1: Use mmu_interval_notifier_insert for user_exp_rcv
...t) {
-		hfi1_cdbg(TID, "Failed to insert RB node %u 0x%lx, 0x%lx %d",
-			  node->rcventry, node->mmu.addr, node->phys, ret);
-		pci_unmap_single(dd->pcidev, phys, npages * PAGE_SIZE,
-				 PCI_DMA_FROMDEVICE);
-		kfree(node);
-		return -EFAULT;
+	if (fd->use_mn) {
+		ret = mmu_interval_notifier_insert(
+			&node->notifier, tbuf->vaddr + (pageidx * PAGE_SIZE),
+			npages * PAGE_SIZE, fd->mm);
+		if (ret)
+			goto out_unmap;
+		/*
+		 * FIXME: This is in the wrong order, the notifier should be
+		 * established before the pages are pinned by pin_rcv_pages.
+		 */
+		mmu_interval_read_...
2020 Jan 13
0
[PATCH v6 2/6] mm/mmu_notifier: add mmu_interval_notifier_put()
...returns.
  */
 struct mmu_interval_notifier_ops {
 	bool (*invalidate)(struct mmu_interval_notifier *mni,
 			   const struct mmu_notifier_range *range,
 			   unsigned long cur_seq);
+	void (*release)(struct mmu_interval_notifier *mni);
 };

 struct mmu_interval_notifier {
@@ -304,6 +309,7 @@ int mmu_interval_notifier_insert_safe(
 			unsigned long start, unsigned long length,
 			const struct mmu_interval_notifier_ops *ops);
 void mmu_interval_notifier_remove(struct mmu_interval_notifier *mni);
+void mmu_interval_notifier_put(struct mmu_interval_notifier *mni);

 /**
  * mmu_interval_set_seq - Save the invalidation seque...
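[Editorial note: a sketch of the driver-side shape this hunk suggests, inferred from the API names only. struct my_range, my_invalidate() and my_ops are hypothetical.]

#include <linux/mmu_notifier.h>
#include <linux/slab.h>

struct my_range {
	struct mmu_interval_notifier notifier;
};

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	mmu_interval_set_seq(mni, cur_seq);
	return true;
}

/* Inferred from the API names: after mmu_interval_notifier_put(), the
 * final removal completes asynchronously, so release() is the point
 * where the embedding structure can safely be freed. */
static void my_release(struct mmu_interval_notifier *mni)
{
	kfree(container_of(mni, struct my_range, notifier));
}

static const struct mmu_interval_notifier_ops my_ops = {
	.invalidate = my_invalidate,
	.release = my_release,
};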
2020 Jan 13
0
[PATCH v6 1/6] mm/mmu_notifier: add mmu_interval_notifier_insert_safe()
mmu_interval_notifier_insert() can't be called safely from inside the invalidate() callback because it can acquire the mmap_sem lock, which might already be held. Insertion might be needed when the invalidate() callback creates a "hole" in the interval being tracked (i.e., the event type MMU_NOTIFY_UNMAP) and the...
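[Editorial note: a minimal sketch of the pattern this patch enables, assuming mmu_interval_notifier_insert_safe() mirrors the plain insert() signature, as the hunk in patch 2/6 above suggests. struct my_range and its fields are hypothetical.]

#include <linux/mmu_notifier.h>

struct my_range {
	struct mmu_interval_notifier notifier;
	struct mmu_interval_notifier low;	/* pre-allocated spare */
	unsigned long start;
};

static const struct mmu_interval_notifier_ops my_ops;

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_range *r = container_of(mni, struct my_range, notifier);

	mmu_interval_set_seq(mni, cur_seq);

	/* An unmap punched a hole in the tracked interval; keep tracking
	 * the piece below the hole. Calling plain
	 * mmu_interval_notifier_insert() here could deadlock on mmap_sem,
	 * which is why the _safe variant is needed. */
	if (range->event == MMU_NOTIFY_UNMAP && range->start > r->start)
		mmu_interval_notifier_insert_safe(&r->low, range->mm,
						  r->start,
						  range->start - r->start,
						  &my_ops);
	return true;
}

static const struct mmu_interval_notifier_ops my_ops = {
	.invalidate = my_invalidate,
};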
2019 Nov 13
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
On Wed, Nov 13, 2019 at 05:59:52AM -0800, Christoph Hellwig wrote:
> > +int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
> > +				 struct mm_struct *mm, unsigned long start,
> > +				 unsigned long length,
> > +				 const struct mmu_interval_notifier_ops *ops);
> > +int mmu_interval_notifier_insert_locked(
> > +	struct mmu_interval_notifi...
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...FIG_LOCKDEP
@@ -263,6 +289,81 @@ extern int __mmu_notifier_register(struct mmu_notifier *mn,
 				   struct mm_struct *mm);
 extern void mmu_notifier_unregister(struct mmu_notifier *mn,
 				    struct mm_struct *mm);
+
+unsigned long mmu_interval_read_begin(struct mmu_interval_notifier *mni);
+int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
+				 struct mm_struct *mm, unsigned long start,
+				 unsigned long length,
+				 const struct mmu_interval_notifier_ops *ops);
+int mmu_interval_notifier_insert_locked(
+	struct mmu_interval_notifier *mni, struct mm_struct *mm,
+	unsigned long start, unsigned lo...
2020 Jan 16
2
[PATCH v6 5/6] nouveau: use new mmu interval notifiers
...t path..
> >
> > Jason
> >
>
> ODP doesn't have this problem because users have to call ib_reg_mr()
> before any I/O can happen to the process address space.

ODP supports a single 'full VA' call at process startup, just like
these cases.

> That is when mmu_interval_notifier_insert() /
> mmu_interval_notifier_remove() can be called and the driver doesn't
> have to worry about the interval changing sizes or being removed
> while I/O is happening.

No, for the 'ODP full process VA' (aka implicit ODP) mode it
dynamically maintains a list of intervals. ODP...
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
...inux-next, a git tree is available here:

https://github.com/jgunthorpe/linux/commits/mmu_notifier

v3:
 - Rename mmu_range_notifier to mmu_interval_notifier for clarity
   Avoids confusion with struct mmu_notifier_range
 - Fix bugs in odp, amdgpu and xen gntdev from testing
 - Make ops an argument to mmu_interval_notifier_insert() to make it
   harder to misuse
 - Update many comments
 - Add testing of mm_count during insertion

v2: https://lore.kernel.org/r/20191028201032.6352-1-jgg at ziepe.ca
v1: https://lore.kernel.org/r/20191015181242.8343-1-jgg at ziepe.ca

Absent any new discussion I think this will go to Linus at the...
2019 Nov 12
0
[PATCH v3 12/14] drm/amdgpu: Use mmu_interval_notifier instead of hmm_mirror
...updates
 *
@@ -235,12 +133,12 @@ struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev,
 int amdgpu_mn_register(struct amdgpu_bo *bo, unsigned long addr)
 {
 	if (bo->kfd_bo)
-		bo->notifier.ops = &amdgpu_mn_hsa_ops;
-	else
-		bo->notifier.ops = &amdgpu_mn_gfx_ops;
-
-	return mmu_interval_notifier_insert(&bo->notifier, addr,
-					    amdgpu_bo_size(bo), current->mm);
+		return mmu_interval_notifier_insert(&bo->notifier, current->mm,
+						    addr, amdgpu_bo_size(bo),
+						    &amdgpu_mn_hsa_ops);
+	return mmu_interval_notifier_insert(&bo->notifier, current->mm...
2020 Jun 19
0
[PATCH 08/16] nouveau/hmm: fault one page at a time
...*/
-			hmm_pfns[pi++] = HMM_PFN_REQ_FAULT;
-			break;
-		case 3: /* PREFETCH. */
-			hmm_pfns[pi++] = 0;
-			break;
-		default:
-			hmm_pfns[pi++] = HMM_PFN_REQ_FAULT |
-					 HMM_PFN_REQ_WRITE;
-			break;
-		}
-		args.i.p.size = pi << PAGE_SHIFT;
+		notifier.svmm = svmm;
+		ret = mmu_interval_notifier_insert(&notifier.notifier, mm,
+				args.i.p.addr, args.i.p.size,
+				&nouveau_svm_mni_ops);
+		if (!ret) {
+			ret = nouveau_range_fault(svmm, svm->drm, &args,
+				sizeof(args), args.phys, hmm_flags, &notifier);
+			mmu_interval_notifier_remove(&notifier.notifier);
+		}...
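[Editorial note: the shape of this pattern, condensed into a hypothetical helper. fault_one_range() and its locking are a sketch assuming the surrounding nouveau types; struct svm_notifier and nouveau_svm_mni_ops are from the patch. Register a short-lived notifier over just the faulting range, run hmm_range_fault() under the usual read_begin/read_retry sequence, then remove the notifier.]

#include <linux/hmm.h>
#include <linux/mmap_lock.h>
#include <linux/mmu_notifier.h>

struct svm_notifier {
	struct mmu_interval_notifier notifier;
	struct nouveau_svmm *svmm;
};

static int fault_one_range(struct nouveau_svmm *svmm, struct mm_struct *mm,
			   struct hmm_range *range)
{
	struct svm_notifier sn = { .svmm = svmm };
	int ret;

	ret = mmu_interval_notifier_insert(&sn.notifier, mm, range->start,
					   range->end - range->start,
					   &nouveau_svm_mni_ops);
	if (ret)
		return ret;

	range->notifier = &sn.notifier;
	while (true) {
		range->notifier_seq = mmu_interval_read_begin(&sn.notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(range);
		mmap_read_unlock(mm);
		if (ret == -EBUSY)
			continue;	/* collided with an invalidation */
		if (ret)
			break;

		mutex_lock(&svmm->mutex);
		if (mmu_interval_read_retry(&sn.notifier,
					    range->notifier_seq)) {
			/* Invalidated while faulting; start over. */
			mutex_unlock(&svmm->mutex);
			continue;
		}
		/* Safe to program the device page tables here. */
		mutex_unlock(&svmm->mutex);
		break;
	}
	mmu_interval_notifier_remove(&sn.notifier);
	return ret;
}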
2020 Jul 01
0
[PATCH v3 1/5] nouveau/hmm: fault one page at a time
...*/
-			hmm_pfns[pi++] = HMM_PFN_REQ_FAULT;
-			break;
-		case 3: /* PREFETCH. */
-			hmm_pfns[pi++] = 0;
-			break;
-		default:
-			hmm_pfns[pi++] = HMM_PFN_REQ_FAULT |
-					 HMM_PFN_REQ_WRITE;
-			break;
-		}
-		args.i.p.size = pi << PAGE_SHIFT;
+		notifier.svmm = svmm;
+		ret = mmu_interval_notifier_insert(&notifier.notifier, mm,
+				args.i.p.addr, args.i.p.size,
+				&nouveau_svm_mni_ops);
+		if (!ret) {
+			ret = nouveau_range_fault(svmm, svm->drm, &args,
+				sizeof(args), args.phys, hmm_flags, &notifier);
+			mmu_interval_notifier_remove(&notifier.notifier);
+		}...
2020 Jan 14
2
[PATCH v6 5/6] nouveau: use new mmu interval notifiers
On Mon, Jan 13, 2020 at 02:47:02PM -0800, Ralph Campbell wrote:
>  void
>  nouveau_svmm_fini(struct nouveau_svmm **psvmm)
>  {
>  	struct nouveau_svmm *svmm = *psvmm;
> +	struct mmu_interval_notifier *mni;
> +
>  	if (svmm) {
>  		mutex_lock(&svmm->mutex);
> +		while (true) {
> +			mni = mmu_interval_notifier_find(svmm->mm,
> +
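[Editorial note: the call is cut off mid-argument. A guess at the completed teardown loop, with the mmu_interval_notifier_find() argument list inferred from this snippet and the series' cover letter; it is a proposed API from this series, not mainline, and whether the loop ends in _remove() or the series' _put() is not visible here.]

	/* Inferred shape: look up any notifier still registered anywhere
	 * in the address space and drop it, until none remain. */
	while (true) {
		mni = mmu_interval_notifier_find(svmm->mm,
						 &nouveau_svm_mni_ops,
						 0UL, ~0UL);
		if (!mni)
			break;
		mmu_interval_notifier_remove(mni);
	}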
2020 Jan 16
0
[PATCH v6 5/6] nouveau: use new mmu interval notifiers
...>>>
>>
>> ODP doesn't have this problem because users have to call ib_reg_mr()
>> before any I/O can happen to the process address space.
>
> ODP supports a single 'full VA' call at process startup, just like
> these cases.
>
>> That is when mmu_interval_notifier_insert() /
>> mmu_interval_notifier_remove() can be called and the driver doesn't
>> have to worry about the interval changing sizes or being removed
>> while I/O is happening.
>
> No, for the 'ODP full process VA' (aka implicit ODP) mode it
> dynamically maintains...
2019 Nov 23
1
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
On 11/13/19 8:46 AM, Jason Gunthorpe wrote:
> On Wed, Nov 13, 2019 at 05:59:52AM -0800, Christoph Hellwig wrote:
>>> +int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
>>> +				 struct mm_struct *mm, unsigned long start,
>>> +				 unsigned long length,
>>> +				 const struct mmu_interval_notifier_ops *ops);
>>> +int mmu_interval_notifier_insert_locked(
>>> +	struct mmu_...
2020 Jan 15
0
[PATCH v6 5/6] nouveau: use new mmu interval notifiers
...0) {
>
> I'm still not sure this is a better approach than what ODP does. It
> looks very expensive on the fault path..
>
> Jason
>

ODP doesn't have this problem because users have to call ib_reg_mr()
before any I/O can happen to the process address space. That is when
mmu_interval_notifier_insert() / mmu_interval_notifier_remove() can be
called and the driver doesn't have to worry about the interval changing
sizes or being removed while I/O is happening.

For GPU like devices, I'm trying to allow hardware access to any user
level address without pre-registering it. That means inserti...
2025 Jan 24
3
[PATCH v1 0/2] nouveau/svm: fix + cleanup for nouveau_atomic_range_fault()
One fix and a minor cleanup. Only compile-tested due to lack of HW, so I'd be happy if someone with access to HW could test. But not sure how easy this is to trigger. Likely some concurrent MADV_DONTNEED on the PTE we just converted might be able to trigger it. Cc: Karol Herbst <kherbst at redhat.com> Cc: Lyude Paul <lyude at redhat.com> Cc: Danilo Krummrich <dakr at
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
...v5:
Added mmu interval notifier insert/remove/update callable from the
invalidate() callback
Updated HMM tests to use the new core interval notifier API

Changes v1 -> v4:
https://lore.kernel.org/linux-mm/20191104222141.5173-1-rcampbell at nvidia.com

Ralph Campbell (6):
  mm/mmu_notifier: add mmu_interval_notifier_insert_safe()
  mm/mmu_notifier: add mmu_interval_notifier_put()
  mm/notifier: add mmu_interval_notifier_update()
  mm/mmu_notifier: add mmu_interval_notifier_find()
  nouveau: use new mmu interval notifiers
  mm/hmm/test: add self tests for HMM

 MAINTAINERS | 3 +
 drivers/...
2020 Mar 19
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...*/
> +	dmirror = kzalloc(sizeof(*dmirror), GFP_KERNEL);
> +	if (dmirror == NULL)
> +		return -ENOMEM;
> +
> +	dmirror->mdevice = container_of(cdev, struct dmirror_device, cdevice);
> +	mutex_init(&dmirror->mutex);
> +	xa_init(&dmirror->pt);
> +
> +	ret = mmu_interval_notifier_insert(&dmirror->notifier, current->mm,
> +				0, ULONG_MAX & PAGE_MASK, &dmirror_min_ops);
> +	if (ret) {
> +		kfree(dmirror);
> +		return ret;
> +	}
> +
> +	/* Pairs with the mmdrop() in dmirror_fops_release(). */
> +	mmgrab(current->mm);
> +	dmirror->m...
2020 Mar 17
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/17/20 5:59 AM, Christoph Hellwig wrote:
> On Tue, Mar 17, 2020 at 09:47:55AM -0300, Jason Gunthorpe wrote:
>> I've been using v7 of Ralph's tester and it is working well - it has
>> DEVICE_PRIVATE support so I think it can test this flow too. Ralph are
>> you able?
>>
>> This hunk seems trivial enough to me, can we include it now?
>
> I can send
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using either a PMD-sized or PUD-sized CPU page table entry, and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for
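[Editorial note: a sketch of how a driver might consume the output array, assuming the HMM_PFN_PMD/HMM_PFN_PUD bits proposed by this series (they are not a mainline API); struct my_dev and dev_map_page() are hypothetical.]

#include <linux/hmm.h>

static void map_pfns(struct my_dev *dev, const unsigned long *hmm_pfns,
		     unsigned long npages, unsigned long addr)
{
	unsigned long i = 0;

	while (i < npages) {
		unsigned long pfn = hmm_pfns[i];
		unsigned int shift = PAGE_SHIFT;

		if (pfn & HMM_PFN_PUD)		/* proposed flag */
			shift = PUD_SHIFT;
		else if (pfn & HMM_PFN_PMD)	/* proposed flag */
			shift = PMD_SHIFT;

		/* The CPU maps this whole span with one page table entry,
		 * so one equally sized device PTE is safe for the span. */
		dev_map_page(dev, addr + (i << PAGE_SHIFT), pfn, shift);
		i += 1UL << (shift - PAGE_SHIFT);
	}
}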