Jason Gunthorpe
2020-Jan-16 16:00 UTC
[Nouveau] [PATCH v6 5/6] nouveau: use new mmu interval notifiers
On Wed, Jan 15, 2020 at 02:09:47PM -0800, Ralph Campbell wrote:
> I don't understand the lifetime/membership issue. The driver is the only thing
> that allocates, inserts, or removes struct mmu_interval_notifier and thus
> completely controls the lifetime.

If the returned value is on the deferred list it could be freed at any
moment. The existing locks do not prevent it.

> > > +	ret = nouveau_svmm_interval_find(svmm, &range);
> > > +	if (ret) {
> > > +		up_read(&mm->mmap_sem);
> > > +		return ret;
> > > +	}
> > > +	range.notifier_seq = mmu_interval_read_begin(range.notifier);
> > >  	ret = hmm_range_fault(&range, 0);
> > >  	up_read(&mm->mmap_sem);
> > >  	if (ret <= 0) {
> >
> > I'm still not sure this is a better approach than what ODP does. It
> > looks very expensive on the fault path..
> >
> > Jason
>
> ODP doesn't have this problem because users have to call ib_reg_mr()
> before any I/O can happen to the process address space.

ODP supports a single 'full VA' call at process startup, just like
these cases.

> That is when mmu_interval_notifier_insert() /
> mmu_interval_notifier_remove() can be called and the driver doesn't
> have to worry about the interval changing sizes or being removed
> while I/O is happening.

No, for the 'ODP full process VA' (aka implicit ODP) mode it
dynamically maintains a list of intervals. ODP chooses to align the
dynamic intervals to its HW page table levels, not to SW VMAs. This is
much simpler to manage and faster to fault, at the cost of capturing
more VA for invalidations, which have to be probed against the HW
shadow PTEs.

> It isn't that expensive: there is an extra driver lock/unlock as
> part of the lookup and possibly a find_vma() and kmalloc(GFP_ATOMIC)
> for new intervals. Also, the deferred interval updates for munmap().
> Compared to the cost of updating PTEs in the device and GPU fault
> handling, this is minimal overhead.

Well, compared to ODP, which does a single xa lookup with no lock to
find its interval, this looks very expensive and not parallel.

I think if there is merit in having ranges cover the VMAs and track
changes, then there is probably merit in having the core code provide
much of that logic, not the driver.

But it would be interesting to see some kind of analysis on the two
methods to decide if the complexity is worthwhile.

Jason
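For context, the fault path being debated follows the standard
mmu_interval_notifier read/retry pattern described in
Documentation/vm/hmm.rst. A minimal sketch, assuming a driver-private
driver_lock that the invalidate callback also takes; everything outside
the mmu_interval_*() and hmm_range_fault() calls is illustrative:

	long ret;

again:
	range.notifier_seq = mmu_interval_read_begin(range.notifier);
	down_read(&mm->mmap_sem);
	ret = hmm_range_fault(&range, 0);
	up_read(&mm->mmap_sem);
	if (ret <= 0)
		return ret ? ret : -EBUSY;

	mutex_lock(&driver_lock);	/* also taken by the invalidate callback */
	if (mmu_interval_read_retry(range.notifier, range.notifier_seq)) {
		/* An invalidation raced with the fault; the snapshot is stale. */
		mutex_unlock(&driver_lock);
		goto again;
	}
	/* The snapshot in range.pfns is now stable; program device PTEs. */
	mutex_unlock(&driver_lock);

The per-fault interval lookup in the patch sits on top of this loop,
which is the overhead Jason is questioning relative to ODP.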
Ralph Campbell
2020-Jan-16 20:16 UTC
[Nouveau] [PATCH v6 5/6] nouveau: use new mmu interval notifiers
On 1/16/20 8:00 AM, Jason Gunthorpe wrote:
> On Wed, Jan 15, 2020 at 02:09:47PM -0800, Ralph Campbell wrote:
>
>> I don't understand the lifetime/membership issue. The driver is the only thing
>> that allocates, inserts, or removes struct mmu_interval_notifier and thus
>> completely controls the lifetime.
>
> If the returned value is on the deferred list it could be freed at any
> moment. The existing locks do not prevent it.
>
>>>> +	ret = nouveau_svmm_interval_find(svmm, &range);
>>>> +	if (ret) {
>>>> +		up_read(&mm->mmap_sem);
>>>> +		return ret;
>>>> +	}
>>>> +	range.notifier_seq = mmu_interval_read_begin(range.notifier);
>>>>  	ret = hmm_range_fault(&range, 0);
>>>>  	up_read(&mm->mmap_sem);
>>>>  	if (ret <= 0) {
>>>
>>> I'm still not sure this is a better approach than what ODP does. It
>>> looks very expensive on the fault path..
>>>
>>> Jason
>>
>> ODP doesn't have this problem because users have to call ib_reg_mr()
>> before any I/O can happen to the process address space.
>
> ODP supports a single 'full VA' call at process startup, just like
> these cases.
>
>> That is when mmu_interval_notifier_insert() /
>> mmu_interval_notifier_remove() can be called and the driver doesn't
>> have to worry about the interval changing sizes or being removed
>> while I/O is happening.
>
> No, for the 'ODP full process VA' (aka implicit ODP) mode it
> dynamically maintains a list of intervals. ODP chooses to align the
> dynamic intervals to its HW page table levels, not to SW VMAs. This is
> much simpler to manage and faster to fault, at the cost of capturing
> more VA for invalidations, which have to be probed against the HW
> shadow PTEs.
>
>> It isn't that expensive: there is an extra driver lock/unlock as
>> part of the lookup and possibly a find_vma() and kmalloc(GFP_ATOMIC)
>> for new intervals. Also, the deferred interval updates for munmap().
>> Compared to the cost of updating PTEs in the device and GPU fault
>> handling, this is minimal overhead.
>
> Well, compared to ODP, which does a single xa lookup with no lock to
> find its interval, this looks very expensive and not parallel.
>
> I think if there is merit in having ranges cover the VMAs and track
> changes, then there is probably merit in having the core code provide
> much of that logic, not the driver.
>
> But it would be interesting to see some kind of analysis on the two
> methods to decide if the complexity is worthwhile.
>
> Jason

Can you point me to the latest ODP code? Seems like my understanding is
quite off.
Jason Gunthorpe
2020-Jan-16 20:21 UTC
[Nouveau] [PATCH v6 5/6] nouveau: use new mmu interval notifiers
On Thu, Jan 16, 2020 at 12:16:30PM -0800, Ralph Campbell wrote:
> Can you point me to the latest ODP code? Seems like my understanding is
> quite off.

https://elixir.bootlin.com/linux/v5.5-rc6/source/drivers/infiniband/hw/mlx5/odp.c

Look for the word 'implicit'.

mlx5_ib_invalidate_range() releases the interval_notifier when there
are no populated shadow PTEs in its leaf.

pagefault_implicit_mr() creates an interval_notifier that covers the
level in the page table that needs population. Notice it just uses an
unlocked xa_load to find the page table level.

The locking is pretty tricky as it relies on RCU, but the fault flow
is fairly lightweight.

Jason
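For reference, the "unlocked xa_load" lookup Jason points to reduces to
an RCU-protected xarray read. A minimal sketch, where struct child_mr,
its fields, and lookup_child() are hypothetical stand-ins for the mlx5
internals rather than the actual driver names:

#include <linux/mmu_notifier.h>
#include <linux/rcupdate.h>
#include <linux/xarray.h>

/* Hypothetical stand-in for mlx5's per-page-table-level child MR. */
struct child_mr {
	struct mmu_interval_notifier notifier;
	/* refcount, HW translation state, ... */
};

static struct child_mr *lookup_child(struct xarray *children,
				     unsigned long pt_index)
{
	struct child_mr *mr;

	rcu_read_lock();
	mr = xa_load(children, pt_index);	/* lockless; RCU protects the slot */
	/*
	 * Real code must take a reference on mr before rcu_read_unlock(),
	 * since a concurrent invalidation may free the entry after the
	 * RCU grace period expires.
	 */
	rcu_read_unlock();
	return mr;
}

This is the "single xa lookup with no lock" contrasted earlier in the
thread with nouveau's per-fault interval find.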