Jason Gunthorpe
2019-Jul-24 18:08 UTC
[Nouveau] [PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed, Jul 24, 2019 at 07:58:58PM +0200, Michal Hocko wrote:
> On Wed 24-07-19 12:28:58, Jason Gunthorpe wrote:
> > On Wed, Jul 24, 2019 at 09:05:53AM +0200, Christoph Hellwig wrote:
> > > Looks good:
> > >
> > > Reviewed-by: Christoph Hellwig <hch at lst.de>
> > >
> > > One comment on a related cleanup:
> > >
> > > >  	list_for_each_entry(mirror, &hmm->mirrors, list) {
> > > >  		int rc;
> > > >
> > > > -		rc = mirror->ops->sync_cpu_device_pagetables(mirror, &update);
> > > > +		rc = mirror->ops->sync_cpu_device_pagetables(mirror, nrange);
> > > >  		if (rc) {
> > > > -			if (WARN_ON(update.blockable || rc != -EAGAIN))
> > > > +			if (WARN_ON(mmu_notifier_range_blockable(nrange) ||
> > > > +				    rc != -EAGAIN))
> > > >  				continue;
> > > >  			ret = -EAGAIN;
> > > >  			break;
> > >
> > > This magic handling of error seems odd. I think we should merge rc and
> > > ret into one variable and just break out if any error happens, instead
> > > of claiming in the comments that -EAGAIN is the only valid error and
> > > then ignoring all others here.
> >
> > The WARN_ON is enforcing the rules already commented near
> > mmu_notifier_ops.invalidate_range_start - we could break or continue, it
> > doesn't much matter how to recover from a broken driver, but since we
> > did the WARN_ON this should sanitize the ret to EAGAIN or 0.
> >
> > Hmm. Actually, having looked at this some more, I wonder if this is a
> > problem:
> >
> > I see in __oom_reap_task_mm():
> >
> > 	if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
> > 		tlb_finish_mmu(&tlb, range.start, range.end);
> > 		ret = false;
> > 		continue;
> > 	}
> > 	unmap_page_range(&tlb, vma, range.start, range.end, NULL);
> > 	mmu_notifier_invalidate_range_end(&range);
> >
> > Which looks like it creates an unbalanced start/end pairing if any
> > start returns EAGAIN?
> >
> > This does not seem OK. Many users require start/end to be paired to
> > keep track of their internal locking. For instance, hmm breaks
> > because the hmm->notifiers counter becomes unable to get to 0.
> >
> > Below is the best idea I've had so far..
> >
> > Michal, what do you think?
>
> IIRC we discussed this with Jerome back when I introduced this code,
> and unless I misremember he said the current code was OK.

Nope, it has always been broken.

> Maybe new users have started relying on a new semantic in the meantime;
> back then, none of the notifiers had even started any action in blocking
> mode on an EAGAIN bailout. Most of them simply did a trylock early in the
> process and bailed out, so there was nothing to do for the range_end
> callback.

Single notifiers are not the problem. I tried to make this clear in
the commit message, but let's be more explicit.

We have *two* notifiers registered to the mm, A and B:

A invalidate_range_start: (has no blocking)
    spin_lock()
    counter++
    spin_unlock()

A invalidate_range_end:
    spin_lock()
    counter--
    spin_unlock()

And this one:

B invalidate_range_start: (has blocking)
    if (!try_mutex_lock())
        return -EAGAIN;
    counter++
    mutex_unlock()

B invalidate_range_end:
    spin_lock()
    counter--
    spin_unlock()

So now the oom path does:

invalidate_range_start_non_blocking:
    for each mn:
        a->invalidate_range_start
        b->invalidate_range_start
            rc = EAGAIN

Now we SKIP A's invalidate_range_end, even though A, which had no idea
this would happen, has state that needs to be unwound. A is broken.

B survived just fine.

A and B *alone* work fine; combined they fail.

When the commit landed, you could use KVM as an example of A and RDMA
ODP as an example of B.

Jason
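Jason's A/B walkthrough is pseudocode; the standalone userspace C model
below makes the imbalance runnable. It is a sketch, not kernel code:
every name in it (struct notifier, generic_start, range_start_nonblock)
is invented for illustration, and only the logic follows the email.

    /* Userspace model of the unbalanced start/end problem: notifier A
     * counts invalidations and never blocks; notifier B's trylock can
     * fail with -EAGAIN in non-blocking mode.  The chain bails on the
     * first failure and never calls ->end, like the oom path. */
    #include <stdio.h>
    #include <errno.h>
    #include <stdbool.h>

    struct notifier {
            const char *name;
            int counter;                /* pending invalidations */
            bool can_fail_nonblock;     /* B: trylock may fail */
            int (*start)(struct notifier *, bool blockable);
            void (*end)(struct notifier *);
    };

    static int generic_start(struct notifier *n, bool blockable)
    {
            if (!blockable && n->can_fail_nonblock)
                    return -EAGAIN;     /* B: try_mutex_lock() failed */
            n->counter++;               /* A: spin_lock(); counter++ */
            return 0;
    }

    static void generic_end(struct notifier *n)
    {
            n->counter--;
    }

    /* Mimics the oom-path behaviour: return on the first -EAGAIN and
     * skip ->end even for notifiers whose ->start already ran. */
    static int range_start_nonblock(struct notifier **mns, int nr)
    {
            for (int i = 0; i < nr; i++) {
                    int rc = mns[i]->start(mns[i], false);
                    if (rc)
                            return rc;  /* A's ->start already ran! */
            }
            return 0;
    }

    int main(void)
    {
            struct notifier a = { "A", 0, false, generic_start, generic_end };
            struct notifier b = { "B", 0, true,  generic_start, generic_end };
            struct notifier *chain[] = { &a, &b };

            if (range_start_nonblock(chain, 2) == -EAGAIN) {
                    /* oom path skips invalidate_range_end entirely */
            }
            printf("A counter = %d (stuck above 0 forever)\n", a.counter);
            printf("B counter = %d\n", b.counter);
            return 0;
    }

Running it prints "A counter = 1": A's counter can never return to 0,
which is exactly the hmm->notifiers symptom Jason describes.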
Michal Hocko
2019-Jul-24 18:56 UTC
[Nouveau] [PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed 24-07-19 15:08:37, Jason Gunthorpe wrote:
> On Wed, Jul 24, 2019 at 07:58:58PM +0200, Michal Hocko wrote:
[...]
> > Maybe new users have started relying on a new semantic in the meantime;
> > back then, none of the notifiers had even started any action in blocking
> > mode on an EAGAIN bailout. Most of them simply did a trylock early in the
> > process and bailed out, so there was nothing to do for the range_end
> > callback.
>
> Single notifiers are not the problem. I tried to make this clear in
> the commit message, but let's be more explicit.
>
> We have *two* notifiers registered to the mm, A and B:
>
> A invalidate_range_start: (has no blocking)
>     spin_lock()
>     counter++
>     spin_unlock()
>
> A invalidate_range_end:
>     spin_lock()
>     counter--
>     spin_unlock()
>
> And this one:
>
> B invalidate_range_start: (has blocking)
>     if (!try_mutex_lock())
>         return -EAGAIN;
>     counter++
>     mutex_unlock()
>
> B invalidate_range_end:
>     spin_lock()
>     counter--
>     spin_unlock()
>
> So now the oom path does:
>
> invalidate_range_start_non_blocking:
>     for each mn:
>         a->invalidate_range_start
>         b->invalidate_range_start
>             rc = EAGAIN
>
> Now we SKIP A's invalidate_range_end, even though A, which had no idea
> this would happen, has state that needs to be unwound. A is broken.
>
> B survived just fine.
>
> A and B *alone* work fine; combined they fail.

But that requires that they share some state, right?

> When the commit landed, you could use KVM as an example of A and RDMA
> ODP as an example of B.

Could you point me to where those two share the state, please? KVM seems
to be using kvm->mmu_notifier_count, but I do not know where to look for
the RDMA...
--
Michal Hocko
SUSE Labs
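For context on Michal's question: the answer below turns out to be that
no state is shared between the two drivers — each keeps private state
that merely assumes balanced start/end calls. The fragment below is an
approximate paraphrase of KVM's A-style pattern (based on
virt/kvm/kvm_main.c around v5.2; SRCU, TLB flushing, and return-value
plumbing are trimmed, so signatures and bodies are not verbatim).

    /* Approximate paraphrase of KVM's notifier pair, heavily trimmed. */
    static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
                            const struct mmu_notifier_range *range)
    {
            struct kvm *kvm = mmu_notifier_to_kvm(mn);

            spin_lock(&kvm->mmu_lock);
            kvm->mmu_notifier_count++;       /* "counter++" in Jason's A */
            kvm_unmap_hva_range(kvm, range->start, range->end);
            spin_unlock(&kvm->mmu_lock);
            return 0;                        /* the "has no blocking" case */
    }

    static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
                            const struct mmu_notifier_range *range)
    {
            struct kvm *kvm = mmu_notifier_to_kvm(mn);

            spin_lock(&kvm->mmu_lock);
            kvm->mmu_notifier_seq++;         /* fail racing page faults */
            kvm->mmu_notifier_count--;       /* "counter--" in Jason's A */
            spin_unlock(&kvm->mmu_lock);
    }

KVM page faults check mmu_notifier_count and retry while it is non-zero,
so a skipped range_end leaves the count elevated and guest faults retry
forever — KVM breaks without sharing any state with ODP.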
Michal Hocko
2019-Jul-24 18:59 UTC
[Nouveau] [PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
On Wed 24-07-19 20:56:17, Michal Hocko wrote:
> On Wed 24-07-19 15:08:37, Jason Gunthorpe wrote:
> > On Wed, Jul 24, 2019 at 07:58:58PM +0200, Michal Hocko wrote:
> [...]
> > > Maybe new users have started relying on a new semantic in the meantime;
> > > back then, none of the notifiers had even started any action in blocking
> > > mode on an EAGAIN bailout. Most of them simply did a trylock early in the
> > > process and bailed out, so there was nothing to do for the range_end
> > > callback.
> >
> > Single notifiers are not the problem. I tried to make this clear in
> > the commit message, but let's be more explicit.
> >
> > We have *two* notifiers registered to the mm, A and B:
> >
> > A invalidate_range_start: (has no blocking)
> >     spin_lock()
> >     counter++
> >     spin_unlock()
> >
> > A invalidate_range_end:
> >     spin_lock()
> >     counter--
> >     spin_unlock()
> >
> > And this one:
> >
> > B invalidate_range_start: (has blocking)
> >     if (!try_mutex_lock())
> >         return -EAGAIN;
> >     counter++
> >     mutex_unlock()
> >
> > B invalidate_range_end:
> >     spin_lock()
> >     counter--
> >     spin_unlock()
> >
> > So now the oom path does:
> >
> > invalidate_range_start_non_blocking:
> >     for each mn:
> >         a->invalidate_range_start
> >         b->invalidate_range_start
> >             rc = EAGAIN
> >
> > Now we SKIP A's invalidate_range_end, even though A, which had no idea
> > this would happen, has state that needs to be unwound. A is broken.
> >
> > B survived just fine.
> >
> > A and B *alone* work fine; combined they fail.
>
> But that requires that they share some state, right?
>
> > When the commit landed, you could use KVM as an example of A and RDMA
> > ODP as an example of B.
>
> Could you point me to where those two share the state, please? KVM seems
> to be using kvm->mmu_notifier_count, but I do not know where to look for
> the RDMA...

Scratch that. ELONGDAY... I can see your point. It is all-or-nothing,
and that doesn't really work here. Looking back at your patch, it seems
reasonable, but I am not sure what the behavior is supposed to be for
notifiers that failed.
--
Michal Hocko
SUSE Labs
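Michal's closing question — what the behavior should be for notifiers
that failed — has at least one natural answer: treat ->start like any
other acquire path and unwind on partial failure. Continuing the
userspace model from the first sketch (same invented struct notifier;
this is a guess at the shape of a fix, not Jason's actual patch, which
is not quoted in this excerpt):

    #include <stdbool.h>

    struct notifier {                   /* same shape as the earlier sketch */
            const char *name;
            int counter;
            bool can_fail_nonblock;
            int (*start)(struct notifier *, bool blockable);
            void (*end)(struct notifier *);
    };

    /* On partial failure, call ->end only on the notifiers whose ->start
     * succeeded, so every notifier still sees balanced start/end pairs. */
    static int range_start_nonblock_balanced(struct notifier **mns, int nr)
    {
            int i, rc = 0;

            for (i = 0; i < nr; i++) {
                    rc = mns[i]->start(mns[i], false);
                    if (rc)
                            break;
            }
            if (rc)
                    while (--i >= 0)    /* unwind the ones that started */
                            mns[i]->end(mns[i]);
            return rc;
    }

With this shape, the notifier that returned -EAGAIN never sees a
matching ->end, and every notifier before it sees exactly one — which
keeps KVM-style counters balanced no matter where in the chain the
bailout happens.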