
Displaying 20 results from an estimated 50 matches for "invalidate_range_end".

2019 Jul 24
2
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...if (mmu_notifier_invalidate_range_start_nonblock(&range)) { > > tlb_finish_mmu(&tlb, range.start, range.end); > > ret = false; > > continue; > > } > > unmap_page_range(&tlb, vma, range.start, range.end, NULL); > > mmu_notifier_invalidate_range_end(&range); > > > > Which looks like it creates an unbalanced start/end pairing if any > > start returns EAGAIN? > > > > This does not seem OK. Many users require start/end to be paired to > > keep track of their internal locking. I.e. for instance hmm breaks &...
2019 Jul 24
2
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...> > the commit message, but let's be more explicit. > > > > We have *two* notifiers registered to the mm, A and B: > > > > A invalidate_range_start: (has no blocking) > > spin_lock() > > counter++ > > spin_unlock() > > > > A invalidate_range_end: > > spin_lock() > > counter-- > > spin_unlock() > > > > And this one: > > > > B invalidate_range_start: (has blocking) > > if (!try_mutex_lock()) > > return -EAGAIN; > > counter++ > > mutex_unlock()...
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...are not the problem. I tried to make this clear in > the commit message, but let's be more explicit. > > We have *two* notifiers registered to the mm, A and B: > > A invalidate_range_start: (has no blocking) > spin_lock() > counter++ > spin_unlock() > > A invalidate_range_end: > spin_lock() > counter-- > spin_unlock() > > And this one: > > B invalidate_range_start: (has blocking) > if (!try_mutex_lock()) > return -EAGAIN; > counter++ > mutex_unlock() > > B invalidate_range_end: > spin_lock(...
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...re explicit. > > > > > > We have *two* notifiers registered to the mm, A and B: > > > > > > A invalidate_range_start: (has no blocking) > > > spin_lock() > > > counter++ > > > spin_unlock() > > > > > > A invalidate_range_end: > > > spin_lock() > > > counter-- > > > spin_unlock() > > > > > > And this one: > > > > > > B invalidate_range_start: (has blocking) > > > if (!try_mutex_lock()) > > > return -EAGAIN; > >...
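The A/B example quoted in the three results above boils down to a pairing invariant. A minimal standalone C sketch of it follows; all names are hypothetical (this is not kernel code, and the spin_lock()/spin_unlock() from the mail is elided):

#include <assert.h>
#include <stdio.h>

/* Notifier A from the example: start/end keep a counter balanced, so
 * active_invalidations == 0 iff no invalidation is in flight. */
static int active_invalidations;

static void a_invalidate_range_start(void) { active_invalidations++; }
static void a_invalidate_range_end(void)   { active_invalidations--; }

int main(void)
{
    /* Balanced pairing: the invariant holds. */
    a_invalidate_range_start();
    a_invalidate_range_end();
    assert(active_invalidations == 0);

    /* If the core calls A's start but then skips end (because B's
     * nonblocking start returned -EAGAIN), A's counter is wedged at a
     * nonzero value forever. */
    a_invalidate_range_start();
    printf("A's counter after skipped end: %d\n", active_invalidations);
    return 0;
}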
2019 Jul 31
2
[PATCH V2 4/9] vhost: reset invalidate_count in vhost_set_vring_num_addr()
...Jul 31, 2019 at 09:29:28PM +0800, Jason Wang wrote: > > On 2019/7/31 8:41 PM, Jason Gunthorpe wrote: > > On Wed, Jul 31, 2019 at 04:46:50AM -0400, Jason Wang wrote: > > > The vhost_set_vring_num_addr() could be called in the middle of > > > invalidate_range_start() and invalidate_range_end(). If we don't reset > > > invalidate_count after unregistering the MMU notifier, the > > > invalidate_count will get out of sync (e.g. never reach zero). This will > > > in fact disable the fast accessor path. Fix this by resetting the count to > > > zero. >...
2019 Jul 31
2
[PATCH V2 4/9] vhost: reset invalidate_count in vhost_set_vring_num_addr()
On Wed, Jul 31, 2019 at 04:46:50AM -0400, Jason Wang wrote: > The vhost_set_vring_num_addr() could be called in the middle of > invalidate_range_start() and invalidate_range_end(). If we don't reset > invalidate_count after unregistering the MMU notifier, the > invalidate_count will get out of sync (e.g. never reach zero). This will > in fact disable the fast accessor path. Fix this by resetting the count to > zero. > > Reported-by: Michael S. Tsirkin <...
2019 Jul 31
2
[PATCH V2 4/9] vhost: reset invalidate_count in vhost_set_vring_num_addr()
On Wed, Jul 31, 2019 at 04:46:50AM -0400, Jason Wang wrote: > The vhost_set_vring_num_addr() could be called in the middle of > invalidate_range_start() and invalidate_range_end(). If we don't reset > invalidate_count after unregistering the MMU notifier, the > invalidate_count will get out of sync (e.g. never reach zero). This will > in fact disable the fast accessor path. Fix this by resetting the count to > zero. > > Reported-by: Michael S. Tsirkin <...
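The race these vhost patches fix can be sketched in standalone C (all names here are made up for illustration, not the actual vhost code): if the MMU notifier is unregistered between start and end, the pending decrement never arrives, so re-setup must reset the counter.

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of vhost's invalidate_count: nonzero means an
 * invalidation is believed to be in flight, which disables the fast
 * accessor path. */
struct vq_model {
    int  invalidate_count;
    bool notifier_registered;
};

static void range_start(struct vq_model *vq) { vq->invalidate_count++; }
static void range_end(struct vq_model *vq)   { vq->invalidate_count--; }

/* Model of vhost_set_vring_num_addr(): unregister the notifier,
 * update the metadata, re-register. The fix is the explicit reset;
 * without it, an end() owed to an earlier start() never arrives. */
static void set_vring_num_addr(struct vq_model *vq)
{
    vq->notifier_registered = false;
    vq->invalidate_count = 0;          /* the fix */
    vq->notifier_registered = true;
}

int main(void)
{
    struct vq_model vq = { 0, true };

    range_start(&vq);        /* invalidation begins ...           */
    set_vring_num_addr(&vq); /* ... the ioctl races in the middle */
    /* range_end() for the old registration is never delivered.   */

    printf("invalidate_count=%d, fast path %s\n", vq.invalidate_count,
           vq.invalidate_count == 0 ? "enabled" : "disabled");
    return 0;
}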
2019 Jul 24
5
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...onder if this is a problem: I see in __oom_reap_task_mm(): if (mmu_notifier_invalidate_range_start_nonblock(&range)) { tlb_finish_mmu(&tlb, range.start, range.end); ret = false; continue; } unmap_page_range(&tlb, vma, range.start, range.end, NULL); mmu_notifier_invalidate_range_end(&range); Which looks like it creates an unbalanced start/end pairing if any start returns EAGAIN? This does not seem OK. Many users require start/end to be paired to keep track of their internal locking. I.e. for instance hmm breaks because the hmm->notifiers counter becomes unable to get t...
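The flow quoted from __oom_reap_task_mm() can be modeled outside the kernel. A minimal sketch, assuming (as the mail argues) that some notifiers' start callbacks have already run before one of them returns -EAGAIN; all names below are hypothetical:

#include <stdio.h>

#define EAGAIN 11  /* model of the errno value */

static int a_counter;                  /* notifier A's bookkeeping */

/* Model of start_nonblock(): every registered start callback is
 * attempted; A's succeeds before B's bails out. */
static int start_nonblock(int b_blocks)
{
    a_counter++;                       /* A's start ran and succeeded */
    if (b_blocks)
        return -EAGAIN;                /* B's start refuses to block */
    return 0;
}

static void range_end(void)
{
    a_counter--;
}

int main(void)
{
    if (start_nonblock(1)) {
        /* The quoted code "continue"s here without calling
         * range_end(), so A's counter is left at 1: the unbalanced
         * start/end pairing the mail complains about. */
        printf("a_counter=%d after -EAGAIN (unbalanced)\n", a_counter);
        return 0;
    }
    range_end();
    printf("a_counter=%d (balanced)\n", a_counter);
    return 0;
}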
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...also wonder here: when a page is write-protected then > > > > it does not look like .invalidate_range is invoked. > > > > > > > > E.g. mm/ksm.c calls > > > > > > > > mmu_notifier_invalidate_range_start and > > > > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range. > > > > > > > > Similarly, rmap in page_mkclean_one will not call > > > > mmu_notifier_invalidate_range. > > > > > > > > If I'm right, vhost won't get notified when a page is write-protected si...
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...also wonder here: when a page is write-protected then > > > > it does not look like .invalidate_range is invoked. > > > > > > > > E.g. mm/ksm.c calls > > > > > > > > mmu_notifier_invalidate_range_start and > > > > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range. > > > > > > > > Similarly, rmap in page_mkclean_one will not call > > > > mmu_notifier_invalidate_range. > > > > > > > > If I'm right, vhost won't get notified when a page is write-protected si...
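The point in the two results above is about which callbacks the write-protect paths invoke. A standalone C model (hypothetical names, not kernel API) of why a driver that hooks only ->invalidate_range never sees a write-protection event:

#include <stdio.h>

/* The write-protect paths (mm/ksm.c, page_mkclean_one) fire only the
 * start/end callbacks, never ->invalidate_range, so a driver hooking
 * only ->invalidate_range misses write protection entirely. */
struct notifier {
    void (*range_start)(void);
    void (*range_end)(void);
    void (*invalidate_range)(void);
};

static void vhost_invalidate_range(void)
{
    printf("vhost: invalidate_range fired\n");   /* never printed */
}

/* Model of the write-protect path: start/end only. */
static void write_protect_page(const struct notifier *n)
{
    if (n->range_start)
        n->range_start();
    /* ... PTE is write-protected here; no ->invalidate_range ... */
    if (n->range_end)
        n->range_end();
}

int main(void)
{
    const struct notifier vhost = {
        .range_start      = NULL,  /* not installed */
        .range_end        = NULL,  /* not installed */
        .invalidate_range = vhost_invalidate_range,
    };

    write_protect_page(&vhost);
    printf("vhost saw nothing: its mapping is now stale-writable\n");
    return 0;
}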
2019 Jul 24
1
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...mmu notifiers is > very small. > > This seems workable and does not need more driver review/update... > > However, hmm's implementation still needs more fixing. Can we take one step back, please? The only reason why drivers implement both ->invalidate_range_start and ->invalidate_range_end and expect them to be called in pairs is to keep some form of counter of active invalidation "sections". So instead of doctoring around undo schemes, the only sane answer is to move such a counter into the core VM code instead of having each driver struggle with it.
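What "a counter in the core VM code" could look like is sketched below in standalone C. This is a hypothetical illustration of the suggestion, not the mainline implementation; all names are made up:

#include <stdio.h>

/* The core VM keeps one counter of active invalidation "sections"
 * per mm, so each driver no longer duplicates the start++/end--
 * bookkeeping itself. */
struct mm_model {
    int active_invalidate_ranges;      /* owned by core, not drivers */
};

static void core_range_start(struct mm_model *mm)
{
    mm->active_invalidate_ranges++;
    /* ... then invoke each driver's ->invalidate_range_start ... */
}

static void core_range_end(struct mm_model *mm)
{
    /* ... invoke each driver's ->invalidate_range_end first ... */
    mm->active_invalidate_ranges--;
}

/* Drivers just query the core instead of counting themselves. */
static int mm_invalidation_in_flight(const struct mm_model *mm)
{
    return mm->active_invalidate_ranges != 0;
}

int main(void)
{
    struct mm_model mm = { 0 };

    core_range_start(&mm);
    printf("in flight: %d\n", mm_invalidation_in_flight(&mm)); /* 1 */
    core_range_end(&mm);
    printf("in flight: %d\n", mm_invalidation_in_flight(&mm)); /* 0 */
    return 0;
}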
2019 Jul 23
1
[PATCH 4/6] vhost: reset invalidate_count in vhost_set_vring_num_addr()
On Tue, Jul 23, 2019 at 03:57:16AM -0400, Jason Wang wrote: > The vhost_set_vring_num_addr() could be called in the middle of > invalidate_range_start() and invalidate_range_end(). If we don't reset > invalidate_count after unregistering the MMU notifier, the > invalidate_count will get out of sync (e.g. never reach zero). This will > in fact disable the fast accessor path. Fix this by resetting the count to > zero. > > Reported-by: Michael S. Tsirkin <...
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...o wonder here: when a page is write-protected then >>>>> it does not look like .invalidate_range is invoked. >>>>> >>>>> E.g. mm/ksm.c calls >>>>> >>>>> mmu_notifier_invalidate_range_start and >>>>> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range. >>>>> >>>>> Similarly, rmap in page_mkclean_one will not call >>>>> mmu_notifier_invalidate_range. >>>>> >>>>> If I'm right, vhost won't get notified when a page is write-protected...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...vqs, int iov_limit) > > > { > > > > I also wonder here: when a page is write-protected then > > it does not look like .invalidate_range is invoked. > > > > E.g. mm/ksm.c calls > > > > mmu_notifier_invalidate_range_start and > > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range. > > > > Similarly, rmap in page_mkclean_one will not call > > mmu_notifier_invalidate_range. > > > > If I'm right, vhost won't get notified when a page is write-protected since you > > didn't install start/end not...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...vqs, int iov_limit) > > > { > > > > I also wonder here: when a page is write-protected then > > it does not look like .invalidate_range is invoked. > > > > E.g. mm/ksm.c calls > > > > mmu_notifier_invalidate_range_start and > > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range. > > > > Similarly, rmap in page_mkclean_one will not call > > mmu_notifier_invalidate_range. > > > > If I'm right, vhost won't get notified when a page is write-protected since you > > didn't install start/end not...
2019 Jul 23
4
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
The hmm_mirror_ops callback function sync_cpu_device_pagetables() passes a struct hmm_update, which is a simplified version of struct mmu_notifier_range. This is unnecessary, so replace hmm_update with mmu_notifier_range directly. Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> Cc: "Jérôme Glisse" <jglisse at redhat.com> Cc: Jason Gunthorpe <jgg at mellanox.com>
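A standalone C sketch of the shape of this cleanup, with hypothetical field names (the real structs carry more members): drop the duplicated struct and pass the core range type straight to the mirror callback.

#include <stdio.h>

/* Stand-in for the core type. */
struct mmu_notifier_range_model {
    unsigned long start, end;
    int blockable;
};

/* Before: a parallel struct duplicating a subset of the fields above,
 * filled in by copying from the core range on every invalidation. */
struct hmm_update_model {
    unsigned long start, end;
    int blockable;
};

/* After: the callback takes the core range directly, so there is no
 * copy and no duplicate type to keep in sync. */
static int sync_cpu_device_pagetables(const struct mmu_notifier_range_model *r)
{
    printf("invalidate [%#lx, %#lx) blockable=%d\n",
           r->start, r->end, r->blockable);
    return 0;
}

int main(void)
{
    struct mmu_notifier_range_model range = { 0x1000, 0x2000, 1 };
    return sync_cpu_device_pagetables(&range);
}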
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...v_limit) >>>> { >>> I also wonder here: when a page is write-protected then >>> it does not look like .invalidate_range is invoked. >>> >>> E.g. mm/ksm.c calls >>> >>> mmu_notifier_invalidate_range_start and >>> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range. >>> >>> Similarly, rmap in page_mkclean_one will not call >>> mmu_notifier_invalidate_range. >>> >>> If I'm right, vhost won't get notified when a page is write-protected since you >>> didn't insta...
2019 Jul 23
0
[PATCH 4/6] vhost: reset invalidate_count in vhost_set_vring_num_addr()
The vhost_set_vring_num_addr() could be called in the middle of invalidate_range_start() and invalidate_range_end(). If we don't reset invalidate_count after unregistering the MMU notifier, the invalidate_count will get out of sync (e.g. never reach zero). This will in fact disable the fast accessor path. Fix this by resetting the count to zero. Reported-by: Michael S. Tsirkin <mst at redhat.com> Fixes:...
2019 Jul 31
0
[PATCH V2 4/9] vhost: reset invalidate_count in vhost_set_vring_num_addr()
The vhost_set_vring_num_addr() could be called in the middle of invalidate_range_start() and invalidate_range_end(). If we don't reset invalidate_count after unregistering the MMU notifier, the invalidate_count will get out of sync (e.g. never reach zero). This will in fact disable the fast accessor path. Fix this by resetting the count to zero. Reported-by: Michael S. Tsirkin <mst at redhat.com> Fixes:...
2019 Jul 31
0
[PATCH V2 4/9] vhost: reset invalidate_count in vhost_set_vring_num_addr()
On 2019/7/31 8:41 PM, Jason Gunthorpe wrote: > On Wed, Jul 31, 2019 at 04:46:50AM -0400, Jason Wang wrote: >> The vhost_set_vring_num_addr() could be called in the middle of >> invalidate_range_start() and invalidate_range_end(). If we don't reset >> invalidate_count after unregistering the MMU notifier, the >> invalidate_count will get out of sync (e.g. never reach zero). This will >> in fact disable the fast accessor path. Fix this by resetting the count to >> zero. >> >> Reported-by...