Displaying 20 results from an estimated 29 matches for "mmu_notifier_invalidate_range_end".
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...> +
> void vhost_dev_init(struct vhost_dev *dev,
> struct vhost_virtqueue **vqs, int nvqs, int iov_limit)
> {
I also wonder here: when page is write protected then
it does not look like .invalidate_range is invoked.
E.g. mm/ksm.c calls
mmu_notifier_invalidate_range_start and
mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
Similarly, rmap in page_mkclean_one will not call
mmu_notifier_invalidate_range.
If I'm right vhost won't get notified when page is write-protected since you
didn't install start/end notifiers. Note that end notifier can be called
with page locke...
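For context, the objection above is that the RFC only hooked .invalidate_range, while the write-protect paths (mm/ksm.c, page_mkclean_one in rmap) only emit the _start/_end pair. Below is a minimal sketch of what registering that pair could look like; struct vhost_map_mn and vhost_map_invalidate are invented names for illustration, not part of the actual patch, and the callback signatures follow the v5.x mmu_notifier interface, which has changed across kernel versions:

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

/* hypothetical per-device state embedding the notifier */
struct vhost_map_mn {
	struct mmu_notifier mn;
	/* ... cached kernel-VA mappings of the vq metadata ... */
};

/* hypothetical helper: drop any cached mapping overlapping [start, end) */
static void vhost_map_invalidate(struct vhost_map_mn *vmn,
				 unsigned long start, unsigned long end)
{
	/* mark the mapping stale so the next access re-pins the pages */
}

static int vhost_map_range_start(struct mmu_notifier *mn,
				 const struct mmu_notifier_range *range)
{
	struct vhost_map_mn *vmn = container_of(mn, struct vhost_map_mn, mn);

	/*
	 * Unlike .invalidate_range, this hook fires for write protection
	 * too (KSM, page_mkclean_one), so the metadata mapping must be
	 * torn down here, before the PTEs change.
	 */
	vhost_map_invalidate(vmn, range->start, range->end);
	return 0;
}

static void vhost_map_range_end(struct mmu_notifier *mn,
				const struct mmu_notifier_range *range)
{
	/* pair of _start: safe to re-establish the mapping lazily from here on */
}

static const struct mmu_notifier_ops vhost_map_mn_ops = {
	.invalidate_range_start	= vhost_map_range_start,
	.invalidate_range_end	= vhost_map_range_end,
};

After setting vmn->mn.ops = &vhost_map_mn_ops, registration would be a plain mmu_notifier_register(&vmn->mn, mm) against the owner's mm, e.g. from vhost_dev_init().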
2019 Mar 07
5
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...**vqs, int nvqs, int iov_limit)
> > > {
> >
> > I also wonder here: when page is write protected then
> > it does not look like .invalidate_range is invoked.
> >
> > E.g. mm/ksm.c calls
> >
> > mmu_notifier_invalidate_range_start and
> > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
> >
> > Similarly, rmap in page_mkclean_one will not call
> > mmu_notifier_invalidate_range.
> >
> > If I'm right vhost won't get notified when page is write-protected since you
> > didn't install start/end not...
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...> > > I also wonder here: when page is write protected then
> > > > it does not look like .invalidate_range is invoked.
> > > >
> > > > E.g. mm/ksm.c calls
> > > >
> > > > mmu_notifier_invalidate_range_start and
> > > > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
> > > >
> > > > Similarly, rmap in page_mkclean_one will not call
> > > > mmu_notifier_invalidate_range.
> > > >
> > > > If I'm right vhost won't get notified when page is write-protected si...
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...*dev,
> > struct vhost_virtqueue **vqs, int nvqs, int iov_limit)
> > {
>
> I also wonder here: when page is write protected then
> it does not look like .invalidate_range is invoked.
>
> E.g. mm/ksm.c calls
>
> mmu_notifier_invalidate_range_start and
> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
>
> Similarly, rmap in page_mkclean_one will not call
> mmu_notifier_invalidate_range.
>
> If I'm right vhost won't get notified when page is write-protected since you
> didn't install start/end notifiers. Note that end notifier...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...**vqs, int nvqs, int iov_limit)
> > > {
> >
> > I also wonder here: when page is write protected then
> > it does not look like .invalidate_range is invoked.
> >
> > E.g. mm/ksm.c calls
> >
> > mmu_notifier_invalidate_range_start and
> > mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
> >
> > Similarly, rmap in page_mkclean_one will not call
> > mmu_notifier_invalidate_range.
> >
> > If I'm right vhost won't get notified when page is write-protected since you
> > didn't install start/end not...
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...nvqs, int iov_limit)
>>>> {
>>> I also wonder here: when page is write protected then
>>> it does not look like .invalidate_range is invoked.
>>>
>>> E.g. mm/ksm.c calls
>>>
>>> mmu_notifier_invalidate_range_start and
>>> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
>>>
>>> Similarly, rmap in page_mkclean_one will not call
>>> mmu_notifier_invalidate_range.
>>>
>>> If I'm right vhost won't get notified when page is write-protected since you
>>> didn't insta...
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...>> I also wonder here: when page is write protected then
>>>>> it does not look like .invalidate_range is invoked.
>>>>>
>>>>> E.g. mm/ksm.c calls
>>>>>
>>>>> mmu_notifier_invalidate_range_start and
>>>>> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
>>>>>
>>>>> Similarly, rmap in page_mkclean_one will not call
>>>>> mmu_notifier_invalidate_range.
>>>>>
>>>>> If I'm right vhost won't get notified when page is write-protected...
2023 Mar 28
3
[PATCH] mm: Take a page reference when removing device exclusive entries
...vma->vm_mm, vmf->address & PAGE_MASK,
(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
pte_unmap_unlock(vmf->pte, vmf->ptl);
folio_unlock(folio);
+ put_page(vmf->page);
mmu_notifier_invalidate_range_end(&range);
return 0;
--
2.39.2
2019 Jul 24
5
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...ome more, I wonder if this is a
problem:
I see in __oom_reap_task_mm():
if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
tlb_finish_mmu(&tlb, range.start, range.end);
ret = false;
continue;
}
unmap_page_range(&tlb, vma, range.start, range.end, NULL);
mmu_notifier_invalidate_range_end(&range);
Which looks like it creates an unbalanced start/end pairing if any
start returns EAGAIN?
This does not seem OK.. Many users require start/end to be paired to
keep track of their internal locking. Ie for instance hmm breaks
because the hmm->notifiers counter becomes unable to get t...
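To make the pairing requirement concrete, here is a minimal sketch (invented names, not hmm's actual code) of the pattern such users rely on: _start bumps a counter that marks mirrored page tables as suspect, _end drops it, and a fault path waits for it to reach zero. A _start that is never followed by its _end, as in the EAGAIN path quoted above, leaves the counter elevated forever:

#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct range_tracker {
	struct mmu_notifier mn;
	spinlock_t lock;
	long active;			/* _start calls not yet matched by an _end */
	wait_queue_head_t wq;
};

static int tracker_range_start(struct mmu_notifier *mn,
			       const struct mmu_notifier_range *range)
{
	struct range_tracker *t = container_of(mn, struct range_tracker, mn);

	spin_lock(&t->lock);
	t->active++;			/* device page tables are now stale */
	spin_unlock(&t->lock);
	return 0;
}

static void tracker_range_end(struct mmu_notifier *mn,
			      const struct mmu_notifier_range *range)
{
	struct range_tracker *t = container_of(mn, struct range_tracker, mn);

	spin_lock(&t->lock);
	if (--t->active == 0)		/* only now may mirrors be rebuilt */
		wake_up_all(&t->wq);
	spin_unlock(&t->lock);
}

/*
 * A fault handler elsewhere does roughly:
 *	wait_event(t->wq, READ_ONCE(t->active) == 0);
 * so a _start whose _end is skipped stalls it indefinitely -- the
 * "counter becomes unable to get to zero" failure described above.
 */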
2023 Mar 30
4
[PATCH v2] mm: Take a page reference when removing device exclusive entries
...vma->vm_mm, vmf->address & PAGE_MASK,
(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
pte_unmap_unlock(vmf->pte, vmf->ptl);
folio_unlock(folio);
+ folio_put(folio);
mmu_notifier_invalidate_range_end(&range);
return 0;
--
2.39.2
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...> I see in __oom_reap_task_mm():
>
> if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
> tlb_finish_mmu(&tlb, range.start, range.end);
> ret = false;
> continue;
> }
> unmap_page_range(&tlb, vma, range.start, range.end, NULL);
> mmu_notifier_invalidate_range_end(&range);
>
> Which looks like it creates an unbalanced start/end pairing if any
> start returns EAGAIN?
>
> This does not seem OK.. Many users require start/end to be paired to
> keep track of their internal locking. Ie for instance hmm breaks
> because the hmm->notifi...
2023 Mar 29
1
[PATCH] mm: Take a page reference when removing device exclusive entries
...vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> folio_unlock(folio);
> + put_page(vmf->page);
folio_put(folio)
There, I just saved you 3 calls to compound_head(), saving roughly 150
bytes of kernel text.
> mmu_notifier_invalidate_range_end(&range);
> return 0;
> --
> 2.39.2
>
>
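The saving comes from how the generic helper resolves the head page. Roughly (a simplified paraphrase of the helpers in include/linux/mm.h; the real put_page() has extra handling, e.g. for devmap pages):

static inline void put_page(struct page *page)
{
	struct folio *folio = page_folio(page);	/* compound_head() lookup */

	folio_put(folio);			/* drop the refcount on the head */
}

Since remove_device_exclusive_entry() already has folio in hand, folio_put(folio) skips the page_folio()/compound_head() step that put_page(vmf->page) would redo, which is where the saved calls and the smaller kernel text come from.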
2019 Sep 06
0
possible deadlock in __mmu_notifier_invalidate_range_end
...'")
============================================
WARNING: possible recursive locking detected
5.3.0-rc6-next-20190830 #75 Not tainted
--------------------------------------------
oom_reaper/1065 is trying to acquire lock:
ffffffff8904ff60 (mmu_notifier_invalidate_range_start){+.+.}, at:
__mmu_notifier_invalidate_range_end+0x0/0x360 mm/mmu_notifier.c:169
but task is already holding lock:
ffffffff8904ff60 (mmu_notifier_invalidate_range_start){+.+.}, at:
__oom_reap_task_mm+0x196/0x490 mm/oom_kill.c:542
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(m...
2023 Mar 28
1
[PATCH] mm: Take a page reference when removing device exclusive entries
...E_MASK,
> (vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
> @@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> folio_unlock(folio);
> + put_page(vmf->page);
>
> mmu_notifier_invalidate_range_end(&range);
> return 0;