Displaying 5 results from an estimated 5 matches for "__oom_reap_task_mm".
2019 Sep 06
0
possible deadlock in __mmu_notifier_invalidate_range_end
...-----------------
oom_reaper/1065 is trying to acquire lock:
ffffffff8904ff60 (mmu_notifier_invalidate_range_start){+.+.}, at:
__mmu_notifier_invalidate_range_end+0x0/0x360 mm/mmu_notifier.c:169
but task is already holding lock:
ffffffff8904ff60 (mmu_notifier_invalidate_range_start){+.+.}, at:
__oom_reap_task_mm+0x196/0x490 mm/oom_kill.c:542
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(mmu_notifier_invalidate_range_start);
lock(mmu_notifier_invalidate_range_start);
*** DEADLOCK ***
May be due to missing lock nesting notation
2 l...
2019 Jul 24
5
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...ented near
mmu_notifier_ops.invalidate_start - we could break or continue, it
doesn't much matter how to recover from a broken driver, but since we
did the WARN_ON this should sanitize the ret to EAGAIN or 0
Hmm. Actually, having looked at this some more, I wonder if this is a
problem:
I see in __oom_reap_task_mm():
if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
        tlb_finish_mmu(&tlb, range.start, range.end);
        ret = false;
        continue;
}
unmap_page_range(&tlb, vma, range.start, range.end, NULL);
mmu_notifier_invalidate_range_end(&range);
Which looks like it c...
2019 Jul 24
0
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...date_start - we could break or continue, it
> doesn't much matter how to recover from a broken driver, but since we
> did the WARN_ON this should sanitize the ret to EAGAIN or 0
>
> Hmm. Actually, having looked at this some more, I wonder if this is a
> problem:
>
> I see in __oom_reap_task_mm():
>
> if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
>         tlb_finish_mmu(&tlb, range.start, range.end);
>         ret = false;
>         continue;
> }
> unmap_page_range(&tlb, vma, range.start, range.end, NULL);
> mmu_notifier_invalidate_range...
2019 Jul 24
2
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
...inue, it
> > doesn't much matter how to recover from a broken driver, but since we
> > did the WARN_ON this should sanitize the ret to EAGAIN or 0
> >
> > Hmm. Actually, having looked at this some more, I wonder if this is a
> > problem:
> >
> > I see in __oom_reap_task_mm():
> >
> > if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
> >         tlb_finish_mmu(&tlb, range.start, range.end);
> >         ret = false;
> >         continue;
> > }
> > unmap_page_range(&tlb, vma, range.start, range.end, NULL);
> ...
2019 Jul 23
4
[PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
The hmm_mirror_ops callback function sync_cpu_device_pagetables() passes
a struct hmm_update which is a simplified version of struct
mmu_notifier_range. This is unnecessary so replace hmm_update with
mmu_notifier_range directly.
Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
Cc: "Jérôme Glisse" <jglisse at redhat.com>
Cc: Jason Gunthorpe <jgg at mellanox.com>
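A minimal sketch of the signature change the patch describes, using stub struct definitions (the real kernel structs carry more fields; these stand-ins only illustrate why passing mmu_notifier_range through directly removes the intermediate copy into hmm_update):

```c
#include <stdbool.h>

/* Illustrative stand-ins, not the real kernel definitions. */
struct hmm_update {              /* the simplified type being removed */
    unsigned long start, end;
    bool blockable;
};

struct mmu_notifier_range {      /* the generic type replacing it */
    unsigned long start, end;
    unsigned int flags;          /* blockable bit, event kind, ... */
};

/* Before: the mirror callback took the hmm-private struct, which the
 * core had to populate from the notifier range on every invalidation. */
typedef int (*sync_old_t)(struct hmm_update *update);

/* After: the callback consumes the notifier range directly. */
typedef int (*sync_new_t)(const struct mmu_notifier_range *range);

/* Toy callback in the new style, just to show the shape. */
static int sync_range_len(const struct mmu_notifier_range *range)
{
    return (int)(range->end - range->start);
}
```

The design point is that hmm_update duplicated a subset of mmu_notifier_range, so every caller paid for a field-by-field translation that carried no extra information.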