Displaying 20 results from an estimated 142 matches for "handle_mm_fault".
2019 Aug 23
6
[PATCH 0/2] mm/hmm: two bug fixes for hmm_range_fault()
I have been working on converting Jerome's hmm_dummy driver and self
tests into a stand-alone set of tests to be included in
tools/testing/selftests/vm and came across these two bug fixes in the
process. The tests aren't quite ready to be posted as a patch.
I'm posting the fixes now since I thought they shouldn't wait.
They should probably have a fixes line but with all the HMM
2007 May 22
1
Kernel Panic in wct4xxp during unload on Zaptel-1.4.4
...ff8013c7e5>{do_softirq+49}
<5> <ffffffff80110bf5>{apic_timer_interrupt+133} <EOI>
<ffffffff8011c21a>{flush_tlb_page+44}
<5> <ffffffff80169106>{do_wp_page+1127}
<ffffffff80123ed3>{do_page_fault+575}
<5> <ffffffff80169ff2>{handle_mm_fault+1228}
<ffffffff80123e9a>{do_page_fault+518}
<5> <ffffffff8011026a>{system_call+126}
<ffffffff80132bc6>{schedule_tail+202}
<5> <ffffffff80110d91>{error_exit+0}
<5>
<5> Code: 8b 40 10 89 44 24 58 e8 3d 80 1a e0 31 c0 f6 44 24 58 07 0f
<...
2004 Jun 10
1
ext3 EIP
...un 10 10:28:59 shawarma kernel: [do_page_cache_readahead+231/288] do_page_cache_readahead+0xe7/0x120
Jun 10 10:28:59 shawarma kernel: [filemap_nopage+705/800] filemap_nopage+0x2c1/0x320
Jun 10 10:28:59 shawarma kernel: [do_no_page+144/672] do_no_page+0x90/0x2a0
Jun 10 10:28:59 shawarma kernel: [handle_mm_fault+166/256] handle_mm_fault+0xa6/0x100
Jun 10 10:28:59 shawarma kernel: [do_page_fault+263/1181] do_page_fault+0x107/0x49d
Jun 10 10:28:59 shawarma kernel: [recalc_task_prio+139/384] recalc_task_prio+0x8b/0x180
Jun 10 10:28:59 shawarma kernel: [schedule+605/1056] schedule+0x25d/0x420
Jun 10 10:28:5...
2020 Mar 16
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...te_page check in nouveau.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can
look at the struct page but what if a driver needs to fault in a page from
another device's private memory? Should it call handle_mm_fault()?
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 -
> drivers/gpu/drm/nouveau/nouveau_dmem.c | 5 +++--
> drivers/gpu/drm/nouveau/nouveau_svm.c | 1 -
> include/linux/hmm.h | 2 --
> mm/hmm.c | 25 +++++-------...
2019 Aug 23
0
[PATCH 2/2] mm/hmm: hmm_range_fault() infinite loop
Normally, callers to handle_mm_fault() are supposed to check the
vma->vm_flags first. hmm_range_fault() checks for VM_READ but doesn't
check for VM_WRITE if the caller requests a page to be faulted in
with write permission (via the hmm_range.pfns[] value).
If the vma is write protected, this can result in an infinite loop:
hm...
2019 Jul 23
0
[PATCH 1/6] mm: always return EBUSY for invalid ranges in hmm_range_{fault, snapshot}
...; > - return -EAGAIN;
> > - }
> > + if (!range->valid)
> > + return -EBUSY;
>
> Is it fine to remove up_read(&hmm->mm->mmap_sem) ?
It seems very subtle, but under the covers this calls
handle_mm_fault() with FAULT_FLAG_ALLOW_RETRY, which causes the
mmap sem to become unlocked along the -EAGAIN return path.
I think without the commit message I wouldn't have been able to
understand that, so Christoph, could you also add the comment below
please?
Otherwise
Reviewed-by: Jason Gunthorpe <...
2020 Mar 17
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...Tue, Mar 17, 2020 at 09:15:36AM -0300, Jason Gunthorpe wrote:
> > Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can
> > look at the struct page but what if a driver needs to fault in a page from
> > another device's private memory? Should it call handle_mm_fault()?
>
> Isn't that what this series basically does?
>
> The dev_private_owner is set to the type of pgmap the device knows how
> to handle, and everything else is automatically faulted for the
> device.
>
> If the device does not know how to handle device_private then i...
2004 Sep 29
3
Oops from netlink or what?
...00c0 00000f60 c0218cf7
Sep 27 19:02:58 rebecca kernel: 00001000 000000d0 c40de2e0 00000645
c40de2e0 c40de2e0 c61e00c0 00000645
Sep 27 19:02:58 rebecca kernel: Call Trace:
Sep 27 19:02:58 rebecca kernel: [tc_dump_tfilter+180/608]
tc_dump_tfilter+0xb4/0x260
Sep 27 19:02:58 rebecca kernel: [handle_mm_fault+224/336]
handle_mm_fault+0xe0/0x150
Sep 27 19:02:58 rebecca kernel: [alloc_skb+71/240] alloc_skb+0x47/0xf0
Sep 27 19:02:58 rebecca kernel: [netlink_dump+89/464] netlink_dump+0x59/0x1d0
Sep 27 19:02:58 rebecca kernel: [netlink_dump_start+167/240]
netlink_dump_start+0xa7/0xf0
Sep 27 19:02:58 reb...
2019 Jul 22
2
[PATCH 1/6] mm: always return EBUSY for invalid ranges in hmm_range_{fault, snapshot}
On Mon, Jul 22, 2019 at 3:14 PM Christoph Hellwig <hch at lst.de> wrote:
>
> We should not have two different error codes for the same condition. In
> addition this really complicates the code due to the special handling of
> EAGAIN that drops the mmap_sem due to the FAULT_FLAG_ALLOW_RETRY logic
> in the core vm.
>
> Signed-off-by: Christoph Hellwig <hch at
2006 Jan 06
2
3ware disk failure -> hang
...de/0x1e8
Jan 6 01:04:10 $SERVER kernel: [<c01a97c3>] task_has_capability+0x4a/0x52
Jan 6 01:04:10 $SERVER kernel: [<c0225cd5>] sg_scsi_ioctl+0x2bf/0x3c1
Jan 6 01:04:10 $SERVER kernel: [<c02261aa>] scsi_cmd_ioctl+0x3d3/0x475
Jan 6 01:04:10 $SERVER kernel: [<c014d41b>] handle_mm_fault+0xbd/0x175
Jan 6 01:04:10 $SERVER kernel: [<c011ad67>] do_page_fault+0x1ae/0x5c6
Jan 6 01:04:10 $SERVER kernel: [<c014e4a6>] vma_adjust+0x286/0x2d6
Jan 6 01:04:10 $SERVER kernel: [<f88228ea>] sd_ioctl+0xb3/0xd4 [sd_mod]
Jan 6 01:04:10 $SERVER kernel: [<c02246e8>] blk...
2019 Aug 23
0
[PATCH 1/2] mm/hmm: hmm_range_fault() NULL pointer bug
...ult() /* calls find_vma() but no range check */
walk_page_range() /* calls find_vma(), sets walk->vma = NULL */
__walk_page_range()
walk_pgd_range()
walk_p4d_range()
walk_pud_range()
hmm_vma_walk_hole()
hmm_vma_walk_hole_()
hmm_vma_do_fault()
handle_mm_fault(vma=0)
Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
---
mm/hmm.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index fc05c8fe78b4..29371485fe94 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -229,6 +229,9 @@ static int hmm_vma_do_f...
2006 Feb 09
0
Repeated kernel "oops" / oom-killer with Ralph Passgang's xen 3.0.0 Debian packages
...;c014da0f>] do_no_page+0xaf/0x3c0
[<c03f5334>] kbd_init+0x64/0xa0
[<c03feb7c>] ip_auto_config_setup+0x1bc/0x230
[<c03feb7c>] ip_auto_config_setup+0x1bc/0x230
[<c014b269>] pte_alloc_map+0x49/0x1d0
[<c03feb7c>] ip_auto_config_setup+0x1bc/0x230
[<c014df90>] handle_mm_fault+0xf0/0x1d0
[<c03f5334>] kbd_init+0x64/0xa0
[<c03f0b64>] kmem_cache_init+0x294/0x330
[<c03feb7c>] ip_auto_config_setup+0x1bc/0x230
[<c03f5334>] kbd_init+0x64/0xa0
[<c011507c>] do_page_fault+0x1cc/0x66a
[<c03f5334>] kbd_init+0x64/0xa0
[<c01175e8>] reca...
2008 Jul 10
11
Xen status in lenny?
Hi,
AFAIK, the status of Xen in lenny is currently the following:
- no dom0 kernel
- domU kernel only for i386 (no domU kernel for amd64)
I was told (I don't remember where) that this is because the vanilla
kernel only supports domU for i386, and has no dom0 support, so distros
have to port the patches to their kernels (please correct me if I'm
wrong).
However:
- etch shipped with dom0
2018 Aug 05
2
[PATCH net-next 0/6] virtio_net: Add ethtool stat items
...0x350
[ 46.168160] ? netlink_unicast+0x6a0/0x6a0
[ 46.168168] sock_sendmsg+0xdb/0x160
[ 46.168193] ___sys_sendmsg+0x6b3/0xbd0
[ 46.168207] ? copy_msghdr_from_user+0x350/0x350
[ 46.168221] ? do_raw_spin_unlock+0xae/0x310
[ 46.168248] ? _raw_spin_unlock+0x2e/0x50
[ 46.168257] ? __handle_mm_fault+0xb65/0x2e90
[ 46.168278] ? handle_mm_fault+0x28f/0xa70
[ 46.168284] ? kvm_clock_read+0x1f/0x30
[ 46.168289] ? kvm_sched_clock_read+0x5/0x10
[ 46.168303] ? __do_page_fault+0x549/0xd00
[ 46.168308] ? kvm_clock_read+0x1f/0x30
[ 46.168313] ? kvm_sched_clock_read+0x5/0x10
[ 46.16831...
2008 May 10
2
kernel- 2.6.25.3 + xen 3.2
Hi
Does anyone use the 2.6.25.3 kernel with xen-3.2?
I have a problem with kernel 2.6.24.
Where can I find a patch for this kernel?
Regards,
Albert
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
2016 May 25
3
[PATCH] x86/paravirt: Do not trace _paravirt_ident_*() functions
...8867 ffffea0000008030
Call Trace:
[<ffffffff81cc5f47>] _raw_spin_lock+0x27/0x30
[<ffffffff8122c15b>] handle_pte_fault+0x13db/0x16b0
[<ffffffff811bf4cb>] ? function_trace_call+0x15b/0x180
[<ffffffff8122ad85>] ? handle_pte_fault+0x5/0x16b0
[<ffffffff8122e322>] handle_mm_fault+0x312/0x670
[<ffffffff81231068>] ? find_vma+0x68/0x70
[<ffffffff810ab741>] __do_page_fault+0x1b1/0x4e0
[<ffffffff810aba92>] do_page_fault+0x22/0x30
[<ffffffff81cc7f68>] page_fault+0x28/0x30
[<ffffffff81574af5>] ? copy_user_enhanced_fast_string+0x5/0x10
[<...
2023 Jul 26
1
[PATCH] vdpa/mlx5: Fix crash on shutdown for when no ndev exists
...Call Trace:
<TASK>
? __die+0x20/0x60
? page_fault_oops+0x14c/0x3c0
? exc_page_fault+0x75/0x140
? asm_exc_page_fault+0x22/0x30
? mlx5v_shutdown+0xe/0x50 [mlx5_vdpa]
device_shutdown+0x13e/0x1e0
kernel_restart+0x36/0x90
__do_sys_reboot+0x141/0x210
? vfs_writev+0xcd/0x140
? handle_mm_fault+0x161/0x260
? do_writev+0x6b/0x110
do_syscall_64+0x3d/0x90
entry_SYSCALL_64_after_hwframe+0x46/0xb0
RIP: 0033:0x7f496990fb56
RSP: 002b:00007fffc7bdde88 EFLAGS: 00000206 ORIG_RAX: 00000000000000a9
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f496990fb56
RDX: 0000000001234567 RSI:...
2013 Aug 27
7
[PATCH] Btrfs: fix deadlock in uuid scan kthread
...te+0x2c/0x80 [btrfs]
[36700.671757] [<ffffffffa05d4c9e>] btrfs_ioctl_snap_create+0x5e/0x80 [btrfs]
[36700.671759] [<ffffffff8113a764>] ? handle_pte_fault+0x84/0x920
[36700.671764] [<ffffffffa05d87eb>] btrfs_ioctl+0xf0b/0x1d00 [btrfs]
[36700.671766] [<ffffffff8113c120>] ? handle_mm_fault+0x210/0x310
[36700.671768] [<ffffffff816f83a4>] ? __do_page_fault+0x284/0x4e0
[36700.671770] [<ffffffff81180aa6>] do_vfs_ioctl+0x96/0x550
[36700.671772] [<ffffffff81170fe3>] ? __sb_end_write+0x33/0x70
[36700.671774] [<ffffffff81180ff1>] SyS_ioctl+0x91/0xb0
[36700.671775]...