Displaying 15 results from an estimated 74 matches for "handle_pte_fault".
2016 May 25
3
[PATCH] x86/paravirt: Do not trace _paravirt_ident_*() functions
...8800d4aefba0 ffffffff81cc5f47 ffff8800d4aefc60 ffffffff8122c15b
ffff8800d4aefcb0 ffff8800d4aefbd0 ffffffff811bf4cb 0000000000000002
0000000000000015 ffff8800d2276050 80000000c0fd8867 ffffea0000008030
Call Trace:
[<ffffffff81cc5f47>] _raw_spin_lock+0x27/0x30
[<ffffffff8122c15b>] handle_pte_fault+0x13db/0x16b0
[<ffffffff811bf4cb>] ? function_trace_call+0x15b/0x180
[<ffffffff8122ad85>] ? handle_pte_fault+0x5/0x16b0
[<ffffffff8122e322>] handle_mm_fault+0x312/0x670
[<ffffffff81231068>] ? find_vma+0x68/0x70
[<ffffffff810ab741>] __do_page_fault+0x1b1/0x4e0...
2016 Sep 02
0
[PATCH] x86/paravirt: Do not trace _paravirt_ident_*() functions
...5f47 ffff8800d4aefc60 ffffffff8122c15b
> ffff8800d4aefcb0 ffff8800d4aefbd0 ffffffff811bf4cb 0000000000000002
> 0000000000000015 ffff8800d2276050 80000000c0fd8867 ffffea0000008030
> Call Trace:
> [<ffffffff81cc5f47>] _raw_spin_lock+0x27/0x30
> [<ffffffff8122c15b>] handle_pte_fault+0x13db/0x16b0
> [<ffffffff811bf4cb>] ? function_trace_call+0x15b/0x180
> [<ffffffff8122ad85>] ? handle_pte_fault+0x5/0x16b0
> [<ffffffff8122e322>] handle_mm_fault+0x312/0x670
> [<ffffffff81231068>] ? find_vma+0x68/0x70
> [<ffffffff810ab741>] __...
2013 Aug 27
7
[PATCH] Btrfs: fix deadlock in uuid scan kthread
...7] [<ffffffffa05d4bf2>] btrfs_ioctl_snap_create_transid+0x142/0x190 [btrfs]
[36700.671752] [<ffffffffa05d4c6c>] ? btrfs_ioctl_snap_create+0x2c/0x80 [btrfs]
[36700.671757] [<ffffffffa05d4c9e>] btrfs_ioctl_snap_create+0x5e/0x80 [btrfs]
[36700.671759] [<ffffffff8113a764>] ? handle_pte_fault+0x84/0x920
[36700.671764] [<ffffffffa05d87eb>] btrfs_ioctl+0xf0b/0x1d00 [btrfs]
[36700.671766] [<ffffffff8113c120>] ? handle_mm_fault+0x210/0x310
[36700.671768] [<ffffffff816f83a4>] ? __do_page_fault+0x284/0x4e0
[36700.671770] [<ffffffff81180aa6>] do_vfs_ioctl+0x96/0x550...
2013 Jul 01
1
[PATCH] drm/nouveau: fix locking in nouveau_crtc_page_flip
...ffa0347c80>] nouveau_bo_validate+0x1c/0x1e [nouveau]
[<ffffffffa0347d52>] nouveau_ttm_fault_reserve_notify+0xd0/0xd7 [nouveau]
[<ffffffffa019abad>] ttm_bo_vm_fault+0x69/0x394 [ttm]
[<ffffffff8114eaed>] __do_fault+0x6e/0x496
[<ffffffff811515fb>] handle_pte_fault+0x84/0x861
[<ffffffff81152de4>] handle_mm_fault+0x1e2/0x2b1
[<ffffffff816f5fec>] __do_page_fault+0x15e/0x517
[<ffffffff816f63dc>] do_page_fault+0x37/0x6b
[<ffffffff816f3122>] page_fault+0x22/0x30
other info that might help us debug this:
Possib...
2015 Jan 07
1
[PATCH v8 34/50] vhost/net: virtio 1.0 byte swap
...] ? sock_recvmsg+0x133/0x160
> <4> [<ffffffff8109afa0>] ? autoremove_wake_function+0x0/0x40
> <4> [<ffffffff81136941>] ? lru_cache_add_lru+0x21/0x40
> <4> [<ffffffff8115522d>] ? page_add_new_anon_rmap+0x9d/0xf0
> <4> [<ffffffff8114aeef>] ? handle_pte_fault+0x4af/0xb00
> <4> [<ffffffff81451f14>] ? move_addr_to_kernel+0x64/0x70
> <4> [<ffffffff814538b6>] __sys_sendmsg+0x406/0x420
> <4> [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
> <4> [<ffffffff814523d9>] ? sys_sendto+0x139/0x190
> &...
2014 Dec 01
2
[PATCH v8 34/50] vhost/net: virtio 1.0 byte swap
I had to add an explicit tag to suppress a compiler warning:
gcc isn't smart enough to notice that
len is always initialized, since the function is called with size > 0.
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck at de.ibm.com>
---
drivers/vhost/net.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff
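The warning class described in that commit message is a common gcc false positive: without inter-procedural knowledge, -Wmaybe-uninitialized cannot see a caller-side guarantee such as size > 0. A minimal standalone illustration of the pattern (hypothetical names, not the vhost/net code itself):

#include <stddef.h>

/* gcc cannot prove the loop body runs at least once, so without the
 * initializer it may warn that `len` is used uninitialized, even though
 * every caller guarantees size > 0. An explicit initial value (or the
 * kernel's uninitialized_var() annotation of that era) silences the
 * false positive without changing behavior. */
static int consume(const char *buf, size_t size)
{
	int len = 0;	/* init solely to placate -Wmaybe-uninitialized */
	size_t i;

	for (i = 0; i < size; i++)	/* size > 0 by contract */
		len = buf[i];

	return len;
}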
2016 Nov 25
0
[PATCH 0/3] virtio/vringh: kill off ACCESS_ONCE()
...t
> ok.
>
>> And the mm/ code is perfectly fine with these PTE accesses NOT being
>> atomic.
>
> That strikes me as surprising. Is there some mutual exclusion that
> prevents writes from occurring wherever a READ_ONCE() of a PTE happens?
See, for example, handle_pte_fault() in mm/memory.c:
---snip----
/*
 * some architectures can have larger ptes than wordsize,
 * e.g.ppc44x-defconfig has CONFIG_PTE_64BIT=y and
 * CONFIG_32BIT=y, so READ_ONCE or ACCESS_ONCE cannot guarantee
 * atomic accesses. The cod...
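The (truncated) comment makes the point: the unlocked read of the PTE may be torn on such configurations, but that is tolerated because the fault path only uses the snapshot for preliminary checks and re-validates under the page-table lock before committing to anything. A minimal user-space sketch of that snapshot-then-revalidate pattern (illustrative names only, not the kernel's actual code):

#include <pthread.h>
#include <stdint.h>

struct table_entry {
	uint64_t val;		/* may be wider than the machine word */
	pthread_mutex_t lock;	/* stands in for the kernel's ptl */
};

/* Read without the lock (possibly torn), decide cheaply on the snapshot,
 * then take the lock and bail out unless the entry still matches. */
static int handle_entry(struct table_entry *e)
{
	uint64_t snap = e->val;		/* unlocked, possibly torn read */

	if (snap == 0)			/* cheap pre-check on the snapshot */
		return 0;

	pthread_mutex_lock(&e->lock);
	if (e->val != snap) {		/* changed under us: caller retries */
		pthread_mutex_unlock(&e->lock);
		return -1;
	}
	/* snapshot confirmed under the lock: safe to act on it here */
	pthread_mutex_unlock(&e->lock);
	return 1;
}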
2015 Jan 06
0
[PATCH v8 34/50] vhost/net: virtio 1.0 byte swap
...ffffffff81453d73>] ? sock_recvmsg+0x133/0x160
<4> [<ffffffff8109afa0>] ? autoremove_wake_function+0x0/0x40
<4> [<ffffffff81136941>] ? lru_cache_add_lru+0x21/0x40
<4> [<ffffffff8115522d>] ? page_add_new_anon_rmap+0x9d/0xf0
<4> [<ffffffff8114aeef>] ? handle_pte_fault+0x4af/0xb00
<4> [<ffffffff81451f14>] ? move_addr_to_kernel+0x64/0x70
<4> [<ffffffff814538b6>] __sys_sendmsg+0x406/0x420
<4> [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
<4> [<ffffffff814523d9>] ? sys_sendto+0x139/0x190
<4> [<ffffffff810...
2013 Feb 19
0
kernel BUG at kernel-xen-3.7.9/linux-3.7/fs/buffer.c:2952
...ffff804f8949>] dump_stack+0x69/0x6f
[ 643.146146] [<ffffffff804fac88>] bad_page+0xe7/0xfb
[ 643.146155] [<ffffffff800edf4a>] get_page_from_freelist+0x63a/0x750
[ 643.146164] [<ffffffff800ee1da>] __alloc_pages_nodemask+0x17a/0x950
[ 643.146172] [<ffffffff8010e17b>] handle_pte_fault+0x41b/0x7d0
[ 643.146182] [<ffffffff80506ece>] __do_page_fault+0x19e/0x540
[ 643.146190] [<ffffffff80503c98>] page_fault+0x28/0x30
[ 643.146200] [<00007fd2d8a9961b>] 0x7fd2d8a9961a
linux-3.7/fs/buffer.c:2952 -> BUG_ON(!bh->b_end_io);
What is the nature of this bug...
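For context, BUG_ON(cond) is the kernel's fatal assertion: if cond is true it prints the "kernel BUG at ..." banner seen above and typically kills the task. The failing check requires every buffer_head submitted for I/O to carry a completion callback; a NULL b_end_io means some path queued the buffer without setting one. A user-space stand-in (illustrative only, not the 3.7 source):

#include <stdio.h>
#include <stdlib.h>

/* User-space approximation of the kernel's BUG_ON(): report and abort. */
#define BUG_ON(cond)						\
	do {							\
		if (cond) {					\
			fprintf(stderr, "kernel BUG at %s:%d!\n",	\
				__FILE__, __LINE__);		\
			abort();				\
		}						\
	} while (0)

struct buffer_head {
	void (*b_end_io)(struct buffer_head *bh, int uptodate);
};

/* Mirror of the invariant that fired: I/O submission demands a
 * completion handler before the buffer goes to the block layer. */
static void submit_bh(struct buffer_head *bh)
{
	BUG_ON(!bh->b_end_io);	/* the assertion reported above */
	/* ...hand the buffer to the block layer... */
}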
2014 Oct 13
2
kernel crashes after soft lockups in xen domU
...al_irq_restore+0x7/0x8
[354047.224054] [<ffffffff8135049f>] ?
_raw_spin_unlock_irqrestore+0xe/0xf
[354047.224054] [<ffffffff810be97d>] ? pagevec_lru_move_fn+0x8f/0xb5
[354047.224054] [<ffffffff810beb8a>] ? __lru_cache_add+0x4a/0x51
[354047.224054] [<ffffffff810d1537>] ? handle_pte_fault+0x224/0x79f
[354047.224054] [<ffffffff810ceacb>] ? pmd_val+0x7/0x8
[354047.224054] [<ffffffff810ceb49>] ? pte_offset_kernel+0x16/0x35
[354047.224054] [<ffffffff813533ee>] ? do_page_fault+0x320/0x345
[354047.224054] [<ffffffff81003223>] ? xen_end_context_switch+0xe/0x1c
[...
2014 Feb 03
3
Memory leak - how to investigate
...om_kill_process+0x82/0x2a0
[<ffffffff8111d201>] ? select_bad_process+0xe1/0x120
[<ffffffff8111d700>] ? out_of_memory+0x220/0x3c0
[<ffffffff8112c3dc>] ? __alloc_pages_nodemask+0x8ac/0x8d0
[<ffffffff81160d6a>] ? alloc_pages_vma+0x9a/0x150
[<ffffffff81143f0b>] ? handle_pte_fault+0x76b/0xb50
[<ffffffffa00c60f9>] ? ext4_check_acl+0x29/0x90 [ext4]
[<ffffffff81075887>] ? current_fs_time+0x27/0x30
[<ffffffff8114452a>] ? handle_mm_fault+0x23a/0x310
[<ffffffff810474e9>] ? __do_page_fault+0x139/0x480
[<ffffffff8114aaba>] ? do_mmap_pgoff+0x33...
2012 Jul 27
1
kernel BUG at fs/buffer.c:2886! Linux 3.5.0
Hello
I get this on the first write made (by deliver sending mail to announce
the restart of services).
The home partition (the one receiving the mail) is ocfs2, created on a
drbd block device in primary/primary mode; these drbd devices are based
on LVM.
The system is running linux-3.5.0; the symptom is identical with linux
3.3 and 3.2, but linux 3.0 works.
Reproduced on two machines ( so
2015 Mar 30
2
[PATCH 0/9] qspinlock stuff -v15
....56%-- ext4_do_update_inode
|--2.54%-- try_to_wake_up
|--2.46%-- pgd_free
|--2.32%-- cache_alloc_refill
|--2.32%-- pgd_alloc
|--2.32%-- free_pcppages_bulk
|--1.88%-- do_wp_page
|--1.77%-- handle_pte_fault
|--1.58%-- do_anonymous_page
|--1.56%-- rmqueue_bulk.clone.0
|--1.35%-- copy_pte_range
|--1.25%-- zap_pte_range
|--1.13%-- cache_flusharray
|--0.88%-- __pmd_alloc
|--0.70%-- wake_up_new_t...
2014 Nov 05
0
kernel crashes after soft lockups in xen domU
> ...[354047.224054] [<ffffffff8135049f>] ?
> _raw_spin_unlock_irqrestore+0xe/0xf
> [354047.224054] [<ffffffff810be97d>] ? pagevec_lru_move_fn+0x8f/0xb5
> [354047.224054] [<ffffffff810beb8a>] ? __lru_cache_add+0x4a/0x51
> [354047.224054] [<ffffffff810d1537>] ? handle_pte_fault+0x224/0x79f
> [354047.224054] [<ffffffff810ceacb>] ? pmd_val+0x7/0x8
> [354047.224054] [<ffffffff810ceb49>] ? pte_offset_kernel+0x16/0x35
> [354047.224054] [<ffffffff813533ee>] ? do_page_fault+0x320/0x345
> [354047.224054] [<ffffffff81003223>] ? xen_end_conte...
2018 Jul 17
2
Samba 4.8.3 out of memory error
...kernel: [<ffffffff8116e2b2>] ?
read_swap_cache_async+0xf2/0x160
Jul 16 14:14:36 soda kernel: [<ffffffff8116ee09>] ? valid_swaphandles+0x69/0x160
Jul 16 14:14:36 soda kernel: [<ffffffff8116e3a7>] ? swapin_readahead+0x87/0xc0
Jul 16 14:14:36 soda kernel: [<ffffffff8115d175>] ? handle_pte_fault+0x6c5/0xac0
Jul 16 14:14:36 soda kernel: [<ffffffff8117167d>] ?
free_swap_and_cache+0x5d/0x120
Jul 16 14:14:36 soda kernel: [<ffffffff8115d81a>] ? handle_mm_fault+0x2aa/0x3f0
Jul 16 14:14:36 soda kernel: [<ffffffff81053671>] ? __do_page_fault+0x141/0x500
Jul 16 14:14:36 soda kerne...