search for: entry_syscall64_slow_path

Displaying 15 results from an estimated 15 matches for "entry_syscall64_slow_path".

2017 Jun 05
0
BUG: KASAN: use-after-free in free_old_xmit_skbs
...01] ? update_stack_state+0x402/0x780 > > [ 310.049307] ? account_entity_enqueue+0x730/0x730 > > [ 310.050322] ? __rb_erase_color+0x27d0/0x27d0 > > [ 310.051286] ? update_curr_fair+0x70/0x70 > > [ 310.052206] ? enqueue_entity+0x2450/0x2450 > > [ 310.053124] ? entry_SYSCALL64_slow_path+0x25/0x25 > > [ 310.054082] ? dequeue_entity+0x27a/0x1520 > > [ 310.054967] ? bpf_prog_alloc+0x320/0x320 > > [ 310.055822] ? yield_to_task_fair+0x110/0x110 > > [ 310.056708] ? set_next_entity+0x2f2/0xa90 > > [ 310.057574] ? dequeue_task_fair+0xc09/0x2ec0 >...
2017 Apr 20
0
Testing kernel crash: 4.9.23-26.el6.x86_64
...ffffffff8115b199>] ? __audit_syscall_exit+0x229/0x2b0 [59826.070019] [<ffffffff8178a06e>] SyS_sendto+0xe/0x10 [59826.070019] [<ffffffff81003e7a>] do_syscall_64+0x7a/0x240 [59826.070019] [<ffffffff8106f1a7>] ? do_page_fault+0x37/0x90 [59826.070019] [<ffffffff818d83eb>] entry_SYSCALL64_slow_path+0x25/0x25 [59826.070019] Code: e8 36 41 39 c6 74 17 48 8b 4d b8 44 89 f2 44 89 fe 4c 89 e7 e8 2c f7 ff ff 48 89 c3 eb 2c 49 63 44 24 20 4d 8b 04 24 48 8d 4a 01 <48> 8b 1c 07 48 89 f8 49 8d 30 e8 eb b4 1e 00 3c 01 75 95 49 63 [59826.070019] RIP [<ffffffff812405d6>] kmem_cache_alloc_no...
2017 Nov 06
2
ctdb vacuum timeouts and record locks
On Thu, 2 Nov 2017 11:17:27 -0700, Computerisms Corporation via samba <samba at lists.samba.org> wrote: > This occurred again this morning, when the user reported the problem, I > found in the ctdb logs that vacuuming has been going on since last > night. The need to fix it was urgent (when isn't it?) so I didn't have > time to poke around for clues, but immediately
2017 Nov 15
0
ctdb vacuum timeouts and record locks
...c05ed958>] __fuse_request_send+0x78/0x80 [fuse] [<ffffffffc05f0bdd>] fuse_simple_request+0xbd/0x190 [fuse] [<ffffffffc05f6c37>] fuse_setlk+0x177/0x190 [fuse] [<ffffffffa0659467>] SyS_flock+0x117/0x190 [<ffffffffa0403b1c>] do_syscall_64+0x7c/0xf0 [<ffffffffa0a0632f>] entry_SYSCALL64_slow_path+0x25/0x25 [<ffffffffffffffff>] 0xffffffffffffffff I am still not too sure how to interpret this, but I think this is pointing me to the gluster file system, so will see what I can find chasing that down... > > peace & happiness, > martin >
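
The trace in this hit is the kernel side of an ordinary flock(2) call: SyS_flock hands the request to fuse_setlk, which waits in __fuse_request_send for the FUSE daemon (glusterfs here) to reply, leaving the caller in uninterruptible sleep. A minimal user-space sketch of that same path, for illustration only (the lock-file path below is a made-up example, not taken from the thread): LOCK_NB makes a contended lock return EWOULDBLOCK rather than sleeping, although a wedged FUSE daemon can still stall even this request while the kernel waits for its reply.

/* flock_probe.c - sketch only: exercise the SyS_flock -> fuse_setlk path
 * seen in the trace above, non-blocking.  A contended lock comes back as
 * EWOULDBLOCK instead of sleeping; a wedged FUSE daemon can still delay
 * the reply.  The default path below is a hypothetical example. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/gluster/lockfile";
	int fd = open(path, O_RDWR | O_CREAT, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* LOCK_EX | LOCK_NB: request the lock without sleeping on contention. */
	if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
		fprintf(stderr, "flock: %s\n", strerror(errno));
		close(fd);
		return 1;
	}

	puts("lock acquired");
	flock(fd, LOCK_UN);
	close(fd);
	return 0;
}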
2017 Mar 07
0
panic in virtio console startup in v4.11-rc1
...12007] do_init_module+0x5f/0x1f8 [ 2.412007] load_module+0x2651/0x2a20 [ 2.412007] ? __symbol_put+0x60/0x60 [ 2.412007] ? vfs_read+0x11b/0x130 [ 2.412007] SYSC_finit_module+0xdf/0x110 [ 2.412007] SyS_finit_module+0xe/0x10 [ 2.412007] do_syscall_64+0x67/0x180 [ 2.412007] entry_SYSCALL64_slow_path+0x25/0x25 [ 2.412007] RIP: 0033:0x7fc2eccc6bf9 [ 2.412007] RSP: 002b:00007ffc233ca698 EFLAGS: 00000246 ORIG_RAX: 0000000000000139 [ 2.412007] RAX: ffffffffffffffda RBX: 000055ffbe27ff60 RCX: 00007fc2eccc6bf9 [ 2.412007] RDX: 0000000000000000 RSI: 00007fc2ed7ff995 RDI: 0000000000000006 [...
2017 Nov 15
1
ctdb vacuum timeouts and record locks
...est_send+0x78/0x80 [fuse] > [<ffffffffc05f0bdd>] fuse_simple_request+0xbd/0x190 [fuse] > [<ffffffffc05f6c37>] fuse_setlk+0x177/0x190 [fuse] > [<ffffffffa0659467>] SyS_flock+0x117/0x190 > [<ffffffffa0403b1c>] do_syscall_64+0x7c/0xf0 > [<ffffffffa0a0632f>] entry_SYSCALL64_slow_path+0x25/0x25 > [<ffffffffffffffff>] 0xffffffffffffffff > > I am still not too sure how to interpret this, but I think this is > pointing me to the gluster file system, so will see what I can find > chasing that down... Yes, it does look like it is in the gluster filesystem. A...
2017 Oct 27
2
ctdb vacuum timeouts and record locks
...c07c3958>] __fuse_request_send+0x78/0x80 [fuse] [<ffffffffc07c6bdd>] fuse_simple_request+0xbd/0x190 [fuse] [<ffffffffc07ccc37>] fuse_setlk+0x177/0x190 [fuse] [<ffffffff816592f7>] SyS_flock+0x117/0x190 [<ffffffff81403b1c>] do_syscall_64+0x7c/0xf0 [<ffffffff81a0632f>] entry_SYSCALL64_slow_path+0x25/0x25 [<ffffffffffffffff>] 0xffffffffffffffff This might happen twice in a day or once in a week, doesn't seem consistent, and so far I haven't found any catalyst. My setup is two servers, the OS is debian and is running samba AD on dedicated SSDs, and each server has a RAID a...
2016 Dec 05
1
Oops with CONFIG_VMAP_STACK and bond device + virtio-net
...18830d>] ? __audit_syscall_entry+0xad/0xf0 [<ffffffffbc111775>] ? trace_hardirqs_on_caller+0xf5/0x1b0 [<ffffffffbc7924b4>] __sys_sendmsg+0x54/0x90 [<ffffffffbc792502>] SyS_sendmsg+0x12/0x20 [<ffffffffbc003eec>] do_syscall_64+0x6c/0x1f0 [<ffffffffbc917589>] entry_SYSCALL64_slow_path+0x25/0x25 Code: ca 75 2c 49 8b 55 08 f6 c2 01 75 25 83 e2 03 81 e3 ff 0f 00 00 45 89 65 14 48 RIP [<ffffffffbc4896fc>] sg_init_one+0x8c/0xa0 RSP <ffffb06e41043698> ---[ end trace 9076d2284efbf735 ]--- This looks like an issue with CONFIG_VMAP_STACK since bond_enslave uses struct...
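
For context, the sg_init_one() oops reported here is the usual CONFIG_VMAP_STACK pitfall: with that option the kernel stack is allocated from vmalloc space, so an on-stack buffer has no valid linear-map address and cannot be handed to the scatterlist helpers, which translate the pointer with virt_to_page(). A minimal kernel-style sketch of the safe pattern, offered as an illustration only (the module and symbol names below are made up, not from this report):

/* sg_vmap_demo.c - hypothetical illustration: build a scatterlist over
 * kmalloc()ed memory, which is always in the linear mapping.  Pointing
 * sg_init_one() at a local on-stack array instead can trigger the same
 * kind of oops as above on a CONFIG_VMAP_STACK kernel. */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/scatterlist.h>

static int __init sg_vmap_demo_init(void)
{
	struct scatterlist sg;
	void *buf;

	buf = kmalloc(64, GFP_KERNEL);	/* heap buffer: safe for sg_init_one() */
	if (!buf)
		return -ENOMEM;

	sg_init_one(&sg, buf, 64);

	kfree(buf);
	return 0;
}

static void __exit sg_vmap_demo_exit(void)
{
}

module_init(sg_vmap_demo_init);
module_exit(sg_vmap_demo_exit);
MODULE_LICENSE("GPL");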
2017 Nov 15
0
hung disk sleep process
...c047e958>] __fuse_request_send+0x78/0x80 [fuse] [<ffffffffc0481bdd>] fuse_simple_request+0xbd/0x190 [fuse] [<ffffffffc0487c37>] fuse_setlk+0x177/0x190 [fuse] [<ffffffff8b259467>] SyS_flock+0x117/0x190 [<ffffffff8b003b1c>] do_syscall_64+0x7c/0xf0 [<ffffffff8b60632f>] entry_SYSCALL64_slow_path+0x25/0x25 [<ffffffffffffffff>] 0xffffffffffffffff Once ps shows the D state, the only fix I have found is to reboot the server. After a reboot, things are fine again, until they are not. Given that gluster is the only thing I know of that uses fuse in this system, I guess this is the nex...
2017 Oct 27
0
ctdb vacuum timeouts and record locks
...est_send+0x78/0x80 [fuse] > [<ffffffffc07c6bdd>] fuse_simple_request+0xbd/0x190 [fuse] > [<ffffffffc07ccc37>] fuse_setlk+0x177/0x190 [fuse] > [<ffffffff816592f7>] SyS_flock+0x117/0x190 > [<ffffffff81403b1c>] do_syscall_64+0x7c/0xf0 > [<ffffffff81a0632f>] entry_SYSCALL64_slow_path+0x25/0x25 > [<ffffffffffffffff>] 0xffffffffffffffff I'm pretty sure gstack used to be shipped as an example in the gdb package in Debian. However, it isn't there and changelog.Debian.gz doesn't mention it. I had a quick try of pstack but couldn't get sense out of it. :-...
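
A side note on the gstack/pstack attempts above: those tools only show user-space frames, so they come up empty for a task parked in D state inside the kernel. Kernel-side traces like the fuse_setlk ones quoted in this thread are what the kernel exposes in /proc/<pid>/stack (root only, and the kernel must be built with CONFIG_STACKTRACE). A small sketch that simply prints that file, for illustration:

/* kstack.c - sketch: print the in-kernel stack of a (possibly D-state)
 * task from /proc/<pid>/stack.  Needs root and CONFIG_STACKTRACE. */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/stack", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}

	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);

	fclose(f);
	return 0;
}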
2016 Aug 22
12
[Bug 97438] New: Running a lot of Firefox instances causes kernel page fault.
https://bugs.freedesktop.org/show_bug.cgi?id=97438 Bug ID: 97438 Summary: Running a lot of Firefox instances causes kernel page fault. Product: xorg Version: unspecified Hardware: x86-64 (AMD64) OS: Linux (All) Status: NEW Severity: major Priority: medium Component:
2017 Sep 11
2
Nouveau: kernel hang on Optimus+Intel+NVidia GeForce 1060m
...trace+0xd1/0xe0 [ 2.512175] ? do_init_module+0x5b/0x1d3 [ 2.512176] ? load_module+0x1c7b/0x2338 [ 2.512178] ? check_version+0x106/0x106 [ 2.512180] ? SYSC_finit_module+0x9a/0xa5 [ 2.512180] ? SYSC_finit_module+0x9a/0xa5 [ 2.512182] ? do_syscall_64+0x54/0x61 [ 2.512184] ? entry_SYSCALL64_slow_path+0x25/0x25 [ 2.512184] Code: 76 2b 49 8b 7c 24 10 4c 8b 67 50 4d 85 e4 75 04 4c 8b 67 10 e8 0d 8e c6 e0 4c 89 e2 48 c7 c7 42 47 7b a0 48 89 c6 e8 c0 39 98 e0 <0f> ff 41 80 e6 10 0f 84 fc 00 00 00 83 7b 48 03 76 1f 8b 43 20 [ 2.512198] ---[ end trace 4250ab84830e3651 ]--- [ 2.528903]...