Displaying 20 results from an estimated 200 matches similar to: "BUG: KASAN: use-after-free in free_old_xmit_skbs"
2017 Apr 20
0
Testing kernel crash: 4.9.23-26.el6.x86_64
Hello CentOS Xen Heroes,
Yesterday I installed the testing kernel 4.9.23-26.el6.x86_64 from the virt-xen-testing repo.
It crashed this morning.
The hardware is a pretty ancient testing machine (CentOS 6 PV guests only), but it has had no
problems until now. It was stable on 4.9*, including the testing kernel 4.9.15-22.el6.x86_64.
Console output:
[59826.069427] general protection fault: 0000 [#1] SMP
[59826.069463] Modules
2018 Nov 13
0
CentOS 7.5 crashed, kernel BUG at net/core/skbuff.c:3668!
...
[ 176.025679] random: crng init done
[ 411.168635] netem: version 1.3
[ 456.059840] ------------[ cut here ]------------
[ 456.059849] WARNING: CPU: 4 PID: 1918 at net/ipv4/tcp_output.c:1048 tcp_set_skb_tso_segs+0xeb/0x100
[ 456.059851] Modules linked in: sch_netem cirrus ttm drm_kms_helper crc32_pclmul syscopyarea ghash_clmulni_intel sysfillrect sysimgblt fb_sys_fops aesni_intel lrw
2017 Nov 15
0
ctdb vacuum timeouts and record locks
Hi Martin,
well, it has been over a week since my last hung process, but got
another one today...
>> So, not sure how to determine if this is a gluster problem, an lxc
>> problem, or a ctdb/smbd problem. Thoughts/suggestions are welcome...
>
> You need a stack trace of the stuck smbd process. If it is wedged in a
> system call on the cluster filesystem then you can blame
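For reference, on Linux the kernel-side stack of a task wedged in a
system call can be read from /proc/<pid>/stack (root only, and the
kernel needs stack-trace support compiled in). A minimal C sketch of
that readout follows; the PID comes from the command line and is a
hypothetical input, not anything taken from this thread:

    /* Print the kernel stack of a given PID, e.g. a stuck smbd.
     * Assumes Linux with /proc mounted; run as root. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char path[64], line[256];
        FILE *f;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        snprintf(path, sizeof(path), "/proc/%s/stack", argv[1]);
        f = fopen(path, "r");
        if (!f) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);   /* one stack frame per line */
        fclose(f);
        return 0;
    }

If the top frames sit inside the cluster filesystem's code, that is the
component to blame.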
2017 Mar 07
0
panic in virtio console startup in v4.11-rc1
I built a v4.11-rc1 kernel, booted it in a KVM guest, and got this
oops during startup. My config is attached:
[ 2.386786] virtio-pci 0000:00:07.0: virtio_pci: leaving for legacy driver
[ 2.395650] BUG: unable to handle kernel paging request at ffff97a3ec5f4000
[ 2.396005] IP: string+0x43/0x80
[ 2.396005] PGD 2d25b067
[ 2.396005] PUD 2d25f067
[ 2.396005] PMD 12c696063
[
2013 Jun 10
1
btrfs-cleaner Blocked on xfstests 068
I'm running into a problem with the btrfs-cleaner thread becoming
blocked on xfstests 068.
The test locks up indefinitely without completing (normally it
finished in about 45 seconds on my test box).
I've replicated the issue on 3.10.0_rc5 and the for-linus branch of 3.9.0.
I ran a git bisect on the 3.9.0 for-linus branch, and tracked my issue
to the following commit:
commit
2017 Nov 15
1
ctdb vacuum timeouts and record locks
On Tue, 14 Nov 2017 22:48:57 -0800, Computerisms Corporation via samba
<samba at lists.samba.org> wrote:
> well, it has been over a week since my last hung process, but got
> another one today...
> >> So, not sure how to determine if this is a gluster problem, an lxc
> >> problem, or a ctdb/smbd problem. Thoughts/suggestions are welcome...
> >
> >
2016 Dec 05
1
Oops with CONFIG_VMAP_STACK and bond device + virtio-net
Hi,
Fedora got a bug report https://bugzilla.redhat.com/show_bug.cgi?id=1401612
In qemu with two virtio-net interfaces:
$ ip l
...
5: ens14: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:e9:64:41 brd ff:ff:ff:ff:ff:ff
6: ens15: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
2017 Nov 15
0
hung disk sleep process
Hi,
I have a problem, but I am not really sure what question I need to ask. So I am
going to lay it all out, and maybe someone can point me in the right
direction...
I have a replicated gluster volume across two servers. Each server has
its OS installed on an SSD, and a RAID array is mounted on each server
as a brick. Both servers run a Samba AD, among other things, and an LXC
container for a
2017 Oct 27
0
ctdb vacuum timeouts and record locks
Hi Bob,
On Thu, 26 Oct 2017 22:44:30 -0700, Computerisms Corporation via samba
<samba at lists.samba.org> wrote:
> I set up a ctdb cluster a couple months back. Things seemed pretty
> solid for the first 2-3 weeks, but then I started getting reports of
> people not being able to access files, or sometimes directories. It
> has taken me a while to figure some stuff out,
2017 Oct 27
2
ctdb vacuum timeouts and record locks
Hi List,
I set up a ctdb cluster a couple months back. Things seemed pretty
solid for the first 2-3 weeks, but then I started getting reports of
people not being able to access files, or sometimes directories. It
has taken me a while to figure some stuff out, but it seems the common
denominator to this happening is vacuuming timeouts for locking.tdb in
the ctdb log, which might go on
2018 Mar 19
0
get_user_pages returning 0 (was Re: kernel BUG at drivers/vhost/vhost.c:LINE!)
Hello!
The following code is triggered by syzbot:

    r = get_user_pages_fast(log, 1, 1, &page);
    if (r < 0)
        return r;
    BUG_ON(r != 1);

Just looking at get_user_pages_fast's documentation, this seems
impossible: it is supposed to only ever return the number of pages
pinned or an errno.
However, poking at code, I see at least one path that might cause this:
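A defensive pattern for this case, sketched below as self-contained
userspace C under the assumption that the zero-return path is real:
translate an unexpected 0 from the pinning call into -EFAULT for the
caller instead of BUG_ON(). pin_one_page() and log_page_ok() are
hypothetical stand-ins for the get_user_pages_fast() call site, not the
actual vhost code or fix:

    #include <stdio.h>
    #include <errno.h>

    /* Hypothetical stand-in: returns # of pages pinned (0 or 1),
     * or a negative errno, mirroring get_user_pages_fast(). */
    static int pin_one_page(void *addr)
    {
        (void)addr;
        return 0;            /* simulate the surprising zero return */
    }

    static int log_page_ok(void *addr)
    {
        int r = pin_one_page(addr);

        if (r < 0)
            return r;        /* propagate the errno */
        if (r != 1)
            return -EFAULT;  /* nothing pinned, no errno: fail softly */
        return 0;
    }

    int main(void)
    {
        int r = log_page_ok((void *)0x1000);

        printf("log_page_ok: %d (%s)\n", r,
               r == -EFAULT ? "-EFAULT" : "ok");
        return 0;
    }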
2018 Jul 26
0
net-next boot error
[ Added Thomas Gleixner ]
On Thu, 26 Jul 2018 11:34:39 +0200
Dmitry Vyukov <dvyukov at google.com> wrote:
> On Thu, Jul 26, 2018 at 11:29 AM, syzbot
> <syzbot+604f8271211546f5b3c7 at syzkaller.appspotmail.com> wrote:
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit: dc66fe43b7eb rds: send: Fix dead code in rds_sendmsg
2017 Nov 06
2
ctdb vacuum timeouts and record locks
On Thu, 2 Nov 2017 11:17:27 -0700, Computerisms Corporation via samba
<samba at lists.samba.org> wrote:
> This occurred again this morning, when the user reported the problem, I
> found in the ctdb logs that vacuuming has been going on since last
> night. The need to fix it was urgent (when isn't it?) so I didn't have
> time to poke around for clues, but immediately
2013 Jan 08
10
kernel BUG at fs/btrfs/volumes.c:3707 still not fixed in 3.7.1 (btrfs-zero-log required) but shown as "RIP btrfs_num_copies"
Unfortunately my laptop deadlocks from time to time, and too often
it triggers this bug in btrfs which is quite hard to recover from.
The bigger problem is that all the user sees (if anything) is seemingly
unrelated info, namely, "RIP: btrfs_num_copies+0x42/0x0b" or somesuch
http://marc.merlins.org/tmp/btrfs_num_copies.jpg
It's only if you have serial console, or netconsole,
2005 Nov 09
2
FreeBSD DomU Fatal trap 12 in cpu_switch_load_gs
I see the following with a FreeBSD 5.3 DomU on Xen 2.0.7:
Fatal trap 12: page fault while in kernel mode
fault virtual address = 0xca7a0088
fault code = supervisor read, page not present
instruction pointer = 0x819:0xc0183343
stack pointer = 0x821:0xca069d30
frame pointer = 0x821:0x0
code segment = base 0x235, limit 0x8485e7c3, type 0x9
2018 Jul 26
2
net-next boot error
On Thu, Jul 26, 2018 at 11:29 AM, syzbot
<syzbot+604f8271211546f5b3c7 at syzkaller.appspotmail.com> wrote:
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit: dc66fe43b7eb rds: send: Fix dead code in rds_sendmsg
> git tree: net-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=127874c8400000
> kernel config:
2006 Jul 03
1
Problem with CentOS 4.3 on kernel and ipvsadm
I have installed two CentOS 4.3 boxes with LVS (from
http://mirror.centos.org/centos/4/csgfs/ ), but both boxes frequently die with this
error:
kernel panic - not syncing: fs/block_dev.c:396: spin_lock
(fs/block_dev.c:c0361c0) already locked by fs/block_dev.c/287.
I have read in this thread http://threebit.net/mail-archive/centos/msg00243.html that this is an unsolved problem.
So I have
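The quoted panic ("already locked by") is lock debugging catching the
same spinlock being acquired a second time. As a userspace illustration
of that bug class (an analogy only, not the CentOS fix), an
error-checking pthread mutex reports the recursive acquisition instead
of wedging the box:

    /* Double-lock demo: a second lock by the same thread is the
     * bug class behind "spin_lock ... already locked by" panics.
     * Build with: cc demo.c -lpthread */
    #include <stdio.h>
    #include <string.h>
    #include <pthread.h>

    int main(void)
    {
        pthread_mutex_t lock;
        pthread_mutexattr_t attr;
        int r;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&lock, &attr);

        pthread_mutex_lock(&lock);      /* first acquisition: fine */
        r = pthread_mutex_lock(&lock);  /* second one: the bug */
        printf("second lock attempt: %s\n", strerror(r)); /* EDEADLK */

        pthread_mutex_unlock(&lock);
        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }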