Displaying 11 results from an estimated 11 matches similar to: "list_add corruption problem"

2014 Jan 30
2
CentOS 6.5: NFS server crashes with list_add corruption errors
Hi, I'm running CentOS 6.5 as an NFS server (v3 and v4), exporting Ext4 and XFS filesystems. After many months of working fine, today the server crashed:
Jan 30 09:46:13 qb-storage kernel: ------------[ cut here ]------------
Jan 30 09:46:13 qb-storage kernel: WARNING: at lib/list_debug.c:26 __list_add+0x6d/0xa0() (Not tainted)
Jan 30 09:46:13 qb-storage kernel: Hardware name: PowerEdge
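
(For reference: the warning at lib/list_debug.c:26 comes from the CONFIG_DEBUG_LIST sanity check on list insertion. A close paraphrase of that era's lib/list_debug.c, not the exact RHEL 6 source, looks like this:)

    /* Paraphrased from lib/list_debug.c (CONFIG_DEBUG_LIST);
     * not the exact RHEL 6 source. */
    void __list_add(struct list_head *new,
                    struct list_head *prev,
                    struct list_head *next)
    {
            /* The neighbours being spliced between must already agree
             * with each other; if another path corrupted or freed one
             * of them, these checks fire. */
            WARN(next->prev != prev,
                 "list_add corruption. next->prev should be prev (%p), "
                 "but was %p. (next=%p).\n", prev, next->prev, next);
            WARN(prev->next != next,
                 "list_add corruption. prev->next should be next (%p), "
                 "but was %p. (prev=%p).\n", next, prev->next, prev);
            next->prev = new;
            new->next = next;
            new->prev = prev;
            prev->next = new;
    }

A list_add corruption warning therefore means a neighbouring node's links were already inconsistent before the insert, which usually points at a use-after-free or missing locking elsewhere, not at list_add itself.
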
2020 May 06
0
[PATCH] iommu/virtio: reverse arguments to list_add
On Tue, May 05, 2020 at 08:47:47PM +0200, Julia Lawall wrote:
> Elsewhere in the file, there is a list_for_each_entry with
> &vdev->resv_regions as the second argument, suggesting that
> &vdev->resv_regions is the list head. So exchange the
> arguments on the list_add call to put the list head in the
> second argument.
>
> Fixes: 2a5a31487445
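
(The fix turns on list_add()'s argument order, which is easy to get backwards since both arguments are struct list_head pointers. A minimal sketch of the contract, with hypothetical names -- struct item and items are illustrative, not the virtio-iommu driver's actual code:)

    #include <linux/list.h>

    /* list_add(new, head) inserts @new immediately after @head,
     * so the list head belongs in the SECOND argument. */
    struct item {
            int val;
            struct list_head list;
    };

    static LIST_HEAD(items);        /* the list head */

    static void add_item(struct item *it)
    {
            list_add(&it->list, &items);    /* correct order */
            /* The reversed call, list_add(&items, &it->list),
             * compiles fine but splices the head into the entry's
             * list -- the class of bug this patch corrects. */
    }
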
2020 May 08
0
[PATCH] iommu/virtio: reverse arguments to list_add
On Tue, May 05, 2020 at 08:47:47PM +0200, Julia Lawall wrote:
> Elsewhere in the file, there is a list_for_each_entry with
> &vdev->resv_regions as the second argument, suggesting that
> &vdev->resv_regions is the list head. So exchange the
> arguments on the list_add call to put the list head in the
> second argument.
>
> Fixes: 2a5a31487445
2010 Aug 17
0
Re: [GIT PULL] devel/pat + devel/kms.fixes-0.5 on RV730 PRO [Radeon HD 4650]
> Can you provide:
> lspci -vvv
Yes
> full serial console log output?
No.
> and what were you doing when this happened? (starting X, playing games?)
The previous dmesg was captured over roughly a 10-hour day. Stack traces accumulated at the end one by one (about 6-9 similar entries). Now I rebooted the box, started Virt-Manager and an F13 guest, then began rotating the cube (desktop)
2011 Jan 07
0
Debian 5 image. inspect-os
Oh nice, I can reproduce it here too, although only if I first load the 'logfs' module. It does seem to be a plain bug in the logfs module. Do you mind if I post the 1MB image you sent me into a public kernel bug report? Rich.
----------------------------------------------------------------------
mount -o -t logfs /dev/vda /sysroot/
[ 107.679255] BUG: unable to handle kernel NULL
2010 Aug 18
2
Re: [GIT PULL] devel/pat + devel/kms.fixes-0.5 on RV730 PRO [Radeon HD 4650] Stack trace
Just surfing the Net for 2-3 hours reading about "dracut" on Fedora 13. Several "warnings" appear at the end of the dmesg log. It seems like a reaction to system events scheduled to run on a regular basis, not the user's activity. Boris. --- On Tue, 8/17/10, Boris Derzhavets <bderzhavets@yahoo.com> wrote: From: Boris Derzhavets <bderzhavets@yahoo.com> Subject: Re: [Xen-devel]
2013 Sep 10
1
Errors on NFS server
CentOS 6.4 x86_64, Kernel: 2.6.32-358.14.1.el6.x86_64. I have been noticing repeatedly that after a couple of weeks of uptime my NFS server starts to generate the following error:
------------[ cut here ]------------
WARNING: at lib/list_debug.c:26 __list_add+0x6d/0xa0() (Tainted: G W ---
2013 Nov 28
3
[Bug 877] New: nftables - Set - define core dumps
https://bugzilla.netfilter.org/show_bug.cgi?id=877
Summary: nftables - Set - define core dumps
Product: nftables
Version: unspecified
Platform: x86_64
OS/Version: Ubuntu
Status: NEW
Severity: major
Priority: P5
Component: nft
AssignedTo: pablo at netfilter.org
ReportedBy: anandrm at
2014 Mar 10
1
gfs2 and quotas - system crash
I have tried sending this before, but it did not appear to get through. Hello, When using gfs2 with quotas on a SAN that provides storage to two clustered systems running CentOS 6.5, one of the systems can crash. The crash appears to occur when a user tries to add something to a SAN disk after exceeding their quota on that disk. Sometimes a stack trace is produced in
2013 Oct 08
1
OT: errors compiling kernel module as a rpm package
Hi all, I am trying to compile openvswitch's kernel module on a CentOS 6.4 host, but it fails in rpm-check: Requires: kernel(__alloc_percpu) = 0x55f2580b kernel(__alloc_skb) = 0x25421969 kernel(__dev_get_by_index) = 0x6a6d551b kernel(__init_waitqueue_head) = 0xffc7c184 kernel(__ip_select_ident) = 0x848695b3 kernel(__kmalloc) = 0x5a34a45c kernel(__list_add) = 0x0343a1a8 kernel(__nla_put) =
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
We have just discovered that a large buffer cache generated by traversing a Lustre file system causes significant system overhead for applications with high memory demands. We have seen a 50% slowdown or worse for applications. Even High Performance Linpack, which has no file I/O whatsoever, is affected. The only remedy seems to be to empty the buffer cache from memory by running