similar to: corruption of in-memory data detected (xfs)

Displaying 20 results from an estimated 100 matches similar to: "corruption of in-memory data detected (xfs)"

2015 Sep 21
2
Centos 6.6, apparent xfs corruption
Hi all - After several months of worry-free operation, we received the following kernel messages about an XFS filesystem running under CentOS 6.6. The proximate causes appear to be "Internal error xfs_trans_cancel" and "Corruption of in-memory data detected. Shutting down filesystem". The filesystem is back up, mounted, and appears to be working OK underlying a Splunk datastore.
2017 Nov 16
2
xfs_rename error and brick offline
Hi, I have a 5-node GlusterFS cluster with Distributed-Replicate, 180 bricks in total. The OS is CentOS 6.5 and GlusterFS is 3.11.0. I find that many bricks go offline when we generate some empty files and rename them, and I see an XFS call trace on every node. For example, Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): Internal error xfs_trans_cancel at line 1948 of file fs/xfs/xfs_trans.c.
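A minimal diagnostic sketch for this kind of failure; the volume name gv0 and CentOS 6 logging to /var/log/messages are assumptions, not details from the post:

    # List brick processes; bricks taken down by an XFS shutdown show "N" in the Online column
    gluster volume status gv0

    # On each node hosting an offline brick, look for the XFS shutdown in the kernel log
    dmesg | grep -iE 'xfs_trans_cancel|Corruption of in-memory data|Shutting down filesystem'
    grep -iE 'xfs_trans_cancel|Shutting down filesystem' /var/log/messages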
2015 Sep 21
0
Centos 6.6, apparent xfs corruption
I think you need to read this from the bottom up: "Corruption of in-memory data detected. Shutting down filesystem" means XFS called xfs_do_force_shutdown to shut down the filesystem. That call comes from fs/xfs/xfs_trans.c, which failed and so reported "Internal error xfs_trans_cancel". In other words, I would look at the memory
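A sketch of the follow-up checks that advice points to: look for recorded memory errors, then verify the filesystem once it can be taken offline. The device name is a placeholder, and mcelog/edac-utils are assumed to be installed:

    # Any recorded machine-check or per-DIMM memory errors?
    cat /var/log/mcelog           # machine-check events logged by mcelog, if any
    edac-util -v                  # corrected/uncorrected error counts per DIMM

    # After the forced shutdown: remount once so XFS replays its journal, then check read-only
    umount /dev/sdX1              # /dev/sdX1 is a placeholder for the affected device
    mkdir -p /mnt/check
    mount /dev/sdX1 /mnt/check && umount /mnt/check
    xfs_repair -n /dev/sdX1       # -n: inspect and report only, change nothing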
2017 Nov 16
0
xfs_rename error and brick offline
On Thu, Nov 16, 2017 at 6:23 AM, Paul <flypen at gmail.com> wrote: > Hi, > > I have a 5-nodes GlusterFS cluster with Distributed-Replicate. There are > 180 bricks in total. The OS is CentOS6.5, and GlusterFS is 3.11.0. I find > many bricks are offline when we generate some empty files and rename them. > I see xfs call trace in every node. > > For example, > Nov 16
2016 Dec 25
1
System freeze if mount cifs share with option "hard", and samba server is not available
I know "hard" means command will hang if network is broken. But it seems that's not actuate. It's CPU that will hang. Affected scope: I tested it against CentOS 5/6/7, it can be reproduced on all the systems. the debug logs below are captured from CentOS 7 (cifs.ko v2.05) How to reproduce this problem? 1. mount a cifs share with option "hard" 2. stop samba
2009 Oct 26
9
Latest Pv_ops dom0 fails to boot
Hi, I found that the latest pv_ops dom0, commit 3dd81018a392941fcc722ee521de344527481eb8, fails to boot with a call trace, while commit 34ffcd2bde0018cf78d5b4f1f5427c38a3e9b502 has no such issue. Could anyone help with this? Call trace messages: ####### Mounting proc filesystem Mounting sysfs filesystem Creating /dev [ 0.860962] init[1]: segfault at ffffffff8104f1e8 ip ffffffff8104f1e8 sp
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
We have just discovered that a large buffer cache generated by traversing a Lustre file system causes significant system overhead for applications with high memory demands. We have seen a 50% slowdown or worse; even High Performance Linpack, which has no file I/O whatsoever, is affected. The only remedy seems to be to empty the buffer cache from memory by running
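The truncated remedy presumably refers to the standard Linux drop_caches knob; a minimal sketch, to be run as root on the affected client:

    # Write out dirty pages first so nothing is lost when the cache is dropped
    sync
    # 1 = page cache, 2 = dentries and inodes, 3 = both
    echo 3 > /proc/sys/vm/drop_caches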
2009 Sep 09
4
Dmesg log for 2.6.31-rc8 kernel built on F12 (rawhide) vs log for the same kernel built on F11 and installed on F12
The previous 2.6.31-rc8 kernel was built on F11 and installed with modules on F12. The current kernel has been built on F12 (2.6.31-0.204.rc9.fc12.x86_64) and installed on F12 before loading under Xen 3.4.1. The dmesg log looks similar to Michael Yuong's 'rc7.git4' kernel for F12. Boris. --- On Tue, 9/8/09, Boris Derzhavets <bderzhavets@yahoo.com> wrote: From: Boris
2007 Oct 27
1
Oops with Nouveau on amd64 with nv15 card and kernel 2.6.22 (debian sid)
Hello, I've built drm and nouveau from git and got this error while starting Xorg. Another thing: before the install instructions on the wiki worked, I had to install xorg-dev. This was the error I got before installing xorg-dev: ./configure: line 20257: syntax error near unexpected token `RANDR,' ./configure: line 20257: `XORG_DRIVER_CHECK_EXT(RANDR, randrproto)' I'm not
2007 Dec 06
6
DomU (Centos 5) with dedicated e1000 (intel) device dropping packets
Hello everybody, I've finished with PCI export from DomU to Dom0 (Debian Etch), but now I have a new problem, and a big one. My ethernet card is dropping packets after some time (I can't tell how long). It can work for a day (not in production, so not heavily tested) and then all packets are dropped. Look at the ifconfig output: eth0 Link encap:Ethernet HWaddr
2007 Jul 30
3
kmod-drbd-smp (2.6.9-55.0.2.EL) has unknown symbols (kmod-drbd not).
Hi! Not a blocker, because the smp module loads perfectly. # yum --exclude=kmod-drbd*\plus\* install kmod-drbd Setting up Install Process Setting up repositories Reading repository metadata in from local files Excluding Packages in global exclude list Finished Reducing CentOS-4 - Plus to included packages only Finished Parsing package install arguments Resolving Dependencies -->
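A quick sketch for surfacing which symbols cannot be resolved and whether the module matches the running kernel; the module name drbd is assumed from the package name:

    # Try to load the module and show any unresolved symbols the kernel complains about
    modprobe drbd; dmesg | grep -i 'unknown symbol' | tail

    # Compare the kernel the module was built for against the one actually running
    modinfo drbd | grep -i vermagic
    uname -r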
2008 Sep 03
5
Maintain VNC session across reboots?
Hello, I have Ubuntu installed as a domU on a Debian dom0, and connect via VNC from dom0. This is great for doing installs and seeing the console output. However, the VNC session doesn't seem to keep the active connection during a domU reboot. Has anyone got this working? If so, how? -- John
2010 Sep 17
1
General protection fault
Hello I have an active-active DRBD cluster using OCFS2 as the filesystem on the drbd devices. I started getting a "general protection fault error" when trying to mount any one of the ocfs2 volumes I have, even when running mount on a single node, with no mounted FSs on the other node. The kernel trace follows at the bottom of this message. Does anyone know what could have been
2006 Nov 08
1
XFS Issues
We are in the process of migrating XFS filesystems from one storage array to another. Both arrays are mounted locally on the same CentOS 4.4 system (x86_64). We are running kernel 2.6.9-42.0.2.ELsmp along with kernel-module-xfs-2.6.9-42.0.2.ELsmp-0.1-3. The issue we are having is that while the copy is running (using rsync), the system logs these messages periodically: kernel: XFS:
2012 Nov 15
3
Likely mem leak in 3.7
Starting with 3.7-rc1, my workstation seems to lose RAM. Up until (and including) 3.6, used-(buffers+cached) was roughly the same as sum(rss) (taking shared into account). Now there is an approximately 6G gap. When the box first starts, it is clearly less swappy than with <= 3.6; I can't tell whether that is related. The reduced swappiness persists. It seems to get worse when I update
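A rough sketch of the comparison the poster describes; note that summing per-process RSS double-counts shared pages, so it is only an upper bound:

    # "used" minus buffers/cache (the -/+ buffers/cache line on older versions of free)
    free -m

    # Sum of resident set sizes over all processes, converted from KiB to MiB
    ps -eo rss= | awk '{ sum += $1 } END { printf "%.0f MiB\n", sum / 1024 }'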
2009 Apr 17
0
problem with 5.3 upgrade or just bad timing?
I've been experiencing delays accessing data off my file server since I upgraded to 5.3... either I hosed something, have bad hardware or, very unlikely, found a bug. When reading or writing data, the stream to the HDDs stops every 5-10 minutes and %iowait goes through the roof. I checked the logs and they are filled with diagnostic data that I can't readily decipher. My setup
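A sketch of how to watch the stalls as they happen, using standard sysstat/procps tools (nothing here is specific to the poster's setup):

    # Per-device latency and utilization, refreshed every 5 seconds;
    # a device pinned near 100% util during a stall points at the disk or controller
    iostat -x 5

    # Processes blocked on I/O show up in the "b" column; a high "wa" value matches the %iowait spikes
    vmstat 5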
2005 Mar 13
0
warnings when installing modules with latest from -unstable
I just cloned a xeno-unstable.bk tree (< 30 minutes ago). autofs.ko didn't get generated for some reason, so I did a separate make modules pass. Then I did a make modules_install, from which I got a heap of unknown symbol warnings. I thought this might be fallout from the switch to 2.6.11. -Kip WARNING: /lib/modules/2.6.11-xen0/kernel/fs/fat/fat.ko needs unknown symbol _spi
2009 Apr 03
1
Memory Leak with stock Squirrelmail, PHP, mysql, apache since 5.3
Hi list, We are experiencing a memory leak on our SquirrelMail server since the 5.3 update. The server is fully updated, with only stock rpms. The httpd processes eat all the memory and, after swapping like hell, the server becomes unresponsive and we must hard-reboot it. The server is not that heavily loaded (max 10-15 concurrent users, but with tons of mail in their inboxes). Configuration's
2007 Oct 08
0
Xen crash
Hi, I'm new to this list and joined since I am volunteering as a tech admin for a non-profit organization called CouchSurfing (.com). We tried to move the web servers to Xen zones, and this has proven quite unstable: our defined zones tend to crash on a daily basis with the latest CentOS 5 Xen updates. The physical boxes have 2 quad-core 1.6GHz Xeon CPUs and 4 GB RAM, there
2007 Oct 08
1
Xen crash
Hi Centos-virt, I'm new to this list and joined since I am volunteering as a tech admin for a non-profit organization called CouchSurfing (.com). We tried to move the web servers to Xen zones, and this has proven quite unstable: our defined zones tend to crash on a daily basis with the latest CentOS 5 Xen updates. The physical boxes have 2 quad-core 1.6GHz Xeon CPUs and 4