Displaying 5 results from an estimated 5 matches for "xfs_trans_cancel".
2015 Sep 21
2
Centos 6.6, apparent xfs corruption
Hi all -
After several months of worry-free operation, we received the following
kernel messages about an xfs filesystem running under CentOS 6.6. The
proximate causes appear to be "Internal error xfs_trans_cancel" and
"Corruption of in-memory data detected. Shutting down filesystem". The
filesystem is back up, mounted, and appears to be working OK underlying a
Splunk datastore. Does anyone have a suggestion on diagnosis or known
problems? Many thanks.....Nick Geo
Sep 18 20:35:15 gries kernel: X...
2015 Sep 21
0
Centos 6.6, apparent xfs corruption
I think you need to read this from the bottom up:
"Corruption of in-memory data detected. Shutting down filesystem"
so XFS calls xfs_do_force_shutdown to shut down the filesystem. That
call comes from xfs_trans_cancel() in fs/xfs/xfs_trans.c, which is
cancelling a transaction that has already failed, and so it also reports
"Internal error xfs_trans_cancel".
In other words, I would look at the memory corruption first. This
_could_ be a kernel problem, but I would suggest starting with an
extended memory check, it smells to me of a failing chip.
Just my 2d worth!
Martin
On 21/09/15 21:41, Nicholas Geovanis wrote:
> Hi all - After several...
2014 Jul 01
3
corruption of in-memory data detected (xfs)
...ny small files.
Basically, I have around 3.5-4 million files on this filesystem. New files are being written to the FS all the
time, until I get to 9-11 million small files (35 KB on average).
At some point I get the following in dmesg:
[2870477.695512] Filesystem "sda5": XFS internal error xfs_trans_cancel at line 1138 of file fs/xfs/xfs_trans.c.
Caller 0xffffffff8826bb7d
[2870477.695558]
[2870477.695559] Call Trace:
[2870477.695611]  [<ffffffff88262c28>] :xfs:xfs_trans_cancel+0x5b/0xfe
[2870477.695643]  [<ffffffff8826bb7d>] :xfs:xfs_mkdir+0x57c/0x5d7
[2870477.695673]  [<ffffffff8822f...
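The trace fires in xfs_mkdir, i.e. while creating a directory. A rough,
self-contained sketch of the kind of workload described, small ~35 KB
files spread across directories, is below; all paths, counts and sizes
are illustrative, not taken from the report:

/* Rough sketch of the reported workload: millions of small (~35 KB)
 * files spread across directories.  Parameters are hypothetical. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define NFILES 1000000L          /* scale toward the reported 9-11 million */
#define FSIZE  (35 * 1024)       /* ~35 KB average file size */

int main(void)
{
	static char buf[FSIZE];
	char path[256];

	memset(buf, 'x', sizeof(buf));
	for (long i = 0; i < NFILES; i++) {
		if (i % 1000 == 0) {	/* new subdirectory per 1000 files */
			snprintf(path, sizeof(path), "d%05ld", i / 1000);
			if (mkdir(path, 0755) && errno != EEXIST) {
				perror("mkdir");
				return 1;
			}
		}
		snprintf(path, sizeof(path), "d%05ld/f%07ld", i / 1000, i);
		FILE *f = fopen(path, "w");
		if (!f) {
			perror("fopen");
			return 1;
		}
		fwrite(buf, 1, sizeof(buf), f);
		fclose(f);
	}
	return 0;
}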
2017 Nov 16
2
xfs_rename error and brick offline
...ith Distributed-Replicate. There are
180 bricks in total. The OS is CentOS 6.5, and GlusterFS is 3.11.0. I find
that many bricks go offline when we generate some empty files and rename
them. I see an xfs call trace on every node.
For example,
Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): Internal error
xfs_trans_cancel at line 1948 of file fs/xfs/xfs_trans.c. Caller
0xffffffffa04e33f9
Nov 16 11:15:12 node10 kernel:
Nov 16 11:15:12 node10 kernel: Pid: 9939, comm: glusterfsd Tainted: G
--------------- H 2.6.32-prsys.1.1.0.13.x86_64 #1
Nov 16 11:15:12 node10 kernel: Call Trace:
Nov 16 11:15:12 node10 kernel:...
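A minimal sketch of the described trigger, assuming a plain
create-then-rename loop on the brick filesystem (file names and the
count are hypothetical):

/* Minimal sketch: create empty files, then rename them in a loop. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char src[64], dst[64];

	for (int i = 0; i < 100000; i++) {
		snprintf(src, sizeof(src), "tmp_%d", i);
		snprintf(dst, sizeof(dst), "file_%d", i);
		int fd = open(src, O_CREAT | O_WRONLY, 0644); /* empty file */
		if (fd < 0) {
			perror("open");
			return 1;
		}
		close(fd);
		if (rename(src, dst)) {	/* exercises the rename path */
			perror("rename");
			return 1;
		}
	}
	return 0;
}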
2017 Nov 16
0
xfs_rename error and brick offline