Displaying 7 results from an estimated 7 matches for "xfs_do_force_shutdown".
2015 Sep 21 | 2 | Centos 6.6, apparent xfs corruption
...18 20:35:15 gries kernel: [<ffffffff810e5c87>] ? audit_syscall_entry+0x1d7/0x200
Sep 18 20:35:15 gries kernel: [<ffffffff8119fb6b>] ? sys_rename+0x1b/0x20
Sep 18 20:35:15 gries kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Sep 18 20:35:15 gries kernel: XFS (dm-2): xfs_do_force_shutdown(0x8) called from line 1949 of file fs/xfs/xfs_trans.c. Return address = 0xffffffffa01f2e6e
Sep 18 20:35:15 gries kernel: XFS (dm-2): Corruption of in-memory data detected. Shutting down filesystem
Sep 18 20:35:15 gries kernel: XFS (dm-2): Please umount the filesystem and rectify the problem(s)
Se...
2015 Sep 21 | 0 | Centos 6.6, apparent xfs corruption
I think you need to read this from the bottom up:
"Corruption of in-memory data detected. Shutting down filesystem"
XFS detected corrupt in-memory data, so it calls xfs_do_force_shutdown to shut down the filesystem. The call comes from fs/xfs/xfs_trans.c, where the failing transaction is cancelled, which is why the log also reports "Internal error xfs_trans_cancel".
In other words, I would look at the memory corruption first. This _could_ be a kernel problem, but I would suggest starting with an
extended memory check,...
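[Editor's note] The hexadecimal argument in messages such as "xfs_do_force_shutdown(0x8)" is the shutdown-reason flag passed to that function. As a reading aid for the excerpts in these results, here is a small standalone C sketch (not from any message in this thread) that decodes the value using the SHUTDOWN_* flag values found in the XFS sources of this era: 0x8 is in-memory corruption, 0x1 a metadata I/O error, 0x2 a log I/O error, which matches the accompanying "Corruption of in-memory data" and "Log I/O Error Detected" lines. Verify the constants against your own kernel tree before relying on them.

    /*
     * Hypothetical helper, not part of any message above: decode the flag
     * value printed by xfs_do_force_shutdown(), e.g. "0x8". The SHUTDOWN_*
     * values mirror the XFS shutdown flags in kernels of this era; check
     * them against your kernel source before trusting the output.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define SHUTDOWN_META_IO_ERROR  0x0001  /* metadata write failed */
    #define SHUTDOWN_LOG_IO_ERROR   0x0002  /* log write failed */
    #define SHUTDOWN_FORCE_UMOUNT   0x0004  /* shutdown from a forced unmount */
    #define SHUTDOWN_CORRUPT_INCORE 0x0008  /* corrupt in-memory structures */

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <flags, e.g. 0x8>\n", argv[0]);
            return 1;
        }

        unsigned long flags = strtoul(argv[1], NULL, 0);

        if (flags & SHUTDOWN_META_IO_ERROR)
            puts("0x1: metadata I/O error");
        if (flags & SHUTDOWN_LOG_IO_ERROR)
            puts("0x2: log I/O error");
        if (flags & SHUTDOWN_FORCE_UMOUNT)
            puts("0x4: forced unmount");
        if (flags & SHUTDOWN_CORRUPT_INCORE)
            puts("0x8: corruption of in-memory data");

        return 0;
    }

Built with any C compiler (e.g. cc -o xfs_shutdown_flags xfs_shutdown_flags.c, both names placeholders), running it with 0x8 prints the in-memory corruption reason seen in the report above.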
2020 May 11 | 1 | XFS problem
...00 00 02 00 00 00
[443804.321338] blk_update_request: I/O error, dev sdb, sector 10332927480
[443804.321376] sd 0:0:0:0: rejecting I/O to offline device
[443804.321384] XFS (dm-2): metadata I/O error: block 0xf00001 ("xfs_trans_read_buf_map") error 5 numblks 1
[443804.321390] XFS (dm-2): xfs_do_force_shutdown(0x1) called from line 239 of file fs/xfs/libxfs/xfs_defer.c. Return address = 0xffffffffc073a90b
[443804.321421] sd 0:0:0:0: rejecting I/O to offline device
[443804.321431] XFS (dm-2): metadata I/O error: block 0x1e04cee ("xlog_iodone") error 5 numblks 64
[443804.321433] XFS (dm-2): xfs_...
2017 Nov 16 | 2 | xfs_rename error and brick offline
...12 node10 kernel: [<ffffffff810d1698>] ? audit_syscall_entry+0x2d8/0x300
Nov 16 11:15:12 node10 kernel: [<ffffffff811883ab>] ? sys_rename+0x1b/0x20
Nov 16 11:15:12 node10 kernel: [<ffffffff8100b032>] ? system_call_fastpath+0x16/0x1b
Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): xfs_do_force_shutdown(0x8) called from line 1949 of file fs/xfs/xfs_trans.c. Return address = 0xffffffffa04e5e52
Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): Corruption of in-memory data detected. Shutting down filesystem
Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): Please umount the filesystem and rectify the...
2017 Nov 16 | 0 | xfs_rename error and brick offline
...f810d1698>] ? audit_syscall_entry+0x2d8/0x300
> Nov 16 11:15:12 node10 kernel: [<ffffffff811883ab>] ? sys_rename+0x1b/0x20
> Nov 16 11:15:12 node10 kernel: [<ffffffff8100b032>] ? system_call_fastpath+0x16/0x1b
> Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): xfs_do_force_shutdown(0x8) called from line 1949 of file fs/xfs/xfs_trans.c. Return address = 0xffffffffa04e5e52
> Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): Corruption of in-memory data detected. Shutting down filesystem
> Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): Please umount the f...
2019 Oct 28 | 1 | NFS shutdown issue
...16:34:17 linux-fs01 systemd: Unmounting /rsnapshot...
Oct 28 16:34:17 linux-fs01 kernel: XFS (sde1): Unmounting Filesystem
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): metadata I/O error: block 0x2800ccac8 ("xlog_iodone") error 5 numblks 64
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): xfs_do_force_shutdown(0x2) called from line 1221 of file fs/xfs/xfs_log.c. Return address = 0xffffffffc06cec30
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): Log I/O Error Detected. Shutting down filesystem
Oct 28 16:34:19 linux-fs01 kernel: XFS (sde1): Please umount the filesystem and rectify the problem(s)
Oct 28 16:3...
2013 Apr 18 | 39 | Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
Hi,
I've been working on a blktap driver that allows access to Ceph RBD block devices without relying on the RBD kernel driver, and it has finally reached the point where it works and is testable.
Some of the advantages are:
- Easier to update to newer RBD versions
- Allows functionality only available in the userspace RBD library (write cache, layering, ...)
- Less issue when
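[Editor's note] The userspace functionality the driver relies on is exposed through librados/librbd rather than the in-kernel rbd module. As a rough illustration only (this is not code from the driver announced above), the following C sketch opens an RBD image through that userspace API and prints its size; the pool name "rbd" and image name "test-image" are placeholders.

    /*
     * Hypothetical sketch: open an RBD image via the userspace librbd API
     * instead of the kernel rbd driver. Link with: -lrados -lrbd
     */
    #include <stdio.h>
    #include <rados/librados.h>
    #include <rbd/librbd.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t ioctx;
        rbd_image_t image;
        rbd_image_info_t info;

        /* Connect using the default ceph.conf and client credentials. */
        if (rados_create(&cluster, NULL) < 0 ||
            rados_conf_read_file(cluster, NULL) < 0 ||
            rados_connect(cluster) < 0) {
            fprintf(stderr, "cannot connect to the Ceph cluster\n");
            return 1;
        }

        if (rados_ioctx_create(cluster, "rbd", &ioctx) < 0) {   /* pool name: placeholder */
            fprintf(stderr, "cannot open pool\n");
            rados_shutdown(cluster);
            return 1;
        }

        if (rbd_open(ioctx, "test-image", &image, NULL) < 0) {  /* image name: placeholder */
            fprintf(stderr, "cannot open RBD image\n");
            rados_ioctx_destroy(ioctx);
            rados_shutdown(cluster);
            return 1;
        }

        if (rbd_stat(image, &info, sizeof(info)) == 0)
            printf("image size: %llu bytes, object size: %llu bytes\n",
                   (unsigned long long)info.size,
                   (unsigned long long)info.obj_size);

        rbd_close(image);
        rados_ioctx_destroy(ioctx);
        rados_shutdown(cluster);
        return 0;
    }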