similar to: XFS problem

Displaying 11 results from an estimated 11 matches similar to: "XFS problem"

2019 Oct 28
1
NFS shutdown issue
Hi all, I have an odd interaction on a CentOS 7 file server. The basic setup is a minimal 7.x install. I have 4 internal drives (/dev/sd[a-d]) configured as a RAID5 array and mounted locally on /data. This is exported via NFS to ~12 workstations, which use the exported file system for /home. I have an external drive connected via USB (/dev/sde) and mounted on /rsnapshot. I use rsnapshot to back up
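For readers unfamiliar with this kind of setup, here is a minimal sketch of what the export and rsnapshot configuration might look like; the subnet, retention policy, and option choices are illustrative assumptions, not details from the original post.

  # /etc/exports -- export the RAID5 volume to the workstations (subnet assumed)
  /data  192.168.1.0/24(rw,sync,no_subtree_check)

  # /etc/rsnapshot.conf -- back up /data onto the USB drive mounted at /rsnapshot
  # (fields must be separated by tabs in a real rsnapshot.conf)
  snapshot_root   /rsnapshot/
  retain          daily   7
  backup          /data/  localhost/

  # apply the export change, then run a backup by hand
  exportfs -ra
  rsnapshot daily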
2015 Feb 28
0
Looking for a life-save LVM Guru
On Fri, Feb 27, 2015 at 8:24 PM, John R Pierce <pierce at hogranch.com> wrote:
> On 2/27/2015 4:52 PM, Khemara Lyn wrote:
>> I understand; I tried it in the hope that I could activate the LV again
>> with a new PV replacing the damaged one. But still I could not activate it.
>>
>> What is the right way to recover the remaining PVs?
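As background, here is a hedged sketch of the LVM commands usually tried when a PV is permanently lost. This is not the advice given in the thread, the volume group name vg0 is a placeholder, and vgreduce --removemissing discards whatever lived on the missing PV, so it should only be attempted after imaging the surviving disks.

  vgdisplay --partial vg0          # see what LVM still knows about the volume group
  vgchange -ay --partial vg0       # try to activate the LVs despite the missing PV
  vgreduce --removemissing vg0     # drop the missing PV from the VG metadata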
2016 Mar 16
1
[virt-builder] XFS metadata corruption in Fedora 23 template images (on resize)
Running this:

  $ virt-builder fedora-23 -o vm1.qcow2 --format qcow2 \
      --root-password password:123456 --selinux-relabel --size 100G

and importing vm1.qcow2 into libvirt results in these "corrupted metadata buffer" errors, in a continuous loop, on the serial console:

  [...]
  [ 51.687988] XFS (vda3): First 64 bytes of corrupted metadata buffer:
  [ 51.688938]
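One hedged way to poke at the resulting image without booting it is via the libguestfs tools that ship alongside virt-builder; the partition name below is an assumption based on the vda3 shown in the console log.

  # list the partitions and filesystems inside the image
  virt-filesystems --long --parts --filesystems -a vm1.qcow2

  # open a rescue shell on the image and check the XFS root read-only
  virt-rescue -a vm1.qcow2
  # inside the rescue shell (device naming is an assumption; check with 'ls /dev/sd*'):
  #   xfs_repair -n /dev/sda3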
2015 Sep 21
0
Centos 6.6, apparent xfs corruption
I think you need to read this from the bottom up. "Corruption of in-memory data detected. Shutting down filesystem" means XFS called xfs_do_force_shutdown to shut down the filesystem. The call comes from fs/xfs/xfs_trans.c, which fails and so reports "Internal error xfs_trans_cancel". In other words, I would look at the memory
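Following that suggestion, a generic sketch of the checks one might run; these commands are illustrative, not quoted from the thread, and the mount point and device are placeholders.

  # look for hardware/memory complaints around the time of the shutdown
  grep -iE 'mce|ecc|memory' /var/log/messages

  # with the filesystem unmounted, do a read-only XFS consistency check
  umount /data
  xfs_repair -n /dev/sdb1    # -n reports problems without modifying anything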
2015 Sep 21
2
Centos 6.6, apparent xfs corruption
Hi all - After several months of worry-free operation, we received the following kernel messages about an XFS filesystem running under CentOS 6.6. The proximate causes appear to be "Internal error xfs_trans_cancel" and "Corruption of in-memory data detected. Shutting down filesystem". The filesystem is back up, mounted, and appears to be working OK as the backing store for a Splunk datastore.
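A hedged sketch of the details usually collected before reporting this kind of error upstream; the mount point is a placeholder, not taken from the post.

  uname -r                       # exact kernel version
  xfs_info /data                 # filesystem geometry
  grep 'XFS' /var/log/messages   # the full xfs_trans_cancel trace and shutdown messages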
2017 Nov 16
0
xfs_rename error and brick offline
On Thu, Nov 16, 2017 at 6:23 AM, Paul <flypen at gmail.com> wrote:
> Hi,
>
> I have a 5-node GlusterFS cluster with Distributed-Replicate. There are
> 180 bricks in total. The OS is CentOS 6.5, and GlusterFS is 3.11.0. I find
> that many bricks go offline when we create some empty files and rename them.
> I see an XFS call trace on every node.
>
> For example,
> Nov 16
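For context, a generic way to see which bricks are down and why; these commands are not from the thread, and myvol is a placeholder volume name.

  gluster volume status                        # shows each brick and whether its process is online
  gluster volume status myvol detail           # per-brick device, free space, and inode counts
  tail -n 50 /var/log/glusterfs/bricks/*.log   # brick logs usually record why a process died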
2017 Nov 16
2
xfs_rename error and brick offline
Hi, I have a 5-node GlusterFS cluster with Distributed-Replicate. There are 180 bricks in total. The OS is CentOS 6.5, and GlusterFS is 3.11.0. I find that many bricks go offline when we create some empty files and rename them. I see an XFS call trace on every node. For example:

  Nov 16 11:15:12 node10 kernel: XFS (rdc00d28p2): Internal error xfs_trans_cancel at line 1948 of file fs/xfs/xfs_trans.c.
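After an XFS shutdown like the one logged above, a brick typically stays offline until its filesystem is back in service. The sketch below is one commonly suggested recovery sequence, not the thread's resolution; the mount point and volume name are placeholders, and the device path is only inferred from the log line.

  umount /bricks/rdc00d28p2         # xfs_repair requires the filesystem to be unmounted
  xfs_repair /dev/rdc00d28p2        # check and repair the brick filesystem
  mount /bricks/rdc00d28p2
  gluster volume start myvol force  # force-restart the brick processes that went offline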
2023 Apr 21
1
[PATCH] ocfs2: Fix wrong search logic in __ocfs2_resv_find_window
On Fri, Apr 21, 2023 at 03:35:01PM +0800, Joseph Qi wrote:
> Hi,
> Could you please share a reproducer?

Anyone can access and download the URL [1] (I wrote it in the patch commit log) without registering a SUSE account. Please check the attached file, which I downloaded from [1] and modified per comment 1 of the BZ. The trigger method is also in comment 1; I copy it here: ./defragfs_test.sh -d
2015 Feb 28
7
Looking for a life-save LVM Guru
On 2/27/2015 4:52 PM, Khemara Lyn wrote:
> I understand; I tried it in the hope that I could activate the LV again
> with a new PV replacing the damaged one. But still I could not activate it.
>
> What is the right way to recover the remaining PVs?

Take a filing cabinet packed full of tens of thousands of files of hundreds of pages each, with the index cards interleaved in the
2016 Dec 14
3
Problem with yum on CentOS Linux release 7.2.1511 (Core) with 3.10.0-327.36.3.el7.x86_64 kernel
Everyone, I am at a loss on this problem and would appreciate some guidance as to where to start fixing it. I noticed that my home gateway server was not being updated with the new kernel and other software, and when I ran yum it aborted with the notices below. I tried a yum clean all, but this did not fix the problem. I thought the problem might be related to one of the repos, but I have other
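A generic yum troubleshooting sequence of the kind usually suggested for this symptom; it is not the resolution from the thread, and the repo names are only examples.

  yum clean all
  yum --disablerepo='*' --enablerepo=base,updates check-update   # try only the stock repos
  rpm --rebuilddb                                                # rebuild a possibly corrupt rpm database
  yum check                                                      # report problems in the local package database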
2013 Apr 18
39
Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
Hi, I've been working on a blktap driver that allows access to Ceph RBD block devices without relying on the RBD kernel driver, and it has finally reached the point where it works and is testable. Some of the advantages are:
- Easier to update to a newer RBD version
- Allows functionality only available in the userspace RBD library (write cache, layering, ...)
- Fewer issues when