Ravishankar N
2021-Nov-01 01:53 UTC
[Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work
On Mon, Nov 1, 2021 at 12:02 AM Thorsten Walk <darkiop at gmail.com> wrote:

> Hi Ravi, the file only exists on pve01, and only once there:
>
> [19:22:10] [ssh:root at pve01(192.168.1.50): ~ (700)]
> # stat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>   File: /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>   Size: 6          Blocks: 8          IO Block: 4096   regular file
> Device: fd12h/64786d    Inode: 528    Links: 1
> Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
> Access: 2021-10-30 14:34:50.385893588 +0200
> Modify: 2021-10-27 00:26:43.988756557 +0200
> Change: 2021-10-27 00:26:43.988756557 +0200
>  Birth: -
>
> [19:24:41] [ssh:root at pve01(192.168.1.50): ~ (700)]
> # ls -l /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
> .rw-r--r-- root root 6B 4 days ago /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>
> [19:24:54] [ssh:root at pve01(192.168.1.50): ~ (700)]
> # cat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
> 28084

Hi Thorsten, you can delete the file. From the file size and contents, it
looks like it belongs to ovirt sanlock. Not sure why you ended up in this
situation (maybe the unlink partially failed on this brick?). You can check
the mount, brick and self-heal daemon logs for this gfid to see if you find
related error/warning messages.

-Ravi
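[For reference, a minimal sketch of the checks Ravi suggests. It assumes the
default GlusterFS log directory (/var/log/glusterfs) and uses the brick path,
inode number and gfid from the stat output above; adjust paths to your setup.]

  # Does the gfid file have a sibling hard link in the brick's normal
  # namespace? Inode 528 is taken from the stat output above; if the
  # .glusterfs entry is the only match, it is orphaned.
  find /data/glusterfs -inum 528

  # Search the self-heal daemon, brick and fuse-mount logs for the gfid.
  GFID=26c5396c-86ff-408d-9cda-106acd2b0768
  grep -r "$GFID" /var/log/glusterfs/glustershd.log \
                  /var/log/glusterfs/bricks/ \
                  /var/log/glusterfs/*.log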
Thorsten Walk
2021-Nov-01 06:51 UTC
[Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work
After deleting the file, the heal info output is clean.

> Not sure why you ended up in this situation (maybe unlink partially failed
> on this brick?)

Neither am I; this was a completely fresh setup with 1-2 VMs and 1-2 Proxmox
LXC templates. I let it run for a few days and at some point it ended up in
the state described. I will keep monitoring it and start filling the bricks
with data. Thanks for your help!

On Mon, Nov 1, 2021 at 02:54, Ravishankar N <ravishankar.n at pavilion.io> wrote:

> On Mon, Nov 1, 2021 at 12:02 AM Thorsten Walk <darkiop at gmail.com> wrote:
>
>> Hi Ravi, the file only exists on pve01, and only once there:
>>
>> [19:22:10] [ssh:root at pve01(192.168.1.50): ~ (700)]
>> # stat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>   File: /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>   Size: 6          Blocks: 8          IO Block: 4096   regular file
>> Device: fd12h/64786d    Inode: 528    Links: 1
>> Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
>> Access: 2021-10-30 14:34:50.385893588 +0200
>> Modify: 2021-10-27 00:26:43.988756557 +0200
>> Change: 2021-10-27 00:26:43.988756557 +0200
>>  Birth: -
>>
>> [19:24:41] [ssh:root at pve01(192.168.1.50): ~ (700)]
>> # ls -l /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>> .rw-r--r-- root root 6B 4 days ago /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>
>> [19:24:54] [ssh:root at pve01(192.168.1.50): ~ (700)]
>> # cat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>> 28084
>
> Hi Thorsten, you can delete the file. From the file size and contents, it
> looks like it belongs to ovirt sanlock. Not sure why you ended up in this
> situation (maybe the unlink partially failed on this brick?). You can check
> the mount, brick and self-heal daemon logs for this gfid to see if you find
> related error/warning messages.
>
> -Ravi
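[For reference, the heal state Thorsten mentions can be checked from any node
with the gluster CLI. VOLNAME is a placeholder; the actual volume name is not
shown in this thread.]

  # List entries that still need healing (should now be empty).
  gluster volume heal VOLNAME info

  # Show entries that are in split-brain, if any.
  gluster volume heal VOLNAME info split-brain

  # Trigger an index heal across the replica set.
  gluster volume heal VOLNAME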