similar to: Manual rsync before self-heal to prevent repaired server hanging

Displaying 20 results from an estimated 20000 matches similar to: "Manual rsync before self-heal to prevent repaired server hanging"

2017 Aug 28
0
self-heal not working
On 08/28/2017 01:57 AM, Ben Turner wrote: > ----- Original Message ----- >> From: "mabi" <mabi at protonmail.ch> >> To: "Ravishankar N" <ravishankar at redhat.com> >> Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org> >> Sent: Sunday, August 27, 2017 3:15:33 PM >>
2017 Aug 27
0
self-heal not working
Thanks Ravi for your analysis. So as far as I understand there is nothing to worry about, but my question now would be: how do I get rid of this file from the heal info? > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 27, 2017 3:45 PM > UTC Time: August 27, 2017 1:45 PM > From: ravishankar at redhat.com > To: mabi <mabi at
2017 Aug 28
0
self-heal not working
On 08/28/2017 01:29 PM, mabi wrote: > Excuse me for my naive questions but how do I reset the afr.dirty > xattr on the file to be healed? And do I need to do that through a > FUSE mount? Or simply on every brick directly? > > Directly on the bricks: `setfattr -n trusted.afr.dirty -v 0x000000000000000000000000
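For readers following the thread, a minimal sketch of that reset (the brick path and file name below are placeholders modelled on the volume layout described elsewhere in this thread; run it on every brick, not through the FUSE mount):

    # on each brick host, against the brick path rather than the FUSE mount
    setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 \
        /data/myvolume/brick/path/to/stuck-file
    # verify the xattrs afterwards
    getfattr -d -m . -e hex /data/myvolume/brick/path/to/stuck-file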
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey, Did the heal complete and do you still have some entries pending heal? If yes, can you provide the following information to debug the issue: 1. Which version of gluster you are running 2. gluster volume heal <volname> info summary or gluster volume heal <volname> info 3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the files which is pending heal from all
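As a rough sketch, the requested data can be gathered like this (<volname> and the brick path are placeholders; `info summary` only exists on newer releases, as noted later in this thread):

    gluster --version
    gluster volume heal <volname> info summary    # fall back to plain "info" on older releases
    gluster volume heal <volname> info
    # on every brick host, for one file that is pending heal:
    getfattr -d -e hex -m . /path/on/brick/to/pending-file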
2014 Sep 05
2
glusterfs replica volume self heal dir very slow!!why?
Hi all, I did the following test: I created a glusterfs replica volume (replica count is 2) with two server nodes (server A and server B), then mounted the volume on a client node. Then I shut down the network of the server A node; on the client node I copied a dir which has a lot of small files; the dir size is 2.9 GByte. When the copy finished, I started the network of the server A node again. Now
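Heal progress in a test like this can be watched with the standard heal commands; a minimal sketch, assuming the volume is called testvol:

    gluster volume heal testvol info                     # entries still pending heal, per brick
    gluster volume heal testvol statistics heal-count    # per-brick count of pending entries
    gluster volume heal testvol full                     # force a full crawl instead of an index heal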
2017 Aug 28
0
self-heal not working
Great, can you raise a bug for the issue so that it is easier to keep track of it (plus you'll be notified if the patch is posted)? The general guidelines are @ https://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Reporting-Guidelines but you just need to provide whatever you described in this email thread in the bug: i.e. volume info, heal info, getfattr and stat output of
2013 Mar 14
1
glusterfs 3.3 self-heal daemon crash and can't be started
Dear glusterfs experts, Recently we have encountered a self-heal daemon crash issue after rebalancing a volume. Crash stack below: +------------------------------------------------------------------------------+ pending frames: patchset: git://git.gluster.com/glusterfs.git signal received: 11 time of crash: 2013-03-14 16:33:50 configuration details: argp 1 backtrace 1 dlfcn 1 fdatasync 1 libpthread
2017 Aug 21
0
self-heal not working
----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Gluster Users" <gluster-users at gluster.org> > Sent: Monday, August 21, 2017 9:28:24 AM > Subject: [Gluster-users] self-heal not working > > Hi, > > I have a replica 2 with arbiter GlusterFS 3.8.11 cluster and there is > currently one file listed to be healed as
2017 Aug 25
0
self-heal not working
Hi Ravi, Did you get a chance to have a look at the log files I have attached in my last mail? Best, Mabi > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 24, 2017 12:08 PM > UTC Time: August 24, 2017 10:08 AM > From: mabi at protonmail.ch > To: Ravishankar N <ravishankar at redhat.com> > Ben Turner
2017 Aug 21
0
self-heal not working
Can you also provide the output of: gluster v heal <my vol> info split-brain? If it is split-brain, just delete the incorrect file from the brick and run heal again. I haven't tried this with arbiter but I assume the process is the same. -b ----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Ben Turner" <bturner at redhat.com> > Cc:
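A hedged sketch of that procedure (volume name and file paths are placeholders; note that removing a file from a brick by hand usually also means removing its gfid hard link under .glusterfs/, which is not spelled out in the snippet above):

    gluster volume heal <volname> info split-brain
    # if a bad copy is identified, on the affected brick host:
    rm /data/myvolume/brick/path/to/bad-file
    rm /data/myvolume/brick/.glusterfs/ab/cd/abcd1234-...    # gfid hard link; path shown is illustrative
    gluster volume heal <volname>                             # trigger heal again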
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you very much, you made me much more relaxed. Below is getfattr output for a file from all the bricks: root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack getfattr: Removing leading '/' from absolute path names # file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2017 Aug 21
2
self-heal not working
Hi, I have a replica 2 with arbiter GlusterFS 3.8.11 cluster and there is currently one file listed to be healed as you can see below but it never gets healed by the self-heal daemon: Brick node1.domain.tld:/data/myvolume/brick /data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png Status: Connected Number of entries: 1 Brick node2.domain.tld:/data/myvolume/brick
2017 Aug 24
0
self-heal not working
Unlikely. In your case only the afr.dirty is set, not the afr.volname-client-xx xattr. `gluster volume set myvolume diagnostics.client-log-level DEBUG` is right. On 08/23/2017 10:31 PM, mabi wrote: > I just saw the following bug which was fixed in 3.8.15: > > https://bugzilla.redhat.com/show_bug.cgi?id=1471613 > > Is it possible that the problem I described in this post is
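For completeness, a small sketch of raising and later restoring that log level on the volume named in this thread:

    gluster volume set myvolume diagnostics.client-log-level DEBUG
    # reproduce the heal attempt, then inspect /var/log/glusterfs/glustershd.log
    gluster volume reset myvolume diagnostics.client-log-level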
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like " got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards. Anyway I reproduced it by manually setting the afr.dirty bit for a zero byte file on all 3 bricks. Since there are no afr pending xattrs indicating good/bad copies and all files are zero bytes, the data self-heal algorithm just picks the
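A rough sketch of that reproducer (the file path and the exact xattr value are illustrative; the point is simply a non-zero trusted.afr.dirty on the same zero-byte file on all three bricks):

    # on node1, node2 and the arbiter, for the same zero-byte file:
    setfattr -n trusted.afr.dirty -v 0x000000010000000000000000 \
        /data/myvolume/brick/path/to/zero-byte-file
    # then launch heal and check heal info, mirroring the behaviour described above
    gluster volume heal myvolume
    gluster volume heal myvolume info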
2017 Aug 22
0
self-heal not working
Explore the following:
- Launch index heal and look at the glustershd logs of all bricks for possible errors.
- See if the glustershd in each node is connected to all bricks.
- If not, try to restart shd with `volume start force`.
- Launch index heal again and try.
- Try debugging the shd log by setting client-log-level to DEBUG temporarily.
On 08/22/2017 03:19 AM, mabi wrote: > Sure, it
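Roughly, that checklist maps to the following commands (volume name is a placeholder; this is a sketch rather than an exact transcript of the advice):

    gluster volume heal myvolume                  # launch index heal
    gluster volume status myvolume                # check that shd and all bricks are online
    gluster volume start myvolume force           # restart shd if it is not connected
    tail -f /var/log/glusterfs/glustershd.log     # watch for errors during the heal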
2017 Aug 27
2
self-heal not working
----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Ravishankar N" <ravishankar at redhat.com> > Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org> > Sent: Sunday, August 27, 2017 3:15:33 PM > Subject: Re: [Gluster-users] self-heal not working > >
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still ongoing, as the /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster version is 3.10.9 and 3.10.10 (a version update is in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
2017 Aug 28
2
self-heal not working
Thank you for the command. I ran it on all my nodes and now finally the self-heal daemon does not report any files to be healed. Hopefully this scenario can get handled properly in newer versions of GlusterFS. > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 28, 2017 10:41 AM > UTC Time: August 28, 2017 8:41 AM >
2017 Aug 21
2
self-heal not working
Hi Ben, So it is really a 0 kBytes file everywhere (all nodes including the arbiter and from the client). Below you will find the output you requested. Hopefully that will help to find out why this specific file is not healing... Let me know if you need any more information. Btw node3 is my arbiter node. NODE1: STAT: File:
2017 Aug 21
2
self-heal not working
Sure, it doesn't look like a split brain based on the output: Brick node1.domain.tld:/data/myvolume/brick Status: Connected Number of entries in split-brain: 0 Brick node2.domain.tld:/data/myvolume/brick Status: Connected Number of entries in split-brain: 0 Brick node3.domain.tld:/srv/glusterfs/myvolume/brick Status: Connected Number of entries in split-brain: 0 > -------- Original