
Displaying 20 results from an estimated 7000 matches similar to: "Heal operation detail of EC volumes"

2017 Jun 01
0
Heal operation detail of EC volumes
>Is it possible that this matches your observations? Yes that matches what I see. So 19 files are being healed in parallel by 19 SHD processes. I thought only one file is being healed at a time. Then what is the meaning of the disperse.shd-max-threads parameter? If I set it to 2, will each SHD thread heal two files at the same time? >How many IOPS can your bricks handle? Bricks are 7200RPM NL-SAS
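A minimal sketch of the tuning being asked about, with a hypothetical volume name; disperse.shd-max-threads controls how many heals each self-heal daemon runs in parallel (the default is 1):

    # raise per-SHD heal parallelism from 1 to 2 (volume name hypothetical)
    gluster volume set myvolume disperse.shd-max-threads 2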
2017 Jun 01
3
Heal operation detail of EC volumes
Hi Serkan, On 30/05/17 10:22, Serkan Çoban wrote: > Ok, I understand that the heal operation takes place on the server side. In > this case I should see X KB > of outbound network traffic from 16 servers and 16X KB of inbound traffic to the > failed brick server, right? So that process will get 16 chunks, > recalculate our chunk, and write it to disk. That should be the normal operation for a single
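A rough worked example of the traffic pattern discussed above, assuming a 16+4 disperse layout (an assumption; the excerpt only confirms 16 chunks):

    # for a file of size S, each fragment is S/16
    # outbound from each of the 16 healthy servers: S/16 bytes
    # inbound to the failed-brick server: 16 * (S/16) = S bytes
    # written to the healed brick: one fragment of S/16 bytes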
2017 Jun 08
1
Heal operation detail of EC volumes
On Fri, Jun 2, 2017 at 1:01 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > >Is it possible that this matches your observations? > Yes that matches what I see. So 19 files are being healed in parallel by 19 > SHD processes. I thought only one file is being healed at a time. > Then what is the meaning of the disperse.shd-max-threads parameter? If I > set it to 2 then each SHD
2017 Jun 02
1
Heal operation detail of EC volumes
Hi Serkan, On Thursday, June 01, 2017 21:31 CEST, Serkan Çoban <cobanserkan at gmail.com> wrote: >Is it possible that this matches your observations? Yes that matches what I see. So 19 files are being healed in parallel by 19 SHD processes. I thought only one file is being healed at a time. Then what is the meaning of the disperse.shd-max-threads parameter? If I set it to 2 then each SHD thread
2013 Jul 09
2
Gluster Self Heal
Hi, I have a 2-node gluster with 3 TB storage. 1) I believe the "glusterfsd" is responsible for the self healing between the 2 nodes. 2) Due to some network error, the replication stopped for some reason but the application was accessing the data from node1. When I manually try to start the "glusterfsd" service, it's not starting. Please advise on how I can maintain
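For reference, self-healing in replicate volumes is performed by the glustershd daemon rather than the brick-side glusterfsd; a sketch for checking and restarting it, with a hypothetical volume name:

    # shows brick and Self-heal Daemon status on all nodes
    gluster volume status myvolume
    # respawns any missing brick or self-heal daemon processes
    gluster volume start myvolume force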
2017 Aug 21
2
self-heal not working
Sure, it doesn't look like a split brain based on the output: Brick node1.domain.tld:/data/myvolume/brick Status: Connected Number of entries in split-brain: 0 Brick node2.domain.tld:/data/myvolume/brick Status: Connected Number of entries in split-brain: 0 Brick node3.domain.tld:/srv/glusterfs/myvolume/brick Status: Connected Number of entries in split-brain: 0 > -------- Original
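Output like the above typically comes from the split-brain view of heal info; a sketch with a hypothetical volume name:

    gluster volume heal myvolume info split-brain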
2017 Aug 22
3
self-heal not working
Thanks for the additional hints, I have the following 2 questions first: - In order to launch the index heal, is the following command correct: gluster volume heal myvolume - If I run a "volume start force", will it have any short disruptions on my clients which mount the volume through FUSE? If yes, how long? This is a production system, that's why I am asking. > --------
2017 Aug 22
0
self-heal not working
On 08/22/2017 02:30 PM, mabi wrote: > Thanks for the additional hints, I have the following 2 questions first: > > - In order to launch the index heal, is the following command correct: > gluster volume heal myvolume > Yes > - If I run a "volume start force", will it have any short disruptions > on my clients which mount the volume through FUSE? If yes, how long?
2017 Jul 11
1
Replica 3 with arbiter - heal error?
Hello, I have Gluster 3.8.13 with a replica 3 arbiter volume mounted and run the following script there: while true; do echo "$(date)" >> a.txt; sleep 2; done After a few seconds I add a rule to the firewall on the client that blocks access to the node specified during mount, e.g. if the volume is mounted with: mount -t glusterfs -o backupvolfile-server=10.0.0.2 10.0.0.1:/vol /mnt/vol I
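The firewall rule itself is truncated in the excerpt; a plausible equivalent on the client, using the mount's primary node from the example (the exact rule is an assumption):

    # block traffic to the node used at mount time (10.0.0.1 from the excerpt)
    iptables -A OUTPUT -d 10.0.0.1 -j DROP
    # remove the rule again afterwards
    iptables -D OUTPUT -d 10.0.0.1 -j DROP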
2017 Aug 22
0
self-heal not working
Explore the following: - Launch index heal and look at the glustershd logs of all bricks for possible errors - See if the glustershd in each node is connected to all bricks. - If not, try to restart shd by `volume start force` - Launch index heal again and try. - Try debugging the shd log by setting client-log-level to DEBUG temporarily, as sketched below. On 08/22/2017 03:19 AM, mabi wrote: > Sure, it
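The steps above map onto commands roughly as follows (volume name hypothetical; the log-level option is confirmed further down in this thread):

    gluster volume heal myvolume                                    # launch index heal
    gluster volume start myvolume force                             # restart shd if disconnected
    gluster volume set myvolume diagnostics.client-log-level DEBUG  # temporary, for debugging
    gluster volume set myvolume diagnostics.client-log-level INFO   # revert when done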
2017 Aug 23
2
self-heal not working
I just saw the following bug which was fixed in 3.8.15: https://bugzilla.redhat.com/show_bug.cgi?id=1471613 Is it possible that the problem I described in this post is related to that bug? > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 22, 2017 11:51 AM > UTC Time: August 22, 2017 9:51 AM > From: ravishankar at
2017 Aug 24
2
self-heal not working
Thanks for confirming the command. I have now enabled the DEBUG client-log-level, run a heal, and then attached the glustershd log files of all 3 nodes to this mail. The volume concerned is called myvol-pro; the other 3 volumes have no problem so far. Also note that in the meantime it looks like the file has been deleted by the user and as such the heal info command does not show the file name
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt, Run these commands on all the bricks of the replica pair to get the attrs set on the backend. On the bricks of the first replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 On the fourth replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3 Also run the "gluster volume
2017 Aug 24
0
self-heal not working
Unlikely. In your case only the afr.dirty is set, not the afr.volname-client-xx xattr. `gluster volume set myvolume diagnostics.client-log-level DEBUG` is right. On 08/23/2017 10:31 PM, mabi wrote: > I just saw the following bug which was fixed in 3.8.15: > > https://bugzilla.redhat.com/show_bug.cgi?id=1471613 > > Is it possible that the problem I described in this post is
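To check which of the two xattr families mentioned above is set on a file, a getfattr sketch run directly on a brick (paths hypothetical; same flags as used elsewhere in these threads):

    getfattr -d -m . -e hex /data/myvolume/brick/path/to/file
    # look for trusted.afr.dirty versus trusted.afr.<volname>-client-N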
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like "got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards. Anyway I reproduced it by manually setting the afr.dirty bit for a zero byte file on all 3 bricks. Since there are no afr pending xattrs indicating good/bad copies and all files are zero bytes, the data self-heal algorithm just picks the
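For reference, the manual reproduction described above can be sketched with setfattr on each brick; the path is hypothetical and the value packs three 32-bit counters (data/metadata/entry):

    # mark the data counter of trusted.afr.dirty as pending
    setfattr -n trusted.afr.dirty -v 0x000000010000000000000000 /data/myvolume/brick/file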
2017 Oct 18
1
gfid entries in volume heal info that do not heal
Hey Matt, From the xattr output, it looks like the files are not present on the arbiter brick and need healing. But the parent does not have the pending markers set for those entries. The workaround for this is to do a lookup on the file which needs heal from the mount, so it will create the entry on the arbiter brick, and then run the volume heal to do the healing. Follow
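A sketch of that workaround with hypothetical paths; the lookup must come from a client mount so the entry gets created on the arbiter brick:

    stat /mnt/myvolume/path/to/file   # lookup from the mount creates the entry
    gluster volume heal myvolume      # then run the index heal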
2017 Aug 25
0
self-heal not working
Hi Ravi, Did you get a chance to have a look at the log files I have attached in my last mail? Best, Mabi > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 24, 2017 12:08 PM > UTC Time: August 24, 2017 10:08 AM > From: mabi at protonmail.ch > To: Ravishankar N <ravishankar at redhat.com> > Ben Turner
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log. >> Run these commands on all the bricks of the replica pair to get the attrs set on the backend. [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 getfattr: Removing leading '/' from absolute path names # file:
2017 Aug 27
2
self-heal not working
----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Ravishankar N" <ravishankar at redhat.com> > Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org> > Sent: Sunday, August 27, 2017 3:15:33 PM > Subject: Re: [Gluster-users] self-heal not working > >
2017 Aug 28
3
self-heal not working
Excuse me for my naive questions, but how do I reset the afr.dirty xattr on the file to be healed? And do I need to do that through a FUSE mount, or simply on every brick directly? > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 28, 2017 5:58 AM > UTC Time: August 28, 2017 3:58 AM > From: ravishankar at redhat.com >
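A sketch of the reset being asked about, assuming (as in the reproduction upthread) it is done directly on every brick rather than through FUSE; the path is hypothetical:

    # zero out the pending counters of trusted.afr.dirty on each brick
    setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 /data/myvolume/brick/file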