similar to: What is glustershd ?

Displaying 20 results from an estimated 6000 matches similar to: "What is glustershd ?"

2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still ongoing, as /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster versions are 3.10.9 and 3.10.10 (a version update is in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
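For a quicker sense of how many entries are still pending without dumping the full list, the heal-count statistics can help; a sketch, assuming the volume name myvol from the original post below:

# gluster volume heal myvol statistics heal-count

This prints only the number of pending entries per brick rather than every file path.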
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey, did the heal complete, and do you still have some entries pending heal? If yes, can you provide the following information to debug the issue: 1. Which version of gluster you are running 2. gluster volume heal <volname> info summary or gluster volume heal <volname> info 3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the files which is pending heal, from all
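Spelled out with placeholder names (the volume name and brick path here are hypothetical), the requested commands would look like:

# gluster volume heal myvol info summary
# gluster volume heal myvol info
# getfattr -d -e hex -m . /data/glusterfs/path/to/pending-file

As the follow-up above notes, info summary is not available on all 3.10 releases.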
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks, I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows:

# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3:
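For context, the usual way to move a brick such as the arbiter to a new server is replace-brick, which then lets self-heal populate the replacement; a sketch with hypothetical hostnames, since the full brick list is truncated above:

# gluster volume replace-brick myvol oldhost:/data/glusterfs newhost:/data/glusterfs commit force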
2013 Jul 09
2
Gluster Self Heal
Hi, I have a 2-node gluster setup with 3 TB of storage. 1) I believe "glusterfsd" is responsible for the self-healing between the 2 nodes. 2) Due to some network error, replication stopped for some reason, but the application was still accessing the data from node1. When I manually try to start the "glusterfsd" service, it's not starting. Please advise on how I can maintain
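One clarification worth adding: glusterfsd is the brick server process, while the self-heal daemon is glustershd (the subject of this search). Whether it is running can be checked per volume, e.g. with a hypothetical volume name:

# gluster volume status myvol

which lists a 'Self-heal Daemon' line for each node along with its online state.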
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you very much, that makes me feel much more relaxed. Below is the getfattr output for a file from all the bricks:

root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
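The actual xattr values are truncated above; purely for illustration (not the poster's output), a healthy replica file typically shows all-zero AFR pending counters, along these lines:

trusted.afr.dirty=0x000000000000000000000000
trusted.afr.myvol-client-0=0x000000000000000000000000

Non-zero bytes in the data/metadata/entry segments of the value indicate a pending heal against the corresponding brick.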
2017 Aug 24
2
self-heal not working
Thanks for confirming the command. I have now enabled the DEBUG client-log-level, run a heal, and then attached the glustershd log files of all 3 nodes to this mail. The volume concerned is called myvol-pro; the other 3 volumes have had no problems so far. Also note that in the meantime it looks like the file has been deleted by the user, and as such the heal info command does not show the file name
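For anyone following along, the enable/revert pair would look like this (volume name taken from the message; DEBUG is verbose, so it should be reverted once the logs are collected):

# gluster volume set myvol-pro diagnostics.client-log-level DEBUG
# gluster volume set myvol-pro diagnostics.client-log-level INFO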
2017 Sep 15
3
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Howdy, I'm setting up several gluster 3.12 clusters running on CentOS 7 and am having issues with glusterd.log and glustershd.log both being filled with errors relating to null clients and client-callback functions. They seem to be related to high CPU usage across the nodes, although I don't have a way of confirming that (suggestions welcomed!). in
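One way to at least attribute the CPU usage to specific gluster processes, a sketch assuming standard procps tools and a modest number of matching processes:

# top -b -n 1 -p "$(pgrep -d, gluster)"

This takes a single batch-mode snapshot restricted to the glusterd, glusterfs, and glusterfsd processes.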
2017 Aug 25
0
self-heal not working
Hi Ravi, Did you get a chance to have a look at the log files I attached to my last mail? Best, Mabi > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 24, 2017 12:08 PM > UTC Time: August 24, 2017 10:08 AM > From: mabi at protonmail.ch > To: Ravishankar N <ravishankar at redhat.com> > Ben Turner
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like "got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no errors afterwards. Anyway, I reproduced it by manually setting the afr.dirty bit for a zero-byte file on all 3 bricks. Since there are no afr pending xattrs indicating good/bad copies and all files are zero bytes, the data self-heal algorithm just picks the
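A sketch of that reproduction, with a hypothetical brick path; the AFR xattr value packs three 4-byte counters (data, metadata, entry), so this marks the data segment dirty:

# setfattr -n trusted.afr.dirty -v 0x000000010000000000000000 /data/brick/zero-byte-file

run directly against the file on each brick, not through the mount.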
2017 Aug 24
0
self-heal not working
Unlikely. In your case only the afr.dirty is set, not the afr.volname-client-xx xattr. `gluster volume set myvolume diagnostics.client-log-level DEBUG` is right. On 08/23/2017 10:31 PM, mabi wrote: > I just saw the following bug which was fixed in 3.8.15: > > https://bugzilla.redhat.com/show_bug.cgi?id=1471613 > > Is it possible that the problem I described in this post is
2017 Aug 31
1
error msg in the glustershd.log
Based on this BZ https://bugzilla.redhat.com/show_bug.cgi?id=1414287 it has been fixed in glusterfs-3.11.0 --- Ashish ----- Original Message ----- From: "Amudhan P" <amudhan83 at gmail.com> To: "Ashish Pandey" <aspandey at redhat.com> Cc: "Gluster Users" <gluster-users at gluster.org> Sent: Thursday, August 31, 2017 1:07:16 PM Subject:
2017 Aug 27
2
self-heal not working
----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Ravishankar N" <ravishankar at redhat.com> > Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org> > Sent: Sunday, August 27, 2017 3:15:33 PM > Subject: Re: [Gluster-users] self-heal not working > >
2017 Aug 27
0
self-heal not working
Thanks Ravi for your analysis. So, as far as I understand, there is nothing to worry about, but my question now would be: how do I get rid of this file from the heal info? > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 27, 2017 3:45 PM > UTC Time: August 27, 2017 1:45 PM > From: ravishankar at redhat.com > To: mabi <mabi at
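A hedged suggestion rather than a confirmed fix: re-triggering an index heal and re-checking is often enough to clear a stale entry once the underlying file is healthy or gone (volume name taken from earlier in this thread):

# gluster volume heal myvol-pro
# gluster volume heal myvol-pro info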
2013 Sep 05
1
NFS can't be used by ESXi with a Striped Volume
After some tests, I can confirm that ESXi can't use a Striped-Replicate volume over GlusterFS's NFS, but it does work on Distributed-Replicate. Does anyone know how or why? 2013/9/5 higkoohk <higkoohk at gmail.com> > Thanks Vijay ! > > It ran successfully after 'volume set images-stripe nfs.nlm off'. > > Now I can use ESXi with GlusterFS's NFS export. > > Many
2017 Aug 31
0
error msg in the glustershd.log
Ashish, in which version has this issue been fixed? On Tue, Aug 29, 2017 at 6:38 PM, Amudhan P <amudhan83 at gmail.com> wrote: > I am using 3.10.1; from which version is this update available? > > > On Tue, Aug 29, 2017 at 5:03 PM, Ashish Pandey <aspandey at redhat.com> > wrote: > >> >> Whenever we do some fop on an EC volume on a file, we check the xattr also
2017 Sep 18
0
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Sam, You might want to give glusterfs-3.12.1 a try instead. On Fri, Sep 15, 2017 at 6:42 AM, Sam McLeod <mailinglists at smcleod.net> wrote: > Howdy, > > I'm setting up several gluster 3.12 clusters running on CentOS 7 and am > having issues with glusterd.log and glustershd.log both being filled with > errors relating to null clients and client-callback
2017 Aug 29
0
error msg in the glustershd.log
Whenever we do some fop on an EC volume on a file, we also check the xattrs to see whether the file is healthy or not. If not, we trigger heal. Lookup is a fop for which we don't take the inodelk lock, so it is possible that the xattrs we get for a lookup fop differ across some bricks. This difference is not reliable, but we still trigger heal, and that is why you are seeing these messages.
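To see the xattrs being compared on a disperse (EC) volume, a sketch with a hypothetical brick path:

# getfattr -d -e hex -m trusted.ec. /brick/path/to/file

run on each brick; values such as trusted.ec.version or trusted.ec.dirty differing across bricks are what prompt these heal messages.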
2017 Sep 18
2
0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
Thanks Milind, Yes I'm hanging out for CentOS's Storage / Gluster SIG to release the packages for 3.12.1, I can see the packages were built a week ago but they're still not on the repo :( -- Sam > On 18 Sep 2017, at 9:57 pm, Milind Changire <mchangir at redhat.com> wrote: > > Sam, > You might want to give glusterfs-3.12.1 a try instead. > > > >> On Fri, Sep
2017 Aug 28
3
self-heal not working
Excuse me for my naive questions, but how do I reset the afr.dirty xattr on the file to be healed? And do I need to do that through a FUSE mount, or simply on every brick directly? > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 28, 2017 5:58 AM > UTC Time: August 28, 2017 3:58 AM > From: ravishankar at redhat.com >
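One plausible reset, sketched with a hypothetical brick path and not necessarily the exact command from the reply; trusted.* xattrs are filtered from the FUSE mount, so this would be done on each brick directly:

# setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 /data/brick/path/to/file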
2017 Aug 28
0
self-heal not working
On 08/28/2017 01:57 AM, Ben Turner wrote: > ----- Original Message ----- >> From: "mabi" <mabi at protonmail.ch> >> To: "Ravishankar N" <ravishankar at redhat.com> >> Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org> >> Sent: Sunday, August 27, 2017 3:15:33 PM >>