search for: glfsheal

Displaying 20 results from an estimated 24 matches for "glfsheal".

2017 Aug 29
1
glfsheal-v0.log Too many open files
...Any help about the below issue? On Tue, Aug 29, 2017 at 3:00 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi, > > When I run gluster v heal v0 info, it gives "v0: Not able to fetch > volfile from glusterd" error message. > I see too many open files errors in glfsheal-v0.log file. How can I > increase open file limit for glfsheal? > I already increased nfile limit in /etc/init.d/glusterd and > /etc/init.d/glusterfsd but it did not help. > > Any suggestions?
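
A minimal sketch of how the open-file limit could be raised, assuming the usual process layout (glfsheal is, as far as I can tell, spawned by the gluster CLI, so it inherits the limit of the shell that runs "gluster v heal"; the systemd unit name and the 65536 value are assumptions, and on SysV-init hosts the nfile setting in the init scripts mentioned above is the equivalent):

    # limit that the current shell will pass on to glfsheal
    ulimit -n
    ulimit -n 65536                                   # raise it before re-running 'gluster v heal v0 info'

    # limit applied to the running glusterd
    grep 'open files' /proc/$(pidof glusterd)/limits

    # on systemd hosts, a drop-in makes the change persistent
    mkdir -p /etc/systemd/system/glusterd.service.d
    printf '[Service]\nLimitNOFILE=65536\n' > /etc/systemd/system/glusterd.service.d/limits.conf
    systemctl daemon-reload && systemctl restart glusterd
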
2023 Mar 21
1
How to configure?
Killed glfsheal, after a day there were 218 processes, then they got killed by OOM during the weekend. Now there are no processes active. Trying to run "heal info" reports lots of files quite quickly but does not spawn any glfsheal process. And neither does restarting glusterd. Is there some way to sel...
2023 Mar 21
1
How to configure?
...heals. Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 15:29, Diego Zuccato <diego.zuccato at unibo.it> wrote: In Debian stopping glusterd does not stop brick processes: to stop everything (and free the memory) I have to systemctl stop glusterd → killall glusterfs{,d} → killall glfsheal → systemctl start glusterd [this behaviour hangs a simple reboot of a machine running glusterd... not nice] For now I just restarted glusterd w/o killing the bricks: root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart glusterd ; ps aux|grep glfsheal|wc -l 618 618 No chan...
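
A plain rendering of the stop/clean-up/restart sequence quoted above, assuming a systemd-based Debian host (glusterd alone does not stop the other daemons, hence the killall steps):

    systemctl stop glusterd      # stops only the management daemon
    killall glusterfsd           # brick processes
    killall glusterfs            # self-heal daemon and other client-side processes
    killall glfsheal             # leftover heal-info helpers
    systemctl start glusterd     # bricks of started volumes are respawned
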
2023 Mar 21
1
How to configure?
I have no clue. Have you checked for errors in the logs? Maybe you might find something useful. Best Regards, Strahil Nikolov On Tue, Mar 21, 2023 at 9:56, Diego Zuccato <diego.zuccato at unibo.it> wrote: Killed glfsheal, after a day there were 218 processes, then they got killed by OOM during the weekend. Now there are no processes active. Trying to run "heal info" reports lots of files quite quickly but does not spawn any glfsheal process. And neither does restarting glusterd. Is there some way to sel...
2023 Mar 16
1
How to configure?
...an you restart glusterd service (first check that it was not modified to kill the bricks)? Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote: OOM is just a matter of time. Today mem use is up to 177G/187 and: # ps aux|grep glfsheal|wc -l 551 (well, one is actually the grep process, so "only" 550 glfsheal processes). I'll take the last 5: root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml root 3267220 0.7 0.0 600292 91964 ?...
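
A small sketch for counting the helpers without catching the grep itself (pgrep never reports its own process):

    pgrep -c -f glfsheal                  # number of glfsheal processes
    ps aux | grep '[g]lfsheal' | wc -l    # grep variant that excludes the grep process from the count
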
2023 Mar 24
1
How to configure?
In glfsheal-Connection.log I see many lines like: [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the volume file [{from server}, {errno=2}, {error=File o directory non esistente}] (Italian locale for "No such file or directory") And *lots* of gfid-mismatch errors in glustershd.log. Co...
2023 Mar 16
1
How to configure?
OOM is just a matter of time. Today mem use is up to 177G/187 and: # ps aux|grep glfsheal|wc -l 551 (well, one is actually the grep process, so "only" 550 glfsheal processes). I'll take the last 5: root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml root 3267220 0.7 0.0 600292 91964 ?...
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile? Best Regards, Strahil Nikolov On Fri, Mar 24, 2023 at 8:07, Diego Zuccato <diego.zuccato at unibo.it> wrote: In glfsheal-Connection.log I see many lines like: [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the volume file [{from server}, {errno=2}, {error=File o directory non esistente}] And *lots* of gfid-mismatch errors in glustershd.log. Co...
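
Since errno=2 is ENOENT ("No such file or directory"), a first check is whether glusterd actually has a volfile to serve and is reachable; a sketch assuming the stock /var/lib/glusterd layout and the volume name used in this thread:

    gluster volume info cluster_data             # does glusterd know the volume at all?
    ls -l /var/lib/glusterd/vols/cluster_data/   # generated volfiles normally live here
    ss -ltnp | grep 24007                        # management port gfapi/glfsheal fetches the volfile from
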
2023 Mar 24
1
How to configure?
...kolov wrote: > Can you check your volume file contents? > Maybe it really can't find (or access) a specific volfile? > > Best Regards, > Strahil Nikolov > > On Fri, Mar 24, 2023 at 8:07, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > In glfsheal-Connection.log I see many lines like: > [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] > [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the > volume file [{from server}, {errno=2}, {error=File o directory non > esistente}] > > And *lots*...
2023 Mar 15
1
How to configure?
If you don't experience any OOM, you can focus on the heals. 284 glfsheal processes seem odd. Can you check the ppid for 2-3 randomly picked? ps -o ppid= <pid> Best Regards, Strahil Nikolov On Wed, Mar 15, 2023 at 9:54, Diego Zuccato <diego.zuccato at unibo.it> wrote: I enabled it yesterday and that greatly reduced memory pressure. Current volume info: -8<--...
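
A sketch that applies Strahil's suggestion to a few randomly picked glfsheal PIDs and also shows what the parent process is:

    for pid in $(pgrep -f glfsheal | head -3); do
        ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
        echo "glfsheal pid=$pid ppid=$ppid parent=$(ps -o args= -p "$ppid")"
    done
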
2023 Apr 23
1
How to configure?
...cess) a specific volfile? > > > > Best Regards, > > Strahil Nikolov > > > > On Fri, Mar 24, 2023 at 8:07, Diego Zuccato > > <diego.zuccato at unibo.it <mailto:diego.zuccato at unibo.it>> wrote: > > In glfsheal-Connection.log I see many lines like: > > [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] > > [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the > > volume file [{from server}, {errno=2}, {error=File o directory non > > es...
2023 Mar 15
1
How to configure?
...on cluster.brick-multiplex: on cluster.daemon-log-level: ERROR -8<-- htop reports that memory usage is up to 143G, there are 602 tasks and 5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on clustor01 and 126G/45 tasks/1574 threads on clustor02. I see quite a lot (284!) of glfsheal processes running on clustor00 (a "gluster v heal cluster_data info summary" has been running on clustor02 since yesterday, still no output). Shouldn't it be just one per brick? Diego On 15/03/2023 08:30, Strahil Nikolov wrote: > Do you use brick multiplexing? > > Best Rega...
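
As far as I understand, glfsheal is started once per "heal info" invocation rather than once per brick, so hundreds of them usually means piled-up heal-info runs; a sketch to confirm the multiplexing setting and look at the process ages (cluster.brick-multiplex is a cluster-wide option, hence 'all'):

    gluster volume get all cluster.brick-multiplex   # should report 'on' as in the info above
    ps -C glusterfsd -o pid,etime,args               # with multiplexing on, roughly one brick process per server
    ps -C glfsheal -o pid,etime,args | head          # long ETIMEs point at stuck heal-info runs
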
2017 Jun 28
0
Gluster volume not mounted
The mount log file of the volume would help in debugging the actual cause. On Tue, Jun 27, 2017 at 6:33 PM, Joel Diaz <mrjoeldiaz at gmail.com> wrote: > Good morning Gluster users, > > I'm very new to the Gluster file system. My apologies if this is not the > correct way to seek assistance. However, I would appreciate some insight > into understanding the issue I have.
2017 Jun 27
2
Gluster volume not mounted
Good morning Gluster users, I'm very new to the Gluster file system. My apologies if this is not the correct way to seek assistance. However, I would appreciate some insight into understanding the issue I have. I have three nodes running two volumes, engine and data. The third node is the arbiter on both volumes. Both volumes were operating fine but one of the volumes, data, no longer
2014 Oct 30
1
Firewall ports with v 3.5.2 grumble time
...What has not been helpful is that there was no mention of port: 2049 for NFS over TCP - which would have been helpful and probably my own mistake as I should have known. To really confuse matters I noticed that the bricks were not syncing anyway, and a look at the logs reveals: /var/log/glusterfs/glfsheal-www.log:[2014-10-30 07:39:48.428286] I [client-handshake.c:1462:client_setvolume_cbk] 0-www-client-1: Connected to 111.222.333.444:49154, attached to remote volume '/srv/hod/lampe-www'. along with other entries that show that I also actually need ports: 49154 and 49155 open. even gluste...
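
An iptables sketch of the ports this thread ends up needing (24007 for glusterd, 2049 for Gluster's NFS over TCP, and brick ports from 49152 upward; the exact brick range depends on how many bricks each server hosts, so the 49152:49160 range here is only an example):

    iptables -A INPUT -p tcp --dport 24007 -j ACCEPT         # glusterd management
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS over TCP
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick ports, one per brick; widen as needed
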
2023 Feb 20
1
Gluster 11.0 upgrade
...<mailto:marcus.pedersen at slu.se>> wrote: > Hi Xavi, > I stopped glusterd and killall glusterd glusterfs glusterfsd > and started glusterd again. > > The only log that is not empty is glusterd.log, I attach the log > from the restart time. The brick log, glustershd.log and glfsheal-gds-common.log are empty. > > These are the errors in the log: > [2023-02-20 07:23:46.235263 +0000] E [MSGID: 106061] [glusterd.c:597:glusterd_crt_georep_folders] 0-glusterd: Dict get failed [{Key=log-group}, {errno=2}, {error=No such file or directory}] > [2023-02-20 07:23:47.359917 +000...
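
A quick sketch for checking which logs actually received output after such a restart and pulling recent error-level lines like the ones quoted (stock /var/log/glusterfs paths assumed):

    ls -ltr /var/log/glusterfs/*.log | tail -5                # most recently written logs last
    grep ' E \[' /var/log/glusterfs/glusterd.log | tail -5    # latest error-level entries
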
2023 Mar 15
1
How to configure?
Do you use brick multiplexing? Best Regards, Strahil Nikolov On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all. Our Gluster 9.6 cluster is showing increasing problems. Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), configured in replica 3
2017 Oct 26
0
not healing one file
...> > Hey Richard, > > > > Could you share the following information please? > > 1. gluster volume info <volname> > > 2. getfattr output of that file from all the bricks > > getfattr -d -e hex -m . <brickpath/filepath> > > 3. glustershd & glfsheal logs > > > > Regards, > > Karthik > > > > On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com > > <mailto:atumball at redhat.com>> wrote: > > > > On a side note, try recently released health report tool, and see if...
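
A filled-in form of the getfattr request above; the brick path is a placeholder, and the command must be run against the file's path on each brick of the replica, not through the mount point:

    getfattr -d -e hex -m . /bricks/brick1/path/to/unhealed/file
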
2023 Feb 20
2
Gluster 11.0 upgrade
...u.se>> wrote: > > Hi Xavi, > > I stopped glusterd and killall glusterd glusterfs glusterfsd > > and started glusterd again. > > > > The only log that is not empty is glusterd.log, I attach the log > > from the restart time. The brick log, glustershd.log and glfsheal-gds-common.log are empty. > > > > These are the errors in the log: > > [2023-02-20 07:23:46.235263 +0000] E [MSGID: 106061] [glusterd.c:597:glusterd_crt_georep_folders] 0-glusterd: Dict get failed [{Key=log-group}, {errno=2}, {error=No such file or directory}] > > [2023-02-20...
2017 Jul 23
2
set owner:group on root of volume
On 07/20/2017 03:13 PM, mabi wrote: > Anyone has an idea? or shall I open a bug for that? This is an interesting problem. A few questions: 1. Is there any chance that one of your applications does a chown on the root? 2. Do you notice any logs related to metadata self-heal on '/' in the gluster logs? 3. Does the ownership of all bricks reset to custom uid/gid after every restart
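
A sketch of how questions 2 and 3 could be checked: compare the ownership of each brick root with the mounted volume root and look for metadata self-heals of '/' in the self-heal daemon log (brick and mount paths are placeholders, and the exact log wording varies by version):

    stat -c '%U:%G %a %n' /srv/brick/myvol                              # on each server's brick root
    stat -c '%U:%G %a %n' /mnt/myvol                                    # on a client, the mounted volume root
    grep -i 'metadata self' /var/log/glusterfs/glustershd.log | tail    # metadata self-heal messages
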