search for: str957

Displaying 10 results from an estimated 10 matches for "str957".

2023 Mar 16
1
How to configure?
...nfo-summary --xml root 3269492 1.6 0.0 600292 91248 ? Sl 07:10 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml root 3270354 4.4 0.0 600292 93260 ? Sl 07:15 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml -8<-- root at str957-clustor00:~# ps -o ppid= 3266352 3266345 root at str957-clustor00:~# ps -o ppid= 3267220 3267213 root at str957-clustor00:~# ps -o ppid= 3268076 3268069 root at str957-clustor00:~# ps -o ppid= 3269492 3269485 root at str957-clustor00:~# ps -o ppid= 3270354 3270347 root at str957-clustor00:~# ps aux...
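For context, the check quoted above boils down to listing the glfsheal helpers for the volume and printing each one's parent PID. A minimal sketch, assuming only the volume name cluster_data from the excerpt and standard pgrep/ps usage:
-8<--
# count the glfsheal helpers spawned for the volume
pgrep -fc 'glfsheal cluster_data'
# print "PID PPID" for each helper, to see which parent keeps spawning them
for pid in $(pgrep -f 'glfsheal cluster_data'); do
    echo "$pid $(ps -o ppid= -p "$pid")"
done
-8<--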
2023 Mar 16
1
How to configure?
...data info-summary --xml root 3269492 1.6 0.0 600292 91248 ? Sl 07:10 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml root 3270354 4.4 0.0 600292 93260 ? Sl 07:15 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml -8<-- root at str957-clustor00:~# ps -o ppid= 3266352 3266345 root at str957-clustor00:~# ps -o ppid= 3267220 3267213 root at str957-clustor00:~# ps -o ppid= 3268076 3268069 root at str957-clustor00:~# ps -o ppid= 3269492 3269485 root at str957-clustor00:~# ps -o ppid= 3270354 3270347 root at str957-clustor00:~# ps aux...
2023 Mar 21
1
How to configure?
...es: to stop everything (and free the memory) I have to systemctl stop glusterd; killall glusterfs{,d}; killall glfsheal; systemctl start glusterd [this behaviour hangs a simple reboot of a machine running glusterd... not nice] For now I just restarted glusterd w/o killing the bricks: root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart glusterd ; ps aux|grep glfsheal|wc -l 618 618 No change in either glfsheal processes or free memory :( Should I "killall glfsheal" before OOM kicks in? Diego On 16/03/2023 12:37, Strahil Nikolov wrote: > Can you...
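As a hedged illustration of the workaround discussed here (restart only glusterd, then reap the leftover glfsheal helpers without touching the brick processes), the sequence could look like this; the [g]lfsheal pattern just keeps grep from counting itself:
-8<--
# count glfsheal helpers before and after restarting only the management daemon
ps aux | grep -c '[g]lfsheal'
systemctl restart glusterd
ps aux | grep -c '[g]lfsheal'
# if the count does not drop, reap the heal helpers explicitly;
# glusterfsd (the bricks) keeps running
killall glfsheal
-8<--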
2023 Mar 21
1
How to configure?
...terd > killall glusterfs{,d} > killall glfsheal > systemctl start glusterd > [this behaviour hangs a simple reboot of a machine running glusterd... > not nice] > > For now I just restarted glusterd w/o killing the bricks: > > root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart > glusterd ; ps aux|grep glfsheal|wc -l > 618 > 618 > > No change in either glfsheal processes or free memory :( > Should I "killall glfsheal" before OOM kicks in? > > Diego...
2023 Mar 21
1
How to configure?
...glusterd > killall glusterfs{,d} > killall glfsheal > systemctl start glusterd > [this behaviour hangs a simple reboot of a machine running glusterd... > not nice] > > For now I just restarted glusterd w/o killing the bricks: > > root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart > glusterd ; ps aux|grep glfsheal|wc -l > 618 > 618 > > No change in either glfsheal processes or free memory :( > Should I "killall glfsheal" before OOM kicks in? > > Diego >...
2023 Mar 24
1
How to configure?
...> systemctl start glusterd > > [this behaviour hangs a simple reboot of a machine running > glusterd... > > not nice] > > > > For now I just restarted glusterd w/o killing the bricks: > > > > root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; > systemctl restart > > glusterd ; ps aux|grep glfsheal|wc -l > > 618 > > 618 > > > > No change in either glfsheal processes or free memory :( > > Should I "ki...
2023 Mar 15
1
How to configure?
If you don't experience any OOM, you can focus on the heals. 284 processes of glfsheal seems odd. Can you check the ppid for 2-3 randomly picked? ps -o ppid= <pid> Best Regards, Strahil Nikolov On Wed, Mar 15, 2023 at 9:54, Diego Zuccato <diego.zuccato at unibo.it> wrote: I enabled it yesterday and that greatly reduced memory pressure. Current volume info: -8<-- Volume
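A small sketch of that spot check, picking three glfsheal PIDs at random and resolving each parent; shuf and the output format are just illustrative, the ps -o ppid= call is the one suggested above:
-8<--
# pick 3 random glfsheal PIDs and show each parent's PID and command name
for pid in $(pgrep glfsheal | shuf -n 3); do
    ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
    echo "glfsheal $pid <- parent $ppid ($(ps -o comm= -p "$ppid"))"
done
-8<--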
2023 Mar 24
1
How to configure?
...> systemctl start glusterd > > [this behaviour hangs a simple reboot of a machine running > glusterd... > > not nice] > > > > For now I just restarted glusterd w/o killing the bricks: > > > > root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; > systemctl restart > > glusterd ; ps aux|grep glfsheal|wc -l > > 618 > > 618 > > > > No change in either glfsheal processes or free memory :( > > Should I "kil...
2023 Mar 24
1
How to configure?
...[this behaviour hangs a simple reboot of a machine running > > glusterd... > > > not nice] > > > > > > For now I just restarted glusterd w/o killing the bricks: > > > > > > root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; > > systemctl restart > > > glusterd ; ps aux|grep glfsheal|wc -l > > > 618 > > > 618 > > > > > > No change in either glfsheal processe...
2023 Apr 23
1
How to configure?
...30 12TB disks each. Since I'm going to start a new volume, could it be better to group disks in 10 3-disk (or 6 5-disk) RAID-0 volumes to reduce the number of bricks? Redundancy would be given by replica 2 (still undecided about arbiter vs thin-arbiter...). Current configuration is: root at str957-clustor00:~# gluster v info cluster_data Volume Name: cluster_data Type: Distributed-Replicate Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a Status: Started Snapshot Count: 0 Number of Bricks: 45 x (2 + 1) = 135 Transport-type: tcp Bricks: Brick1: clustor00:/srv/bricks/00/d Brick2: clustor01:/sr...
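For reference, the layout being weighed here (fewer, larger RAID-0-backed bricks, replica 2 plus a full arbiter) would be created roughly along these lines; the volume name, the arbiter host and all brick paths are hypothetical placeholders, only the replica 3 arbiter 1 form is standard Gluster syntax:
-8<--
# one data brick per RAID-0 set on each of two servers, plus an arbiter brick
# (metadata only) per subvolume on a third host
gluster volume create cluster_data_new replica 3 arbiter 1 \
    clustor00:/srv/raid0-00/brick clustor01:/srv/raid0-00/brick arbiter-host:/srv/arbiters/00/brick \
    clustor00:/srv/raid0-01/brick clustor01:/srv/raid0-01/brick arbiter-host:/srv/arbiters/01/brick
gluster volume start cluster_data_new
-8<--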