Displaying 14 results from an estimated 14 matches for "clustor00".
2023 Mar 16
1
How to configure?
...mary --xml
root 3269492 1.6 0.0 600292 91248 ? Sl 07:10 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3270354 4.4 0.0 600292 93260 ? Sl 07:15 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
-8<--
root at str957-clustor00:~# ps -o ppid= 3266352
3266345
root at str957-clustor00:~# ps -o ppid= 3267220
3267213
root at str957-clustor00:~# ps -o ppid= 3268076
3268069
root at str957-clustor00:~# ps -o ppid= 3269492
3269485
root at str957-clustor00:~# ps -o ppid= 3270354
3270347
root at str957-clustor00:~# ps aux|grep 3266...
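A minimal sketch of the same parent-PID check done in one pass, using only standard procps options; the loop is illustrative and not from the original message:
# List every glfsheal instance with its parent PID and start time
ps -o pid,ppid,lstart,cmd -C glfsheal
# For each one, print the command line of its parent (POSIX-ish shell assumed)
for p in $(pgrep glfsheal); do
    printf 'glfsheal %s  parent: ' "$p"
    ps -o cmd= -p "$(ps -o ppid= -p "$p")"
done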
2023 Oct 24
0
Gluster heal script?
...healing and inaccessible (socket not connected) from the fuse mount.
volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: 343f7ca5-2fd3-4897-bc45-e1795661f87f
Status: Started
Snapshot Count: 0
Number of Bricks: 9 x (2 + 1) = 27
Transport-type: tcp
Bricks:
Brick1: clustor00:/gb/00/d
Brick2: clustor01:/gb/00/d
Brick3: clustor02:/gb/04/a00 (arbiter)
Brick4: clustor02:/gb/00/d
Brick5: clustor00:/gb/01/d
Brick6: clustor01:/gb/00/a (arbiter)
Brick7: clustor01:/gb/01/d
Brick8: clustor02:/gb/01/d
Brick9: clustor00:/gb/00/a (arbiter)
Brick10: clustor00:/gb/02/d
Brick11: clust...
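For readers hitting the same symptom, a minimal sketch of the usual checks, assuming the Gluster 9.x CLI and the volume name shown above:
# Per-brick summary of entries still pending heal
gluster volume heal cluster_data info summary
# Full list of pending entries (can be long on a 9 x (2 + 1) volume)
gluster volume heal cluster_data info
# "socket not connected" on the fuse mount usually means a brick is
# missing from the client list
gluster volume status cluster_data clients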
2023 Mar 16
1
How to configure?
...fo-summary --xml
root 3269492 1.6 0.0 600292 91248 ? Sl 07:10 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3270354 4.4 0.0 600292 93260 ? Sl 07:15 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
-8<--
root at str957-clustor00:~# ps -o ppid= 3266352
3266345
root at str957-clustor00:~# ps -o ppid= 3267220
3267213
root at str957-clustor00:~# ps -o ppid= 3268076
3268069
root at str957-clustor00:~# ps -o ppid= 3269492
3269485
root at str957-clustor00:~# ps -o ppid= 3270354
3270347
root at str957-clustor00:~# ps aux|grep 3266...
2023 Mar 21
1
How to configure?
...stop
everything (and free the memory) I have to
systemctl stop glusterd
  killall glusterfs{,d}
  killall glfsheal
  systemctl start glusterd
[this behaviour hangs a simple reboot of a machine running glusterd...
not nice]
For now I just restarted glusterd w/o killing the bricks:
root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart
glusterd ; ps aux|grep glfsheal|wc -l
618
618
No change in either glfsheal processes or in free memory :(
Should I "killall glfsheal" before OOM kicks in?
Diego
On 16/03/2023 12:37, Strahil Nikolov wrote:
> Can you restart gl...
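Not from the thread, but a hypothetical cleanup along the lines being discussed; the 3600-second age threshold is an arbitrary example, and this only removes the glfsheal query helpers, not brick or shd processes:
# How many glfsheal processes are lingering?
pgrep -c glfsheal
# Kill only instances older than an hour, leaving any heal query that is
# actually still running alone (etimes = elapsed seconds, procps-ng 3.3+)
ps -o pid=,etimes= -C glfsheal | awk '$2 > 3600 {print $1}' | xargs -r kill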
2023 Mar 21
1
How to configure?
...>   killall glusterfs{,d}
>   killall glfsheal
>   systemctl start glusterd
> [this behaviour hangs a simple reboot of a machine running glusterd...
> not nice]
>
> For now I just restarted glusterd w/o killing the bricks:
>
> root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart
> glusterd ; ps aux|grep glfsheal|wc -l
> 618
> 618
>
> No change in either glfsheal processes or in free memory :(
> Should I "killall glfsheal" before OOM kicks in?
>
> Diego
>
>...
2023 Mar 15
1
How to configure?
...abled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q (arbiter)
[...]
Brick133: clustor01:/srv/bricks/29/d
Brick134: clustor02:/srv/bricks/29/d
Brick135: clustor00:/srv/bricks/14/q (arbiter)
Options Reconfigured:
performance.quick-read: off
cluster.entry-self-heal:...
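As a side note, a minimal sketch of how options like the ones under "Options Reconfigured" are inspected and changed (plain gluster CLI, volume name as above):
# Show the current value of the options mentioned above
gluster volume get cluster_data performance.quick-read
gluster volume get cluster_data cluster.entry-self-heal
# Setting a value is what makes it appear under "Options Reconfigured"
gluster volume set cluster_data performance.quick-read off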
2023 Mar 21
1
How to configure?
...d
>        killall glusterfs{,d}
>        killall glfsheal
>        systemctl start glusterd
>    [this behaviour hangs a simple reboot of a machine running glusterd...
>    not nice]
>
>    For now I just restarted glusterd w/o killing the bricks:
>
>    root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart
>    glusterd ; ps aux|grep glfsheal|wc -l
>    618
>    618
>
>    No change in either glfsheal processes or in free memory :(
>    Should I "killall glfsheal" before OOM kicks in?
>
>    Diego
>
>    Il...
2023 Mar 24
1
How to configure?
...>        systemctl start glusterd
> >    [this behaviour hangs a simple reboot of a machine running
> glusterd...
> >    not nice]
> >
> >    For now I just restarted glusterd w/o killing the bricks:
> >
> >    root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ;
> systemctl restart
> >    glusterd ; ps aux|grep glfsheal|wc -l
> >    618
> >    618
> >
> >    No change in either glfsheal processes or in free memory :(
> >    Should I "killall glfs...
2023 Mar 24
1
How to configure?
...>        systemctl start glusterd
>      >    [this behaviour hangs a simple reboot of a machine running
>    glusterd...
>      >    not nice]
>      >
>      >    For now I just restarted glusterd w/o killing the bricks:
>      >
>      >    root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ;
>    systemctl restart
>      >    glusterd ; ps aux|grep glfsheal|wc -l
>      >    618
>      >    618
>      >
>      >    No change in either glfsheal processes or in free memory :(
>      >    Should I "killall glfsh...
2023 Mar 24
1
How to configure?
...behaviour hangs a simple reboot of a machine running
> >    glusterd...
> >      >    not nice]
> >      >
> >      >    For now I just restarted glusterd w/o killing the bricks:
> >      >
> >      >    root at str957-clustor00:~# ps aux|grep glfsheal|wc -l ;
> >    systemctl restart
> >      >    glusterd ; ps aux|grep glfsheal|wc -l
> >      >    618
> >      >    618
> >      >
> >      >    No change in either glfsheal processes or in f...
2023 Mar 15
1
How to configure?
...abled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q (arbiter)
[...]
Brick133: clustor01:/srv/bricks/29/d
Brick134: clustor02:/srv/bricks/29/d
Brick135: clustor00:/srv/bricks/14/q (arbiter)
Options Reconfigured:
performance.quick-read: off
cluster.entry-self-heal:...
2023 Apr 23
1
How to configure?
...disks each. Since I'm going to start a new
volume, would it be better to group the disks into 10 3-disk (or 6 5-disk)
RAID-0 volumes to reduce the number of bricks? Redundancy would be given
by replica 2 (still undecided about arbiter vs thin-arbiter...).
Current configuration is:
root at str957-clustor00:~# gluster v info cluster_data
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/0...
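Purely to illustrate the grouping being weighed, a sketch of a create command with one data brick per 3-disk RAID-0 set; the /srv/raid0_* paths are hypothetical, and this shows the classic replica 3 arbiter 1 form rather than thin-arbiter:
# Hypothetical layout: one data brick per RAID-0 set on two nodes, a small
# arbiter brick on the third (paths invented for illustration)
gluster volume create cluster_data replica 3 arbiter 1 \
  clustor00:/srv/raid0_00/d clustor01:/srv/raid0_00/d clustor02:/srv/raid0_00/q \
  clustor00:/srv/raid0_01/d clustor01:/srv/raid0_01/d clustor02:/srv/raid0_01/q
# ...one triplet per RAID-0 set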
2023 Mar 15
1
How to configure?
Do you use brick multiplexing?
Best Regards, Strahil Nikolov
On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all.
Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
configured in replica 3
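For context on the question, checking and enabling brick multiplexing is a one-liner each (cluster-wide option, assuming a reasonably recent Gluster CLI):
# Is brick multiplexing on? (global option, applies to all volumes)
gluster volume get all cluster.brick-multiplex
# Enable it so bricks share glusterfsd processes; mainly relevant with
# many bricks per node, as in this cluster
gluster volume set all cluster.brick-multiplex on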
2023 Nov 06
1
Verify limit-objects from clients in Gluster9 ?
...-----------------------------------------------------------------------------------
/astro          20.0TB      80%(16.0TB)    18.8TB    1.2TB    Yes    No
# df /mnt/scratch/astro
Filesystem 1K-blocks Used Available Use% Mounted on
clustor00:cluster_data 21474836480 20169918036 1304918444 94% /mnt/scratch
For inodes, instead:
# gluster v quota cluster_data list-objects
Path    Hard-limit    Soft-limit    Files    Dirs    Available    Soft-limit exceeded?    Hard-limit exceeded?
--------------------------...
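A small sketch of the client-side check being asked about; it assumes features.quota-deem-statfs is what already makes the size limit visible in the df output above, and whether the object limit shows up the same way in the inode columns is exactly the open question:
# Size limit is already reflected by df on the mount point (quota-deem-statfs)
df /mnt/scratch/astro
# If the object limit is exported the same way, the inode columns should
# track Files+Dirs against the configured limit-objects
df -i /mnt/scratch/astro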