Displaying 14 results from an estimated 14 matches for "cluster_data".
2023 Mar 16
1
How to configure?
...matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal processes.)
I'll take the last 5:
root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 0.0 600292 91964 ? Sl 07:00 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3268076 1.0 0.0 600160 88216 ? Sl 07:05 0:08
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3269492...
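(Aside: counting with "ps aux|grep glfsheal|wc -l" includes the grep process itself, hence the off-by-one the poster notes. A minimal sketch of a self-excluding count, assuming procps-ng's pgrep is available:
# pgrep -cx glfsheal        # -x matches the exact process name, -c prints only the count
This would print 550 directly, with no grep artifact to subtract.)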
2023 Mar 16
1
How to configure?
...a matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal processes.)
I'll take the last 5:
root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 0.0 600292 91964 ? Sl 07:00 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3268076 1.0 0.0 600160 88216 ? Sl 07:05 0:08
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3269492 1.6...
2023 Mar 21
1
How to configure?
...# ps aux|grep glfsheal|wc -l
>    551
>
>    (well, one is actually the grep process, so "only" 550 glfsheal
>    processes.)
>
>    I'll take the last 5:
>    root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
>    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
>    root 3267220 0.7 0.0 600292 91964 ? Sl 07:00 0:07
>    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
>    root 3268076 1.0 0.0 600160 88216 ? Sl 07:05 0:08
>    /usr/libexec/glusterfs/glfsheal cluster_data info-summar...
2023 Mar 21
1
How to configure?
...> >    (well, one is actually the grep process, so "only" 550 glfsheal
> >    processes.)
> >
> >    I'll take the last 5:
> >    root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
> >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
> >    root 3267220 0.7 0.0 600292 91964 ? Sl 07:00 0:07
> >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
> >    root 3268076 1.0 0.0 600160 88216 ? Sl 07:05 0:08
> >    /usr/libexec/glu...
2023 Mar 15
1
How to configure?
...n you check the ppid for 2-3 randomly picked? ps -o ppid= <pid>
Best Regards, Strahil Nikolov
On Wed, Mar 15, 2023 at 9:54, Diego Zuccato <diego.zuccato at unibo.it> wrote: I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q (arbiter)
[...]
Bri...
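(Aside: Strahil's "ps -o ppid= <pid>" check can be run over a few glfsheal PIDs at once; a minimal sketch using standard procps tools:
# for pid in $(pgrep -x glfsheal | head -n 3); do ps -o pid=,ppid=,etime= -p "$pid"; done
PPIDs of 1 would mean the helpers were orphaned to init; a live parent PID would point at whatever keeps spawning them.)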
2023 Mar 21
1
How to configure?
...>    >    (well, one is actually the grep process, so "only" 550 glfsheal
>    >    processes.)
>    >
>    >    I'll take the last 5:
>    >    root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
>    >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
>    >    root 3267220 0.7 0.0 600292 91964 ? Sl 07:00 0:07
>    >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
>    >    root 3268076 1.0 0.0 600160 88216 ? Sl 07:05 0:08
>    >    /usr/libexec/glu...
2023 Mar 24
1
How to configure?
...50
> glfsheal
> >    >    processes.
> >    >
> >    >    I'll take the last 5:
> >    >    root 3266352 0.5 0.0 600292 93044 ? Sl
> 06:55 0:07
> >    >    /usr/libexec/glusterfs/glfsheal cluster_data
> info-summary --xml
> >    >    root 3267220 0.7 0.0 600292 91964 ? Sl
> 07:00 0:07
> >    >    /usr/libexec/glusterfs/glfsheal cluster_data
> info-summary --xml
> >    >    root 3268076 1.0 0.0 600160 88216 ?...
2023 Mar 24
1
How to configure?
...550
>    glfsheal
>    >    >    processes.
>    >    >
>    >    >    I'll take the last 5:
>    >    >    root 3266352 0.5 0.0 600292 93044 ? Sl
>    06:55 0:07
>    >    >    /usr/libexec/glusterfs/glfsheal cluster_data
>    info-summary --xml
>    >    >    root 3267220 0.7 0.0 600292 91964 ? Sl
>    07:00 0:07
>    >    >    /usr/libexec/glusterfs/glfsheal cluster_data
>    info-summary --xml
>    >    >    root 3268076 1.0 0.0 600160 88216 ?...
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including
many files with names related to quorum bricks already moved to a
different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol
that should already have been replaced by
cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist).
Is there something I should...
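(Aside: a quick way to enumerate the volfiles in question, using only the path and naming pattern quoted above:
# ls /var/lib/glusterd/vols/cluster_data | wc -l        # should show the 285 entries mentioned
# ls /var/lib/glusterd/vols/cluster_data | grep srv-quorum
The second command lists any leftovers still naming the old quorum-brick path.)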
2023 Nov 06
1
Verify limit-objects from clients in Gluster9 ?
Hello all.
Is there a way to check inode limit from clients?
df -i /path/to/dir
seems to report values for all the volume, not just the dir.
For space it works as expected:
# gluster v quota cluster_data list
Path                 Hard-limit  Soft-limit  Used  Available  Soft-limit exceeded?  Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/astro...
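(Aside: space quotas and object/inode quotas have separate listings in the gluster CLI; if the syntax is remembered correctly, the inode-limit counterpart of the command above is:
# gluster volume quota cluster_data list-objects /astro
That reports limit-objects usage server-side. Whether it can ever surface through df -i on clients is a separate question, since features.quota-deem-statfs, as far as I know, only translates the space figures.)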
2023 Apr 23
1
How to configure?
...oing to start a new
volume, could it be better to group disks in 10 3-disk (or 6 5-disk)
RAID-0 volumes to reduce the number of bricks? Redundancy would be given
by replica 2 (still undecided about arbiter vs thin-arbiter...).
Current configuration is:
root at str957-clustor00:~# gluster v info cluster_data
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bric...
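(Aside: the "replica 2 + arbiter" option being weighed is spelled "replica 3 arbiter 1" in the CLI. A sketch with hypothetical RAID-0-backed brick paths and volume name, not the poster's actual layout:
# gluster volume create newvol replica 3 arbiter 1 \
    clustor00:/srv/raid0/00/d clustor01:/srv/raid0/00/d clustor02:/srv/raid0/00/q
Grouping disks into RAID-0 sets cuts the brick count, and with it the per-brick processes and heal crawls, at the cost of a larger failure domain per set.)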
2023 Mar 15
1
How to configure?
I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q (arbiter)
[...]
Bri...
2023 Mar 15
1
How to configure?
Do you use brick multiplexing ?
Best Regards, Strahil Nikolov
On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all.
Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
configured in replica 3
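(Aside: brick multiplexing folds all bricks of a node into a single glusterfsd process; it is a cluster-wide option:
# gluster volume set all cluster.brick-multiplex on
With 135 bricks over 3 servers, that would collapse roughly 45 brick processes per node into one.)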
2023 Oct 24
0
Gluster heal script?
...in heal info even
after a pass with heal full?
I recently (~august) restarted from scratch our Gluster cluster in
"replica 3 arbiter 1" but I already found some files that are not
healing and inaccessible (socket not connected) from the fuse mount.
volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: 343f7ca5-2fd3-4897-bc45-e1795661f87f
Status: Started
Snapshot Count: 0
Number of Bricks: 9 x (2 + 1) = 27
Transport-type: tcp
Bricks:
Brick1: clustor00:/gb/00/d
Brick2: clustor01:/gb/00/d
Brick3: clustor02:/gb/04/a00 (arbiter)
Brick4: clustor02:/gb/00/d
Brick5...
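(Aside: the commands referenced above, in standard gluster CLI form:
# gluster volume heal cluster_data info        # list entries still pending heal
# gluster volume heal cluster_data full        # trigger a full crawl over all bricks
The glfsheal helper processes discussed in the March thread are what "heal info" spawns under the hood.)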