Displaying 20 results from an estimated 400 matches similar to: "Verify limit-objects from clients in Gluster9 ?"
2023 Mar 16
1
How to configure?
OOM is just a matter of time.
Today memory use is up to 177G of 187G and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal processes.)
I'll take the last 5:
root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 0.0 600292 91964 ?
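A quick aside: the count above includes the grep process itself. Two equivalent ways to get an exact count of glfsheal processes (a generic sketch, not part of the original message):
# Exact-match count, with no grep self-match
pgrep -c -x glfsheal
# Same idea with ps; the [g] trick keeps grep from matching its own command line
ps aux | grep '[g]lfsheal' | wc -l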
2023 Mar 16
1
How to configure?
Can you restart glusterd service (first check that it was not modified to kill the bricks)?
Best Regards, Strahil Nikolov
On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote: OOM is just a matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal
2023 Mar 21
1
How to configure?
Theoretically it might help. If possible, try to resolve any pending heals.
Best Regards, Strahil Nikolov
On Thu, Mar 16, 2023 at 15:29, Diego Zuccato <diego.zuccato at unibo.it> wrote: In Debian stopping glusterd does not stop brick processes: to stop
everything (and free the memory) I have to
systemctl stop glusterd
killall glusterfs{,d}
killall glfsheal
systemctl start
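A minimal sketch of that stop/start sequence as a single script (assuming a Debian-style install where stopping glusterd leaves brick and heal processes running):
# Stop the management daemon, then the processes it left behind
systemctl stop glusterd
killall glusterfs glusterfsd   # fuse clients and brick processes (glusterfs{,d})
killall glfsheal               # leftover heal-info helpers
# Restarting glusterd respawns the brick processes for started volumes
systemctl start glusterd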
2023 Mar 21
1
How to configure?
Killed glfsheal; after a day there were 218 processes, then they got
killed by the OOM killer during the weekend. Now there are no glfsheal processes active.
Trying to run "heal info" reports lots of files quite quickly but does
not spawn any glfsheal process, and neither does restarting glusterd.
Is there some way to selectively run glfsheal to fix one brick at a time?
Diego
On 21/03/2023 01:21,
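For context, "heal info" works by spawning the glfsheal helper seen earlier in this page; glfsheal operates per volume, not per brick, but it can be invoked by hand (a sketch based on the command line captured above):
# Run the heal-info helper directly, as the CLI would
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
# The regular CLI front-end for the same information
gluster volume heal cluster_data info summary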
2023 Mar 21
1
How to configure?
I have no clue. Have you checked the logs for errors? Maybe you'll find something useful.
Best Regards, Strahil Nikolov
On Tue, Mar 21, 2023 at 9:56, Diego Zuccato <diego.zuccato at unibo.it> wrote: Killed glfsheal, after a day there were 218 processes, then they got
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports
2023 Mar 24
1
How to configure?
In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
volume file [{from server}, {errno=2}, {error=File o directory non
esistente}] (Italian for "No such file or directory").
And *lots* of gfid-mismatch errors in glustershd.log.
Couldn't find anything that would prevent heal from starting. :(
Diego
On 21/03/2023
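Since errno=2 is ENOENT, a first check is whether glusterd can actually serve the volfile that gfapi asks for; a sketch assuming the default /var/lib/glusterd layout and port:
# Volfiles served by glusterd live under the volume's directory
ls /var/lib/glusterd/vols/cluster_data/*.vol
# Confirm glusterd is listening where gfapi fetches the volfile (default port 24007)
ss -ltnp | grep 24007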
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile?
Best Regards, Strahil Nikolov
On Fri, Mar 24, 2023 at 8:07, Diego Zuccato <diego.zuccato at unibo.it> wrote: In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including
many files with names related to quorum bricks already moved to a
different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol
that should already have been replaced by
cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist).
Is there something I should check inside the volfiles?
Diego
Il
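A quick way to see which of those 285 files still reference the old quorum-brick path, and to compare the generated volfiles across the three servers (a sketch using the path fragment quoted above):
# Volfiles that still mention the old quorum brick path
grep -l 'srv-quorum' /var/lib/glusterd/vols/cluster_data/*.vol
# Checksums of the generated volfiles, to diff against the other two servers
md5sum /var/lib/glusterd/vols/cluster_data/cluster_data.*.vol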
2023 Apr 23
1
How to configure?
After a lot of tests and unsuccessful searching, I decided to start from
scratch: I'm going to ditch the old volume and create a new one.
I have 3 servers with 30 12TB disks each. Since I'm going to start a new
volume, could it be better to group the disks into 10 3-disk (or 6 5-disk)
RAID-0 volumes to reduce the number of bricks? Redundancy would be provided
by replica 2 (still undecided
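For comparison, the brick-count arithmetic behind the question, plus an example create command (a sketch only; it shows the replica-3-arbiter-1 option and reuses brick paths from the existing volume quoted later on this page purely for illustration):
# 30 disks per server, 3 servers:
#   1 disk per brick   -> 30 bricks/server -> 90 bricks total
#   3-disk RAID-0 sets -> 10 bricks/server -> 30 bricks total
#   5-disk RAID-0 sets ->  6 bricks/server -> 18 bricks total
# Example replica-3-arbiter-1 create over one brick per server
gluster volume create newvol replica 3 arbiter 1 \
  clustor00:/srv/bricks/00/d clustor01:/srv/bricks/00/d clustor02:/srv/bricks/00/q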
2023 Mar 15
1
How to configure?
If you don't experience any OOM, you can focus on the heals.
284 glfsheal processes seems odd.
Can you check the ppid for 2-3 randomly picked ones? ps -o ppid= <pid>
Best Regards, Strahil Nikolov
On Wed, Mar 15, 2023 at 9:54, Diego Zuccato <diego.zuccato at unibo.it> wrote: I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume
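A sketch of the suggested check, picking three glfsheal PIDs at random and printing each one's parent:
# Show pid, ppid and command line for 3 randomly chosen glfsheal processes
for pid in $(pgrep -x glfsheal | shuf -n 3); do
    ps -o pid=,ppid=,args= -p "$pid"
done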
2023 Oct 24
0
Gluster heal script?
Hello all.
Is there a script to help heal files that remain in heal info even
after a pass with heal full?
I recently (~August) restarted our Gluster cluster from scratch in
"replica 3 arbiter 1", but I have already found some files that are not
healing and are inaccessible (socket not connected) from the fuse mount.
volume info:
-8<--
Volume Name: cluster_data
Type:
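There is no stock script for this; a common workaround is to take the paths listed by "heal info" and stat them through the fuse mount, which re-triggers a heal attempt. A rough sketch (the mount point is an assumption, and entries shown only as gfid:<uuid> need manual resolution first):
VOL=cluster_data
MNT=/mnt/cluster_data   # assumed fuse mount point
# Re-access every path still listed by heal info from a client
gluster volume heal "$VOL" info | grep '^/' | sort -u | while read -r f; do
    stat "$MNT$f" >/dev/null 2>&1 || echo "still not accessible: $f"
done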
2023 Mar 15
1
How to configure?
I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q
2023 Jun 05
0
EOL gluster9.x
When will gluster 9.x reach EOL?
Which will be the best version to update to?
2024 Dec 27
1
ctdb + gluster9 = not working
Hi, I tried to make a Samba cluster (CTDB) with Gluster 9 (not using the VFS
module), but I get errors.
[root at samba1 ctdb]# rpm -q samba samba-ctdb
samba-4.19.9-alt3.x86_64
samba-ctdb-4.19.9-alt3.x86_64
[root at samba1 ctdb]# grep -v '^#' /etc/ctdb/ctdb.conf
[cluster]
recovery lock = "/mnt/gluster/ctdb/.ctdb.lock"
[root at samba1 ctdb]# cat /etc/ctdb/public_addresses
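For the recovery lock to work, every ctdb node needs the Gluster volume fuse-mounted at the same path before ctdb starts; a sketch of the pieces involved (the volume name gv_ctdb is an assumption):
# On every node: fuse-mount the volume that will hold the lock
mount -t glusterfs samba1:/gv_ctdb /mnt/gluster
# The directory for the lock path in ctdb.conf must exist on that shared mount
mkdir -p /mnt/gluster/ctdb
# After starting ctdb on all nodes, they should all report OK
ctdb status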
2023 Mar 15
1
How to configure?
Do you use brick multiplexing ?
Best Regards, Strahil Nikolov
On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all.
Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
configured in replica 3
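The setting being asked about is a cluster-wide option; it can be checked and, if desired, turned on to cut the number of brick processes and their memory footprint (a sketch, not a recommendation from the thread):
# Check whether brick multiplexing is enabled
gluster volume get all cluster.brick-multiplex
# Enable it cluster-wide; it takes effect as brick processes are restarted
gluster volume set all cluster.brick-multiplex on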
1999 May 03
0
compilation of R-0.63.3 on alpha (PR#183)
Hi !
I have problems compiling R successfully on DEC-UNIX 4.0E.
I have applied the recommended config.site, which I enclose.
As can be seen from the compilation log there are linking errors...
I did a 'make check' which fails for the R
1999 May 03
1
problems compiling R-0.63.3 on alpha
Hi again !
Thanks for the info on updating the config.site file which I
have done. I have also added -lm in the Makeconf manually
because this is needed explicitly for DEC cc.
However, there are still a few problems when linking some
of the files as you can see from the enclosed log.
Ciao,
Andreas
-------------------------------------------------------
R-0.63.3>make
make[1]: Entering
2013 Sep 25
3
Dovecot extremely slow!
Please help,
Dovecot has been running extremely slowly for the last couple of weeks and it
seems to be getting worse (or my patience is running short).
I attach the 10-master configuration and the log file after running
strace according to: http://wiki.dovecot.org/Debugging/ProcessTracing
I can click on an email and wait for a minute or more before getting a
"connection dropped" error or no error at all. I
2011 Sep 01
1
convert to grid file
Hi
I computed the probability in each cell.
I have:
[99883,] -0.0062412957690
[99884,] -0.0062412957690
[99885,] -0.0062412957690
[99886,] -0.0062412957690
[99887,] -0.0062412957690
[99888,] -0.0062412957690
[99889,] 0.9909126638948
[99890,] 0.9909126638948
[99891,] 0.9909126638948
[99892,] 0.9909126638948
[99893,] 0.9909126638948
[99894,] 0.9909126638948
[99895,]
2013 Apr 10
4
Formatting a USB Drive
Hi All,
I have a Drobo connected to a CentOS 6.4 box. The box sees it as /dev/sdg.
I want to format it ext3 (as they don't support ext4) but when I try I get:
# fdisk -u /dev/sdg
WARNING: GPT (GUID Partition Table) detected on '/dev/sdg'! The util fdisk
doesn't support GPT. Use GNU Parted.
WARNING: The size of this disk is 17.6 TB (17592186044416 bytes).
DOS partition table
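Since fdisk on CentOS 6 cannot edit GPT labels, the usual route is parted; a sketch for the device named in the post (note that ext3 tops out at 16 TiB, so a partition spanning this whole disk sits right at the limit and ext4/XFS may be the safer choice):
# Recreate the GPT label and a single partition (destroys existing data)
parted -s /dev/sdg mklabel gpt
parted -s -a optimal /dev/sdg mkpart primary 0% 100%
# Format the new partition
mkfs.ext3 /dev/sdg1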