search for: zuccato

Displaying 20 results from an estimated 51 matches for "zuccato".

2023 Mar 24
1
How to configure?
...that would prevent heal from starting. :( Diego On 21/03/2023 20:39, Strahil Nikolov wrote: > I have no clue. Have you checked for errors in the logs? Maybe you > might find something useful. > > Best Regards, > Strahil Nikolov > > On Tue, Mar 21, 2023 at 9:56, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > Killed glfsheal; after a day there were 218 processes, then they got > killed by OOM during the weekend. Now there are no processes active. > Trying to run "heal info" reports lots of files quite quickly but does ...
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile? Best Regards, Strahil Nikolov On Fri, Mar 24, 2023 at 8:07, Diego Zuccato <diego.zuccato at unibo.it> wrote: In glfsheal-Connection.log I see many lines like: [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the volume file [{from server}, {errno=2}, {error=File o directory non esistente}] And...
2023 Mar 21
1
How to configure?
I have no clue. Have you checked for errors in the logs? Maybe you might find something useful. Best Regards, Strahil Nikolov On Tue, Mar 21, 2023 at 9:56, Diego Zuccato <diego.zuccato at unibo.it> wrote: Killed glfsheal; after a day there were 218 processes, then they got killed by OOM during the weekend. Now there are no processes active. Trying to run "heal info" reports lots of files quite quickly but does not spawn any glfsheal process. And...
2023 Mar 24
1
How to configure?
...uld check inside the volfiles? Diego On 24/03/2023 13:05, Strahil Nikolov wrote: > Can you check your volume file contents? > Maybe it really can't find (or access) a specific volfile? > > Best Regards, > Strahil Nikolov > > On Fri, Mar 24, 2023 at 8:07, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > In glfsheal-Connection.log I see many lines like: > [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] > [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the > volume file [{from server}, {errno=2}, {err...
2023 Mar 21
1
How to configure?
...ay to selectively run glfsheal to fix one brick at a time? Diego On 21/03/2023 01:21, Strahil Nikolov wrote: > Theoretically it might help. > If possible, try to resolve any pending heals. > > Best Regards, > Strahil Nikolov > > On Thu, Mar 16, 2023 at 15:29, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > On Debian, stopping glusterd does not stop brick processes: to stop > everything (and free the memory) I have to > systemctl stop glusterd > killall glusterfs{,d} > killall glfsheal > systemctl sta...
2023 Mar 21
1
How to configure?
Theoretically it might help. If possible, try to resolve any pending heals. Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 15:29, Diego Zuccato <diego.zuccato at unibo.it> wrote: On Debian, stopping glusterd does not stop brick processes: to stop everything (and free the memory) I have to systemctl stop glusterd; killall glusterfs{,d}; killall glfsheal; systemctl start glusterd [this behaviour hangs a simple reboot of a machine r...
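For reference, the stop/start cycle quoted in this thread, written out as a sketch (service and process names are exactly those mentioned above; check your own unit names before running anything like this):
-8<--
#!/bin/bash
# Stop the management daemon first, then the brick and heal processes it leaves behind.
systemctl stop glusterd
killall glusterfs glusterfsd   # brick and client processes (killall glusterfs{,d} expands to this)
killall glfsheal               # any lingering self-heal info crawlers
systemctl start glusterd       # bricks are respawned on start
-8<--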
2023 Apr 23
1
How to configure?
...192GB RAM (that got exhausted quite often, before enabling brick-multiplex). Diego On 24/03/2023 19:21, Strahil Nikolov wrote: > Try finding if any of them is missing on one of the systems. > > Best Regards, > Strahil Nikolov > > On Fri, Mar 24, 2023 at 15:59, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > There are 285 files in /var/lib/glusterd/vols/cluster_data ... > including > many files with names related to quorum bricks already moved to a > different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol ...
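One way to follow the "find if any of them is missing" suggestion is to diff the volfile listings across peers. A sketch, assuming passwordless SSH between the nodes; clustor01 is a placeholder hostname (only clustor02 actually appears in the thread):
-8<--
#!/bin/bash
# Collect a sorted listing of the volume's volfiles from each peer,
# then diff them to spot files present on one node but missing on another.
VOLDIR=/var/lib/glusterd/vols/cluster_data
for host in clustor01 clustor02; do
    ssh "$host" "ls $VOLDIR | sort" > "/tmp/volfiles.$host"
done
diff /tmp/volfiles.clustor01 /tmp/volfiles.clustor02
-8<--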
2012 Jul 30
1
'x' bit always set?
...pam_mkhomedir, but more versatile root preexec = /opt/checklogon '%S' '%H' '%u' '%P' '%D' '%U' -8<-- The underlying fs supports acls and xattrs: /dev/sdb1 on /srv/shared type xfs (rw,acl,user_xattr,quota) # getfacl /srv/shared/PERSONALE/diego.zuccato/ getfacl: Removing leading '/' from absolute path names # file: srv/shared/PERSONALE/diego.zuccato/ # owner: diego.zuccato # group: 100013 # flags: s-- user::rwx user:str00160-backup:rw- #effective:--- group::rwx #effective:--x mask::--x other::--x default:user::...
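In the getfacl output above, the mask::--x entry is what reduces the named user and group entries to the "#effective" values shown. A sketch of how the POSIX ACL mask interacts with named entries, using the path from the post; this only illustrates mask behaviour and is not necessarily the fix discussed in that thread:
-8<--
# Show the current ACL, then widen the mask so named entries take effect.
getfacl /srv/shared/PERSONALE/diego.zuccato/
setfacl -m m::rwx /srv/shared/PERSONALE/diego.zuccato/
getfacl /srv/shared/PERSONALE/diego.zuccato/   # user:str00160-backup:rw- is now effective
-8<--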
2023 Mar 16
1
How to configure?
Can you restart the glusterd service (first check that it was not modified to kill the bricks)? Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote: OOM is just a matter of time. Today mem use is up to 177G/187 and: # ps aux|grep glfsheal|wc -l 551 (well, one is actually the grep process, so "only" 550 glfsheal processes). I'll take the last 5: root 3266352 0.5 0.0 600292 93044...
2023 Mar 16
1
How to configure?
...If you don't experience any OOM, you can focus on the heals. > > 284 glfsheal processes seems odd. > > Can you check the ppid for 2-3 randomly picked ones? > ps -o ppid= <pid> > > Best Regards, > Strahil Nikolov > > On Wed, Mar 15, 2023 at 9:54, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > I enabled it yesterday and that greatly reduced memory pressure. > Current volume info: > -8<-- > Volume Name: cluster_data > Type: Distributed-Replicate > Volume ID: a8caaa90-d161-45bb-a68c-278263a8531...
2023 Mar 15
1
How to configure?
If you don't experience any OOM, you can focus on the heals. 284 glfsheal processes seems odd. Can you check the ppid for 2-3 randomly picked ones? ps -o ppid= <pid> Best Regards, Strahil Nikolov On Wed, Mar 15, 2023 at 9:54, Diego Zuccato <diego.zuccato at unibo.it> wrote: I enabled it yesterday and that greatly reduced memory pressure. Current volume info: -8<-- Volume Name: cluster_data Type: Distributed-Replicate Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a Status: Started Snapshot Count: 0 Number of Bricks: 45 x (2...
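A compact sketch of the two checks suggested in this thread: counting glfsheal processes without counting the grep itself, and printing the parent PID of a few of them (no specific PIDs assumed):
-8<--
#!/bin/bash
# Count running glfsheal processes; the [g] trick keeps grep from matching its own command line.
ps aux | grep '[g]lfsheal' | wc -l
# Print the parent PID of up to three of them, to see what keeps spawning them.
for pid in $(pgrep glfsheal | head -3); do
    ps -o ppid= -p "$pid"
done
-8<--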
2023 Mar 15
1
How to configure?
...mary" has been running on clustor02 since yesterday, still no output). Shouldn't it be just one per brick? Diego On 15/03/2023 08:30, Strahil Nikolov wrote: > Do you use brick multiplexing? > > Best Regards, > Strahil Nikolov > > On Tue, Mar 14, 2023 at 16:44, Diego Zuccato > <diego.zuccato at unibo.it> wrote: > Hello all. > > Our Gluster 9.6 cluster is showing increasing problems. > Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual > thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [...
2009 Nov 19
1
Other troubles
...th their UPN (user.name at studio.unibo.it for users in STUDENTI domain, user.name at unibo.it for users in PERSONALE domain) 3) It seems "winbind separator" is incompatible with Kerberos login: if I specify it, then all logins fail. Any hints? Some docs I missed? Thanks! -- Diego Zuccato Servizi Informatici Dip. di Astronomia - Università di Bologna Via Ranzani, 1 - 40126 Bologna - Italy tel.: +39 051 20 95786 mail: diego.zuccato at unibo.it
2023 Mar 15
1
How to configure?
Do you use brick multiplexing? Best Regards, Strahil Nikolov On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote: Hello all. Our Gluster 9.6 cluster is showing increasing problems. Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), configured in replica 3 arbiter 1. Usin...
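Brick multiplexing, which later messages in this thread credit with reducing memory pressure, is a cluster-wide option; a sketch of how it is typically enabled (option names are from the GlusterFS documentation, not quoted in the thread, so verify against your version first):
-8<--
# Run bricks of a node inside a shared glusterfsd process to cut per-brick RAM overhead.
gluster volume set all cluster.brick-multiplex on
# Optionally cap how many bricks share one process (the value here is only an example).
gluster volume set all cluster.max-bricks-per-process 10
-8<--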
2023 Jun 05
1
Question mark in permission and Owner
...> Community Meeting Calendar: > > Schedule - > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC > Bridge: https://meet.google.com/cpu-eiue-hvk > Gluster-users mailing list > Gluster-users at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-users -- Diego Zuccato DIFA - Dip. di Fisica e Astronomia Servizi Informatici Alma Mater Studiorum - Università di Bologna V.le Berti-Pichat 6/2 - 40127 Bologna - Italy tel.: +39 051 20 95786
2023 Mar 14
1
How to configure?
...erver to just 5 (RAID1 of 6x12TB disks) I might resolve the RAM issues, at the cost of longer heal times in case a disk fails. Am I right, or is it useless? Other recommendations? Servers have space for another 6 disks. Maybe those could be used for some SSDs to speed up access? TIA. -- Diego Zuccato DIFA - Dip. di Fisica e Astronomia Servizi Informatici Alma Mater Studiorum - Università di Bologna V.le Berti-Pichat 6/2 - 40127 Bologna - Italy tel.: +39 051 20 95786
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
...e you able to solve the problem? Can it be treated like a "normal" > split brain? 'gluster peer status' and 'gluster volume status' are ok, > so it kinda looks like "pseudo"... > > > hubert > > On Thu, 18 Jan 2024 at 08:28, Diego Zuccato > <diego.zuccato at unibo.it> wrote: >> >> Those are the same kind of errors I keep seeing on my 2 clusters, >> regenerated some months ago. It seems a pseudo-split-brain that should be >> impossible on a replica 3 cluster but keeps happening. >> Sadly going to ditch...
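For the "can it be treated like a normal split brain?" question, the usual first step is to let Gluster list what it actually considers split-brain; a sketch, using cluster_data (the volume name from the other threads in these results) purely as a placeholder:
-8<--
# List entries the self-heal daemon flags as split-brain.
gluster volume heal cluster_data info split-brain
# If a genuine split-brain entry shows up, it can be resolved per file, e.g. by newest mtime:
# gluster volume heal cluster_data split-brain latest-mtime <path-relative-to-volume-root>
-8<--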
2023 Oct 27
1
State of the gluster project
...ing Kubernetes APIs). Kadalu Technologies also maintains many of the GlusterFS tools like gdash (https://github.com/kadalu/gdash), gluster-metrics-exporter (https://github.com/kadalu/gluster-metrics-exporter) etc. Aravinda https://kadalu.tech ---- On Fri, 27 Oct 2023 14:21:35 +0530 Diego Zuccato <diego.zuccato at unibo.it> wrote --- Maybe a bit OT... I'm no expert on either, but the concepts are quite similar. Both require "extra" nodes (metadata and monitor), but those can be virtual machines or you can host the services on OSD machines. We don't use snapshot...
2023 Oct 27
1
State of the gluster project
...talled > was 3.5 or so. It was also extremely slow, an ls took forever. > But later versions have been "kind" to us and worked quite well > and file access has become really comfortable. > > Best regards > Marcus > > On Fri, Oct 27, 2023 at 10:16:08AM +0200, Diego Zuccato wrote: >> >> Hi. >> >> I'm also migrating to BeeGFS and CephFS (depending on usage). ...
2023 Dec 17
1
Gluster -> Ceph
...t keeps happening. I really trusted Gluster's promises, but currently what I (and, worse, the users) see is 60-70% availability. Neither Gluster nor Ceph is a "backup solution", so if the data is not easily replaceable it's better to have it elsewhere. Better if offline. -- Diego Zuccato DIFA - Dip. di Fisica e Astronomia Servizi Informatici Alma Mater Studiorum - Università di Bologna V.le Berti-Pichat 6/2 - 40127 Bologna - Italy tel.: +39 051 20 95786