
Displaying 20 results from an estimated 3000 matches similar to: "What types of volumes are supported in the latest version of Gluster?"

2021 Sep 27
1
Re: What types of volumes are supported in the latest version of Gluster?
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile? Best Regards, Strahil Nikolov On Fri, Mar 24, 2023 at 8:07, Diego Zuccato <diego.zuccato at unibo.it> wrote: In glfsheal-Connection.log I see many lines like: [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
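A minimal sanity check for that "failed to get the volume file" error, sketched under the assumption that the volume is the cluster_data one mentioned elsewhere in these results and that glusterd uses its default /var/lib/glusterd layout:

  # list the volfiles glusterd would serve for this volume
  ls -l /var/lib/glusterd/vols/cluster_data/*.vol
  # confirm glusterd still knows the volume and its bricks
  gluster volume info cluster_data
  # make sure the management port gfapi fetches the volfile from is listening
  ss -ltn | grep 24007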
2023 Mar 21
1
How to configure?
I have no clue. Have you checked for errors in the logs? Maybe you might find something useful. Best Regards, Strahil Nikolov On Tue, Mar 21, 2023 at 9:56, Diego Zuccato <diego.zuccato at unibo.it> wrote: Killed glfsheal, after a day there were 218 processes, then they got killed by OOM during the weekend. Now there are no processes active. Trying to run "heal info" reports
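One hedged way to skim the logs for trouble, assuming the Debian default log directory /var/log/glusterfs (E = error, C = critical in Gluster's log levels):

  # show only the most recent error/critical lines across all glusterfs logs
  grep -E '\] (E|C) \[' /var/log/glusterfs/*.log | tail -n 50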
2023 Mar 24
1
How to configure?
In glfsheal-Connection.log I see many lines like: [2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021] [glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the volume file [{from server}, {errno=2}, {error=File o directory non esistente}] And *lots* of gfid-mismatch errors in glustershd.log. Couldn't find anything that would prevent heal from starting. :( Diego On 21/03/2023
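For the gfid-mismatch entries, a by-hand check is to compare the trusted.gfid xattr of the same file on each brick of a replica set; the brick path below is taken from the volume info quoted later in these results, and the file path is a placeholder:

  # run against the same file on every brick of its replica set
  getfattr -n trusted.gfid -e hex /srv/bricks/00/d/path/to/file
  # the hex value must match across the replicas; differing values are
  # exactly the gfid mismatches glustershd logs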
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including many files with names related to quorum bricks already moved to a different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol that should already have been replaced by cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist). Is there something I should check inside the volfiles? Diego On
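A hedged way to spot generated volfiles that still reference the retired quorum-brick path (the grep pattern simply reuses the path fragment quoted above):

  cd /var/lib/glusterd/vols/cluster_data
  # volfiles still mentioning the old srv-quorum brick path
  grep -l 'srv-quorum' *.vol
  # compare with the bricks glusterd currently believes belong to the volume
  gluster volume info cluster_data | grep -i brick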
2023 Mar 21
1
How to configure?
Killed glfsheal, after a day there were 218 processes, then they got killed by OOM during the weekend. Now there are no processes active. Trying to run "heal info" reports lots of files quite quickly but does not spawn any glfsheal process. And neither does restarting glusterd. Is there some way to selectively run glfsheal to fix one brick at a time? Diego On 21/03/2023 01:21,
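glfsheal is launched per volume by the gluster CLI rather than per brick, so a sketch of what can be run by hand (the binary path and arguments match the process listing quoted further down in these results):

  # what "gluster volume heal <vol> info" spawns under the hood
  /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
  # per-volume heal summary through the CLI
  gluster volume heal cluster_data info summary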
2023 Mar 21
1
How to configure?
Theoretically it might help. If possible, try to resolve any pending heals. Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 15:29, Diego Zuccato <diego.zuccato at unibo.it> wrote: In Debian stopping glusterd does not stop brick processes: to stop everything (and free the memory) I have to systemctl stop glusterd; killall glusterfs{,d}; killall glfsheal; systemctl start
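That stop/start sequence, written out as a sketch (service and process names as in the message; note it kills every brick process on the node, so only run it when the node can be taken out of service):

  systemctl stop glusterd
  killall glusterfs glusterfsd   # bricks, shd and client processes survive a glusterd stop
  killall glfsheal               # leftover heal-info helpers
  systemctl start glusterd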
2023 Apr 23
1
How to configure?
After a lot of tests and unsuccessful searching, I decided to start from scratch: I'm going to ditch the old volume and create a new one. I have 3 servers with 30 12TB disks each. Since I'm going to start a new volume, could it be better to group disks in 10 3-disk (or 6 5-disk) RAID-0 volumes to reduce the number of bricks? Redundancy would be given by replica 2 (still undecided
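A rough sketch of the fewer-but-bigger-bricks idea under stated assumptions (hostnames, device names, mount points and mkfs options below are invented for illustration): each server builds several RAID-0 sets, each exposed as one brick, and the new volume is created with replica 2 across servers.

  # on each server, one 3-disk RAID-0 set per brick (repeat per disk group)
  mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  mkfs.xfs -i size=512 /dev/md0
  mkdir -p /srv/bricks/00
  mount /dev/md0 /srv/bricks/00
  mkdir -p /srv/bricks/00/d        # bricks live in a subdirectory of the mount
  # example replica 2 volume with one brick pair; a real layout would list all pairs
  gluster volume create new_data replica 2 \
      srv01:/srv/bricks/00/d srv02:/srv/bricks/00/d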
2023 Mar 16
1
How to configure?
Can you restart the glusterd service (first check that it was not modified to kill the bricks)? Best Regards, Strahil Nikolov On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote: OOM is just a matter of time. Today mem use is up to 177G/187 and: # ps aux|grep glfsheal|wc -l 551 (well, one is actually the grep process, so "only" 550 glfsheal
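A hedged way to do both checks on a systemd host: see whether the unit has been overridden (e.g. with an ExecStop that kills bricks) and then restart only the management daemon.

  # unit file plus any local drop-ins/overrides
  systemctl cat glusterd
  # restarting glusterd normally leaves brick processes running
  systemctl restart glusterd
  systemctl status glusterd --no-pager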
2023 Mar 16
1
How to configure?
OOM is just a matter of time. Today mem use is up to 177G/187 and: # ps aux|grep glfsheal|wc -l 551 (well, one is actually the grep process, so "only" 550 glfsheal processes). I'll take the last 5: root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07 /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml root 3267220 0.7 0.0 600292 91964 ?
2023 Mar 15
1
How to configure?
If you don't experience any OOM, you can focus on the heals. 284 glfsheal processes seem odd. Can you check the ppid for 2-3 randomly picked ones? ps -o ppid= <pid> Best Regards, Strahil Nikolov On Wed, Mar 15, 2023 at 9:54, Diego Zuccato <diego.zuccato at unibo.it> wrote: I enabled it yesterday and that greatly reduced memory pressure. Current volume info: -8<-- Volume
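That ppid check can be scripted like this (a small sketch; shuf just picks a few of the running glfsheal PIDs at random):

  for pid in $(pgrep glfsheal | shuf -n 3); do
      ps -o pid=,ppid=,etime=,cmd= -p "$pid"
  done
  # a ppid of 1 usually means the CLI/daemon that spawned the helper is already gone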
2023 Nov 27
2
Announcing Gluster release 11.1
I tried downloading the file directly from the website but wget gave me errors: wget https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb --2023-11-27 11:25:50-- https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb Resolving
2023 Oct 28
2
State of the gluster project
I don't think it's worth it for anyone. It has been a dead project since about 9.0, if not earlier. It's time to embrace the truth and move on. /Z On Sat, 28 Oct 2023 at 11:21, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: > Well, > > After the IBM acquisition, RH discontinued their support for many projects > including GlusterFS (certification exams were removed,
2023 May 15
1
Error in gluster v11
Hi there, anyone in the Gluster Devel list. Any fix for this issue? May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C [gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5] -->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5)
2023 May 16
1
[Gluster-devel] Error in gluster v11
The referenced GitHub issue now has a potential patch that could fix the problem, though it will need to be verified. Could you try to apply the patch and check if the problem persists? On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: > Hi there, anyone in the Gluster Devel list. > > Any fix for this issue? > > May 14 07:05:39
2021 Aug 20
2
Join multiple Gluster Cluster
Hi, I configured 3 clusters (for several shared homes between different machines). So far no problem. Now I need a volume that spans these machines, but the nodes are bound to their own clusters, so peer probe fails. How can I build one big cluster with all nodes without data loss (and ideally without downtime)? Hope there is some pro that can help :) Greetings from Germany
2023 Mar 15
1
How to configure?
I enabled it yesterday and that greatly reduced memory pressure. Current volume info: -8<-- Volume Name: cluster_data Type: Distributed-Replicate Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a Status: Started Snapshot Count: 0 Number of Bricks: 45 x (2 + 1) = 135 Transport-type: tcp Bricks: Brick1: clustor00:/srv/bricks/00/d Brick2: clustor01:/srv/bricks/00/d Brick3: clustor02:/srv/bricks/00/q
2023 Oct 28
1
State of the gluster project
On Sat, Oct 28, 2023 at 11:07:52PM +0300, Zakhar Kirpichenko wrote: > I don't think it's worth it for anyone. It's a dead project since about > 9.0, if not earlier. It's time to embrace the truth and move on. Which is a shame because I chose GlusterFS for one of my storage clusters _specifically_ due to the ease of emergency data recovery (for purely replicated volumes) even
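That recovery path amounts to copying files straight off a healthy brick, because on a purely replicated volume (or within one replica subvolume) each brick stores whole files; a hedged sketch, reusing a brick path quoted earlier in these results:

  # run on a server holding a good replica; skip Gluster's internal metadata
  rsync -a --exclude=.glusterfs /srv/bricks/00/d/ /mnt/recovery/
  # on a distributed-replicate volume each brick only holds its own
  # subvolume's share of the files, so repeat per distribution leg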
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
Are you able to set the logs to debug level? It might provide a clue about what is going on. Best Regards, Strahil Nikolov On Thu, Jan 18, 2024 at 13:08, Diego Zuccato <diego.zuccato at unibo.it> wrote: Those are the same kind of errors I keep seeing on my 2 clusters, regenerated some months ago. Seems a pseudo-split-brain that should be impossible on a replica 3 cluster but keeps
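Debug logging can be raised per volume through the diagnostics options (the volume name below is a placeholder; DEBUG is very verbose, so revert once enough has been captured):

  gluster volume set <volname> diagnostics.client-log-level DEBUG
  gluster volume set <volname> diagnostics.brick-log-level DEBUG
  # back to the default once done
  gluster volume set <volname> diagnostics.client-log-level INFO
  gluster volume set <volname> diagnostics.brick-log-level INFO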
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi Xavi That depends. Is it safe? I have this env in production, you know? --- Gilberto Nunes Ferreira (47) 99676-7530 - Whatsapp / Telegram On Tue, May 16, 2023 at 07:45, Xavi Hernandez <jahernan at redhat.com> wrote: > The referenced GitHub issue now has a potential patch that could fix the > problem, though it will need to be verified. Could you try to apply the