Displaying 20 results from an estimated 5000 matches similar to: "It's the end? I hope not..."
2024 Nov 05
1
Add an arbiter when having multiple bricks on the same server.
Yes, but I want to add one to an existing volume.
Is it the same logic?
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Tue, Nov 5, 2024, 14:09, Aravinda <aravinda at kadalu.tech> wrote:
> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be the Arbiter brick.
>
> gluster volume
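For reference, a create command along those lines would look like this (the
host names and brick paths here are assumptions, not taken from the thread):

gluster volume create VMS replica 3 arbiter 1 \
    server1:/data/brick1 server2:/data/brick1 arbiter:/data/arbiter1
gluster volume start VMS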
2024 Sep 21
1
GlusterFS Replica over ZFS
I assume you will be using the volumes for VM workloads. There is a 'virt' group of settings optimized for virtualization (located at /var/lib/glusterd/groups/virt) which is also used by oVirt. It guarantees that VMs can live migrate without breaking.
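Applying that group takes a single command; 'VMS' here is an assumed volume
name:

gluster volume set VMS group virt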
Best Regards,
Strahil Nikolov
On Fri, Sep 20, 2024 at 19:00, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: Hi there.
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi Xavi
That depends. Is it safe? I have this env in production, you know?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue, May 16, 2023 at 07:45, Xavi Hernandez <jahernan at redhat.com>
wrote:
> The referenced GitHub issue now has a potential patch that could fix the
> problem, though it will need to be verified. Could you try to apply the
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi again
I just noticed that there are some updates for glusterd
apt list --upgradable
Listing... Done
glusterfs-client/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
glusterfs-common/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
glusterfs-server/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libgfapi0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libgfchangelog0/unknown 11.0-2 amd64
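Pulling those updates in would be the usual apt routine (a sketch, not from
the thread):

apt update
apt install glusterfs-server glusterfs-client glusterfs-common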
2023 May 16
1
[Gluster-devel] Error in gluster v11
Ok. No problem. I can test it in a virtual environment.
Send me the path.
Oh, by the way, I don't compile gluster from scratch.
I used the deb files from
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
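For the record, installing such downloaded .deb files by hand would be
something like this (a sketch, assuming the files sit in the current
directory):

apt install ./glusterfs-common_*.deb ./glusterfs-client_*.deb ./glusterfs-server_*.deb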
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue, May 16, 2023 at 09:21, Xavi Hernandez <jahernan at redhat.com>
wrote:
>
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi Gilberto,
On Tue, May 16, 2023 at 12:56 PM Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi Xavi
> That depends. Is it safe? I have this env in production, you know?
>
It should be safe, but I wouldn't test it in production. Can't you try it
in a test environment first?
Xavi
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 -
2023 May 17
1
[Gluster-devel] Error in gluster v11
On Tue, May 16, 2023 at 4:00 PM Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi again
> I just noticed that there are some updates for glusterd
>
> apt list --upgradable
> Listing... Done
> glusterfs-client/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
> glusterfs-common/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
> glusterfs-server/unknown 11.0-2
2023 Nov 27
2
Announcing Gluster release 11.1
I tried downloading the file directly from the website but wget gave me
errors:
wget
https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
--2023-11-27 11:25:50--
https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
Resolving
2023 May 16
1
[Gluster-devel] Error in gluster v11
The referenced GitHub issue now has a potential patch that could fix the
problem, though it will need to be verified. Could you try to apply the
patch and check if the problem persists?
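For anyone following along, applying such a patch to a source checkout would
look roughly like this ('fix.patch' is a placeholder for the patch from the
GitHub issue):

git clone https://github.com/gluster/glusterfs.git
cd glusterfs
git apply /path/to/fix.patch
./autogen.sh && ./configure && make && make install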
On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi there, anyone in the Gluster Devel list.
>
> Any fix for this issue?
>
> May 14 07:05:39
2024 Oct 21
1
How much disk can fail after a catastrophic failure occur?
Ok! I got it about how many disks I can lose and so on.
But regarding the arbiter issue, I always set these parameters on the gluster
volume in order to avoid split-brain, and I might add that they work pretty
well for me.
I already have a Proxmox VE cluster with 2 nodes and about 50 VMs, running
different Linux distros - and Windows as well - with cPanel and other stuff,
in production.
Anyway here the
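The parameter list is cut off in this excerpt; typical split-brain-avoidance
settings on a replica volume look like this (shown as an assumption, not
necessarily the exact list referred to above):

gluster volume set VMS cluster.quorum-type auto
gluster volume set VMS cluster.server-quorum-type server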
2023 May 15
1
Error in gluster v11
Hi there, anyone in the Gluster Devel list.
Any fix for this issue?
May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
[gf-io-uring.c:612:gf_io_uring_cq_process_some]
(-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5)
2024 Dec 02
1
Disk size and virtual size drive me crazy!
qemu-img info 100/vm-100-disk-0.qcow2
image: 100/vm-100-disk-0.qcow2
file format: qcow2
virtual size: 120 GiB (128849018880 bytes)
disk size: 916 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
extended l2: false
Child node '/file':
filename: 100/vm-100-disk-0.qcow2
2024 Nov 29
1
Disk size and virtual size drive me crazy!
No! I didn't! I wasn't aware of this option.
I will try.
Thanks
On Fri, Nov 29, 2024 at 16:43, Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:
> Have you figured it out ?
>
> Have you tried setting storage.reserve to 0?
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Nov 21, 2024 at 0:39, Gilberto Ferreira
> <gilberto.nunes32 at
2024 Sep 20
1
GlusterFS Replica over ZFS
Hi there.
I am about to set up 3 servers with GlusterFS over ZFS, running in Proxmox
VE 8.
Any advice or warnings about this setup, or am I on the right side of the
road?
Thanks for any advice.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
2024 Dec 07
1
GlusterFS over LVM (thick not thin!)
I am afraid of LVM in case of disk failure.
So if I have 2 disks and one crashes, I think the whole LVM goes bad!
So, I have decided to use individual disks.
Thanks
On Fri, Nov 22, 2024 at 13:39, Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi Allan!
>
> Thanks for your feedback.
>
> Cheers
>
> On Fri, Nov 22, 2024
2023 Apr 12
1
Rename volume?
I noticed that there once was a rename command but it was removed. Do you know why? And is there a way to do it manually?
Thanks!
--
OStR Dr. R. Kupper
Kepler-Gymnasium Freudenstadt
On Wednesday, April 12, 2023 17:11 CEST, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
> I think gluster volume rename is no longer available since version 6.5.
>
> ---
>
2024 Oct 20
1
How much disk can fail after a catastrophic failure occur?
If it's replica 2, you can lose up to 1 replica per distribution group. For example, if you have a volume TEST with a setup like this:
server1:/brick1
server2:/brick1
server1:/brick2
server2:/brick2
You can lose any brick of the replica "/brick1" and any brick in the replica "/brick2". So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.
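A layout like that is created by listing the bricks in replica-set order;
since each server carries two bricks, gluster may warn about the setup and
ask for 'force' (a sketch of the command):

gluster volume create TEST replica 2 \
    server1:/brick1 server2:/brick1 \
    server1:/brick2 server2:/brick2 force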
2024 Nov 29
1
Disk size and virtual size drive me crazy!
Have you figured it out ?
Have you tried setting storage.reserve to 0?
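storage.reserve keeps a percentage of each brick held back as free space;
clearing it is one command ('VMS' is an assumed volume name):

gluster volume set VMS storage.reserve 0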
Best Regards,
Strahil Nikolov
On Thu, Nov 21, 2024 at 0:39, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
11.1
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Wed, Nov 20, 2024, 19:28, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
2024 Nov 20
1
Disk size and virtual size drive me crazy!
11.1
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Wed, Nov 20, 2024, 19:28, Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:
> What's your gluster version ?
>
> Best Regards,
> Strahil Nikolov
>
> ? ??????????, 11 ??????? 2024 ?. ? 20:57:50 ?. ???????+2, Gilberto
> Ferreira <gilberto.nunes32 at
2024 Nov 06
1
Add an arbiter when having multiple bricks on the same server.
So I went ahead and used the force (it is with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use 'force' at
the end of the command
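As the message says, appending 'force' makes gluster accept the layout:

gluster volume add-brick VMS replica 3 arbiter 1 \
    arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force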