Displaying 20 results from an estimated 4000 matches similar to: "Geo Replication sync intervals"
2024 Aug 18
1
Geo Replication sync intervals
Hi Gilberto,
I doubt you can change that stuff. Officially it's asynchronous replication, and it might take some time to replicate.
What do you want to improve?
Best Regards,
Strahil Nikolov
On Friday, 16 August 2024 at 20:31:25 GMT+3, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
Hi there.
I have two sites with gluster geo replication, and all work pretty
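For reference, geo-replication behaviour is tuned through the config subcommand; a minimal sketch, with placeholder volume and secondary names:

# List all current geo-replication settings for the session
gluster volume geo-replication MASTERVOL secondary::SLAVEVOL config
# Example: raise the number of parallel sync workers (sync-jobs is a standard option)
gluster volume geo-replication MASTERVOL secondary::SLAVEVOL config sync-jobs 6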
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi Xavi
That depends. Is it safe? I have this environment in production, you know?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue, May 16, 2023 at 07:45, Xavi Hernandez <jahernan at redhat.com>
wrote:
> The referenced GitHub issue now has a potential patch that could fix the
> problem, though it will need to be verified. Could you try to apply the
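Applying such a patch for a test build would look roughly like this (the patch filename and build steps are illustrative, not from the thread):

# Apply the proposed fix to a glusterfs source checkout (filename assumed)
cd glusterfs
git apply io-uring-fix.patch
# Standard autotools build, as used by the project
./autogen.sh && ./configure && make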
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi Gilberto,
On Tue, May 16, 2023 at 12:56 PM Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi Xavi
> That depends. Is it safe? I have this environment in production, you know?
>
It should be safe, but I wouldn't test it in production. Can't you try it
in a test environment first?
Xavi
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 -
2023 May 16
1
[Gluster-devel] Error in gluster v11
Ok. No problem. I can test it in a virtual environment.
Send me the patch.
Oh, by the way, I don't compile gluster from scratch.
I used the deb files from
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue, May 16, 2023 at 09:21, Xavi Hernandez <jahernan at redhat.com>
wrote:
>
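A quick manual install from that location would look roughly like this; the directory pattern is inferred from the 11.1 URL quoted later on this page, and the exact filename is an assumption:

# Fetch the needed .deb file (path pattern and filename are assumptions)
wget https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-server_11.0-1_amd64.deb
# Install it and pull in any missing dependencies
sudo dpkg -i glusterfs-server_11.0-1_amd64.deb
sudo apt-get -f install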
2023 May 16
1
[Gluster-devel] Error in gluster v11
The referenced GitHub issue now has a potential patch that could fix the
problem, though it will need to be verified. Could you try to apply the
patch and check if the problem persists?
On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi there, anyone in the Gluster Devel list.
>
> Any fix about this issue?
>
> May 14 07:05:39
2023 May 15
1
Error in gluster v11
Hi there, anyone in the Gluster Devel list.
Any fix about this issue?
May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
[gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5)
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi again
I just noticed that there are some updates for glusterd
apt list --upgradable
Listing... Done
glusterfs-client/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
glusterfs-common/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
glusterfs-server/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libgfapi0/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
libgfchangelog0/unknown 11.0-2 amd64
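Applying those pending updates is a standard apt operation; a minimal sketch, assuming the service is restarted afterwards so the new binaries are actually used:

# Refresh the index and apply the 11.0-1 -> 11.0-2 updates
sudo apt update && sudo apt upgrade
# Restart the management daemon after the upgrade
sudo systemctl restart glusterd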
2023 Nov 27
2
Announcing Gluster release 11.1
I tried downloading the file directly from the website, but wget gave me
errors:
wget
https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
--2023-11-27 11:25:50--
https://download.gluster.org/pub/gluster/glusterfs/11/11.1/Debian/12/amd64/apt/pool/main/g/glusterfs/glusterfs-client_11.1-1_amd64.deb
Resolving
2023 May 17
1
[Gluster-devel] Error in gluster v11
On Tue, May 16, 2023 at 4:00 PM Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi again
> I just noticed that there are some updates for glusterd
>
> apt list --upgradable
> Listing... Done
> glusterfs-client/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
> glusterfs-common/unknown 11.0-2 amd64 [upgradable from: 11.0-1]
> glusterfs-server/unknown 11.0-2
2024 Sep 21
1
GlusterFS Replica over ZFS
I assume you will be using the volumes for a VM workload. There is a 'virt' group of settings optimized for virtualization (located at /var/lib/glusterd/groups/virt) which is also used by oVirt. It guarantees that VMs can live-migrate without breaking.
Best Regards,
Strahil Nikolov
On Fri, Sep 20, 2024 at 19:00, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: Hi there.
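For reference, the whole group is applied with a single command (VMS is a placeholder volume name):

# See which options the group sets
cat /var/lib/glusterd/groups/virt
# Apply them all to the volume at once
gluster volume set VMS group virt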
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
So I went ahead and used force (may the force be with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use 'force' at the end of the command
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1:
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
What's the volume structure right now?
Best Regards,
Strahil Nikolov
On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: So I went ahead and used force (may the force be with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
2023 Apr 12
1
Rename volume?
I noticed that there once was a rename command but it was removed. Do you know why? And is there a way to do it manually?
Thanks!
--
OStR Dr. R. Kupper
Kepler-Gymnasium Freudenstadt
Am Mittwoch, April 12, 2023 17:11 CEST, schrieb Gilberto Ferreira <gilberto.nunes32 at gmail.com>:
> I think gluster volume rename has not been available since version 6.5.
>
> ---
>
2024 Dec 02
1
Disk size and virtual size drive me crazy!
qemu-img info 100/vm-100-disk-0.qcow2
image: 100/vm-100-disk-0.qcow2
file format: qcow2
virtual size: 120 GiB (128849018880 bytes)
disk size: 916 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
extended l2: false
Child node '/file':
filename: 100/vm-100-disk-0.qcow2
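A reported disk size far above the virtual size usually points at unreclaimed space in the image or the underlying filesystem; one common remedy is an offline compaction, sketched here (paths follow the output above; the VM must be powered off):

# Rewrite the image, dropping unused clusters (destination filename is illustrative)
qemu-img convert -O qcow2 100/vm-100-disk-0.qcow2 100/vm-100-disk-0-compact.qcow2
# Verify the new image before swapping it in
qemu-img info 100/vm-100-disk-0-compact.qcow2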
2024 Oct 21
1
How much disk can fail after a catastrophic failure occur?
Ok! I got it about how many disks I can lose and so on.
But regarding the arbiter issue, I always set these parameters on the gluster
volume in order to avoid split-brain, and I might add that they work pretty
well for me.
I already have a Proxmox VE cluster with 2 nodes and about 50 VMs, running
different Linux distros - and Windows as well - with cPanel and other stuff,
in production.
Anyway here the
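The parameters referred to here are cut off in the archive; the options usually meant by "avoid split-brain" are the standard quorum settings, shown below as an assumption with VMS as a placeholder volume name:

# Client-side quorum: writes need a majority of the replica set
gluster volume set VMS cluster.quorum-type auto
# Server-side quorum: bricks are stopped when glusterd loses the peer majority
gluster volume set VMS cluster.server-quorum-type server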
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
But if I change replica 2 arbiter 1 to replica 3 arbiter 1
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3
I got this error:
volume add-brick: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Bricks should be on
different nodes to have best fault tolerant configuration. Use
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Ok.
I have a 3rd host with Debian 12 installed and Gluster v11. The name of the
host is arbiter!
I have already added this host to the pool:
arbiter:~# gluster pool list
UUID Hostname State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd gluster1.home.local Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd gluster2 Connected
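For reference, that is done with peer probe from any existing member, then verified with pool list:

# Add the new host to the trusted storage pool
gluster peer probe arbiter
# Confirm it shows up as Connected
gluster pool list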
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Right now you have 3 "sets" of replica 2 on 2 hosts. In your case you don't need much space for the arbiters (10-15 GB with 95 maxpct is enough for each "set"), but you do need a 3rd system; otherwise, when the node that holds both the data brick and the arbiter brick fails (the 2-node scenario), that "set" will be unavailable.
If you do have a 3rd host, I think the command would be: gluster
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
Are you able to set the logs to debug level? It might provide a clue about what is going on.
Best Regards,
Strahil Nikolov
On Thu, Jan 18, 2024 at 13:08, Diego Zuccato <diego.zuccato at unibo.it> wrote: Those are the same kind of errors I keep seeing on my 2 clusters,
regenerated some months ago. It seems like a pseudo-split-brain that should be
impossible on a replica 3 cluster but keeps
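For reference, gluster log verbosity is raised per volume with the diagnostics options (VMS is a placeholder volume name; DEBUG is very verbose, so it should be reverted afterwards):

# Brick-side (server) logs
gluster volume set VMS diagnostics.brick-log-level DEBUG
# Client / fuse-mount logs
gluster volume set VMS diagnostics.client-log-level DEBUG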