search for: arbiters

Displaying 20 results from an estimated 457 matches for "arbiters".

2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create <volname> replica 3 arbiter 1 <list of bricks>`. It means (or at least was intended to mean) that out of the 3 bricks, 1 brick is the arbiter. There has been some feedback while implementing arbiter support in glusterd2 for glusterfs-4.0 that we should change this to `replica 2 arbiter
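
For reference, a minimal sketch of the existing syntax mentioned above, with hypothetical hostnames and brick paths; with "replica 3 arbiter 1", the last brick of each replica set becomes the arbiter:

    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb1
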
2018 May 06
3
arbiter node on client?
Is it possible to add an arbiter node on the client? Let's assume a gluster storage made with 2 storage servers. This is prone to split-brains. An arbiter node can be added, but can I put the arbiter on one of the clients? Can I use multiple arbiters for the same volume? For example, one arbiter on each client.
2018 May 07
0
arbiter node on client?
...disk images and all three of my arbiter bricks are on one of the kvm hosts. > Can I use multiple arbiters for the same volume? For example, one arbiter on > each client. I'm pretty sure that you can only have one arbiter per subvolume, and I'm not even sure what the point of multiple arbiters over the same data would be. In my case, I have three subvolumes (three replica pairs), which means I need three arbiters and those could be spread across multiple nodes, of course, but I don't think saying "I want 12 arbiters instead of 3!" would be supported. -- Dave Sherohman
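
A sketch of the layout described above (three replica pairs, each with its own arbiter), using hypothetical host and brick names; every third brick in the list becomes the arbiter of its subvolume:

    gluster volume create myvol replica 3 arbiter 1 \
        node1:/bricks/b1 node2:/bricks/b2 arb:/bricks/a1 \
        node1:/bricks/b3 node2:/bricks/b4 arb:/bricks/a2 \
        node1:/bricks/b5 node2:/bricks/b6 arb:/bricks/a3
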
2018 Feb 27
2
Quorum in distributed-replicate volume
...ks if it would be significantly > > simpler and safer than repurposing the existing bricks (and I'm getting > > the impression that it probably would be). > > Yes it is the simpler and safer way of doing that. > > > Does it particularly matter > > whether the arbiters are all on the same node or on three separate > > nodes? > > > No it doesn't matter as long as the bricks of same replica subvol are not > on the same nodes. OK, great. So basically just install the gluster server on the new node(s), do a peer probe to add them to the clust...
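
A rough sketch of that workflow with hypothetical host and brick names, assuming the existing volume is a 3 x 2 distributed-replicate (one arbiter brick is added per replica subvolume):

    gluster peer probe newnode
    gluster volume add-brick myvol replica 3 arbiter 1 \
        newnode:/bricks/arb1 newnode:/bricks/arb2 newnode:/bricks/arb3
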
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks, I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows: # gluster volume info Volume Name: myvol Type: Distributed-Replicate Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gv0:/data/glusterfs Brick2: gv1:/data/glusterfs Brick3:
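
For context, moving a single brick such as an arbiter is usually done with replace-brick; a sketch with hypothetical hostnames (the subsequent self-heal onto the new brick is what generates the I/O load):

    gluster volume replace-brick myvol \
        oldnode:/data/glusterfs newnode:/data/glusterfs commit force
    gluster volume heal myvol info
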
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey, Did the heal complete and you still have some entries pending heal? If yes then can you provide the following information to debug the issue. 1. Which version of gluster you are running 2. gluster volume heal <volname> info summary or gluster volume heal <volname> info 3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the files which is pending heal from all
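
Concretely, with a hypothetical volume name and brick path, the requested commands look like this:

    gluster --version
    gluster volume heal myvol info summary
    # info summary is not available on older releases such as 3.10; plain info works there
    # run getfattr on every brick of the affected replica set, for one file pending heal
    getfattr -d -e hex -m . /data/glusterfs/path/to/pending-file
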
2018 Feb 27
0
Quorum in distributed-replicate volume
...t; > > simpler and safer than repurposing the existing bricks (and I'm getting > > > the impression that it probably would be). > > > > Yes it is the simpler and safer way of doing that. > > > > > Does it particularly matter > > > whether the arbiters are all on the same node or on three separate > > > nodes? > > > > > No it doesn't matter as long as the bricks of same replica subvol are > not > > on the same nodes. > > OK, great. So basically just install the gluster server on the new > node(s), d...
2018 Jan 29
2
Replacing a third data node with an arbiter one
Thank you for that, however I have a problem. On 26/01/2018 at 02:35, Ravishankar N wrote: > Yes, you would need to reduce it to replica 2 and then convert it to > arbiter. > 1. Ensure there are no pending heals, i.e. heal info shows zero entries. > 2. gluster volume remove-brick thedude replica 2 > ngluster-3.network.hoggins.fr:/export/brick/thedude force > 3. gluster volume
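
For context, the standard pattern for this conversion looks like the following (a sketch only, not necessarily the exact command truncated in step 3; the arbiter brick path is hypothetical and should be an empty directory):

    gluster volume remove-brick thedude replica 2 \
        ngluster-3.network.hoggins.fr:/export/brick/thedude force
    gluster volume add-brick thedude replica 3 arbiter 1 \
        ngluster-3.network.hoggins.fr:/export/brick/thedude-arbiter
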
2017 Jun 29
2
Arbiter node as VM
Hello, I have a replica 2 GlusterFS 3.8.11 cluster on 2 Debian 8 physical servers using ZFS as filesystem. Now in order to avoid a split-brain situation I would like to add a third node as arbiter. Regarding the arbiter node I have a few questions: - can the arbiter node be a virtual machine? (I am planning to use Xen as hypervisor) - can I use ext4 as file system on my arbiter? or does it need
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo replication between two gluster volumes I have set up two replica 2 arbiter 1 volumes with 9 bricks [root@gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
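
For reference, a minimal sketch of creating the geo-replication session between two such volumes, assuming passwordless SSH to the slave host is already in place (slave host and slave volume names are hypothetical):

    gluster volume geo-replication gfsvol slavehost::gfsvol-slave create push-pem
    gluster volume geo-replication gfsvol slavehost::gfsvol-slave start
    gluster volume geo-replication gfsvol slavehost::gfsvol-slave status
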
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still ongoing, as the /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster version is 3.10.9 and 3.10.10 (the version update is in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
2018 Jan 26
0
Replacing a third data node with an arbiter one
On 01/24/2018 07:20 PM, Hoggins! wrote: > Hello, > > The subject says it all. I have a replica 3 cluster : > > gluster> volume info thedude > > Volume Name: thedude > Type: Replicate > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 >
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you very much, you made me much more relaxed. Below is getfattr output for a file from all the bricks: root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack getfattr: Removing leading '/' from absolute path names # file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2017 Sep 22
0
Arbiter and geo-replication
On 09/22/2017 02:25 AM, Kotresh Hiremath Ravishankar wrote: > The volume layout of the geo-replication slave volume can be different > from the master volume. > It's not mandatory that if the master volume is arbiter type, the > slave also needs to be arbiter. > But if it's decided to use the arbiter both at master and slave, then > the expansion rules are > applicable
2017 Jun 29
0
Arbiter node as VM
As long as the VM isn't hosted on one of the two Gluster nodes, that's perfectly fine. One of my smaller clusters uses the same setup. As for your other questions, as long as it supports Unix file permissions, Gluster doesn't care what filesystem you use. Mix & match as you wish. Just try to keep matching Gluster versions across your nodes. On 29 June 2017 at 16:10, mabi
2018 Jan 29
0
Replacing a third data node with an arbiter one
On 01/29/2018 08:56 PM, Hoggins! wrote: > Thank you for that, however I have a problem. > > On 26/01/2018 at 02:35, Ravishankar N wrote: >> Yes, you would need to reduce it to replica 2 and then convert it to >> arbiter. >> 1. Ensure there are no pending heals, i.e. heal info shows zero entries. >> 2. gluster volume remove-brick thedude replica 2 >>
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote: > I am trying to set up geo replication between two gluster volumes > > I have set up two replica 2 arbiter 1 volumes with 9 bricks > > [root@gfs1 ~]# gluster volume info > Volume Name: gfsvol > Type: Distributed-Replicate > Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 > Status: Started > Snapshot Count: 0 > Number
2018 Jan 24
4
Replacing a third data node with an arbiter one
Hello, The subject says it all. I have a replica 3 cluster: gluster> volume info thedude Volume Name: thedude Type: Replicate Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: ngluster-1.network.hoggins.fr:/export/brick/thedude Brick2:
2017 Dec 11
2
How large the Arbiter node?
Hi, I see gluster now recommends the use of an arbiter brick in "replica 2" situations. How large should this brick be? I understand only metadata is to be stored. Let's say total storage usage will be 5TB of mixed size files. How large should such a brick be? -- Sent from the Delta quadrant using Borg technology! Nux! www.nux.ro
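
A rough rule of thumb (hedged; the upstream arbiter documentation suggests sizing by file count, on the order of 4 KB per file, since the arbiter brick holds only names and metadata, not data), assuming an average file size of 1 MB:

    5 TB / 1 MB per file      ~ 5 million files
    5 million files x 4 KB    ~ 20 GB arbiter brick, plus headroom
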
2018 Feb 27
2
Quorum in distributed-replicate volume
...I can probably find one or more machines with a few hundred GB free which could be allocated for arbiter bricks if it would be significantly simpler and safer than repurposing the existing bricks (and I'm getting the impression that it probably would be). Does it particularly matter whether the arbiters are all on the same node or on three separate nodes? -- Dave Sherohman