similar to: Syntax for creating arbiter volumes in gluster 4.0

Displaying 20 results from an estimated 10000 matches similar to: "Syntax for creating arbiter volumes in gluster 4.0"

2018 Jan 26
2
Replacing a third data node with an arbiter one
On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.com> wrote: > > > On 01/24/2018 07:20 PM, Hoggins! wrote: > > Hello, > > The subject says it all. I have a replica 3 cluster: > > gluster> volume info thedude > > Volume Name: thedude > Type: Replicate > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e >
2018 Jan 26
0
Replacing a third data node with an arbiter one
On 01/24/2018 07:20 PM, Hoggins! wrote: > Hello, > > The subject says it all. I have a replica 3 cluster: > > gluster> volume info thedude > > Volume Name: thedude > Type: Replicate > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 >
2018 Jan 29
2
Replacing a third data node with an arbiter one
Thank you for that, however I have a problem. On 26/01/2018 at 02:35, Ravishankar N wrote: > Yes, you would need to reduce it to replica 2 and then convert it to > arbiter. > 1. Ensure there are no pending heals, i.e. heal info shows zero entries. > 2. gluster volume remove-brick thedude replica 2 > ngluster-3.network.hoggins.fr:/export/brick/thedude force > 3. gluster volume
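For reference, the full conversion sequence discussed in this thread would look roughly like the following. This is a sketch using the standard gluster CLI; the arbiter brick path in step 3 is hypothetical:

    # 1. Confirm there are no pending heals before shrinking the volume
    gluster volume heal thedude info

    # 2. Remove the third data brick, reducing the volume to replica 2
    gluster volume remove-brick thedude replica 2 \
        ngluster-3.network.hoggins.fr:/export/brick/thedude force

    # 3. Re-attach a brick as the arbiter (hypothetical path)
    gluster volume add-brick thedude replica 3 arbiter 1 \
        ngluster-3.network.hoggins.fr:/export/arbiter/thedude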
2018 Jan 24
4
Replacing a third data node with an arbiter one
Hello, The subject says it all. I have a replica 3 cluster: gluster> volume info thedude Volume Name: thedude Type: Replicate Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: ngluster-1.network.hoggins.fr:/export/brick/thedude Brick2:
2018 Feb 25
3
Convert replica 2 to replica 2+1 arbiter
I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8? Kind regards, Mitja On 25/02/2018 13:55, Jim Kinney wrote: > gluster volume add-brick volname replica 3 arbiter 1 > brickhost:brickpath/to/new/arbitervol > > Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a > change in command will happen so it won't count the
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > Since arbiter bricks need not be of same size as the data bricks, if you > > > can configure three more arbiter bricks > > > based on the guidelines in the doc [1], you can do it live and you will > > > have the distribution count also unchanged. > > > > I can probably find
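For a distributed-replicate volume, one arbiter brick is needed per replica set, all supplied in a single add-brick call in replica-set order. A minimal sketch for a 3 x 2 volume, with hypothetical host names and brick paths:

    gluster volume add-brick myvol replica 3 arbiter 1 \
        arb1:/bricks/arbiter/myvol \
        arb2:/bricks/arbiter/myvol \
        arb3:/bricks/arbiter/myvol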
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you for your reply. The heal is still in progress, as /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info. The gluster version is 3.10.9 and 3.10.10 (a version update is in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
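When the full heal info listing is too large to capture, the per-brick counters can be extracted on their own; a small sketch, using the volume name from this thread:

    gluster volume heal myvol info | grep 'Number of entries'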
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks, I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows: # gluster volume info Volume Name: myvol Type: Distributed-Replicate Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gv0:/data/glusterfs Brick2: gv1:/data/glusterfs Brick3:
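Moving an arbiter brick to a less loaded server is typically done with replace-brick, which heals the metadata onto the new brick; a sketch, where both brick host names and paths are hypothetical:

    gluster volume replace-brick myvol \
        old-arbiter:/data/glusterfs new-arbiter:/data/glusterfs \
        commit force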
2018 Jan 29
0
Replacing a third data node with an arbiter one
On 01/29/2018 08:56 PM, Hoggins! wrote: > Thank you for that, however I have a problem. > > On 26/01/2018 at 02:35, Ravishankar N wrote: >> Yes, you would need to reduce it to replica 2 and then convert it to >> arbiter. >> 1. Ensure there are no pending heals, i.e. heal info shows zero entries. >> 2. gluster volume remove-brick thedude replica 2 >>
2017 Aug 27
2
self-heal not working
Yes, the shds did pick up the file for healing (I saw messages like " got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards. Anyway I reproduced it by manually setting the afr.dirty bit for a zero byte file on all 3 bricks. Since there are no afr pending xattrs indicating good/bad copies and all files are zero bytes, the data self-heal algorithm just picks the
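For the record, marking a file dirty by hand so the self-heal daemon reconsiders it is done with setfattr on the brick copies. A hedged sketch, assuming the trusted.afr.dirty layout of three 4-byte counters (data/metadata/entry) and a hypothetical brick path:

    # run on each brick; this value sets only the data-dirty counter
    setfattr -n trusted.afr.dirty -v 0x000000010000000000000000 \
        /export/brick/path/to/file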
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik, Thank you very much, that puts me much more at ease. Below is getfattr output for a file from all the bricks: root at gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack getfattr: Removing leading '/' from absolute path names # file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2017 Aug 28
3
self-heal not working
Excuse my naive questions, but how do I reset the afr.dirty xattr on the file to be healed? And do I need to do that through a FUSE mount, or simply on every brick directly? > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 28, 2017 5:58 AM > UTC Time: August 28, 2017 3:58 AM > From: ravishankar at redhat.com >
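Xattrs in the trusted namespace are read and reset on the brick paths directly, not through the FUSE mount; a hedged sketch with a hypothetical brick path:

    # inspect the current value on a brick
    getfattr -n trusted.afr.dirty -e hex /export/brick/path/to/file
    # clear it back to all zeroes
    setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 \
        /export/brick/path/to/file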
2018 Feb 25
2
Convert replica 2 to replica 2+1 arbiter
Hi! I am using GlusterFS on CentOS7 with the glusterfs-3.8.15 RPM version. I currently have a replica 2 running and I would like to get rid of the split-brain problem before it occurs. This is one of the possible solutions. Is it possible to add an arbiter to this volume? I have read in a thread from 2016 that this feature is planned for version 3.8. Is the feature available? If so, could you give
2017 Aug 27
2
self-heal not working
----- Original Message ----- > From: "mabi" <mabi at protonmail.ch> > To: "Ravishankar N" <ravishankar at redhat.com> > Cc: "Ben Turner" <bturner at redhat.com>, "Gluster Users" <gluster-users at gluster.org> > Sent: Sunday, August 27, 2017 3:15:33 PM > Subject: Re: [Gluster-users] self-heal not working > >
2018 Apr 27
3
How to set up a 4 way gluster file system
Hi, I have 4 servers, each with 1TB of storage set as /dev/sdb1. I would like to set these up in a RAID 10-style layout, which will give me 2TB usable. So mirrored and concatenated? The command I am running is as per the documents, but I get a warning; how do I get this to proceed, as the documents do not say? gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
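The warning in question is gluster's prompt that replica 2 volumes are prone to split-brain; in scripted use it can be bypassed by appending force. A sketch of the full 2 x 2 create, assuming the brick layout from the post (adjacent brick pairs form the replica sets):

    gluster volume create gv0 replica 2 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 \
        force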
2018 Mar 02
1
geo-replication
Hi again, I have been testing and reading up on other solutions and just wanted to check whether my ideas are OK. I have been looking at dispersed volumes and wonder if there are any problems running a replicated-distributed cluster on the master side and a dispersed-distributed cluster on the slave side of a geo-replication. Second thought: is running dispersed on both sides a problem (Master:
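Geo-replication syncs through a mount of the slave volume, so the slave's internal layout (replicated or dispersed) is not visible to the session, and the setup commands are the same either way. A minimal sketch, with hypothetical volume and host names:

    # generate and distribute the secret pem keys, then create and start the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start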
2017 Aug 24
2
self-heal not working
Thanks for confirming the command. I have now enabled the DEBUG client-log-level, run a heal, and then attached the glustershd log files of all 3 nodes to this mail. The volume concerned is called myvol-pro; the other 3 volumes have no problem so far. Also note that in the meantime it looks like the file has been deleted by the user, and as such the heal info command does not show the file name
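For reference, the client log level is raised and restored per volume; a sketch using the volume name from this message:

    gluster volume set myvol-pro diagnostics.client-log-level DEBUG
    # ...trigger the heal and collect the glustershd logs...
    gluster volume set myvol-pro diagnostics.client-log-level INFO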
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
gluster volume add-brick volname replica 3 arbiter 1 brickhost:brickpath/to/new/arbitervol Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a change in command will happen so it won't count the arbiter as a replica. On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič" <mitja.mihelic at arnes.si> wrote: >Hi! > >I am using GlusterFS on CentOS7 with
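Spelled out with the placeholder volume name from this answer, a hypothetical arbiter host, and the usual pre-check that heals are clean before converting:

    gluster volume heal volname info
    gluster volume add-brick volname replica 3 arbiter 1 \
        arbiterhost:/bricks/arbiter/volname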
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > > Since arbiter bricks need not be of same size as the data bricks, if > you > > > > can configure three more arbiter bricks > > > > based on the guidelines in the doc [1], you can do it live and
2017 Aug 28
2
self-heal not working
Thank you for the command. I ran it on all my nodes and now, finally, the self-heal daemon does not report any files to be healed. Hopefully this scenario will be handled properly in newer versions of GlusterFS. > -------- Original Message -------- > Subject: Re: [Gluster-users] self-heal not working > Local Time: August 28, 2017 10:41 AM > UTC Time: August 28, 2017 8:41 AM >