search for: sherohman

Displaying 18 results from an estimated 18 matches for "sherohman".

2018 May 07
0
arbiter node on client?
...ple arbiters over the same data would be. In my case, I have three subvolumes (three replica pairs), which means I need three arbiters and those could be spread across multiple nodes, of course, but I don't think saying "I want 12 arbiters instead of 3!" would be supported. -- Dave Sherohman
2018 May 06
3
arbiter node on client?
Is it possible to add an arbiter node on the client? Let's assume a gluster storage made with 2 storage servers. This is prone to split-brains. An arbiter node can be added, but can I put the arbiter on one of the clients? Can I use multiple arbiters for the same volume? For example, one arbiter on each client.
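(For reference, a rough sketch of how an arbiter is typically added to an existing replica 2 volume, assuming the client machine runs glusterd and has been probed into the trusted pool; "client1", "myvol" and the brick path are placeholders:)

$ gluster peer probe client1
# convert the replica 2 volume to replica 3 with one arbiter brick
$ gluster volume add-brick myvol replica 3 arbiter 1 client1:/bricks/myvol-arb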
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Hi, Have you checked for any file system errors on the brick mount point? I was once facing weird I/O errors and xfs_repair fixed the issue. What about the heal? Does it report any pending heals? On Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote: > Well, it looks like I've stumped the list, so I did a bit of additional > digging myself: > > azathoth replicates with yog-sothoth, so I compared their brick > directories. `ls -R /var/local/brick0/data | md5sum` gives the same >...
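(A sketch of the pending-heal check being asked about, using the volume name that appears later in this thread; exact output will vary:)

$ gluster volume heal palantir info
$ gluster volume heal palantir statistics heal-count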
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...on yog-sothoth, wipe everything from /var/local/brick0, and then re-add it to the cluster as if I were replacing a physically failed disk? Seems like that should work in principle, but it feels dangerous to wipe the partition and rebuild, regardless. On Tue, Feb 13, 2018 at 07:33:44AM -0600, Dave Sherohman wrote: > I'm using gluster for a virt-store with 3x2 distributed/replicated > servers for 16 qemu/kvm/libvirt virtual machines using image files > stored in gluster and accessed via libgfapi. Eight of these disk images > are standalone, while the other eight are qcow2 images which...
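(A sketch of the "wipe the brick and rebuild it as if the disk had failed" approach using reset-brick; note that reset-brick was only introduced in GlusterFS 3.9, so it would not be available on the 3.8.8 installation discussed in this thread. Host and brick path are taken from the thread:)

$ gluster volume reset-brick palantir yog-sothoth:/var/local/brick0/data start
# wipe and recreate the brick filesystem here, then bring the brick back:
$ gluster volume reset-brick palantir yog-sothoth:/var/local/brick0/data yog-sothoth:/var/local/brick0/data commit force
# optionally trigger a full heal afterwards
$ gluster volume heal palantir full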
2018 Feb 27
2
Quorum in distributed-replicate volume
...h could be allocated for arbiter bricks if it would be significantly simpler and safer than repurposing the existing bricks (and I'm getting the impression that it probably would be). Does it particularly matter whether the arbiters are all on the same node or on three separate nodes? -- Dave Sherohman
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
.../A       N/A        Y       7588

Task Status of Volume palantir
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : c38e11fe-fe1b-464d-b9f5-1398441cc229
Status               : completed

-- Dave Sherohman
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > > If you want to use the first two bricks as arbiter, then you need to be > > aware of the following things: > > - Your distribution count will be decreased to 2. >...
2018 Feb 26
2
Quorum in distributed-replicate volume
...the reasoning for it? I would expect that, if the cluster splits into brick 1 by itself and bricks 2-3-4-5-6 still together, then brick 1 will recognize that it doesn't have volume-wide quorum and reject writes, thus allowing brick 2 to remain authoritative and able to accept writes. -- Dave Sherohman
2018 Feb 27
2
Quorum in distributed-replicate volume
...node(s), do a peer probe to add them to the cluster, and then gluster volume create palantir replica 3 arbiter 1 [saruman brick] [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu brick] [mordiggian brick] [arbiter 3] Or is there more to it than that? -- Dave Sherohman
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave, On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote: > I've configured 6 bricks as distributed-replicated with replica 2, > expecting that all active bricks would be usable so long as a quorum of > at least 4 live bricks is maintained. > The client quorum is configured per replica sub volume and n...
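(The client-quorum option being described can be inspected and changed per volume; a sketch, assuming the volume name used elsewhere in this thread:)

$ gluster volume get palantir cluster.quorum-type
$ gluster volume set palantir cluster.quorum-type auto
# server-side quorum is a separate, cluster-wide option
$ gluster volume set palantir cluster.server-quorum-type server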
2018 Feb 27
2
Quorum in distributed-replicate volume
...Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/gandalf-gluster  885G   55G  786G   7% /var/local/brick0

and the other four have

$ df -h /var/local/brick0
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        11T  254G   11T   3% /var/local/brick0

-- Dave Sherohman
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > > I will try to explain how you can end up in split-brain even with cluster > > wide quorum: > > Yep, the explanation made sense. I hadn't considered the possibil...
2018 Feb 16
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...sck reports no errors. > What about the heal? Does it report any pending heals? There are now. It looks like taking the brick offline to fsck it was enough to trigger gluster to recheck everything. I'll check after it finishes to see whether this ultimately resolves the issue. -- Dave Sherohman
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > > Since arbiter bricks need not be of same size as the data bricks, if > you > > > > can configure three more arbiter bricks > > > > based on...
2018 Feb 26
2
Quorum in distributed-replicate volume
...add those two bricks as arbiters (And did I miss any additional steps?) Unfortunately, I've been running a few months already with the current configuration and there are several virtual machines running off the existing volume, so I'll need to reconfigure it online if possible. -- Dave Sherohman
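(A rough sketch of the online reconfiguration being discussed: one arbiter brick per replica pair, listed in the same order as the existing replica sets. The arbiter host name and brick paths are hypothetical:)

$ gluster volume add-brick palantir replica 3 arbiter 1 arbhost:/bricks/palantir-arb1 arbhost:/bricks/palantir-arb2 arbhost:/bricks/palantir-arb3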
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > > "In a replica 2 volume... If we set the client-quorum option to > > > auto, then the first brick must always be up, irrespective of the > > > status...
2018 May 29
1
glustefs as vmware datastore in production
Sometimes the OS disk hung and was re-mounted read-only in the VM guest (CentOS 6) when storage was busy. After installing the VMware plugin, I increased the block response timeout to 30 sec, but OS workload response time was still not good. I guess my system is composed of 5400 rpm disks with RAID 6, and overall storage performance is not good for multiple OS images. Best regards. After tha On Tue, May 29, 2018 at 1:45 PM, João
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi, I have 4 nodes, so a quorum would be 3 of 4. The question is, I suppose, why does the documentation give this command as an example without qualifying it? So am I running the wrong command? I want a "raid10". On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi, > > With replica 2 volumes one can easily end up in split-brains if there are
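(For context, the "raid10"-style layout on 4 nodes would be a 2 x 2 distributed-replicate volume, roughly as below; node names and brick paths are placeholders, and as the reply notes, plain replica 2 is prone to split-brain without an arbiter or a third replica:)

$ gluster volume create gv0 replica 2 node1:/data/brick/gv0 node2:/data/brick/gv0 node3:/data/brick/gv0 node4:/data/brick/gv0
$ gluster volume start gv0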