Displaying 20 results from an estimated 800 matches similar to: "Failover problems with gluster 3.8.8-1 (latest Debian stable)"
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional
digging myself:
azathoth replicates with yog-sothoth, so I compared their brick
directories. `ls -R /var/local/brick0/data | md5sum` gives the same
result on both servers, so the filenames are identical in both bricks.
However, `du -s /var/local/brick0/data` shows that azathoth has about 3G
more data (445G vs 442G) than
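A rough sketch of how that comparison can be taken further, per file rather than per directory (the brick path is the one from the message; the output file names are just placeholders):
  # On each server, checksum every file under the brick, then diff the lists
  du -s /var/local/brick0/data
  cd /var/local/brick0/data
  find . -type f -print0 | sort -z | xargs -0 md5sum > /tmp/brick.md5
  # copy one list to the other host, then:
  diff /tmp/brick.md5 /tmp/brick-other.md5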
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Hi,
Have you checked for any file system errors on the brick mount point?
I once ran into weird I/O errors and xfs_repair fixed the issue.
What about the heal? Does it report any pending heals?
On Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote:
> Well, it looks like I've stumped the list, so I did a bit of additional
> digging myself:
>
>
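Checking both of those suggestions looks roughly like this (volume name, mount point and device are placeholders; xfs_repair must be run with the brick filesystem unmounted):
  umount /path/to/brick-mountpoint
  xfs_repair -n /dev/<brick-device>    # -n: check only, report problems
  xfs_repair /dev/<brick-device>       # actual repair, if the check found issues
  mount /path/to/brick-mountpoint
  gluster volume heal <VOLNAME> info               # pending heals per brick
  gluster volume heal <VOLNAME> info split-brain   # entries in split-brain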
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> I will try to explain how you can end up in split-brain even with cluster
> wide quorum:
Yep, the explanation made sense. I hadn't considered the possibility of
alternating outages. Thanks!
> > > It would be great if you can consider configuring an arbiter or
> > > replica 3 volume.
> >
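The cluster-wide quorum mentioned at the top of this exchange is glusterd's server-side quorum; roughly, these are the options involved (volume name is a placeholder and the 51% ratio is only an example):
  gluster volume set <VOLNAME> cluster.server-quorum-type server
  gluster volume set all cluster.server-quorum-ratio 51%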
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > Since arbiter bricks need not be of the same size as the data bricks, if you
> > > can configure three more arbiter bricks
> > > based on the guidelines in the doc [1], you can do it live and the
> > > distribution count will also remain unchanged.
> >
> > I can probably find
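Assuming a 3 x 2 distributed-replicate volume, adding the three arbiter bricks described above might look like this (hosts and brick paths are placeholders):
  gluster volume add-brick <VOLNAME> replica 3 arbiter 1 \
      host1:/bricks/arb1 host2:/bricks/arb2 host3:/bricks/arb3
  gluster volume info <VOLNAME>    # should now report 3 x (2 + 1) = 9 bricks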
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
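The client-quorum option being quoted here is cluster.quorum-type; setting and checking it is a one-liner each (volume name is a placeholder):
  gluster volume set <VOLNAME> cluster.quorum-type auto
  gluster volume get <VOLNAME> cluster.quorum-type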
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this? I'm
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> If you want to use the first two bricks as arbiter, then you need to be
> aware of the following things:
> - Your distribution count will be decreased to 2.
What's the significance of this? I'm trying to find documentation on
distribution counts in gluster, but my google-fu is failing me.
> - Your data on
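For what it's worth, the distribution count shows up as the first factor in the "Number of Bricks" line of gluster volume info; a rough illustration (the figures are made up):
  gluster volume info <VOLNAME> | grep 'Number of Bricks'
  # Number of Bricks: 3 x 2 = 6          <- distribute count 3, replica 2
  # Number of Bricks: 2 x (2 + 1) = 6    <- distribute count 2, replica 2 + arbiter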
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> > I will try to explain how you can end up in split-brain even with cluster
> > wide quorum:
>
> Yep, the explanation made sense. I hadn't considered the possibility of
> alternating outages. Thanks!
>
>
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > > Since arbiter bricks need not be of the same size as the data bricks, if you
> > > > can configure three more arbiter bricks
> > > > based on the guidelines in the doc [1], you can do it live and
2018 Feb 16
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
On Thu, Feb 15, 2018 at 09:34:02PM +0200, Alex K wrote:
> Have you checked for any file system errors on the brick mount point?
I hadn't. fsck reports no errors.
> What about the heal? Does it report any pending heals?
There are now. It looks like taking the brick offline to fsck it was
enough to trigger gluster to recheck everything. I'll check after it
finishes to see whether
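One way to watch that recheck drain down is the heal counters (volume name is a placeholder):
  gluster volume heal <VOLNAME> statistics heal-count   # pending entries per brick
  gluster volume heal <VOLNAME> info                    # list the entries themselves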
2011 Oct 17
1
brick out of space, unmounted brick
Hello Gluster users,
Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change these behaviors. My experiences are with glusterfs 3.2.4 on CentOS 6 64-bit.
Suppose I have a Gluster volume made up of four 1 MB bricks, like this
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of
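For what it's worth, the free-space decision lives in the DHT translator (xlators/cluster/dht in the source tree), and the knob usually pointed at is cluster.min-free-disk; a sketch, using the volume name from the post:
  # Stop placing new files on bricks with less than 10% free space
  gluster volume set test cluster.min-free-disk 10%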
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi!
I am running a replica 3 volume. On server2 I wanted to move the brick
to a new disk.
I removed the brick from the volume:
gluster volume remove-brick VOLUME replica 2
server2:/gluster/VOLUME/brick0/brick force
I unmounted the old brick and mounted the new disk to the same location.
I added the empty new brick to the volume:
gluster volume add-brick VOLUME replica 3
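For this kind of disk swap, a single replace-brick is often suggested instead of the remove/add pair; a sketch reusing the brick path from the post (the new path is hypothetical):
  gluster volume replace-brick VOLUME \
      server2:/gluster/VOLUME/brick0/brick server2:/gluster/VOLUME/new-brick0/brick \
      commit force
  gluster volume heal VOLUME info    # watch the new brick being populated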
2009 Feb 23
1
Interleave or not
Let's say you had 4 servers and you wanted to set up replicate and
distribute. Which method would be better:
server sdb1
xen0 brick0
xen1 mirror0
xen2 brick1
xen3 mirror1
replicate block0 - brick0 mirror0
replicate block1 - brick1 mirror1
distribute unify - block0 block1
or
server sdb1 sdb2
xen0 brick0 mirror3
xen1 brick1 mirror0
xen2 brick2 mirror1
xen3 brick3 mirror2
replicate block0 -
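This post predates the gluster CLI (it uses the old unify/AFR vol-file style); with the current CLI the two layouts would look roughly like this, volume and brick names being placeholders:
  # Option 1: one brick per server, xen0+xen1 and xen2+xen3 as replica pairs
  gluster volume create vol0 replica 2 \
      xen0:/export/sdb1/brick xen1:/export/sdb1/brick \
      xen2:/export/sdb1/brick xen3:/export/sdb1/brick
  # Option 2: two bricks per server, replica pairs chained around the ring
  gluster volume create vol0 replica 2 \
      xen0:/export/sdb1/brick xen1:/export/sdb2/brick \
      xen1:/export/sdb1/brick xen2:/export/sdb2/brick \
      xen2:/export/sdb1/brick xen3:/export/sdb2/brick \
      xen3:/export/sdb1/brick xen0:/export/sdb2/brick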
2018 Feb 25
0
Re-adding an existing brick to a volume
.glusterfs and the xattrs are already in that folder, so it would not connect it as a brick.
I don't think there is an option to "reconnect a brick back".
What I did many times: delete .glusterfs and reset the xattrs on the folder,
connect the brick, and then update those attributes with stat.
Command examples here:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
Vlad
On Sun, Feb 25, 2018
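The reset-and-reuse step described above usually amounts to something like this sketch (the brick path is the one from earlier in the thread; run it only on a brick that is not currently part of the volume):
  setfattr -x trusted.glusterfs.volume-id /gluster/VOLUME/brick0/brick
  setfattr -x trusted.gfid /gluster/VOLUME/brick0/brick
  rm -rf /gluster/VOLUME/brick0/brick/.glusterfs
  getfattr -d -m . -e hex /gluster/VOLUME/brick0/brick   # confirm the xattrs are gone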
2017 Aug 09
1
Gluster performance with VM's
Hi, community
Please help me with my problem.
I have 2 Gluster nodes, with 2 bricks on each.
Configuration:
Node1 brick1 is replicated to Node0 brick0
Node0 brick1 is replicated to Node1 brick0
Volume Name: gm0
Type: Distributed-Replicate
Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:
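For VM-image workloads the tuning most often suggested is the virt option group; a sketch using the volume name shown above (this assumes the VM use case, and the group file may not exist on older versions):
  gluster volume set gm0 group virt
  gluster volume get gm0 all | grep -E 'shard|eager-lock|remote-dio'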
2018 Feb 25
1
Re-adding an existing brick to a volume
Let me see if I understand this.
Remove attrs from the brick and delete the .glusterfs folder. Data stays
in place. Add the brick to the volume.
Since most of the data is the same as on the actual volume it does not
need to be synced, and the heal operation finishes much faster.
Do I have this right?
Kind regards,
Mitja
On 25/02/2018 17:02, Vlad Kopylov wrote:
> .gluster and attr already in
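For reference, triggering and watching the heal after the brick is added back might look like this (volume name is a placeholder):
  gluster volume heal VOLUME full    # walk the volume, copy missing entries to the new brick
  gluster volume heal VOLUME info    # remaining entries per brick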
2008 Oct 15
1
Glusterfs performance with large directories
We at Wiseguys are looking into GlusterFS to run our Internet Archive.
The archive stores webpages collected by our spiders.
The test setup consists of three data machines, each exporting a volume
of about 3.7TB, and one nameserver machine.
File layout is such that each host has its own directory; for example, the
GlusterFS website would be located in:
2017 Jul 17
1
Gluster set brick online and start sync.
Hello everybody,
Please help me fix a problem.
I have a distributed-replicated volume between two servers. On each
server I have 2 RAID-10 arrays, which are replicated between the servers.
Brick gl1:/mnt/brick1/gm0    49153    0    Y    13910
Brick gl0:/mnt/brick0/gm0    N/A      N/A  N    N/A
Brick gl0:/mnt/brick1/gm0    N/A
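A rough sketch of bringing the offline bricks back and letting self-heal resync them (volume name taken from the brick paths above):
  gluster volume start gm0 force    # starts only the brick processes that are down
  gluster volume status gm0         # confirm Online = Y for all bricks
  gluster volume heal gm0 info      # pending heal entries while the resync runs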
2012 Nov 27
1
Performance after failover
Hey, all.
I'm currently trying out GlusterFS 3.3.
I've got two servers and four clients, all on separate boxes.
I've got a Distributed-Replicated volume with 4 bricks, two from each
server,
and I'm using the FUSE client.
I was trying out failover, currently testing for reads.
I was reading a big file, using iftop to see which server was actually
being read from.
I put up an