Displaying 20 results from an estimated 1000 matches similar to: "How to set up a 4 way gluster file system"
2018 Apr 27 · 0 replies · How to set up a 4 way gluster file system
Hi,
With replica 2 volumes one can easily end up in split-brain if there are
frequent disconnects and high I/O going on.
If you use replica 3 or arbiter volumes, the quorum mechanism guards you,
giving you both consistency and availability.
But in replica 2 volumes quorum does not make sense, since it needs both
nodes up to guarantee consistency, which costs availability.
If
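For illustration, an arbiter volume of the kind mentioned above could be created along these lines (a minimal sketch; the hostnames and brick paths are assumptions reused from other messages in these threads):

# replica 3 with arbiter: the third brick stores only metadata and acts as a tie-breaker
gluster volume create gv0 replica 3 arbiter 1 \
    glusterp1:/bricks/brick1/gv0 \
    glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/brick1/gv0
gluster volume start gv0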
2018 Apr 27 · 2 replies · How to set up a 4 way gluster file system
Hi,
I have 4 nodes, so a quorum would be 3 of 4. The question is, I suppose, why
does the documentation give this command as an example without qualifying it?
So am I running the wrong command? I want a "raid10".
On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hi,
>
> With replica 2 volumes one can easily end up in split-brains if there are
2018 Apr 15 · 1 reply · Unexpected warning message when attempting to build a 4 node gluster setup.
Hi,
I am on CentOS 7.4 with Gluster 4.
I am trying to create a distributed and replicated volume on the 4 nodes.
I am getting this unexpected qualification,
[root at glustep1 brick1]# gluster volume create gv0 replica 2
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0
glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
8><----
Replica 2 volumes are prone to split-brain. Use
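For what it's worth, after acknowledging that prompt the resulting layout can be double-checked with gluster volume info (a sketch; the expected output noted in the comments is a general expectation, not taken from this thread):

# verify the layout after creation
gluster volume info gv0
# a 4-brick "replica 2" volume should report Type: Distributed-Replicate
# and Number of Bricks: 2 x 2 = 4, i.e. the RAID10-like 2x2 layout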
2018 Mar 28 · 3 replies · Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
> Hi,
>
> Thanks, yes, not very familiar with Centos and hence googling took a while
> to find a 4.0 version at,
>
> https://wiki.centos.org/SpecialInterestGroup/Storage
The announcement for Gluster 4.0 in CentOS should contain all the
details that you need as well:
2018 May 21 · 2 replies · split brain? but where?
Hi,
I seem to have a split brain issue, but I cannot figure out where this is
and what it is. Can someone help me please, I can't find what to fix here.
==========
root at salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root
2018 May 10 · 2 replies · broken gluster config
Whatever repair happened has now finished, but I still have this.
I can't find anything so far telling me how to fix it. Looking at
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/
I can't determine what file or directory (gv0?) is actually the issue.
[root at glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick
2018 May 22 · 2 replies · split brain? but where?
Hi,
Which version of gluster are you using?
You can find which file that is using the following command:
find <brickpath> -samefile <brickpath>/.glusterfs/<first two characters of
gfid>/<next 2 characters of gfid>/<full gfid>
Please provide the getfattr output of the file which is in split-brain.
The steps to recover from split-brain can be found here,
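For reference, the getfattr invocation usually requested here, and one of the CLI resolution policies, look roughly like this (a sketch; the brick path and file names are assumptions based on other messages in these threads):

# run on each brick that holds a copy of the affected file
getfattr -d -m . -e hex /bricks/brick1/gv0/path/to/file
# once the good copy is identified, a heal policy can resolve the split-brain, e.g.
gluster volume heal gv0 split-brain latest-mtime /path/to/file-as-seen-from-the-mount
# other policies: bigger-file, or source-brick <hostname>:<brickpath>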
2018 May 10 · 2 replies · broken gluster config
[root at glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
Brick
2018 May 21 · 0 replies · split brain? but where?
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is?
https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
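The method from that page boils down to something like the following (a minimal sketch; the mount point is an assumption):

# mount the volume with aux-gfid-mount so files can be addressed by gfid
mount -t glusterfs -o aux-gfid-mount glusterp1:/gv0 /mnt/gv0
# then resolve the gfid to a real path via the pathinfo xattr
getfattr -n trusted.glusterfs.pathinfo -e text \
    /mnt/gv0/.gfid/eafb8799-4e7a-4264-9213-26997c5a4693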
On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing at gmail.com> wrote:
>Hi,
>
>I seem to have a split brain issue, but I cannot figure out where this is
>and what it is, can someone help me pls, I cant find what to fix here.
>
2018 May 10 · 0 replies · broken gluster config
Trying to read this, I can't understand what is wrong?
[root at glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
2018 May 22 · 1 reply · split brain? but where?
I tried looking for a file of the same size and the gfid doesn't show up,
8><---
[root at glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root at glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root 64 May 22 13:01 .
drwx------. 4 root root 24 May 8 14:27 ..
-rw-------. 1 root root 3294887936 May  4 11:07 eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root
2018 May 22 · 0 replies · split brain? but where?
I tried this already.
8><---
[root at glusterp2 fb]# find /bricks/brick1/gv0 -samefile
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root at glusterp2 fb]#
8><---
gluster 4
Centos 7.4
8><---
df -h
[root at glusterp2 fb]# df -h
Filesystem
2018 May 10 · 0 replies · broken gluster config
Also, I have this "split brain"?
[root at glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
2018 May 10 · 2 replies · broken gluster config
Hi,
I have 3 CentOS 7.4 machines set up as a 3-way raid 1.
Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount on
boot and as a result it's empty.
Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
/bricks/brick1/gv0 as expected.
Is there a way to get glusterp1's gv0 to sync off the other 2? There must
be, but
I have looked at the gluster docs and I
2018 May 10 · 0 replies · broken gluster config
Show us output from: gluster v status
It should be easy to fix. Stop gluster daemon on that node, mount the
brick, start gluster daemon again.
Check: gluster v status
Does it show the brick up?
HTH,
Diego
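On CentOS 7 that sequence would look roughly like this (a minimal sketch; the brick mount point and volume name are assumptions reused from other messages in these threads):

# on the node with the empty brick (glusterp1 here)
systemctl stop glusterd
mount /bricks/brick1        # assumes an fstab entry for the brick filesystem
systemctl start glusterd
gluster v status            # the brick should now show Online: Y
# self-heal repopulates the brick; it can be kicked off explicitly with:
gluster volume heal gv0 full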
On Wed, May 9, 2018, 20:01 Thing <thing.thing at gmail.com> wrote:
> Hi,
>
> I have 3 Centos7.4 machines setup as a 3 way raid 1.
>
> Due to an oopsie on my part for
2018 May 08 · 1 reply · mount failing client to gluster cluster.
Hi,
On a debian 9 client,
========
root at kvm01:/var/lib/libvirt# dpkg -l glusterfs-client
8><---
ii  glusterfs-client  3.8.8-1  amd64  clustered file-system (client package)
root at kvm01:/var/lib/libvirt#
=======
I am trying to do a mount to a CentOS 7 gluster setup,
=======
[root at glustep1 libvirt]# rpm -q glusterfs
glusterfs-4.0.2-1.el7.x86_64
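For context, the client-side mount being attempted would be something along these lines (a sketch; the volume name and mount point are assumptions, and note the 3.8.8 client vs 4.0.2 server version gap shown above):

# on the debian 9 client
mount -t glusterfs glusterp1:/gv0 /mnt/gv0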
2017 Sep 07 · 0 replies · GlusterFS as virtual machine storage
Hi Neil, docs mention two live nodes of replica 3 blaming each other and
refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> *shrug* I don't use arbiter for vm work loads just straight replica 3.
2017 Sep 07 · 3 replies · GlusterFS as virtual machine storage
*shrug* I don't use arbiter for vm work loads just straight replica 3.
There are some gotchas with using an arbiter for VM workloads. If
quorum-type is auto and a brick that is not the arbiter drops out, then if
the up brick is dirty as far as the arbiter is concerned (i.e. the only good
copy is on the down brick), you will get ENOTCONN and your VMs will halt on
I/O.
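As a point of reference, the client quorum behaviour described above is controlled by a volume option that can be inspected or changed roughly like this (a sketch; the volume name is an assumption):

gluster volume get gv0 cluster.quorum-type
gluster volume set gv0 cluster.quorum-type auto    # or "fixed", paired with cluster.quorum-count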
On 6 September 2017 at 16:06,
2018 Feb 26 · 2 replies · Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2,
expecting that all active bricks would be usable so long as a quorum of
at least 4 live bricks is maintained.
However, I have just found
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
which states that "In a replica 2 volume... If we set the client-quorum
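For reference, a 6-brick replica 2 distributed-replicate layout of the kind described above is created with something like this (a sketch; hostnames and brick paths are assumptions):

gluster volume create vol0 replica 2 \
    server1:/bricks/b1/vol0 server2:/bricks/b1/vol0 \
    server3:/bricks/b1/vol0 server4:/bricks/b1/vol0 \
    server5:/bricks/b1/vol0 server6:/bricks/b1/vol0
# bricks are paired in the order given: (server1,server2), (server3,server4), (server5,server6)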
2018 Feb 06 · 5 replies · strange hostname issue on volume create command with famous Peer in Cluster state error message
Hello,
I installed glusterfs version 3.11.3 on 3 nodes, all Ubuntu 16.04 machines. All machines have the same /etc/hosts.
node1 hostname: pri.ostechnix.lan
node2 hostname: sec.ostechnix.lan
node3 hostname: third.ostechnix.lan
51.15.77.14 pri.ostechnix.lan pri
51.15.90.60 sec.ostechnix.lan sec
163.172.151.120 third.ostechnix.lan third
volume create command is
root at
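For reference, peer membership by hostname is usually verified before running the volume create with commands like these (a sketch):

gluster peer status    # each node should list the others as "Peer in Cluster (Connected)"
gluster pool list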