Displaying 15 results from an estimated 15 matches for "20ways".
2018 Apr 27
3
How to set up a 4 way gluster file system
...eate gv0 replica 2 glusterp1:/bricks/brick1/gv0
glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
glusterp4:/bricks/brick1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
avoid this. See:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) n
Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter
<COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>]
[transport <tcp|rdma|tcp,rdma>...
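For reference, one way to satisfy this warning on four nodes is an arbitrated
distributed-replicate layout: six bricks forming two (2 data + 1 arbiter)
subvolumes. A minimal sketch, assuming a second brick directory
(/bricks/brick2) is available on each host for the arbiters; the paths are
illustrative, not from the thread:

    gluster volume create gv0 replica 3 arbiter 1 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick2/gv0 glusterp3:/bricks/brick1/gv0 \
        glusterp4:/bricks/brick1/gv0 glusterp1:/bricks/brick2/gv0

The arbiter bricks (every third one listed) hold only metadata, giving each
pair a tie-breaker without the cost of a third full copy.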
2018 Apr 15
1
unexpected warning message when attempting to build a 4 node gluster setup.
...2
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0
glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
8><----
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
avoid this. See:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) n
8><-----
Looking at both the Gluster docs and the Red Hat docs, this warning seems unexpected.
regards
Steven
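The warning itself is expected here: replica 2 across four bricks builds two
2-way mirrors (a 2 x 2 distributed-replicate), and each mirror can split-brain
on its own. If a true 4-way mirror was the intent, a plain replica 4 volume
sidesteps the replica-2 warning at the cost of four full copies. A sketch
using the brick paths from the post:

    gluster volume create gv0 replica 4 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0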
2018 Apr 27
0
How to set up a 4 way gluster file system
...p1:/bricks/brick1/gv0
> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
> glusterp4:/bricks/brick1/gv0
> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
> avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
> Do you still want to continue?
> (y/n) n
>
> Usage:
> volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter
> <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>]
> [t...
2018 Feb 26
2
Quorum in distributed-replicate volume
I've configured 6 bricks as distributed-replicated with replica 2,
expecting that all active bricks would be usable so long as a quorum of
at least 4 live bricks is maintained.
However, I have just found
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
which states that "In a replica 2 volume... If we set the client-quorum
option to auto, then the first brick must always be up, irrespective of
the status of the second brick. If only the second brick is up, the
subvolume becomes read-only."
Does this apply only...
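The behaviour quoted there is governed by the per-volume client-quorum
option. A minimal sketch of inspecting and setting it (the volume name myvol
is hypothetical):

    gluster volume get myvol cluster.quorum-type
    gluster volume set myvol cluster.quorum-type auto

With auto, quorum is more than half of the bricks in each replica set, or
exactly half provided that half includes the first brick, which is why the
first brick of a replica 2 pair must stay up.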
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, docs mention two live nodes of replica 3 blaming each other and
refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> *shrug* I don't use arbiter for VM workloads, just straight replica 3.
> There are some gotchas with using an arbiter for VM workloads. If
> quorum...
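Files caught in the state the docs describe show up in the heal listing. A
sketch of the standard check (volume name hypothetical):

    gluster volume heal myvol info split-brain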
2018 Feb 06
1
strange hostname issue on volume create command with famous Peer in Cluster state error message
...myvol1 replica 2 transport tcp pri.ostechnix.lan:/gluster/brick1/mpoint1 sec.ostechnix.lan:/gluster/brick1/mpoint1 force
> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
> Do you still want to continue?
> (y/n) y
> volume create: myvol1: failed: Host pri.ostechnix.lan is not in 'Peer in Cluster' state...
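That failure means glusterd on pri.ostechnix.lan does not see the other host
as a fully joined peer. The usual pre-flight check, run from one node with
the hostnames from the post, is:

    gluster peer probe sec.ostechnix.lan
    gluster peer status

volume create only proceeds once peer status reports the other side as
'Peer in Cluster (Connected)'.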
2018 Apr 27
2
How to set up a 4 way gluster file system
...0
>> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
>> glusterp4:/bricks/brick1/gv0
>> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
>> avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
>> Do you still want to continue?
>> (y/n) n
>>
>> Usage:
>> volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT> [arbiter
>> <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>]...
2017 Sep 07
3
GlusterFS as virtual machine storage
*shrug* I don't use arbiter for VM workloads, just straight replica 3.
There are some gotchas with using an arbiter for VM workloads. If
quorum-type is auto and a brick that is not the arbiter drops out, then
if the up brick is dirty as far as the arbiter is concerned (i.e. the
only good copy is on the down brick), you will get ENOTCONN and your
VMs will halt on IO.
On 6 September 2017 at 16:06,
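For VM workloads, Gluster ships a predefined 'virt' option group that applies
the usual virtualization tunables in one step (quorum, caching and related
settings; the exact option list varies by Gluster version). A sketch, with a
hypothetical volume name:

    gluster volume set myvol group virt

On replica 3 or arbiter volumes this enables client quorum, trading
availability (the IO halt described above) for protection against writing to
a lone, possibly stale copy.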
2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
...reate myvol1 replica 2 transport tcp pri.ostechnix.lan:/gluster/brick1/mpoint1 sec.ostechnix.lan:/gluster/brick1/mpoint1 force
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: myvol1: failed: Host pri.ostechnix.lan is not in 'Peer in Cluster' state
node 1 glusterd.log is here
root@pri:/var/log/glusterfs# cat glusterd.log
[2018-02-06 13:28:37.638373] W [glusterfsd.c:1331:cleanup...
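When a create fails this way, comparing peer state on both nodes alongside
glusterd.log usually narrows it down. A sketch of the usual triage, using the
default log path:

    gluster pool list
    grep -i peer /var/log/glusterfs/glusterd.log | tail -20

A hostname that resolves differently on the two nodes (stale /etc/hosts
entries, for instance) is a common reason one side never reaches
'Peer in Cluster'.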
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
...; transport tcp pri.ostechnix.lan:/gluster/brick1/mpoint1
> sec.ostechnix.lan:/gluster/brick1/mpoint1 force
> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
> avoid this. See: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
> Do you still want to continue?
> (y/n) y
> volume create: myvol1: failed: Host pri.ostechnix.lan is not in 'Peer in
> Cluster' state
>
> node 1 glusterd.log is here
>
> root@pri:/var/log/glusterfs# cat glusterd.log
> [2018-02-06 13:...
2017 Sep 07
2
GlusterFS as virtual machine storage
...+ arbiter.
On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi Neil, docs mention two live nodes of replica 3 blaming each other and
> refusing to do IO.
>
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
>
>
>
> On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
>
>> *shrug* I don't use arbiter for VM workloads, just straight replica 3.
>> There are some gotchas with using an arbite...
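A replica 2 volume can also be converted in place to the replica 3 arbiter
layout discussed here by attaching one arbiter brick per replica pair. A
minimal sketch for a single-pair volume, with a hypothetical host and path:

    gluster volume add-brick myvol replica 3 arbiter 1 arb1:/bricks/arb/myvol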
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled.
Ludwig
On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote:
> Do you have sharding enabled? If yes, don't do it.
> If no, I'll let someone who knows better answer you :)
>
> On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> > All,
> >
> > We currently have a Gluster installation which is made of 2
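The sharding question matters because, around the time of this thread,
rebalancing a sharded volume after add-brick was known to corrupt VM images,
hence the "don't do it" above. A sketch of checking first and then expanding
a replica 2 volume by one more pair (names hypothetical):

    gluster volume get myvol features.shard
    gluster volume add-brick myvol new1:/bricks/brick1/myvol new2:/bricks/brick1/myvol

Bricks must be added in multiples of the replica count, so a replica 2 volume
grows two bricks at a time.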
2018 Feb 26
0
Quorum in distributed-replicate volume
...plicated volume with replica 2, the data
will have 2 copies,
and taking quorum over the total number of bricks, as in your scenario,
will lead to split-brains.
>
> However, I have just found
>
> http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
>
> Which states that "In a replica 2 volume... If we set the client-quorum
> option to auto, then the first brick must always be up, irrespective of
> the status of the second brick. If only the second brick is up, the
> subvolume becomes read-only."...
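Server-side quorum is the complementary knob: it is evaluated over the peers
in the trusted pool rather than over bricks, and glusterd kills local bricks
when the pool loses majority. A sketch (volume name hypothetical; the ratio
is set cluster-wide on 'all'):

    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%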
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
...> transport tcp pri.ostechnix.lan:/gluster/brick1/mpoint1
> sec.ostechnix.lan:/gluster/brick1/mpoint1 force
> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
> avoid this. See:
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
> Do you still want to continue?
> (y/n) y
> volume create: myvol1: failed: Host pri.ostechnix.lan is not in 'Peer in
> Cluster' state
>
> node 1 glusterd.log is here
>
> root@pri:/var/log/glusterfs# cat glusterd.log
> [2018-02-06 13:...
2017 Sep 08
0
GlusterFS as virtual machine storage
...17 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com>
> wrote:
>
>> Hi Neil, docs mention two live nodes of replica 3 blaming each other and
>> refusing to do IO.
>>
>> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
>>
>>
>>
>> On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
>>
>>> *shrug* I don't use arbiter for VM workloads, just straight replica 3.
>>> There are some gotcha...