Displaying 16 results from an estimated 16 matches for "glusterp3".
2018 May 21 | 2 replies | split brain? but where?
...                             9.4G  179M  9.2G   2% /var/lib
tmpfs                            771M  8.0K  771M   1% /run/user/42
glusterp1:gv0                    932G  273G  659G  30% /isos
glusterp1:gv0/glusterp1/images   932G  273G  659G  30% /var/lib/libvirt/images
glusterp3.graywitch.co.nz:
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   20G  3.5G   17G  18% /
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs...
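The thread's question ("split brain? but where?") is usually answered from gluster's heal reporting rather than from df; a minimal sketch, assuming the volume is named gv0 as in the mounts above, to be run on any node of the trusted pool:

```shell
# List the files/gfids the self-heal daemon has flagged as split-brain,
# reported per brick:
gluster volume heal gv0 info split-brain

# The broader heal queue, including entries that only need an ordinary heal:
gluster volume heal gv0 info
```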
2018 May 21 | 0 replies | split brain? but where?
...>/var/lib
> tmpfs                          771M  8.0K  771M   1% /run/user/42
> glusterp1:gv0                  932G  273G  659G  30% /isos
> glusterp1:gv0/glusterp1/images 932G  273G  659G  30% /var/lib/libvirt/images
> glusterp3.graywitch.co.nz:
> Filesystem               Size  Used Avail Use% Mounted on
> /dev/mapper/centos-root   20G  3.5G   17G  18% /
> devtmpfs                 3.8G     0  3.8G   0...
2018 May 22 | 2 replies | split brain? but where?
...771M  8.0K  771M   1% /run/user/42
> > glusterp1:gv0                  932G  273G  659G  30% /isos
> > glusterp1:gv0/glusterp1/images 932G  273G  659G  30% /var/lib/libvirt/images
> > glusterp3.graywitch.co.nz:
> > Filesystem               Size  Used Avail Use% Mounted on
> > /dev/mapper/centos-root   20G  3.5G   17G  18% /
> > devtmpfs...
2018 May 22 | 0 replies | split brain? but where?
...771M  8.0K  771M   1% /run/user/42
>> > glusterp1:gv0                  932G  273G  659G  30% /isos
>> > glusterp1:gv0/glusterp1/images 932G  273G  659G  30% /var/lib/libvirt/images
>> > glusterp3.graywitch.co.nz:
>> > Filesystem               Size  Used Avail Use% Mounted on
>> > /dev/mapper/centos-root   20G  3.5G   17G  18% /
>> > devtmpfs...
2018 May 22 | 1 reply | split brain? but where?
...771M   1% /run/user/42
>>> > glusterp1:gv0                  932G  273G  659G  30% /isos
>>> > glusterp1:gv0/glusterp1/images 932G  273G  659G  30% /var/lib/libvirt/images
>>> > glusterp3.graywitch.co.nz:
>>> > Filesystem               Size  Used Avail Use% Mounted on
>>> > /dev/mapper/centos-root   20G  3.5G   17G  18% /
>>> > devtmpfs...
2018 May 10 | 2 replies | broken gluster config
...o split-brain
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
[root at glusterp1 gv0]#
On 10 May 2018 at 12:22, Thing <thing.thing at gmail.com> wrote:
> also I have this "split brain"?
>
> [root at glusterp1...
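A gfid-only split-brain entry like the one shown above can typically be resolved with gluster's built-in split-brain resolution; a hedged sketch using the volume name and gfid from this message (choose one policy, not both; picking glusterp2 as the source brick is purely illustrative):

```shell
# Keep whichever copy was modified most recently:
gluster volume heal gv0 split-brain latest-mtime \
    gfid:eafb8799-4e7a-4264-9213-26997c5a4693

# Or declare one brick's copy authoritative for this entry:
gluster volume heal gv0 split-brain source-brick \
    glusterp2:/bricks/brick1/gv0 \
    gfid:eafb8799-4e7a-4264-9213-26997c5a4693
```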
2018 May 10 | 0 replies | broken gluster config
...o
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
[root at glusterp1 gv0]# getfattr -d -m . -e hex /bricks/brick1/gv0
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/gv0
security.se...
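The getfattr output above is truncated before the interesting keys; it is the trusted.afr.* attributes that record which copies are blamed. A sketch of what to look for, run against the same brick path on each node:

```shell
# Dump all extended attributes in hex on the brick directory (or on a
# specific file underneath it):
getfattr -d -m . -e hex /bricks/brick1/gv0

# Non-zero trusted.afr.gv0-client-N counters mean this copy blames the
# brick of client N; two copies blaming each other is the split-brain.
```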
2018 May 10 | 2 replies | broken gluster config
...process                           TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0   49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0   49152     0          Y       2054
Brick glusterp3:/bricks/brick1/gv0   49152     0          Y       2110
Self-heal Daemon on localhost        N/A       N/A        Y       5219
Self-heal Daemon on glusterp2        N/A       N/A        Y       1943
Self-heal Daemon on glusterp3        N/A       N/A        Y       2067
Task Status of Volume...
2018 May 10 | 0 replies | broken gluster config
...glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5
[root at gluste...
2018 May 10 | 2 replies | broken gluster config
Hi,
I have 3 CentOS 7.4 machines set up as a 3-way RAID 1.
Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount on
boot, and as a result it's empty.
Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
/bricks/brick1/gv0 as expected.
Is there a way to get glusterp1's gv0 to sync off the other two? There must
be, but I have looked at the gluster docs and I can't find anything about
repairing/resyncing.
Where am I meant to look for such info?
thanks
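For a brick that simply failed to mount and came up empty (as described above), the usual route is to let the self-heal daemon repopulate it from the good replicas; a sketch, assuming the volume is gv0:

```shell
# Make sure the brick process on glusterp1 is running again:
gluster volume start gv0 force

# Trigger a full crawl so files missing on the empty brick are re-created
# from the other replicas:
gluster volume heal gv0 full

# Watch the pending-entry counts drain to zero:
gluster volume heal gv0 info
```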
2018 May 10 | 0 replies | broken gluster config
...hing.thing at gmail.com> wrote:
> Hi,
>
> I have 3 Centos7.4 machines setup as a 3 way raid 1.
>
> Due to an oopsie on my part for glusterp1 /bricks/brick1/gv0 didnt mount
> on boot and as a result its empty.
>
> Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
> /bricks/brick1/gv0 as expected.
>
> Is there a way to get glusterp1's gv0 to sync off the other 2? there must
> be but,
>
> I have looked at the gluster docs and I cant find anything about
> repairing resyncing?
>
> Where am I meant to look for such info?
>
>...
2018 Apr 27 | 3 replies | How to set up a 4 way gluster file system
...10 which will give me 2TB usable. So mirrored and concatenated?
The command I am running is as per the documents, but I get a warning;
how do I get this to proceed, please, as the documents do not say.
gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
glusterp4:/bricks/brick1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
avoid this. See:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
Do you still want to continue?
(y/n) n
Usage:
volu...
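One way past the warning without accepting the split-brain risk is the arbiter layout it suggests; a hedged sketch using three of the nodes (the arbiter brick, listed last, stores only metadata, so it needs little space but still breaks ties between the two data bricks):

```shell
gluster volume create gv0 replica 3 arbiter 1 \
    glusterp1:/bricks/brick1/gv0 \
    glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/brick1/gv0
```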
2018 Mar 28 | 3 replies | Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
> Hi,
>
> Thanks, yes, not very familiar with Centos and hence googling took a while
> to find a 4.0 version at,
>
> https://wiki.centos.org/SpecialInterestGroup/Storage
The announcement for Gluster 4.0 in CentOS should contain all the
details that you need as well:
2018 Apr 15 | 1 reply | unexpected warning message when attempting to build a 4 node gluster setup
Hi,
I am on CentOS 7.4 with Gluster 4.
I am trying to create a distributed and replicated volume on the 4 nodes and
am getting this unexpected qualification,
[root at glustep1 brick1]# gluster volume create gv0 replica 2
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0
glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
8><----
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
avoid this. See:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
Do you still want to continue?
(y/n...
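With exactly four bricks, replica 2 gives the 2x2 distribute-replicate layout the command above asks for. If the split-brain risk the warning describes is understood and accepted, the prompt can be answered non-interactively; a sketch, not a recommendation (--mode=script auto-confirms gluster's prompts, so treat it with care):

```shell
gluster --mode=script volume create gv0 replica 2 \
    glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
```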
2018 Apr 27 | 2 replies | How to set up a 4 way gluster file system
...concatenated?
>>
>> The command I am running is as per documents but I get a warning error,
>> how do I get this to proceed please as the documents do not say.
>>
>> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
>> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
>> glusterp4:/bricks/brick1/gv0
>> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
>> avoid this. See:
>> http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
>>...
2018 Apr 27 | 0 replies | How to set up a 4 way gluster file system
...e. So Mirrored
> and concatenated?
>
> The command I am running is as per documents but I get a warning error,
> how do I get this to proceed please as the documents do not say.
>
> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
> glusterp4:/bricks/brick1/gv0
> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
> avoid this. See:
> http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/
> Do you still want to contin...