Displaying 17 results from an estimated 17 matches for "glusterp1".
2018 May 10
2
broken gluster config
...r happened has now finished but I still have this,
I can't find anything so far telling me how to fix it. Looking at
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/
I can't determine what file or dir in gv0 is actually the issue.
[root at glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of...
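For a gfid entry like the one above, gluster's CLI can resolve the split-brain by declaring one brick's copy the winner. A minimal sketch, assuming glusterp2 holds the good copy (that choice is an assumption; the brick path and gfid are taken from the output above, but verify which copy you want to keep before running):

```shell
# List the entries that are in split-brain for the volume.
gluster volume heal gv0 info split-brain

# Resolve one gfid by declaring glusterp2's brick the source.
# Other policies exist: bigger-file, latest-mtime.
gluster volume heal gv0 split-brain source-brick \
    glusterp2:/bricks/brick1/gv0 \
    gfid:eafb8799-4e7a-4264-9213-26997c5a4693

# Re-check; the split-brain entry count should drop once healed.
gluster volume heal gv0 info split-brain
```

These commands act on a live cluster, so they can only be exercised against a running gluster deployment.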
2018 May 10
0
broken gluster config
trying to read, I can't understand what is wrong?
[root at glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Conn...
2018 May 10
0
broken gluster config
also I have this "split brain"?
[root at glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/i...
2018 May 10
2
broken gluster config
[root at glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bri...
2018 May 21
2
split brain? but where?
...r/lib/libvirt/images
glusterp2:gv0             932G  273G  659G  30% /isos
tmpfs                     771M   48K  771M   1% /run/user/1000
tmpfs                     771M     0  771M   0% /run/user/0
glusterp1.graywitch.co.nz:
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/centos-root    20G  3.5G   17G  18% /
devtmpfs                  3.8G     0  3.8G   0% /dev
tmpfs...
2018 May 21
0
split brain? but where?
...glusterp2:gv0           932G  273G  659G  30% /isos
> tmpfs                    771M   48K  771M   1% /run/user/1000
> tmpfs                    771M     0  771M   0% /run/user/0
> glusterp1.graywitch.co.nz:
> Filesystem               Size  Used Avail Use% Mounted on
> /dev/mapper/centos-root   20G  3.5G   17G  18% /
> devtmpfs                 3.8G     0  3.8G   0% /dev
> tmpfs...
2018 May 22
2
split brain? but where?
...932G  273G  659G  30% /isos
> > tmpfs                    771M   48K  771M   1% /run/user/1000
> > tmpfs                    771M     0  771M   0% /run/user/0
> > glusterp1.graywitch.co.nz:
> > Filesystem               Size  Used Avail Use% Mounted on
> > /dev/mapper/centos-root   20G  3.5G   17G  18% /
> > devtmpfs                 3.8G     0  3.8G   0% /dev
>...
2018 May 22
0
split brain? but where?
...932G  273G  659G  30% /isos
>> > tmpfs                   771M   48K  771M   1% /run/user/1000
>> > tmpfs                   771M     0  771M   0% /run/user/0
>> > glusterp1.graywitch.co.nz:
>> > Filesystem              Size  Used Avail Use% Mounted on
>> > /dev/mapper/centos-root  20G  3.5G   17G  18% /
>> > devtmpfs                3.8G     0  3.8G   0% /dev
>...
2018 May 10
2
broken gluster config
Hi,
I have 3 CentOS 7.4 machines set up as a 3-way raid 1.
Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount on
boot and as a result it's empty.
Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
/bricks/brick1/gv0 as expected.
Is there a way to get glusterp1's gv0 to sync off the other 2? There must
be, but
I have looked at the gluster docs and I can...
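The usual recovery for a replica brick that came up empty is to remount it and let self-heal repopulate it from the other copies. A sketch, assuming the brick has a working fstab entry and the paths are as in the post:

```shell
# On glusterp1: stop the gluster daemon, remount the brick filesystem.
systemctl stop glusterd
mount /bricks/brick1            # assumes an fstab entry for the brick
systemctl start glusterd

# Confirm the brick now shows Online=Y.
gluster volume status gv0

# Trigger a full heal so the empty brick is repopulated.
gluster volume heal gv0 full

# Watch progress until the entry counts reach 0.
gluster volume heal gv0 info
```

These are cluster-admin commands and need a live gluster deployment to run.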
2018 May 22
1
split brain? but where?
...659G  30% /isos
>>> > tmpfs                   771M   48K  771M   1% /run/user/1000
>>> > tmpfs                   771M     0  771M   0% /run/user/0
>>> > glusterp1.graywitch.co.nz:
>>> > Filesystem              Size  Used Avail Use% Mounted on
>>> > /dev/mapper/centos-root  20G  3.5G   17G  18% /
>>> > devtmpfs                3.8G...
2018 May 08
1
mount failing client to gluster cluster.
...amd64
clustered file-system (client package)
root at kvm01:/var/lib/libvirt#
=======
I am trying to do a mount to a CentOS 7 gluster setup,
=======
[root at glustep1 libvirt]# rpm -q glusterfs
glusterfs-4.0.2-1.el7.x86_64
[root at glustep1 libvirt]#
=======
mount -t glusterfs glusterp1.graywitch.co.nz:/gv0/kvm01/images
/var/lib/libvirt/images
the logs are telling me,
=========
root at kvm01:/var/lib/libvirt# >
/var/log/glusterfs/var-lib-libvirt-images.log
root at kvm01:/var/lib/libvirt# mount -t glusterfs
glusterp1.graywitch.co.nz:/gv0/kvm01/images/
/var/lib/libvirt/images
M...
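One likely cause here is the mount spec: the FUSE client mounts `host:/VOLNAME`, and subdirectory mounts (`host:/VOLNAME/subdir`) only work on gluster 3.12 or newer, on some versions only after the subdir is allowed explicitly. A hedged sketch using the host and paths from the post:

```shell
# Mount the whole volume (always supported).
mount -t glusterfs glusterp1.graywitch.co.nz:/gv0 /var/lib/libvirt/images

# Subdirectory mount: needs gluster >= 3.12, and on some versions
# an auth-allow entry for the subdir first, e.g.:
#   gluster volume set gv0 auth.allow "/kvm01/images(*)"
mount -t glusterfs glusterp1.graywitch.co.nz:/gv0/kvm01/images \
    /var/lib/libvirt/images
```

If the whole-volume mount succeeds but the subdir mount fails, the subdir feature (or its auth setting) is the place to look.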
2018 May 10
0
broken gluster config
...ount the
brick, start gluster daemon again.
Check: gluster v status
Does it show the brick up?
HTH,
Diego
On Wed, May 9, 2018, 20:01 Thing <thing.thing at gmail.com> wrote:
> Hi,
>
> I have 3 CentOS 7.4 machines set up as a 3-way raid 1.
>
> Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount
> on boot and as a result it's empty.
>
> Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
> /bricks/brick1/gv0 as expected.
>
> Is there a way to get glusterp1's gv0 to sync off the other 2? There must
> be, but
>
> I...
2018 Mar 28
3
Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
> Hi,
>
> Thanks, yes, not very familiar with Centos and hence googling took a while
> to find a 4.0 version at,
>
> https://wiki.centos.org/SpecialInterestGroup/Storage
The announcement for Gluster 4.0 in CentOS should contain all the
details that you need as well:
2018 Apr 27
3
How to set up a 4 way gluster file system
...set as /dev/sdb1, I would like to
set these up in a raid 10, which should give me 2TB usable. So mirrored and
concatenated?
The command I am running is as per the documents, but I get a warning;
how do I get this to proceed please, as the documents do not say.
gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
glusterp4:/bricks/brick1/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
avoid this. See:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20...
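The warning can be avoided rather than overridden: a replica-3 arbiter layout (two data copies plus a metadata-only arbiter per subvolume) removes the split-brain exposure that replica 2 carries. A sketch; the arbiter brick paths below are illustrative assumptions, not from the original post:

```shell
# Distributed-replicated with arbiter: two subvolumes, each
# (data, data, arbiter). Arbiter bricks hold metadata only.
gluster volume create gv0 replica 3 arbiter 1 \
    glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/arbiter/gv0 \
    glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 \
    glusterp1:/bricks/arbiter/gv0

# Alternatively, accept the replica-2 risk explicitly by
# answering "y" at the interactive prompt, or appending force:
gluster volume create gv0 replica 2 \
    glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
    glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 force
```

With four data bricks, replica 2 yields a 2x2 distribute-replicate volume; the arbiter variant needs the two extra (small) arbiter bricks shown above.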
2018 Apr 15
1
unexpected warning message when attempting to build a 4 node gluster setup.
Hi,
I am on CentOS 7.4 with gluster 4.
I am trying to create a distributed and replicated volume on the 4 nodes
and I am getting this unexpected qualification,
[root at glustep1 brick1]# gluster volume create gv0 replica 2
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0
glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
8><----
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
avoid this. See:
http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%...
2018 Apr 27
2
How to set up a 4 way gluster file system
...raid 10 which will? give me 2TB useable. So Mirrored
>> and concatenated?
>>
>> The command I am running is as per documents but I get a warning error,
>> how do I get this to proceed please as the documents do not say.
>>
>> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
>> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
>> glusterp4:/bricks/brick1/gv0
>> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
>> avoid this. See: http://docs.gluster.org/en/lat
>> est/Administrator%20Guide/Sp...
2018 Apr 27
0
How to set up a 4 way gluster file system
...> to set these up in a raid 10 which will? give me 2TB useable. So Mirrored
> and concatenated?
>
> The command I am running is as per documents but I get a warning error,
> how do I get this to proceed please as the documents do not say.
>
> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
> glusterp4:/bricks/brick1/gv0
> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
> avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/
> Split%20brain%20and%20...