search for: glusterp2

Displaying 16 results from an estimated 16 matches for "glusterp2".

2018 May 22
1
split brain? but where?
I tried looking for a file of the same size and the gfid doesn't show up, 8><--- [root at glusterp2 fb]# pwd /bricks/brick1/gv0/.glusterfs/ea/fb [root at glusterp2 fb]# ls -al total 3130892 drwx------. 2 root root 64 May 22 13:01 . drwx------. 4 root root 24 May 8 14:27 .. -rw-------. 1 root root 3294887936 May 4 11:07 eafb8799-4e7a-4264-9213-26997c5a4693 -rw-r--r--. 1 root root...
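For a regular file, that entry under .glusterfs is a hard link to the real file on the same brick, so the path can often be recovered from the inode number. A minimal sketch, assuming the brick path shown in the excerpt above (<INODE> is a placeholder for the number ls -i prints):

ls -i /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693    # note the inode number
find /bricks/brick1/gv0 -inum <INODE> -not -path '*/.glusterfs/*'                 # list any real file sharing that inode
getfattr -d -m trusted.gfid -e hex /bricks/brick1/gv0/<candidate>                 # confirm the candidate carries the same gfid

If nothing outside .glusterfs shares the inode, the gfid may belong to a directory (represented as a symlink rather than a hard link) or to a file that no longer exists on this brick.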
2018 May 22
2
split brain? but where?
... >Hi, > > > >I seem to have a split brain issue, but I cannot figure out where this > >is > >and what it is, can someone help me pls, I can't find what to fix here. > > > >========== > >root at salt-001:~# salt gluster* cmd.run 'df -h' > >glusterp2.graywitch.co.nz: > > Filesystem Size Used > >Avail Use% Mounted on > > /dev/mapper/centos-root 19G 3.4G > > 16G 19% / > > devtmpfs...
2018 May 21
2
split brain? but where?
Hi, I seem to have a split brain issue, but I cannot figure out where this is and what it is, can someone help me pls, I can't find what to fix here. ========== root at salt-001:~# salt gluster* cmd.run 'df -h' glusterp2.graywitch.co.nz: Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 19G 3.4G 16G 19% / devtmpfs 3.8G 0 3.8G 0% /dev tmpfs...
2018 May 22
0
split brain? but where?
I tried this already. 8><--- [root at glusterp2 fb]# find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693 /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693 [root at glusterp2 fb]# 8><--- gluster 4 Centos 7.4 8><--- df -h [root at glusterp2 fb]# df -h Files...
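Since find -samefile only returned the .glusterfs entry itself, the hard-link approach will not reveal the path here. A hedged alternative, assuming the Gluster 4 client in use supports the aux-gfid-mount option, is to mount the volume with gfid access enabled and ask the volume for the path behind the gfid (the mount point /mnt/gv0-gfid and the volume spec glusterp1:/gv0 are assumptions based on the excerpts above):

mount -t glusterfs -o aux-gfid-mount glusterp1:/gv0 /mnt/gv0-gfid
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gv0-gfid/.gfid/eafb8799-4e7a-4264-9213-26997c5a4693    # prints the brick paths backing that gfid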
2018 May 21
0
split brain? but where?
...ng <thing.thing at gmail.com> wrote: >Hi, > >I seem to have a split brain issue, but I cannot figure out where this >is >and what it is, can someone help me pls, I can't find what to fix here. > >========== >root at salt-001:~# salt gluster* cmd.run 'df -h' >glusterp2.graywitch.co.nz: > Filesystem Size Used >Avail Use% Mounted on > /dev/mapper/centos-root 19G 3.4G > 16G 19% / > devtmpfs 3.8G 0 >3.8G 0...
2018 May 10
2
broken gluster config
...d-split-brain-resolution/ I can't determine what file (dir? gv0?) is actually the issue. [root at glusterp1 gv0]# gluster volume heal gv0 info split-brain Brick glusterp1:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> Status: Connected Number of entries in split-brain: 1 Brick glusterp2:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> Status: Connected Number of entries in split-brain: 1 Brick glusterp3:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> Status: Connected Number of entries in split-brain: 1 [root at glusterp1 gv0]# On 10 Ma...
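When heal info split-brain reports only a gfid, the CLI can usually still be pointed at it directly. A sketch of the CLI split-brain resolution commands, assuming this Gluster 4 build supports them and that one copy of the file is known to be good; run only one of these, with the gfid reported above:

gluster volume heal gv0 split-brain latest-mtime gfid:eafb8799-4e7a-4264-9213-26997c5a4693     # keep the most recently modified copy
gluster volume heal gv0 split-brain bigger-file gfid:eafb8799-4e7a-4264-9213-26997c5a4693      # keep the larger copy
gluster volume heal gv0 split-brain source-brick glusterp2:/bricks/brick1/gv0 gfid:eafb8799-4e7a-4264-9213-26997c5a4693    # keep the copy on a named brick

These handle data and metadata split-brain; an entry (gfid) split-brain of a directory generally needs manual intervention on the bricks.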
2018 May 10
0
broken gluster config
Trying to read this, I can't understand what is wrong? [root at glusterp1 gv0]# gluster volume heal gv0 info Brick glusterp1:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1 Brick glusterp2:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1 Brick glusterp3:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1 [root at glusterp1 gv0]...
2018 May 10
2
broken gluster config
...t at glusterp1 gv0]# !737 gluster v status Status of volume: gv0 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick glusterp1:/bricks/brick1/gv0 49152 0 Y 5229 Brick glusterp2:/bricks/brick1/gv0 49152 0 Y 2054 Brick glusterp3:/bricks/brick1/gv0 49152 0 Y 2110 Self-heal Daemon on localhost N/A N/A Y 5219 Self-heal Daemon on glusterp2 N/A N/A Y 1943 Self-heal Daemon on glu...
2018 May 10
0
broken gluster config
also I have this "split brain"? [root at glusterp1 gv0]# gluster volume heal gv0 info Brick glusterp1:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1 Brick glusterp2:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain /glusterp1/images/centos-server-001.qcow2 /glusterp1/images/kubernetes-template.qcow2 /glusterp1/images/kworker01.qcow2 /glusterp1/images/kworker02.qcow2 Status: Connected Number of entries: 5 Brick glusterp3...
2018 May 10
2
broken gluster config
Hi, I have 3 CentOS 7.4 machines set up as a 3-way raid 1. Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount on boot and as a result it's empty. Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3 /bricks/brick1/gv0 as expected. Is there a way to get glusterp1's gv0 to sync off the other 2? There must be, but I have looked at the gluster docs and I can't find anything about repairing/resyncing. Where am I meant to look for such info? thanks ----------
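Because the other two replicas still hold the data, the usual recovery path is to remount the empty brick, bring its brick process back online, and let self-heal repopulate it. A rough sketch run on glusterp1, assuming the brick has an fstab entry mounting at /bricks/brick1 and that the volume is gv0 as in the excerpt:

mount /bricks/brick1                   # remount the brick filesystem that was missed at boot
gluster volume start gv0 force         # respawn the brick process if it is not running
gluster volume heal gv0 full           # crawl the volume and copy data onto the empty brick
gluster volume heal gv0 info           # watch the pending entry counts drain as the heal progresses

If the brick process refuses to start because the freshly mounted filesystem lacks its volume-id xattr, the brick can be reinitialised with gluster volume reset-brick gv0 glusterp1:/bricks/brick1/gv0 glusterp1:/bricks/brick1/gv0 commit force; treat this as a sketch rather than a tested procedure.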
2018 May 10
0
broken gluster config
...ed, May 9, 2018, 20:01 Thing <thing.thing at gmail.com> wrote: > Hi, > > I have 3 Centos7.4 machines setup as a 3 way raid 1. > > Due to an oopsie on my part for glusterp1 /bricks/brick1/gv0 didn't mount > on boot and as a result it's empty. > > Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3 > /bricks/brick1/gv0 as expected. > > Is there a way to get glusterp1's gv0 to sync off the other 2? there must > be but, > > I have looked at the gluster docs and I can't find anything about > repairing resyncing? > > Where am I mean...
2018 Apr 27
3
How to set up a 4 way gluster file system
...ke to set these up in a raid 10 which will give me 2TB useable. So Mirrored and concatenated? The command I am running is as per documents but I get a warning error, how do I get this to proceed please as the documents do not say. gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/ . Do you still want to co...
2018 Mar 28
3
Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote: > Hi, > > Thanks, yes, not very familiar with Centos and hence googling took a while > to find a 4.0 version at, > > https://wiki.centos.org/SpecialInterestGroup/Storage The announcement for Gluster 4.0 in CentOS should contain all the details that you need as well:
2018 Apr 15
1
unexpected warning message when attempting to build a 4 node gluster setup.
Hi, I am on CentOS 7.4 with Gluster 4. I am trying to create a distributed and replicated volume on the 4 nodes and I am getting this unexpected qualification, [root at glustep1 brick1]# gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 8><---- Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/ . Do you...
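The message is advisory, so the create command can simply be confirmed; the safer layout the warning points at is an arbiter volume. A hedged sketch of both options, reusing the brick paths above (the arbiter variant uses only three of the four nodes and is shown purely to illustrate the recommended layout):

gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
# then answer y at the "Do you still want to continue?" prompt to accept the split-brain risk

gluster volume create gv0 replica 3 arbiter 1 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
# the third (arbiter) brick stores only metadata, which avoids the replica-2 split-brain exposure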
2018 Apr 27
2
How to set up a 4 way gluster file system
...le. So Mirrored >> and concatenated? >> >> The command I am running is as per documents but I get a warning error, >> how do I get this to proceed please as the documents do not say. >> >> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 >> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 >> glusterp4:/bricks/brick1/gv0 >> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to >> avoid this. See: http://docs.gluster.org/en/lat >> est/Administrator%20Guide/Split%20brain%20and%20ways%20to% >>...
2018 Apr 27
0
How to set up a 4 way gluster file system
...hich will give me 2TB useable. So Mirrored > and concatenated? > > The command I am running is as per documents but I get a warning error, > how do I get this to proceed please as the documents do not say. > > gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 > glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 > glusterp4:/bricks/brick1/gv0 > Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to > avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/ > Split%20brain%20and%20ways%20to%20deal%20with%20it/. >...