Displaying 20 results from an estimated 500 matches similar to: "split brain? but where?"
2018 May 22
2
split brain? but where?
Hi,
Which version of gluster you are using?
You can find which file is that using the following command
find <brickpath> -samefile <brickpath>/.glusterfs/<first two characters of
gfid>/<next two characters of gfid>/<full gfid>
Please provide the getfattr output of the file which is in split-brain.
The steps to recover from split-brain can be found here,
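For example, something like this, using the gfid reported later in this
thread (adjust the path to your brick layout):

getfattr -d -m . -e hex /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693

The trusted.afr.* attributes in that output record which bricks hold
pending changes, which is how the split-brain copies are identified.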
2018 May 21
0
split brain? but where?
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is?
https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
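In short, that document suggests mounting the volume with the
aux-gfid-mount option and asking for the path; a sketch, assuming the
volume is glusterp1:/gv0 and a scratch mount point:

mount -t glusterfs -o aux-gfid-mount glusterp1:/gv0 /mnt/gv0
getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gv0/.gfid/eafb8799-4e7a-4264-9213-26997c5a4693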
On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing at gmail.com> wrote:
>Hi,
>
>I seem to have a split-brain issue, but I cannot figure out where it is
>or what it is. Can someone help me please? I can't find what to fix here.
>
2018 May 22
0
split brain? but where?
I tried this already.
8><---
[root at glusterp2 fb]# find /bricks/brick1/gv0 -samefile
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root at glusterp2 fb]#
8><---
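One check worth adding here (my suggestion, not from the thread): for a
regular file, the .glusterfs entry is a hard link to the named file, so a
link count of 1 means no named path exists for it on this brick:

stat -c '%h %n' /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693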
gluster 4
CentOS 7.4
8><---
[root at glusterp2 fb]# df -h
Filesystem
2018 May 22
1
split brain? but where?
I tried looking for a file of the same size, and the gfid doesn't show up.
8><---
[root at glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root at glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root 64 May 22 13:01 .
drwx------. 4 root root 24 May 8 14:27 ..
-rw-------. 1 root root 3294887936 May 4 11:07
eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root
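An alternative is to search by inode rather than by size (a hedged
suggestion, not from the thread):

ls -i /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
find /bricks/brick1/gv0 -inum <inode from ls -i> -not -path '*/.glusterfs/*'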
2018 May 10
2
broken gluster config
Whatever repair happened has now finished, but I still have this,
and I can't find anything so far telling me how to fix it. Looking at
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/
I can't determine what is actually the issue: a file? a directory? gv0 itself?
[root at glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick
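For reference, gluster can resolve a file in split-brain directly from the
CLI using a policy; a sketch with the gfid from this thread (pick the
policy deliberately, since it decides which copy wins):

gluster volume heal gv0 split-brain latest-mtime gfid:eafb8799-4e7a-4264-9213-26997c5a4693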
2018 May 10
0
broken gluster config
Trying to read this, I can't understand what is wrong:
[root at glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
2018 May 10
2
broken gluster config
[root at glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
Brick
2018 May 10
0
broken gluster config
Also, I have this "split brain"?
[root at glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
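Since the entry names a VM image, one option (a sketch, not advice from
the thread) is to pick the brick whose copy you trust and use it as the
heal source:

gluster volume heal gv0 split-brain source-brick glusterp2:/bricks/brick1/gv0 gfid:eafb8799-4e7a-4264-9213-26997c5a4693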
2018 Mar 28
3
Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
> Hi,
>
> Thanks, yes, I am not very familiar with CentOS, and hence googling took a
> while to find a 4.0 version at,
>
> https://wiki.centos.org/SpecialInterestGroup/Storage
The announcement for Gluster 4.0 in CentOS should contain all the
details that you need as well:
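On CentOS 7, enabling the Storage SIG repo typically looked like this at
the time (package names from memory, so treat them as approximate):

yum install centos-release-gluster40
yum install glusterfs-server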
2018 May 08
1
mount failing client to gluster cluster.
Hi,
On a Debian 9 client,
========
root at kvm01:/var/lib/libvirt# dpkg -l glusterfs-client
8><---
ii glusterfs-client 3.8.8-1 amd64
clustered file-system (client package)
root at kvm01:/var/lib/libvirt#
=======
I am trying to do a mount to a CentOS 7 gluster setup,
=======
[root at glustep1 libvirt]# rpm -q glusterfs
glusterfs-4.0.2-1.el7.x86_64
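That is a 3.8 client talking to a 4.0 server, which is a large version
gap. Two hedged checks that may help: compare the cluster op-version on
the servers, and watch the client mount log during the mount attempt:

gluster volume get all cluster.op-version    # on a gluster server
tail -f /var/log/glusterfs/<mount point, slashes as dashes>.log    # on the client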
2018 May 10
2
broken gluster config
Hi,
I have 3 CentOS 7.4 machines set up as a 3-way raid 1.
Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount on
boot, and as a result it is empty.
Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
/bricks/brick1/gv0 as expected.
Is there a way to get glusterp1's gv0 to sync off the other 2? There must
be, but
I have looked at the gluster docs and I
2018 May 10
0
broken gluster config
Show us output from: gluster v status
It should be easy to fix. Stop gluster daemon on that node, mount the
brick, start gluster daemon again.
Check: gluster v status
Does it show the brick up?
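On CentOS 7 that sequence would look roughly like this (a sketch,
assuming the brick is in fstab; adjust names to your setup):

systemctl stop glusterd
mount /bricks/brick1
systemctl start glusterd
gluster v status
gluster volume heal gv0 full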
HTH,
Diego
On Wed, May 9, 2018, 20:01 Thing <thing.thing at gmail.com> wrote:
> Hi,
>
> I have 3 CentOS 7.4 machines set up as a 3-way raid 1.
>
> Due to an oopsie on my part for
2018 Apr 27
3
How to set up a 4 way gluster file system
Hi,
I have 4 servers, each with 1TB of storage set as /dev/sdb1. I would like to
set these up in a raid 10, which will give me 2TB usable. So mirrored and
concatenated?
The command I am running is as per the documents, but I get a warning;
how do I get this to proceed, please, as the documents do not say.
gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
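For reference, the full 4-brick form of that command appears elsewhere in
these results; the warning can be accepted interactively or bypassed by
appending force, though it exists for good reason (see the replica 3 /
arbiter discussion below):

gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 force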
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi,
I have 4 nodes, so a quorum would be 3 of 4. The question, I suppose, is why
the documentation gives this command as an example without qualifying it.
So am I running the wrong command? I want a "raid 10".
On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hi,
>
> With replica 2 volumes one can easily end up in split-brains if there are
2018 Apr 15
1
unexpected warning message when attempting to build a 4 node gluster setup.
Hi,
I am on CentOS 7.4 with gluster 4.
I am trying to create a distributed and replicated volume on the 4 nodes,
and I am getting this unexpected warning:
[root at glustep1 brick1]# gluster volume create gv0 replica 2
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0
glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
8><----
Replica 2 volumes are prone to split-brain. Use
2018 Apr 27
0
How to set up a 4 way gluster file system
Hi,
With replica 2 volumes one can easily end up in split-brains if there are
frequent disconnects and high IOs going on.
If you use replica 3 or arbiter volumes, it will guard you by using the
quorum mechanism giving you both consistency and availability.
But in replica 2 volumes, quorum does not make sense since it needs both
the nodes up to guarantee consistency, which costs availability.
If
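A sketch of the arbiter layout Karthik describes, using three of the four
nodes (the arbiter brick stores only file metadata, so it can be much
smaller than the data bricks):

gluster volume create gv0 replica 3 arbiter 1 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0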
2010 Oct 21
2
Bug? Mount and fstab
I think this is likely a bug with either mount or glusterfs:
[root at vm-container-0-0 ~]# cat /etc/fstab
LABEL=/                 /                   ext3   defaults  1 1
LABEL=/state/partition  /state/partition1   ext3   defaults  1 2
LABEL=/var              /var                ext3   defaults  1 2
tmpfs                   /dev/shm            tmpfs  defaults
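For comparison, a glusterfs fstab entry normally needs _netdev so the
mount waits for the network; a hedged example with placeholder names:

server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev  0 0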
2001 Apr 06
2
Samba-2.2.0alpha-3 and smbmount
Hi All,
When I use smbmount to attach an NT 4 Server's drive C:, I use
the following command:
smbmount //inet_server_1/C$ /mnt/server1 -o
username=xxxxxx,password=xxxxxx,fmask=700,dmask=700,nosuid,noexec
Now, if I am su'd to root and do a df -h, I get:
Filesystem          Size  Used  Avail  Use%  Mounted on
/dev/hda1           6.7G  2.5G  3.8G   39%   /
//inet_server_1/C$  7.8G  3.2G
2004 Jul 18
7
Resize ocfs....?
I've tried several times to resize one of my ocfs volumes [see emcpowere1 below].
I'm on the latest [prod] version of ocfs/ocfs-tools as of 3pm EST on 18JUL04.
Per instructions...
Take down db, unmount all ocfs drives, use ' tuneocfs -F -S 100G /dev/emcpowere1 '
Supposedly this should work, but I get....
The size specified, 100G, is larger than the device size, 59G.
Aborting.
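A hedged sanity check before resizing: confirm the size the OS actually
reports for the device, since the error suggests it sees only 59G:

blockdev --getsize64 /dev/emcpowere1
(or fdisk -l /dev/emcpowere1 on older systems)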
2008 Feb 14
4
Problem with p2v - device /dev/ida/c0d0p2
Hello,
I'm doing an emergency migration of one very old machine, because we
will have an IMPLOSION (!!!) very close to here, and I'm thinking
that this old IDE disk will not survive... :-/
[root@registro-2 /root]# df -h
Filesystem       Size  Used  Avail  Use%  Mounted on
/dev/ida/c0d0p2  3.8G  2.3G  1.4G   62%   /
/dev/ida/c0d0p8  26G   20G   4.4G   82%   /data