Messages similar to: "broken gluster config"

Displaying 20 results from an estimated 1000 matches similar to: "broken gluster config"

2018 May 10 · 2 replies · broken gluster config
[root@glusterp1 gv0]# !737 gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
Brick
2018 May 10 · 2 replies · broken gluster config
Whatever repair happened has now finished, but I still have this, and I can't find anything so far telling me how to fix it. Looking at http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/ I can't determine what (file? dir? gv0?) is actually the issue.
[root@glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick
2018 May 10 · 0 replies · broken gluster config
Also, I have this "split brain"?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
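The two messages above show the same gfid flagged as split-brain on both bricks. One way to resolve it from the CLI, sketched below, uses gluster's built-in split-brain resolution commands with the gfid reported above; the policy choice (latest-mtime versus picking a source brick) is an assumption, and the volume name gv0 comes from the thread. Back up both brick copies first.

    # keep whichever copy has the newest modification time
    gluster volume heal gv0 split-brain latest-mtime gfid:eafb8799-4e7a-4264-9213-26997c5a4693

    # or explicitly choose one brick's copy as the source
    gluster volume heal gv0 split-brain source-brick glusterp1:/bricks/brick1/gv0 \
        gfid:eafb8799-4e7a-4264-9213-26997c5a4693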
2018 May 10 · 0 replies · broken gluster config
Show us output from: gluster v status
It should be easy to fix. Stop gluster daemon on that node, mount the brick, start gluster daemon again. Check: gluster v status
Does it show the brick up? HTH, Diego
On Wed, May 9, 2018, 20:01 Thing <thing.thing@gmail.com> wrote:
> Hi,
> I have 3 Centos7.4 machines setup as a 3 way raid 1.
> Due to an oopsie on my part for
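A minimal sketch of the sequence Diego describes, assuming the affected node's brick filesystem has an /etc/fstab entry and mounts at /bricks/brick1 (both assumptions):

    # run on the node whose brick shows Online "N"
    systemctl stop glusterd
    mount /bricks/brick1        # remount the brick filesystem (assumed fstab entry)
    systemctl start glusterd
    gluster v status            # the brick should now report Online "Y"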
2018 May 10 · 0 replies · broken gluster config
Trying to read this, I can't understand what is wrong?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
2018 May 21 · 2 replies · split brain? but where?
Hi, I seem to have a split brain issue, but I cannot figure out where this is and what it is. Can someone help me please? I can't find what to fix here.
==========
root@salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root
2018 May 22 · 2 replies · split brain? but where?
Hi, which version of gluster are you using? You can find which file that is using the following command:
find <brickpath> -samefile <brickpath>/.glusterfs/<first two bits of gfid>/<next 2 bits of gfid>/<full gfid>
Please provide the getfattr output of the file which is in split-brain. The steps to recover from split-brain can be found here,
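Plugging the gfid from these threads into the template above gives roughly the following (a sketch; the brick path /bricks/brick1/gv0 is taken from the other messages). The getfattr call dumps the extended attributes, including the AFR changelog ones, that the responder is asking for:

    find /bricks/brick1/gv0 -samefile \
        /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693

    # run on each brick against the path(s) the find returns
    getfattr -d -m . -e hex \
        /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693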
2018 May 21 · 0 replies · split brain? but where?
How do I find what "eafb8799-4e7a-4264-9213-26997c5a4693" is? https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
On May 21, 2018 3:22:01 PM PDT, Thing <thing.thing@gmail.com> wrote:
> Hi,
> I seem to have a split brain issue, but I cannot figure out where this is and what it is. Can someone help me please? I can't find what to fix here.
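The gfid-to-path page linked above resolves a gfid through a special mount; a rough sketch, assuming the volume is gv0, a scratch mount point /mnt/gv0, and a gluster build that supports the aux-gfid-mount option:

    mount -t glusterfs -o aux-gfid-mount glusterp1:/gv0 /mnt/gv0
    getfattr -n trusted.glusterfs.pathinfo -e text \
        /mnt/gv0/.gfid/eafb8799-4e7a-4264-9213-26997c5a4693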
2018 May 22 · 1 reply · split brain? but where?
I tried looking for a file of the same size and the gfid doesn't show up,
8><---
[root@glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root@glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root         64 May 22 13:01 .
drwx------. 4 root root         24 May  8 14:27 ..
-rw-------. 1 root root 3294887936 May  4 11:07 eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root
2018 May 22 · 0 replies · split brain? but where?
I tried this already.
8><---
[root@glusterp2 fb]# find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root@glusterp2 fb]#
8><---
gluster 4
Centos 7.4
8><---
df -h
[root@glusterp2 fb]# df -h
Filesystem
2018 Apr 27 · 3 replies · How to set up a 4 way gluster file system
Hi, I have 4 servers, each with 1TB of storage set as /dev/sdb1. I would like to set these up as a raid 10, which will give me 2TB usable, so mirrored and concatenated? The command I am running is as per the documents, but I get a warning error; how do I get this to proceed, please, as the documents do not say.
gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
2018 Mar 28 · 3 replies · Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
> Hi,
> Thanks, yes, not very familiar with Centos and hence googling took a while to find a 4.0 version at,
> https://wiki.centos.org/SpecialInterestGroup/Storage
The announcement for Gluster 4.0 in CentOS should contain all the details that you need as well:
2018 Apr 27 · 2 replies · How to set up a 4 way gluster file system
Hi, I have 4 nodes, so a quorum would be 3 of 4. The question, I suppose, is why does the documentation give this command as an example without qualifying it? So am I running the wrong command? I want a "raid10".
On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm@redhat.com> wrote:
> Hi,
> With replica 2 volumes one can easily end up in split-brains if there are
2018 Apr 15 · 1 reply · unexpected warning message when attempting to build a 4 node gluster setup
Hi, I am on Centos 7.4 with gluster 4. I am trying to create a distributed and replicated volume on the 4 nodes, and I am getting this unexpected qualification:
[root@glustep1 brick1]# gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
8><----
Replica 2 volumes are prone to split-brain. Use
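The truncated warning above is normally followed by an interactive question; a sketch of the two usual ways to proceed if you accept the replica 2 risk (the arbiter layout in the next message is the safer route):

    gluster volume create gv0 replica 2 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
    # Replica 2 volumes are prone to split-brain. ...
    # Do you still want to continue? (y/n) y

    # or append "force" to skip the prompt entirely
    gluster volume create gv0 replica 2 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 force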
2018 Apr 27 · 0 replies · How to set up a 4 way gluster file system
Hi, With replica 2 volumes one can easily end up in split-brains if there are frequent disconnects and high IOs going on. If you use replica 3 or arbiter volumes, it will guard you by using the quorum mechanism, giving you both consistency and availability. But in replica 2 volumes, quorum does not make sense since it needs both the nodes up to guarantee consistency, which costs availability. If
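A minimal sketch of the replica 3 arbiter layout recommended above, reusing the brick paths from these threads on three of the nodes (the choice of nodes is an assumption; the arbiter brick stores only file metadata, so it can be much smaller):

    gluster volume create gv0 replica 3 arbiter 1 \
        glusterp1:/bricks/brick1/gv0 \
        glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0
    gluster volume start gv0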
2018 May 08 · 1 reply · mount failing client to gluster cluster
Hi, On a Debian 9 client,
========
root@kvm01:/var/lib/libvirt# dpkg -l glusterfs-client
8><---
ii  glusterfs-client  3.8.8-1  amd64  clustered file-system (client package)
root@kvm01:/var/lib/libvirt#
=======
I am trying to do a mount to a Centos 7 gluster setup,
=======
[root@glustep1 libvirt]# rpm -q glusterfs
glusterfs-4.0.2-1.el7.x86_64
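For reference, a typical FUSE mount from such a client looks like the sketch below (the volume name gv0 and the mount point are assumptions); if it fails, the client log named after the mount point is the first place to look, and the version gap visible above (3.8.8 client against a 4.0.2 server) is worth keeping in mind:

    mount -t glusterfs glustep1:/gv0 /mnt/gv0
    # on failure, inspect the client-side log (name mirrors the mount point)
    tail -n 50 /var/log/glusterfs/mnt-gv0.log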
2018 Jan 09 · 2 replies · update to Centos7.4: Failed to open \EFI\BOOT\grubx64.efi - Not Found
Hello All, updating from Centos7.3 to Centos7.4 rendered one of our laptops unbootable. Error message:
Failed to open \EFI\BOOT\grubx64.efi - Not Found
Failed to load image \EFI\BOOT\grubx64.efi: Not Found
start_image() returned Not Found
How could this occur because of an update, and how to fix it? What I tried: booting from a centos usb, chroot to /mnt/sysimage, and gave command: efibootmgr
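From that rescue chroot, the usual repair is to reinstall the EFI loader files and recreate the firmware boot entry; a rough sketch for CentOS 7.4, assuming the EFI system partition is /dev/sda1 (package and shim file names differ between 7.x point releases, so verify before running):

    # inside chroot /mnt/sysimage
    yum reinstall grub2-efi-x64 shim-x64
    grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
    efibootmgr -c -d /dev/sda -p 1 -L "CentOS" -l '\EFI\centos\shimx64.efi'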
2018 Jan 12 · 1 reply · update to Centos7.4: Failed to open \EFI\BOOT\grubx64.efi - Not Found
----- Original message -----
From: "Adrian Jenzer" <a.jenzer@herzogdemeuron.com>
To: "CentOS mailing list" <centos@centos.org>
Sent: Tuesday, January 9, 2018 16:56:57
Subject: Re: [CentOS] update to Centos7.4: Failed to open \EFI\BOOT\grubx64.efi - Not Found
-----Original Message-----
From: CentOS [mailto:centos-bounces@centos.org] On Behalf Of
2017 Aug 15 · 2 replies · Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote:
> Ji-Hyeon,
> You're saying that "stripe=2 transport=rdma" should work. Ok, that was the first thing I wanted to know. I'll put together logs later this week.
Note that "stripe" is not tested much and practically unmaintained. We do not advise you to use it. If you have large files that you
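The truncated advice above usually continues toward sharding as the maintained alternative to stripe for large files; a minimal sketch, assuming a hypothetical volume named bigvol and a release that carries the shard translator:

    gluster volume set bigvol features.shard on
    gluster volume set bigvol features.shard-block-size 64MB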
2018 Mar 13 · 4 replies · Can't heal a volume: "Please check if all brick processes are running."
Hi Anatoliy, The heal command is basically used to heal any mismatching contents between replica copies of the files. For the command "gluster volume heal <volname>" to succeed, you should have the self-heal-daemon running, which is true only if your volume is of type replicate/disperse. In your case you have a plain distribute volume where you do not store the replica of any
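A quick way to confirm what this reply describes, assuming a volume named gv0:

    gluster volume info gv0 | grep -i '^Type'   # Replicate/Disperse types run a self-heal daemon
    gluster volume status gv0                   # plain Distribute volumes list no "Self-heal Daemon" rows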