Hi All,
I tried many things to troubleshoot my issues: recreating the volumes with
different configurations, using different installation media for the OS, and
reinstalling the gluster environment several times.
My issues were resolved by reformatting my /opt partition from XFS to EXT4,
then recreating the gluster volumes with EXT4-backed bricks rather than
XFS-backed bricks.
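
For reference, this is roughly the procedure I followed on each node; the
device name, mount layout, and replica count below are illustrative rather
than my exact commands:

umount /opt
mkfs.ext4 /dev/sdb1          # this partition was previously XFS
mount /dev/sdb1 /opt
mkdir -p /opt/dsr

# then, from one of the peers, recreate and start the volume
gluster volume create DSR replica 2 glusterfs{1..8}:/opt/dsr
gluster volume start DSR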
Is there any reason to believe that XFS is an unsuitable filesystem for
gluster bricks?
On Tue, Oct 6, 2015 at 9:20 AM, Cobin Bluth <cbluth@gmail.com> wrote:
> I appreciate your response, Joe.
>
> Even before asking for help in IRC and this mailing list, I had googled
> and come across information about healing and split-brain, but I had not
> encountered any information that had helped.
>
> I tried "gluster volume heal DSR info" and this is the output I
received:
>
>
> [root@glusterfs1 ~]# gluster volume heal DSR info
> Brick glusterfs1:/opt/dsr/
> Number of entries: 0
>
> Brick glusterfs2:/opt/dsr/
> Number of entries: 0
>
> Brick glusterfs3:/opt/dsr/
> Number of entries: 0
>
> Brick glusterfs4:/opt/dsr/
> Number of entries: 0
>
> Brick glusterfs5:/opt/dsr/
> Number of entries: 0
>
> Brick glusterfs6:/opt/dsr/
> Number of entries: 0
>
> Brick glusterfs7:/opt/dsr/
> Number of entries: 0
>
> Brick glusterfs8:/opt/dsr/
> Number of entries: 0
>
> [root@glusterfs1 ~]# gluster volume heal DSR info split-brain
> Brick glusterfs1:/opt/dsr/
> Number of entries in split-brain: 0
>
> Brick glusterfs2:/opt/dsr/
> Number of entries in split-brain: 0
>
> Brick glusterfs3:/opt/dsr/
> Number of entries in split-brain: 0
>
> Brick glusterfs4:/opt/dsr/
> Number of entries in split-brain: 0
>
> Brick glusterfs5:/opt/dsr/
> Number of entries in split-brain: 0
>
> Brick glusterfs6:/opt/dsr/
> Number of entries in split-brain: 0
>
> Brick glusterfs7:/opt/dsr/
> Number of entries in split-brain: 0
>
> Brick glusterfs8:/opt/dsr/
> Number of entries in split-brain: 0
>
> [root@glusterfs1 ~]#
>
>
>
> If it helps, I am using CentOS7 and the latest repo found here:
>
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
>
>
> [root@glusterfs1 ~]# cat /etc/centos-release
> CentOS Linux release 7.1.1503 (Core)
> [root@glusterfs1 ~]# yum repolist
> Loaded plugins: fastestmirror
> Determining fastest mirrors
> * base: lug.mtu.edu
> * epel: mirrors.cat.pdx.edu
> * extras: centos.mirrors.tds.net
> * updates: mirrors.cat.pdx.edu
> repo id                    repo name                                                                        status
> !base/7/x86_64             CentOS-7 - Base                                                                   8,652
> !epel/x86_64               Extra Packages for Enterprise Linux 7 - x86_64                                    8,524
> !extras/7/x86_64           CentOS-7 - Extras                                                                   214
> !glusterfs-epel/7/x86_64   GlusterFS is a clustered file-system capable of scaling to several petabytes.       14
> !glusterfs-noarch-epel/7   GlusterFS is a clustered file-system capable of scaling to several petabytes.        2
> !updates/7/x86_64          CentOS-7 - Updates                                                                1,486
> repolist: 18,892
> [root@glusterfs1 ~]# yum list installed | grep -i gluster
> glusterfs.x86_64                    3.7.4-2.el7        @glusterfs-epel
> glusterfs-api.x86_64                3.7.4-2.el7        @glusterfs-epel
> glusterfs-cli.x86_64                3.7.4-2.el7        @glusterfs-epel
> glusterfs-client-xlators.x86_64     3.7.4-2.el7        @glusterfs-epel
> glusterfs-fuse.x86_64               3.7.4-2.el7        @glusterfs-epel
> glusterfs-libs.x86_64               3.7.4-2.el7        @glusterfs-epel
> glusterfs-rdma.x86_64               3.7.4-2.el7        @glusterfs-epel
> glusterfs-server.x86_64             3.7.4-2.el7        @glusterfs-epel
> samba-vfs-glusterfs.x86_64          4.1.12-23.el7_1    @updates
>
> [root@glusterfs1 ~]#
>
>
>
> And I am still experiencing the same error.
>
> I am not very familiar with troubleshooting this; is there something that
> I am missing?
> I have set up another cluster in the same fashion and ran into the same
> issue again.
> I appreciate the information regarding split-brain, but that doesn't seem
> to be the case here. Could there be another culprit?
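>
> In case it helps narrow this down, these are the checks I can run and post
> next; the log file names are just the default names derived from this mount
> point and brick path, so adjust if yours differ:
>
> gluster volume info DSR
> gluster volume status DSR
> gluster peer status
> # client-side FUSE mount log
> tail -n 50 /var/log/glusterfs/mnt-GlusterFS-POC.log
> # brick log on each server
> tail -n 50 /var/log/glusterfs/bricks/opt-dsr.log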
>
>
>
>
>
> On Mon, Oct 5, 2015 at 7:44 PM, Joe Julian <joe@julianfamily.org> wrote:
>
>> [19:33] <JoeJulian> AceFacee check "gluster volume heal DSR info"
>> [19:33] <JoeJulian> it sounds like split-brain.
>> [19:34] <JoeJulian> @split-brain
>> [19:34] <glusterbot> JoeJulian: To heal split-brains, see
>> https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/
>> For additional information, see this older article
>> https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also
>> see splitmount
>> https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
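>>
>> A rough sketch of the CLI-based resolution described in that first link, in
>> case it turns out to apply; the file path and source brick below are
>> placeholders, not values from your cluster:
>>
>> gluster volume heal DSR info split-brain
>> # per affected file, pick a resolution policy, e.g.
>> gluster volume heal DSR split-brain bigger-file /some/file/in/the/volume
>> gluster volume heal DSR split-brain source-brick glusterfs1:/opt/dsr /some/file/in/the/volume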
>>
>> On 10/05/2015 05:43 PM, Cobin Bluth wrote:
>>
>> Please see the following:
>>
>>
>> root@Asus:/mnt/GlusterFS-POC# mount
>> [ ...truncated... ]
>> GlusterFS1:DSR on /mnt/GlusterFS-POC type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>> root@Asus:/mnt/GlusterFS-POC# pwd
>> /mnt/GlusterFS-POC
>> root@Asus:/mnt/GlusterFS-POC# for i in {1..10}; do cat /mnt/10MB-File > $i.tmp; done
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> root@Asus:/mnt/GlusterFS-POC# cat /mnt/10MB-File > test-file
>> cat: write error: Input/output error
>> cat: write error: Input/output error
>> root@Asus:/mnt/GlusterFS-POC# dd if=/dev/zero of=test-file bs=1M count=10
>> dd: error writing ‘test-file’: Input/output error
>> dd: closing output file ‘test-file’: Input/output error
>> root@Asus:/mnt/GlusterFS-POC# ls -lsha
>> total 444K
>> 9.0K drwxr-xr-x 4 root root 4.1K Oct 5 17:32 .
>> 4.0K drwxr-xr-x 10 root root 4.0K Oct 5 17:16 ..
>> 13K -rw-r--r-- 1 root root 257K Oct 5 17:30 10.tmp
>> 1.0K -rw-r--r-- 1 root root 129K Oct 5 17:30 1.tmp
>> 13K -rw-r--r-- 1 root root 257K Oct 5 17:30 2.tmp
>> 1.0K -rw-r--r-- 1 root root 129K Oct 5 17:30 3.tmp
>> 13K -rw-r--r-- 1 root root 257K Oct 5 17:30 4.tmp
>> 13K -rw-r--r-- 1 root root 257K Oct 5 17:30 5.tmp
>> 13K -rw-r--r-- 1 root root 257K Oct 5 17:30 6.tmp
>> 13K -rw-r--r-- 1 root root 257K Oct 5 17:30 7.tmp
>> 13K -rw-r--r-- 1 root root 257K Oct 5 17:30 8.tmp
>> 13K -rw-r--r-- 1 root root 257K Oct 5 17:30 9.tmp
>> 329K -rw-r--r-- 1 root root 385K Oct 5 17:32 test-file
>> 0 drwxr-xr-x 3 root root 48 Oct 5 16:05 .trashcan
>> root@Asus:/mnt/GlusterFS-POC#
>>
>>
>>
>> I am trying to do tests on my gluster volume to see how well it will work
>> for me.
>> I am getting that error when I try to use dd on it.
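>>
>> If it is useful, this is what I plan to run next to capture more detail;
>> the log path is just the default name for this FUSE mount point:
>>
>> df -h /mnt/GlusterFS-POC
>> dd if=/dev/zero of=/mnt/GlusterFS-POC/ddtest bs=1M count=10
>> tail -n 20 /var/log/glusterfs/mnt-GlusterFS-POC.log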
>>
>> What would be the best way to troubleshoot this?
>>
>>
>> Thanks,
>>
>> Cobin
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>