Displaying 9 results from an estimated 9 matches for "recovering_from_file_split".
2018 Jan 24
1
Split brain directory
...plit-brain itself shows zero items to be healed
(lines 388 to 446).
All the clients mount this volume using glusterfs-fuse.
I don't know what to do, please help.
Thanks.
Luca Gervasi
References:
[1] https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
[2] https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/sect-managing_split-brain
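For context, the listing being described (zero items reported as split-brain) is usually produced by commands like the ones below, shown here only as a hedged illustration with a placeholder volume name:
# gluster volume heal <VOLNAME> info
# gluster volume heal <VOLNAME> info split-brain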
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
It should be pretty much the same for replica 3; you change the xattrs with something like:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
When I try to decide which copy to use I normally run things like:
# stat /<path to brick>/path/to/file...
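To make that concrete, here is a hedged sketch of the whole workflow. The brick paths, the file name "a" and the client index trusted.afr.vol-client-0 are the placeholders from the example above, not values from any particular volume, and the sketch assumes brick-a holds the good copy; map client indexes to bricks as described in the Red Hat guide before touching anything.

1. Inspect the AFR changelog xattrs for the file on every brick (each
   value is three 4-byte counters in hex: data / metadata / entry):

   # getfattr -d -m . -e hex /gfs/brick-a/a
   # getfattr -d -m . -e hex /gfs/brick-b/a

2. Decide which copy to keep by comparing the copies directly:

   # stat /gfs/brick-a/a /gfs/brick-b/a
   # md5sum /gfs/brick-a/a /gfs/brick-b/a

3. To keep brick-a's copy, clear the data counter of the changelog on
   brick-b that blames brick-a, so brick-a becomes the heal source for
   the data (the value mirrors the example above; only the first 8 hex
   digits, the data counter, are zeroed):

   # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a

4. Trigger a heal and confirm the file is no longer reported:

   # gluster volume heal <VOLNAME>
   # gluster volume heal <VOLNAME> info split-brain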
2017 Dec 20
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi,
I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
Brick3: printserver:/data/virt_images/brick (arbiter)
Options Reconfigured:
features.quota-deem-statfs:
2017 Dec 21
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
...the arbiter brick also.
Regards,
Karthik
On Thu, Dec 21, 2017 at 9:55 AM, Ben Turner <bturner at redhat.com> wrote:
> Here is the process for resolving split brain on replica 2:
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
>
> It should be pretty much the same for replica 3, you change the xattrs
> with something like:
>
> # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
>
> When I try to decide which copy to use I normally run things like:
>...
2017 Dec 21
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
...arthik
>
>
> On Thu, Dec 21, 2017 at 9:55 AM, Ben Turner <bturner at redhat.com> wrote:
>>
>> Here is the process for resolving split brain on replica 2:
>>
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
>>
>> It should be pretty much the same for replica 3, you change the xattrs
>> with something like:
>>
>> # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
>>
>> When I try to decide which copy to use I...
2017 Dec 19
3
How to make sure self-heal backlog is empty ?
Hello list,
I'm not sure what to look for here. Is what I'm seeing the actual
"backlog" that we need to make sure is empty while performing a rolling
upgrade, before going to the next node? Reading this, how can I tell if
it's okay to reboot / upgrade my next node in the pool?
Here is what I do to check:
for i in `gluster volume list`; do
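The loop above is cut off in this snippet; not from the original mail, but a minimal sketch of one way such a check can be finished, assuming the standard gluster CLI: every brick reporting "Number of entries: 0" means the self-heal backlog for that volume is empty and the next node should be safe to take down.

for vol in $(gluster volume list); do
    echo "== $vol =="
    # Per-brick count of files still waiting to be healed; all zeros
    # means there is no outstanding self-heal backlog on this volume.
    gluster volume heal "$vol" info | grep "Number of entries"
done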
2017 Dec 22
2
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
...ec 21, 2017 at 9:55 AM, Ben Turner <bturner at redhat.com> wrote:
> >>
> >> Here is the process for resolving split brain on replica 2:
> >>
> >>
> >> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
> >>
> >> It should be pretty much the same for replica 3, you change the xattrs
> >> with something like:
> >>
> >> # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
> >>
> >>...
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
...er <bturner at redhat.com> wrote:
>> >>
>> >> Here is the process for resolving split brain on replica 2:
>> >>
>> >>
>> >>
>> >> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
>> >>
>> >> It should be pretty much the same for replica 3, you change the xattrs
>> >> with something like:
>> >>
>> >> # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
>...
2017 Dec 22
0
Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
...te:
> >> >>
> >> >> Here is the process for resolving split brain on replica 2:
> >> >>
> >> >>
> >> >>
> >> >> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
> >> >>
> >> >> It should be pretty much the same for replica 3, you change the xattrs
> >> >> with something like:
> >> >>
> >> >> # setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000
>...