Dear Ravishankar,
I'm not sure whether Brick4 had pending AFR xattrs, as I didn't know
what those were, and after a few days I'm not sure I could still find
that information.
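
(For anyone who finds this thread later: I've since gathered that the
pending AFR xattrs live as extended attributes on the files on the brick
backend, and can be dumped with getfattr, e.g. with a made-up path:

# getfattr -d -m . -e hex /data/glusterfs/sdb/homes/some/file

A non-zero trusted.afr.homes-client-N entry there would mean changes are
pending for, i.e. blaming, the brick that client-N refers to.)
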
Anyway, after wasting a few days rsyncing the old brick to a new host I
decided to just add the old brick back into the volume instead of
bringing it up on the new host. I created a new brick directory on the
old host, moved the old brick's contents into that new directory (minus
the .glusterfs directory), added the new brick to the volume, and then
ran Vlad's find/stat trick [1] from the brick against the FUSE mount
point.
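
For the archives: as I understand it, the find/stat trick just walks the
brick and stats the corresponding path on the FUSE mount, so every
lookup gives Gluster a chance to heal the file. A rough sketch, with
paths from my setup:

# Brick backend and a FUSE mount of the volume
BRICK=/data/glusterfs/sdb/homes
MOUNT=/mnt/homes

# Walk the brick, skip .glusterfs, and stat each relative path on the
# FUSE mount so the lookups trigger heals
cd "$BRICK"
find . -path ./.glusterfs -prune -o -print | sed 's|^\./||' | \
  while IFS= read -r f; do stat "$MOUNT/$f" > /dev/null; done
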
The interesting problem I have now is that some files don't appear in the
FUSE mount's directory listings, but I can actually list them directly and
even read them. What could cause that?
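
To be concrete, with a made-up file name:

# ls /mnt/homes/aorth | grep missingfile    <- no output
# stat /mnt/homes/aorth/missingfile         <- succeeds
# cat /mnt/homes/aorth/missingfile          <- reads fine
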
Thanks,
[1] https://lists.gluster.org/pipermail/gluster-users/2018-February/033584.html
On Fri, May 24, 2019 at 4:59 PM Ravishankar N <ravishankar at redhat.com>
wrote:
>
> On 23/05/19 2:40 AM, Alan Orth wrote:
>
> Dear list,
>
> I seem to have gotten into a tricky situation. Today I brought up a shiny
> new server with new disk arrays and attempted to replace one brick of a
> replica 2 distribute/replicate volume on an older server using the
> `replace-brick` command:
>
> # gluster volume replace-brick homes wingu0:/mnt/gluster/homes
> wingu06:/data/glusterfs/sdb/homes commit force
>
> The command was successful and I see the new brick in the output of
> `gluster volume info`. The problem is that Gluster doesn't seem to be
> migrating the data,
>
> `replace-brick` heals (rather than migrates) the data. In your case, the
> data should have been healed from Brick-4 to the replaced Brick-3. Are
> there any errors in the self-heal daemon logs on Brick-4's node? Does
> Brick-4 have pending AFR xattrs blaming Brick-3? The doc is a bit out of
> date; the replace-brick command internally does all the setfattr steps
> that are mentioned in the doc.
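>
> For example, something like this (assuming your volume name and the
> default log location):
>
> # gluster volume heal homes info
> # grep -E " (E|W) \[" /var/log/glusterfs/glustershd.log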
>
> -Ravi
>
>
> and now the original brick that I replaced is no longer part of the volume
> (and a few terabytes of data are just sitting on the old brick):
>
> # gluster volume info homes | grep -E "Brick[0-9]:"
> Brick1: wingu4:/mnt/gluster/homes
> Brick2: wingu3:/mnt/gluster/homes
> Brick3: wingu06:/data/glusterfs/sdb/homes
> Brick4: wingu05:/data/glusterfs/sdb/homes
> Brick5: wingu05:/data/glusterfs/sdc/homes
> Brick6: wingu06:/data/glusterfs/sdc/homes
>
> I see the Gluster docs have a more complicated procedure for replacing
> bricks that involves getfattr/setfattr [1]. How can I tell Gluster about
> the old brick? I see that I have a backup of the old volfile thanks to
> yum's rpmsave function, if that helps.
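>
> If I'm reading that doc right, its procedure boils down to marking the
> surviving brick as the heal source by creating and removing a dummy
> entry and xattr through a FUSE mount of the volume, roughly (mount
> point made up):
>
> # mkdir /mnt/homes/nonexistent-dir
> # rmdir /mnt/homes/nonexistent-dir
> # setfattr -n trusted.non-existent-key -v abc /mnt/homes
> # setfattr -x trusted.non-existent-key /mnt/homes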
>
> We are using Gluster 5.6 on CentOS 7. Thank you for any advice you can
> give.
>
> [1] https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-faulty-brick
>
> --
> Alan Orth
> alan.orth at gmail.com
> https://picturingjordan.com
> https://englishbulgaria.net
> https://mjanja.ch
> "In heaven all the interesting people are missing." ?Friedrich
Nietzsche
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
--
Alan Orth
alan.orth at gmail.com
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch
"In heaven all the interesting people are missing." ?Friedrich
Nietzsche