Lindsay Mathieson
2016-Jan-21 00:54 UTC
[Gluster-users] File Corruption when adding bricks to live replica volumes
On 19/01/16 22:06, Krutika Dhananjay wrote:
> As far as the reverse heal is concerned, there is one issue with
> add-brick where replica count is increased, which is still under review.
> Could you instead try the following steps at the time of add-brick and
> tell me if it works fine:
>
> 1. Run 'gluster volume add-brick datastore1 replica 3
> vng.proxmox.softlog:/vmdata/datastore1' as usual.
>
> 2. Kill the glusterfsd process corresponding to the newly added brick (the
> brick in vng in your case). You should be able to get its pid in the
> output of 'gluster volume status datastore1'.
> 3. Create a dummy file on the root of the volume from the mount point.
> This can be any random name.
> 4. Delete the dummy file created in step 3.
> 5. Bring the killed brick back up. For this, you can run 'gluster
> volume start datastore1 force'.
> 6. Then execute 'gluster volume heal datastore1 full' on the node with
> the highest uuid (this we know how to do from the previous thread on
> the same topic).
>
> Then monitor heal-info output to track heal progress.

I'm afraid it didn't work, Krutika; I still got the reverse heal problem.

NB: I am starting from a replica 3 store, removing a brick, cleaning it,
then re-adding it. Possibly that affects the process?

--
Lindsay Mathieson
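The workaround steps quoted above can be sketched as a shell script. This is a non-authoritative sketch, not a tested procedure from the thread: the client mount point is an assumption, and the awk field used to extract the brick PID from 'gluster volume status' output varies between gluster versions, so verify it on your installation before running anything.

```shell
#!/bin/sh
# Sketch of the add-brick workaround described in the steps above.
# Volume name and brick path are taken from the thread; MNT is an
# assumed client mount point -- adjust all three for your setup.
VOL=datastore1
BRICK=vng.proxmox.softlog:/vmdata/datastore1
MNT=/mnt/glusterfs

# Step 1: add the new brick, raising the replica count to 3.
gluster volume add-brick "$VOL" replica 3 "$BRICK"

# Step 2: kill the glusterfsd process for the newly added brick.
# The PID column position in 'gluster volume status' output is an
# assumption here; check it against your gluster version.
PID=$(gluster volume status "$VOL" | awk -v b="$BRICK" '$0 ~ b {print $(NF-1)}')
kill "$PID"

# Steps 3-4: create and delete a dummy file (any name) at the volume
# root, via the client mount point.
touch "$MNT/dummy-heal-marker"
rm "$MNT/dummy-heal-marker"

# Step 5: bring the killed brick back up.
gluster volume start "$VOL" force

# Step 6: trigger a full heal -- run this on the node with the
# highest UUID, per the earlier thread.
gluster volume heal "$VOL" full

# Then monitor heal progress:
gluster volume heal "$VOL" info
```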
Krutika Dhananjay
2016-Jan-21 04:44 UTC
[Gluster-users] File Corruption when adding bricks to live replica volumes
It should not, especially if you followed the steps I gave in my previous
mail. Just to be clear, how did you clean up the brick that was removed?
I mean, what command did you use?

-Krutika

----- Original Message -----
> From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> To: "Krutika Dhananjay" <kdhananj at redhat.com>
> Cc: "gluster-users" <Gluster-users at gluster.org>
> Sent: Thursday, January 21, 2016 6:24:58 AM
> Subject: Re: [Gluster-users] File Corruption when adding bricks to live
> replica volumes
>
> [quoted workaround steps from the previous mail trimmed]
>
> I'm afraid it didn't work Krutika, I still got the reverse heal problem.
>
> nb. I am starting from a replica 3 store, removing a brick, cleaning it,
> then re-adding it. Possibly that affects the process?
>
> --
> Lindsay Mathieson