John Gardeniers
2014-Mar-12 04:46 UTC
[Gluster-users] Self-heal failed on new bricks. Or did it?
Hi All,

I'm new to Gluster and am therefore at the "suck it and see" stage, trying different things to see it in operation. I have a replica pair with a single volume of one brick on each server, acting as a back end for an RHEV test install.

Today I added an additional brick on each of the replica pair with "gluster volume add-brick testvol 192.168.19.20:/brick2 192.168.19.21:/brick2 force", the force being necessary as brick2 is on the system drive (remember, I'm just experimenting for now).

While there are no other signs of errors, when I run "gluster volume heal testvol info heal-failed" I get the following:

Gathering Heal info on volume testvol has been successful

Brick 192.168.19.20:/storage
Number of entries: 0

Brick 192.168.19.21:/storage
Number of entries: 0

Brick 192.168.19.20:/brick2
Number of entries: 4
at                    path on brick
-----------------------------------
2014-03-12 03:59:50 /f67693a9-e255-4466-a5a9-298e54f0f7c7/dom_md
2014-03-12 03:59:50 /f67693a9-e255-4466-a5a9-298e54f0f7c7/master/tasks
2014-03-12 03:59:50 /f67693a9-e255-4466-a5a9-298e54f0f7c7/master/vms
2014-03-12 03:59:50 /f67693a9-e255-4466-a5a9-298e54f0f7c7/master

Brick 192.168.19.21:/brick2
Number of entries: 0

I've rsynced brick2 from each gluster server to another box and diffed them. There are absolutely no differences. So why is Gluster saying that the heal failed? More importantly, what can I do about it?

Thanks,
John
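For reference, the heal state of the new bricks can be re-checked and a full heal crawl forced with the standard gluster CLI. This is a minimal sketch, not from the thread itself, assuming the volume name "testvol" used above and a reachable glusterd on the node where it runs:

```shell
# Hedged sketch: re-check and re-trigger self-healing after add-brick.
# Assumes volume "testvol" (from the thread) and a running glusterd; the
# guard lets the script run harmlessly where the gluster CLI is absent.
if command -v gluster >/dev/null 2>&1; then
    # Confirm both new bricks and the self-heal daemon are online.
    gluster volume status testvol
    # Force a full self-heal crawl over the whole volume.
    gluster volume heal testvol full
    # The heal-failed list should clear once the crawl completes.
    gluster volume heal testvol info heal-failed
else
    echo "gluster CLI not found; commands shown for reference only"
fi
```

If the heal-failed entries persist after a full crawl, restarting glusterd on each node in turn (as the follow-up post effectively did by rebooting) is another way to clear stale entries from the daemon's view.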
John Gardeniers
2014-Mar-12 21:38 UTC
[Gluster-users] Self-heal failed on new bricks. Or did it?
Hi All,

Having received no replies overnight, I thought to see what would happen if I rebooted each of the replica servers (one at a time, of course). The problem disappeared. Theories, anyone?

John

On 12/03/14 15:46, John Gardeniers wrote:
> Hi All,
>
> I'm new to Gluster and am therefore at the "suck it and see" stage,
> trying different things to see it in operation. I have a replica pair
> with a single volume of one brick on each server acting as a back end
> for an RHEV test install.
>
> Today I added an additional brick on each of the replica pair with
> "gluster volume add-brick testvol 192.168.19.20:/brick2
> 192.168.19.21:/brick2 force", the force being necessary as brick2 is on
> the system drive (remember, I'm just experimenting for now).
>
> While there are no other signs of errors, when I run "gluster volume
> heal testvol info heal-failed" I get the following:
>
> Gathering Heal info on volume testvol has been successful
>
> Brick 192.168.19.20:/storage
> Number of entries: 0
>
> Brick 192.168.19.21:/storage
> Number of entries: 0
>
> Brick 192.168.19.20:/brick2
> Number of entries: 4
> at                    path on brick
> -----------------------------------
> 2014-03-12 03:59:50 /f67693a9-e255-4466-a5a9-298e54f0f7c7/dom_md
> 2014-03-12 03:59:50 /f67693a9-e255-4466-a5a9-298e54f0f7c7/master/tasks
> 2014-03-12 03:59:50 /f67693a9-e255-4466-a5a9-298e54f0f7c7/master/vms
> 2014-03-12 03:59:50 /f67693a9-e255-4466-a5a9-298e54f0f7c7/master
>
> Brick 192.168.19.21:/brick2
> Number of entries: 0
>
> I've rsynced brick2 from each gluster server to another box and diffed
> them. There are absolutely no differences. So how come Gluster is saying
> that the heal failed? More importantly, what can I do about it?
>
> Thanks,
> John
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users