Pranith Kumar Karampuri
2016-Jul-12 13:39 UTC
[Gluster-users] 3.7.13, index healing broken?
Wow, what are the steps to recreate the problem?

On Tue, Jul 12, 2016 at 3:09 PM, Dmitry Melekhov <dm at belkam.com> wrote:
> 12.07.2016 13:33, Pranith Kumar Karampuri wrote:
>> What was "gluster volume heal <volname> info" showing when you saw this
>> issue?
>
> just reproduced:
>
> [root at father brick]# > gstatus-0.64-3.el7.x86_64.rpm
> [root at father brick]# gluster volume heal pool
> Launching heal operation to perform index self heal on volume pool has
> been successful
> Use heal info commands to check status
> [root at father brick]# gluster volume heal pool info
> Brick father:/wall/pool/brick
> Status: Connected
> Number of entries: 0
>
> Brick son:/wall/pool/brick
> Status: Connected
> Number of entries: 0
>
> Brick spirit:/wall/pool/brick
> Status: Connected
> Number of entries: 0
>
> [root at father brick]#
>
> On Mon, Jul 11, 2016 at 3:28 PM, Dmitry Melekhov <dm at belkam.com> wrote:
>> Hello!
>>
>> 3.7.13, 3-brick volume.
>>
>> inside one of the bricks:
>>
>> [root at father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>> -rw-r--r-- 2 root root 52268 Jul 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
>> [root at father brick]#
>>
>> [root at father brick]# > gstatus-0.64-3.el7.x86_64.rpm
>> [root at father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>> -rw-r--r-- 2 root root 0 Jul 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
>> [root at father brick]#
>>
>> so now the file has 0 length.
>>
>> try to heal:
>>
>> [root at father brick]# gluster volume heal pool
>> Launching heal operation to perform index self heal on volume pool has
>> been successful
>> Use heal info commands to check status
>> [root at father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>> -rw-r--r-- 2 root root 0 Jul 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
>> [root at father brick]#
>>
>> nothing!
>>
>> [root at father brick]# gluster volume heal pool full
>> Launching heal operation to perform full self heal on volume pool has
>> been successful
>> Use heal info commands to check status
>> [root at father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>> -rw-r--r-- 2 root root 52268 Jul 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
>> [root at father brick]#
>>
>> full heal is OK.
>>
>> But self-heal is doing an index heal, according to
>> http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/afr-self-heal-daemon/
>>
>> Is this a bug?
>>
>> As far as I remember, it worked in 3.7.10...
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users

-- 
Pranith
12.07.2016 17:39, Pranith Kumar Karampuri wrote:
> Wow, what are the steps to recreate the problem?

Just set the file length to zero; it is always reproducible.