Hari Gowtham
2019-Jun-10 14:07 UTC
[Gluster-users] No healing on peer disconnect - is it correct?
On Mon, Jun 10, 2019 at 7:21 PM snowmailer <snowmailer at gmail.com> wrote:
>
> Can someone advise on this, please?
>
> BR!
>
> On 3 Jun 2019 at 18:58, Martin <snowmailer at gmail.com> wrote:
>
> > Hi all,
> >
> > I need someone to explain whether my gluster behaviour is correct. I am not sure my gluster works as it should. I have a simple Replica 3 - Number of Bricks: 1 x 3 = 3.
> >
> > When one of my hypervisors is disconnected as a peer, i.e. the gluster process is down but the bricks keep running, the other two healthy nodes start signalling that they lost one peer. This is correct.
> > Next, I restart the gluster process on the node where it failed. I thought this should trigger healing of files on the failed node, but nothing happens.
> >
> > I run VM disks on this gluster volume. No healing is triggered after the gluster restart; the remaining two nodes get the peer back, and everything runs without downtime.
> > Even VMs running on the "failed" node where the gluster process was down (bricks were up) keep running without downtime.

I assume your VMs use gluster as the storage. In that case, the
gluster volume is probably mounted on all the hypervisors.
The mount/client is smart enough to serve the correct data from the
other two machines, which were always up.
This is why things keep working fine.

Gluster should heal the brick.
Adding people who can help you better with the heal part.
@Karthik Subrahmanya @Ravishankar N, please take a look and answer this part.

> >
> > Is this behaviour correct? I mean, no healing is triggered after the peer is reconnected, and the VMs keep running.
> >
> > Thanks for the explanation.
> >
> > BR!
> > Martin
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

--
Regards,
Hari Gowtham.
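For reference, pending heals and the self-heal daemon can be checked from any server node with the standard gluster CLI. A minimal sketch, assuming the volume is named "myvol":

    # List entries that each brick has marked as needing heal
    gluster volume heal myvol info

    # Trigger a heal of the entries that need one
    gluster volume heal myvol

    # Confirm the self-heal daemon (shd) is running on every node
    gluster volume status myvol shd

If "heal info" lists pending entries but nothing heals, a self-heal daemon that failed to come back up on the restarted node is a common culprit.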
Martin
2019-Jun-10 14:23 UTC
[Gluster-users] No healing on peer disconnect - is it correct?
My VMs use Gluster as storage through libgfapi support in QEMU. But I don't see any healing of the reconnected brick. Thanks Karthik / Ravishankar in advance!

> On 10 Jun 2019, at 16:07, Hari Gowtham <hgowtham at redhat.com> wrote:
>
> On Mon, Jun 10, 2019 at 7:21 PM snowmailer <snowmailer at gmail.com> wrote:
>>
>> Can someone advise on this, please?
>>
>> BR!
>>
>> On 3 Jun 2019 at 18:58, Martin <snowmailer at gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I need someone to explain whether my gluster behaviour is correct. I am not sure my gluster works as it should. I have a simple Replica 3 - Number of Bricks: 1 x 3 = 3.
>>>
>>> When one of my hypervisors is disconnected as a peer, i.e. the gluster process is down but the bricks keep running, the other two healthy nodes start signalling that they lost one peer. This is correct.
>>> Next, I restart the gluster process on the node where it failed. I thought this should trigger healing of files on the failed node, but nothing happens.
>>>
>>> I run VM disks on this gluster volume. No healing is triggered after the gluster restart; the remaining two nodes get the peer back, and everything runs without downtime.
>>> Even VMs running on the "failed" node where the gluster process was down (bricks were up) keep running without downtime.
>
> I assume your VMs use gluster as the storage. In that case, the
> gluster volume is probably mounted on all the hypervisors.
> The mount/client is smart enough to serve the correct data from the
> other two machines, which were always up.
> This is why things keep working fine.
>
> Gluster should heal the brick.
> Adding people who can help you better with the heal part.
> @Karthik Subrahmanya @Ravishankar N, please take a look and answer this part.
>
>>>
>>> Is this behaviour correct? I mean, no healing is triggered after the peer is reconnected, and the VMs keep running.
>>>
>>> Thanks for the explanation.
>>>
>>> BR!
>>> Martin
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
> Regards,
> Hari Gowtham.
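With libgfapi there is no FUSE mount on the hypervisor: QEMU opens the image on the volume directly via a gluster:// URI. A minimal sketch of such an attachment, where the server name "server1", volume name "myvol", and image path are assumptions:

    # Boot a guest whose disk lives on the gluster volume via libgfapi
    qemu-system-x86_64 \
      -drive file=gluster://server1/myvol/images/vm1.qcow2,format=qcow2,if=virtio

The client side behaves the same as a mounted volume: reads and writes are served by whichever replicas are up, which is why the guests saw no downtime while one node's gluster process was down.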