Kevin Lemonnier
2017-Jan-27 22:05 UTC
[Gluster-users] Location of the gluster client log with libgfapi?
> Basically, every now & then I notice random VHD images popping up in the
> heal queue, and they're almost always in pairs, "healing" the same file on
> 2 of the 3 replicate bricks.
> That already strikes me as odd, as if a file is "dirty" on more than one
> brick, surely that's a split-brain scenario? (nothing logged in "info
> split-brain" though)

I don't think that's a problem, they do tend to show the heal on every brick
but the one being healed. I think the sources show the file to heal, not the
dirty one. At least that's what I noticed on my clusters.

> Anyway, these heal processes always hang around for a couple of hours, even
> when it's just metadata on an arbiter brick.
> That doesn't make sense to me, an arbiter shouldn't take more than a couple
> of seconds to heal!?

Sorry, no idea on that, I never used arbiter setups.

> I spoke with Joe on IRC, and he suggested I'd find more info in the
> client's logs...

Well, it'd be good to know why they need healing, for sure. I don't know of
any way to get that on the gluster side; you need to find a way in oVirt to
redirect the output of the qemu process somewhere. That's where you'll find
the libgfapi logs. Never used oVirt so I can't really help on that :/

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
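[Editor's note: for readers following along, the heal state being discussed can be inspected from any node with the standard glusterfs CLI. This is a hedged sketch; the volume name `vmstore` is a placeholder, not one from this thread, and exact output formats vary across glusterfs 3.x releases.]

```shell
# Files currently queued for heal, listed per brick (the behaviour Kevin
# describes: entries often appear under the *source* bricks, not the dirty one)
gluster volume heal vmstore info

# Only entries gluster itself considers split-brain; empty output here is
# consistent with "nothing logged in info split-brain" above
gluster volume heal vmstore info split-brain

# Per-brick summary counts, useful for watching a long-running heal drain
gluster volume heal vmstore statistics heal-count
```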
Gambit15
2017-Jan-27 22:17 UTC
[Gluster-users] Location of the gluster client log with libgfapi?
On 27 January 2017 at 19:05, Kevin Lemonnier <lemonnierk at ulrar.net> wrote:

> > Basically, every now & then I notice random VHD images popping up in the
> > heal queue, and they're almost always in pairs, "healing" the same file on
> > 2 of the 3 replicate bricks.
> > That already strikes me as odd, as if a file is "dirty" on more than one
> > brick, surely that's a split-brain scenario? (nothing logged in "info
> > split-brain" though)
>
> I don't think that's a problem, they do tend to show the heal on every brick
> but the one being healed .. I think the sources show the file to heal, not
> the dirty one.
> At least that's what I noticed on my clusters.
>
> > Anyway, these heal processes always hang around for a couple of hours, even
> > when it's just metadata on an arbiter brick.
> > That doesn't make sense to me, an arbiter shouldn't take more than a couple
> > of seconds to heal!?
>
> Sorry, no idea on that, I never used arbiter setups.

If it's actually showing the source files that are being healed *from*, not
*to*, that would make sense, although it's a counter-intuitive way of
displaying things, and completely contrary to all of the documentation (as
described by readthedocs.gluster.io, Red Hat & Rackspace).

> > I spoke with Joe on IRC, and he suggested I'd find more info in the
> > client's logs...
>
> Well it'd be good to know why they need healing, for sure.
> I don't know of any way to get that on the gluster side, you need to
> find a way on oVirt to redirect the output of the qemu process somewhere.
> That's where you'll find the libgfapi logs.
> Never used oVirt so I can't really help on that :/

Well, you've given me somewhere to start from at least. Appreciated!

D
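[Editor's note: a hedged sketch of one possible starting point. If memory serves, qemu's native gluster block driver grew a per-drive `debug` log-level option around qemu 2.7 and a `logfile` option around qemu 2.9, which would let the libgfapi client log go to a file instead of the qemu process's stderr. The host, volume, image path, and log path below are all placeholders, and oVirt may not expose these drive options directly.]

```shell
# Assumed qemu gluster driver options (verify against your qemu version's docs):
#   file.debug   - libgfapi log level, 0 (quiet) to 9 (trace)
#   file.logfile - redirect the gluster client log to this file
qemu-system-x86_64 \
  -drive file=gluster://storage1/vmstore/vm01.img,format=raw,\
file.debug=4,file.logfile=/var/log/qemu/vm01-gfapi.log
```

On qemu versions without `logfile`, capturing the process's stderr (however oVirt/libvirt allows) remains the fallback Kevin describes.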