Kevin Lemonnier
2017-Jan-27 19:20 UTC
[Gluster-users] Location of the gluster client log with libgfapi?
On Fri, Jan 27, 2017 at 02:45:46PM -0300, Gambit15 wrote:
> Hey guys,
> Would anyone be able to tell me the name/location of the gluster client
> log when mounting through libgfapi?

Nowhere, unfortunately. If you are talking about KVM (qemu), you'll get it on the stdout of the VM, which can be annoying to capture depending on what you are using.

--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
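For reference, recent qemu versions can redirect the gluster driver's libgfapi output themselves. A sketch, assuming qemu >= 2.8 (where the gluster block driver gained a `logfile` option; `debug` sets the gfapi log level) -- the host, volume and image path below are placeholders:

```shell
# Sketch: send the libgfapi client log to a file instead of the VM's stdout.
# file.debug  : gfapi log verbosity (0-9)
# file.logfile: where the gluster driver writes its log
qemu-system-x86_64 \
  -drive file=gluster://gluster-host/myvolume/images/vm01.qcow2,format=qcow2,if=virtio,file.debug=4,file.logfile=/var/log/qemu/vm01-gluster.log
```

On older qemu builds without `file.logfile`, redirecting the process's stdout/stderr when launching qemu remains the only option.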
Doug Ingham
2017-Jan-27 20:21 UTC
[Gluster-users] Location of the gluster client log with libgfapi?
Hi Kevin,

On 27 Jan 2017 16:20, "Kevin Lemonnier" <lemonnierk at ulrar.net> wrote:
> On Fri, Jan 27, 2017 at 02:45:46PM -0300, Gambit15 wrote:
> > Hey guys,
> > Would anyone be able to tell me the name/location of the gluster client
> > log when mounting through libgfapi?
>
> Nowhere, unfortunately. If you are talking about KVM (qemu) you'll get it
> on the stdout of the VM, which can be annoying to get depending on what
> you are using.

I'm using oVirt 4, which, whilst backed by KVM, has a few defaults that differ from pure KVM.

On the gluster side, I'm running 3.8 in a (2+1)x2 setup with default server quorum settings.

Basically, every now & then I notice random VHD images popping up in the heal queue, and they're almost always in pairs, "healing" the same file on 2 of the 3 replicate bricks. That already strikes me as odd: if a file is "dirty" on more than one brick, surely that's a split-brain scenario? (Nothing is logged in "info split-brain", though.)

Anyway, these heal processes always hang around for a couple of hours, even when it's just metadata on an arbiter brick. That doesn't make sense to me; an arbiter shouldn't take more than a couple of seconds to heal!?

I spoke with Joe on IRC, and he suggested I'd find more info in the client's logs...
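For anyone following along, the checks described above use the standard gluster CLI. A sketch, assuming a volume named "data" (substitute your own volume name):

```shell
# Files currently pending heal, listed per brick
gluster volume heal data info

# Files gluster actually considers split-brain (distinct from merely pending)
gluster volume heal data info split-brain

# Just the per-brick counts, useful for watching a queue drain
gluster volume heal data statistics heal-count
```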
Kevin Lemonnier
2017-Jan-27 22:05 UTC
[Gluster-users] Location of the gluster client log with libgfapi?
> Basically, every now & then I notice random VHD images popping up in the
> heal queue, and they're almost always in pairs, "healing" the same file on
> 2 of the 3 replicate bricks.
> That already strikes me as odd, as if a file is "dirty" on more than one
> brick, surely that's a split-brain scenario? (nothing logged in "info
> split-brain" though)

I don't think that's a problem; they do tend to show the heal on every brick but the one being healed. I think the sources show the file to heal, not the dirty one. At least that's what I've noticed on my clusters.

> Anyway, these heal processes always hang around for a couple of hours, even
> when it's just metadata on an arbiter brick.
> That doesn't make sense to me, an arbiter shouldn't take more than a couple
> of seconds to heal!?

Sorry, no idea on that, I've never used arbiter setups.

> I spoke with Joe on IRC, and he suggested I'd find more info in the
> client's logs...

Well, it'd be good to know why they need healing, for sure. I don't know of any way to get that on the gluster side; you need to find a way in oVirt to redirect the output of the qemu process somewhere. That's where you'll find the libgfapi logs. I've never used oVirt, so I can't really help on that :/

--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
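One way to see why a file is queued for heal without the client log is to inspect the AFR changelog xattrs directly on the bricks. A sketch: the brick path and image name below are examples, and the exact `trusted.afr.*` attribute names depend on your volume name:

```shell
# Run on each brick server, against the file's path inside the brick.
# -d dumps all attrs, -m . matches every name, -e hex shows raw values.
getfattr -d -m . -e hex /bricks/data/brick1/images/vm01.qcow2
```

Non-zero values in the `trusted.afr.<volname>-client-N` attributes indicate pending data, metadata, or entry operations recorded against brick N, which tells you which copy the heal daemon considers out of date.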