lemonnierk at ulrar.net
2017-Sep-09 22:08 UTC
[Gluster-users] GlusterFS as virtual machine storage
Mh, not so sure really, using libgfapi and it's been working perfectly
fine. And trust me, there have been A LOT of various crashes, reboots and
kills of nodes.

Maybe it's a version thing? A new bug in the new gluster releases that
doesn't affect our 3.7.15.

On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote:
> Well, that makes me feel better.
>
> I've seen all these stories here and on Ovirt recently about VMs going
> read-only, even on fairly simple layouts.
>
> Each time, I've responded that we just don't see those issues.
>
> I guess the fact that we were lazy about switching to gfapi turns out to
> be a potential explanation <grin>
>
> -wk
>
> On 9/9/2017 6:49 AM, Pavel Szalbot wrote:
> > Yes, this is my observation so far.
> >
> > On Sep 9, 2017 13:32, "Gionatan Danti" <g.danti at assyoma.it> wrote:
> >
> >     So, to recap:
> >     - with gfapi, your VMs crash/mount read-only with a single node
> >       failure;
> >     - with gfapi also, fio seems to have no problems;
> >     - with the native FUSE client, both VMs and fio have no problems
> >       at all.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
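For readers following along: the libgfapi access path being discussed bypasses the FUSE mount entirely and lets qemu talk to the bricks directly. In libvirt that is expressed as a network disk. This is only a sketch; the volume name (`vmstore`), image path, and host name are placeholders, not anything from this thread:

```xml
<!-- Hypothetical libvirt disk stanza for a gluster-backed image via libgfapi.
     "vmstore", "vm1.img" and "gluster1" are made-up names for illustration. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='gluster' name='vmstore/vm1.img'>
    <host name='gluster1' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```

The FUSE alternative the thread compares against is just a `type='file'` disk pointing at the mounted volume, which is why it "just works" with migration and snapshots.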
I'm on 3.10.5. It's rock solid (at least with the fuse mount <Grin>)

We are also typically on a somewhat slower GlusterFS LAN network (bonded
2x1G, jumbo frames) so that may be a factor.

I'll try to set up a trusted pool to test libgfapi soon.

I'm curious as to how much faster it is, but the fuse mount is fast enough,
dirt simple to use, and just works on all VM ops such as migration, snaps
etc, so there hasn't been a compelling need to squeeze out a few more I/Os.

On 9/9/2017 3:08 PM, lemonnierk at ulrar.net wrote:
> Mh, not so sure really, using libgfapi and it's been working perfectly
> fine. And trust me, there have been A LOT of various crashes, reboots and
> kills of nodes.
>
> Maybe it's a version thing? A new bug in the new gluster releases that
> doesn't affect our 3.7.15.
>
> [snip]
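Worth noting for anyone reproducing either setup: volumes used as VM stores are normally tuned with the stock `virt` profile before comparing gfapi vs. FUSE behaviour. A sketch, assuming a volume named `vmstore` (the volume name and the individual options listed are illustrative; the profile on your version may differ):

```shell
# Apply the stock virt profile shipped with glusterfs
# (assumes /var/lib/glusterd/groups/virt exists on the node)
gluster volume set vmstore group virt

# The profile enables settings along these lines, among others:
gluster volume set vmstore network.remote-dio enable
gluster volume set vmstore performance.quick-read off
gluster volume set vmstore performance.read-ahead off
gluster volume set vmstore performance.io-cache off
gluster volume set vmstore performance.stat-prefetch off
```

Comparing crash behaviour on a volume without this profile against one with it would muddy the gfapi-vs-FUSE question, since caching translators change what the client sees during a brick outage.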
Hey guys, I got another "reboot crash" with gfapi and this time
libvirt-3.2.1 (from cbs.centos.org). Is there anyone who can audit the
libgfapi usage in libvirt? :-)

WK: I use bonded 2x10Gbps and I get crashes only in heavy I/O situations
(fio). Upgrading the system (apt-get dist-upgrade) was ok, so this might
even be related to the amount of IOPS.

-ps

On Sun, Sep 10, 2017 at 6:37 AM, WK <wkmail at bneit.com> wrote:
> I'm on 3.10.5. It's rock solid (at least with the fuse mount <Grin>)
>
> We are also typically on a somewhat slower GlusterFS LAN network (bonded
> 2x1G, jumbo frames) so that may be a factor.
>
> I'll try to set up a trusted pool to test libgfapi soon.
>
> I'm curious as to how much faster it is, but the fuse mount is fast
> enough, dirt simple to use, and just works on all VM ops such as
> migration, snaps etc, so there hasn't been a compelling need to squeeze
> out a few more I/Os.
>
> [snip]
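Since the crashes only show up under heavy I/O, a deterministic load generator helps when testing a node kill. A minimal fio job of the kind one might run inside the guest; every value here is a placeholder, not the job Pavel actually used:

```ini
; Hypothetical fio job to generate sustained random-write pressure
; inside a VM while a gluster node is rebooted. Tune size/iodepth
; to your storage before drawing conclusions.
[heavy-io]
filename=/var/tmp/fio.test
ioengine=libaio
direct=1
rw=randwrite
bs=4k
size=1g
iodepth=32
runtime=60
time_based=1
```

Run it with `fio heavy-io.fio` while power-cycling one replica; if the guest's root filesystem goes read-only, `dmesg` in the guest will typically show the I/O errors that triggered the remount.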