On 23-08-2017 18:14 Pavel Szalbot wrote:
> Hi, after many VM crashes during upgrades of Gluster, losing network
> connectivity on one node etc. I would advise running replica 2 with
> arbiter.

Hi Pavel, this is bad news :(
So, in your case at least, Gluster was not stable? Something as simple
as an update would make it crash?

> I once even managed to break this setup (with arbiter) due to network
> partitioning - one data node never healed and I had to restore from
> backups (it was easier and kind of non-production). Be extremely
> careful and plan for failure.

I would use VM locking via sanlock or virtlock, so a split brain should
not cause simultaneous changes on both replicas. I am more concerned
about volume heal time: what will happen if the standby node
crashes/reboots? Will *all* data be re-synced from the master, or will
only the changed bits be re-synced? As stated above, I would like to
avoid using sharding...

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
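[Editor's note on the heal question: Gluster's self-heal works per file, and whether a heal re-copies a whole file or only its changed blocks is governed by the cluster.data-self-heal-algorithm option. A sketch of the relevant commands; the volume name "vmstore" is hypothetical:]

```shell
# List files that still need healing after the standby node returns
gluster volume heal vmstore info

# "diff" transfers only the changed blocks of a file being healed;
# "full" re-copies the whole file from the good replica. For large,
# unsharded VM images this choice dominates heal time.
gluster volume set vmstore cluster.data-self-heal-algorithm diff
```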
lemonnierk at ulrar.net
2017-Aug-23 16:59 UTC
[Gluster-users] GlusterFS as virtual machine storage
What he is saying is that, on a two-node volume, upgrading a node will
cause the volume to go down. That's nothing weird, you really should use
3 nodes.

On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote:
> On 23-08-2017 18:14 Pavel Szalbot wrote:
> > Hi, after many VM crashes during upgrades of Gluster, losing network
> > connectivity on one node etc. I would advise running replica 2 with
> > arbiter.
>
> Hi Pavel, this is bad news :(
> So, in your case at least, Gluster was not stable? Something as simple
> as an update would make it crash?
>
> > I once even managed to break this setup (with arbiter) due to network
> > partitioning - one data node never healed and I had to restore from
> > backups (it was easier and kind of non-production). Be extremely
> > careful and plan for failure.
>
> I would use VM locking via sanlock or virtlock, so a split brain should
> not cause simultaneous changes on both replicas. I am more concerned
> about volume heal time: what will happen if the standby node
> crashes/reboots? Will *all* data be re-synced from the master, or only
> the changed bits? As stated above, I would like to avoid using
> sharding...
>
> Thanks.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
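[Editor's note: the 3-node layout recommended above is commonly created as a "replica 3 arbiter 1" volume, where the third brick holds only metadata. A sketch; hostnames and brick paths are hypothetical:]

```shell
# Two full data bricks plus one arbiter brick; the arbiter stores only
# file metadata, enough to keep quorum while one node is in maintenance
gluster volume create vmstore replica 3 arbiter 1 \
  node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/arbiter
gluster volume start vmstore
```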
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
with the default network.ping-timeout will cause the underlying VM to
remount its filesystem as read-only (a device error will occur) unless
you tune the mount options in the VM's fstab.
-ps

On Wed, Aug 23, 2017 at 6:59 PM, <lemonnierk at ulrar.net> wrote:
> What he is saying is that, on a two-node volume, upgrading a node will
> cause the volume to go down. That's nothing weird, you really should
> use 3 nodes.
>
> On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote:
>> On 23-08-2017 18:14 Pavel Szalbot wrote:
>> > Hi, after many VM crashes during upgrades of Gluster, losing network
>> > connectivity on one node etc. I would advise running replica 2 with
>> > arbiter.
>>
>> Hi Pavel, this is bad news :(
>> So, in your case at least, Gluster was not stable? Something as simple
>> as an update would make it crash?
>>
>> > I once even managed to break this setup (with arbiter) due to network
>> > partitioning - one data node never healed and I had to restore from
>> > backups (it was easier and kind of non-production). Be extremely
>> > careful and plan for failure.
>>
>> I would use VM locking via sanlock or virtlock, so a split brain should
>> not cause simultaneous changes on both replicas. I am more concerned
>> about volume heal time: what will happen if the standby node
>> crashes/reboots? Will *all* data be re-synced from the master, or only
>> the changed bits? As stated above, I would like to avoid using
>> sharding...
>>
>> Thanks.
>>
>> --
>> Danti Gionatan
>> Supporto Tecnico
>> Assyoma S.r.l. - www.assyoma.it
>> email: g.danti at assyoma.it - info at assyoma.it
>> GPG public key ID: FF5F32A8
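[Editor's note: the two knobs Pavel refers to can be sketched as follows. The 10-second value and the volume/device names are illustrative, not recommendations from this thread:]

```shell
# network.ping-timeout defaults to 42 seconds; lowering it lets clients
# drop an unreachable brick before the guest kernel times out its I/O
gluster volume set vmstore network.ping-timeout 10

# Inside the guest's /etc/fstab: keep ext4 from remounting read-only
# if a transient device error does get through to the VM
#   /dev/vda1  /  ext4  defaults,errors=continue  0  1
```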
On 23-08-2017 18:51 Gionatan Danti wrote:
> On 23-08-2017 18:14 Pavel Szalbot wrote:
>> Hi, after many VM crashes during upgrades of Gluster, losing network
>> connectivity on one node etc. I would advise running replica 2 with
>> arbiter.
>
> Hi Pavel, this is bad news :(
> So, in your case at least, Gluster was not stable? Something as simple
> as an update would make it crash?
>
>> I once even managed to break this setup (with arbiter) due to network
>> partitioning - one data node never healed and I had to restore from
>> backups (it was easier and kind of non-production). Be extremely
>> careful and plan for failure.
>
> I would use VM locking via sanlock or virtlock, so a split brain
> should not cause simultaneous changes on both replicas. I am more
> concerned about volume heal time: what will happen if the standby node
> crashes/reboots? Will *all* data be re-synced from the master, or only
> the changed bits? As stated above, I would like to avoid using
> sharding...
>
> Thanks.

Hi all,
any other advice from those who use (or do not use) Gluster as a
replicated VM backend?

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
On 25-08-2017 08:32 Gionatan Danti wrote:
> Hi all,
> any other advice from those who use (or do not use) Gluster as a
> replicated VM backend?
>
> Thanks.

Sorry, I was not seeing messages because I was not subscribed to the
list; I read it from the web.
So it seems that Pavel and WK have vastly different experiences with
Gluster. Any plausible cause for that difference?

> WK wrote:
> 2 node plus Arbiter. You NEED the arbiter or a third node. Do NOT try 2
> node with a VM

Is this true even if I manage locking at the application level (via
virtlock or sanlock)? Also, on a two-node setup, is it *guaranteed*
that updating one node will take the whole volume offline? On the other
hand, is a 3-way setup (or 2 + arbiter) free from all these problems?

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
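[Editor's note on the quorum question: with 3 bricks (or 2 + arbiter) it is the client- and server-side quorum options that prevent a two-node split brain; VM-level locking alone cannot stop both replicas from diverging at the Gluster layer. A sketch, again with a hypothetical volume name:]

```shell
# Client-side quorum: writes succeed only while a majority of the
# replica bricks is reachable from the client
gluster volume set vmstore cluster.quorum-type auto

# Server-side quorum: a server stops its bricks if it loses quorum
# with the trusted pool, so an isolated node cannot accept writes
gluster volume set vmstore cluster.server-quorum-type server
```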