That's not correct. There is no risk of corruption using "sync=disabled". In the worst case you just end up with old data, but no corruption. See the following comment from a master of ZFS (Aaron Toponce):

https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906

Btw: I have an enterprise SSD for my ZFS SLOG, but in the case of GlusterFS I don't see much improvement. The real performance improvement comes from disabling ZFS synchronous writes. I do that for all my ZFS pools/partitions which have GlusterFS on top.

-------- Original Message --------
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 26, 2016 11:08 PM
UTC Time: September 26, 2016 9:08 PM
From: lindsay.mathieson at gmail.com
To: gluster-users at gluster.org

On 27/09/2016 4:13 AM, mabi wrote:
> I would also say do not forget to set "sync=disabled".

I wouldn't be doing that - very high risk of gluster corruption in the event of power loss or server crash. Up to 5 seconds of writes could be lost that way.

If writes aren't fast enough I'd add an SSD partition for SLOG. Preferably a data center quality one.

--
Lindsay Mathieson
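A minimal sketch of the two approaches discussed above, assuming a pool named "tank" with a brick dataset "tank/brick1" and an SSD partition at a placeholder device path (none of these names come from the thread):

    # mabi's approach: disable synchronous writes on the brick dataset.
    # ZFS itself stays consistent, but up to ~5 seconds of acknowledged
    # writes can be lost on power failure or crash (Lindsay's objection).
    zfs set sync=disabled tank/brick1

    # verify the current setting
    zfs get sync tank/brick1

    # Lindsay's alternative: keep synchronous semantics and add a
    # data-center-grade SSD partition as a separate log (SLOG) device.
    zfs set sync=standard tank/brick1
    zpool add tank log /dev/disk/by-id/example-ssd-part1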
On 29/09/2016 4:32 AM, mabi wrote:
> That's not correct. There is no risk of corruption using "sync=disabled". In the worst case you just end up with old data but no corruption. See the following comment from a master of ZFS (Aaron Toponce):
>
> https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906

You're missing what he said - *ZFS* will not be corrupted, but the data written could be in any state; in this case, the Gluster filesystem data and metadata. Having one node in a cluster out of sync without the cluster knowing would be very bad.

--
Lindsay Mathieson
On 30 Sep 2016 at 11:35, "mabi" <mabi at protonmail.ch> wrote:
>
> That's not correct. There is no risk of corruption using "sync=disabled". In the worst case you just end up with old data but no corruption. See the following comment from a master of ZFS (Aaron Toponce):
>
> https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906
>
> Btw: I have an enterprise SSD for my ZFS SLOG, but in the case of GlusterFS I don't see much improvement. The real performance improvement comes from disabling ZFS synchronous writes. I do that for all my ZFS pools/partitions which have GlusterFS on top.

This seems logical. Did you measure the performance gain with sync disabled? Which configuration do you use in Gluster? ZFS with raidz2 and SLOG on SSD? Any L2ARC?

I was thinking about creating one or more raidz2 vdevs to use as bricks, with 2 SSDs. One small partition on each of these SSDs would be used as a mirrored SLOG, and the other 2 partitions would be used as standalone ARC cache (L2ARC). Would this be worth the use of SSDs, or would it be totally useless with Gluster? I don't know whether to use Gluster hot tiering or let ZFS manage everything.

As a suggestion for the Gluster developers: if ZFS is considered stable it could be used as the default (replacing XFS), and many features that ZFS already has could be removed from Gluster (like bitrot detection), keeping Gluster smaller and faster.
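A sketch of the layout described above, assuming six data disks per brick pool and two SSDs, each split into a small SLOG partition (p1) and a larger cache partition (p2); all pool, dataset and device names are placeholders:

    # raidz2 vdev holding the brick data
    zpool create brickpool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # mirrored SLOG across the small partitions of the two SSDs
    zpool add brickpool log mirror /dev/nvme0n1p1 /dev/nvme1n1p1

    # remaining SSD partitions as standalone L2ARC (ZFS never mirrors
    # cache devices; losing one only costs cached data)
    zpool add brickpool cache /dev/nvme0n1p2 /dev/nvme1n1p2

    # dataset that becomes the Gluster brick
    zfs create brickpool/brick1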
On September 30, 2016 1:46:31 PM GMT+02:00, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:
> As a suggestion for the Gluster developers: if ZFS is considered stable it could be used as the default (replacing XFS), and many features that ZFS already has could be removed from Gluster (like bitrot detection), keeping Gluster smaller and faster.

ZFS is neither small nor fast. It will never be a default, imho, as its incompatible license prevents it from being included in many distros, thus significantly complicating installation.

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.