Has anyone made a performance comparison between XFS and ZFS with the ZIL
on an SSD, in a gluster environment?

I've tried to compare both on another SDS (LizardFS) and I haven't seen
any tangible performance improvement.

Is gluster different?
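(For reference, the ZIL setup in question is the usual dedicated log
vdev; a minimal sketch, with hypothetical pool and device names:)

    # Attach a dedicated SSD partition as the ZFS intent log (SLOG).
    # Pool name "tank" and device names are hypothetical.
    zpool add tank log /dev/nvme0n1p1

    # Or mirror the log device to survive an SSD failure:
    zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1

    # Verify placement:
    zpool status tank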
On Tue, Oct 10, 2017, at 11:19 AM, Gandalf Corvotempesta wrote:
> Anyone made some performance comparison between XFS and ZFS with ZIL
> on SSD, in gluster environment ?
>
> I've tried to compare both on another SDS (LizardFS) and I haven't
> seen any tangible performance improvement.
>
> Is gluster different ?

Probably not. If there is, it would probably favor XFS. The developers
at Red Hat use XFS almost exclusively. We at Facebook have a mix, but
XFS is (I think) the most common. Whatever the developers use tends to
become "the way local filesystems work" and code is written based on
that profile, so even without intention that tends to get a bit of a
boost. To the extent that ZFS makes different tradeoffs - e.g. using
lots more memory, very different disk access patterns - it's probably
going to have a bit more of an "impedance mismatch" with the choices
Gluster itself has made.

If you're interested in ways to benefit from a disk+SSD combo under
XFS, it is possible to configure XFS with a separate journal device,
but I believe there were some bugs encountered when doing that.
Richard Wareing's upcoming Dev Summit talk on Hybrid XFS might cover
those, in addition to his own work on using an SSD in even more
interesting ways.
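(A minimal sketch of the separate-journal mechanism described above,
using the standard mkfs.xfs logdev option; device names are
hypothetical, and this is not the Hybrid XFS work mentioned:)

    # Create the filesystem with its log on a separate SSD partition.
    # /dev/sdb1 (data, HDD) and /dev/nvme0n1p1 (journal, SSD) are
    # hypothetical.
    mkfs.xfs -l logdev=/dev/nvme0n1p1,size=128m /dev/sdb1

    # The log device must also be named at mount time:
    mount -o logdev=/dev/nvme0n1p1 /dev/sdb1 /bricks/brick1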
I've had good results with using SSD as LVM cache for gluster bricks
(http://man7.org/linux/man-pages/man7/lvmcache.7.html). I still use
XFS on bricks.

On Tue, Oct 10, 2017 at 12:27 PM, Jeff Darcy <jeff at pl.atyp.us> wrote:
> If you're interested in ways to benefit from a disk+SSD combo under
> XFS, it is possible to configure XFS with a separate journal device,
> but I believe there were some bugs encountered when doing that.
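(A minimal sketch of that setup per lvmcache(7); the VG, LV, and
device names are hypothetical:)

    # HDD holds the brick LV, SSD becomes the cache.
    vgcreate vg_brick /dev/sdb /dev/nvme0n1
    lvcreate -n brick1 -L 1T vg_brick /dev/sdb

    # Create a cache pool on the SSD and attach it to the brick LV.
    lvcreate --type cache-pool -n cpool -L 100G vg_brick /dev/nvme0n1
    lvconvert --type cache --cachepool vg_brick/cpool vg_brick/brick1

    # Then format and mount the cached LV as usual:
    mkfs.xfs /dev/vg_brick/brick1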
2017-10-10 18:27 GMT+02:00 Jeff Darcy <jeff at pl.atyp.us>:
> Probably not. If there is, it would probably favor XFS. The developers
> at Red Hat use XFS almost exclusively.

Ok, so XFS is the way to go :)
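(For what it's worth, a commonly documented XFS brick format uses
512-byte inodes so gluster's extended attributes fit inline; a sketch,
with hypothetical device name and mount point:)

    # 512-byte inodes leave room for gluster's xattrs in the inode.
    mkfs.xfs -i size=512 /dev/sdb1
    mount -o noatime,inode64 /dev/sdb1 /bricks/brick1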
Pavel Kutishchev
2017-Oct-10 18:04 UTC
[Gluster-users] Gluster shows volume size less than created
Hello folks,

Would someone please advise: after volume creation, glusterfs shows the
volume as smaller than created. Example below:

Status of volume: vol_17ec47c44ae6bd45d0db4627683b4f15
------------------------------------------------------------------------------
Brick             : Brick glusterfs-sas-server29.sds.default.svc.kubernetes.local:/var/lib/heketi/mounts/vg_946dbd5ccbf78dddcca3857a32f32535/brick_0af81ba1b5d4e9ddb8deb57796912106/brick
TCP Port          : 49159
RDMA Port         : 0
Online            : Y
Pid               : 6376
File System       : xfs
Device            : /dev/mapper/vg_946dbd5ccbf78dddcca3857a32f32535-brick_0af81ba1b5d4e9ddb8deb57796912106
Mount Options     : rw,seclabel,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size        : 512
Disk Space Free   : 999.5GB
Total Disk Space  : 999.5GB
Inode Count       : 524283904
Free Inodes       : 524283877

But on LVM I can see the following:

  brick_0af81ba1b5d4e9ddb8deb57796912106 vg_946dbd5ccbf78dddcca3857a32f32535 Vwi-aotz-- 1000.00g tp_0af81ba1b5d4e9ddb8deb57796912106 0.05
  tp_0af81ba1b5d4e9ddb8deb57796912106    vg_946dbd5ccbf78dddcca3857a32f32535 twi-aotz-- 1000.00g                                     0.05  0.03

--
Best regards
Pavel Kutishchev
DevOps Engineer, self-employed.
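(The 0.5GB gap between the 1000.00g LV and the 999.5GB gluster reports
is most likely just XFS log/metadata overhead reserved by mkfs.xfs,
rather than anything gluster-specific; one way to check, using the
paths from the output above:)

    # Raw size of the (thin) logical volume:
    lvs --units g vg_946dbd5ccbf78dddcca3857a32f32535

    # Usable size of the filesystem on top of it; mkfs.xfs reserves
    # space for the log and metadata, so this is slightly smaller:
    df -h /var/lib/heketi/mounts/vg_946dbd5ccbf78dddcca3857a32f32535/brick_0af81ba1b5d4e9ddb8deb57796912106
    xfs_info /var/lib/heketi/mounts/vg_946dbd5ccbf78dddcca3857a32f32535/brick_0af81ba1b5d4e9ddb8deb57796912106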