Travis Eddy
2017-Mar-30 14:59 UTC
[Gluster-users] Gluster really very slow, like painful performance
Is it me, or is it Gluster? I feel like there is (hopefully) a simple setting that needs to be changed... (from my Google searches, I'm not the only one). I've used GlusterFS on and off for years, and even with KVM it's always been really slow. (It's been OK for generic file storage.) I know that with NFS there are some export options that make it 10 times faster than the defaults. Is there something similar for Gluster that my Google-fu isn't finding?

Simple test, on a 1 Gb network (the network should be the bottleneck, or at least close to it... NOT the 6 MB/sec max I'm seeing):

Go to Micro Center and buy several of the AMD 8-core CPU & motherboard specials, 16 GB of RAM for each, some 1 TB disks, and some of those laptop SSHDs for the OS. (Don't blame the parts; the gigabit network should still be the choke point, but it's nowhere close.)

Install CentOS 7 minimal, then make a Btrfs storage area for a single-node Gluster setup:

mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

Install GlusterFS according to https://wiki.centos.org/HowTos/GlusterFSonCentOS (using the CentOS packages), and turn off SELinux and firewalld.

$ sudo gluster volume create GlusterVol7 192.168.3.16:/mnt/tmp/brick
$ sudo gluster volume set GlusterVol7 nfs.disable off

ezpz. Now restart and let's do some work...

On the XenServer host, connect to Gluster as an NFS SR: New Storage -> NFS, type in 192.168.3.16:/GlusterVol7, and so on. Now copy over some VMs and wait half a day, or a whole day or two, depending on OS drive size... (I am not exaggerating.)

Start a VM (Windows or Linux). Now try to copy data from Samba/NFS/Gluster/the internet and save it to disk... 6 MB/sec is the fastest I've seen once the VM's OS cache fills.

I know that if I use plain NFS with the options (rw,async,fsid=0,insecure,no_subtree_check,no_root_squash), this hardware will saturate a gigabit network (example export line below).

Thank you,
Travis Eddy
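
P.S. For comparison, this is roughly what I mean by a plain kernel-NFS export with those options. The export path and client subnet here are just placeholders for the example, not my real values; in /etc/exports:

/srv/vmstore 192.168.3.0/24(rw,async,fsid=0,insecure,no_subtree_check,no_root_squash)

then "exportfs -ra" to apply it. With an export like that, this same hardware fills the gigabit link.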
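
P.P.S. In case anyone wants to reproduce this, a quick sanity check that the volume and its NFS export are actually up, before pointing XenServer at it, looks something like this (same IP and volume name as in the setup above; output not shown):

$ sudo gluster volume info GlusterVol7
$ sudo gluster volume status GlusterVol7
$ showmount -e 192.168.3.16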