Brandon Bates
2017-Oct-27 06:47 UTC
[Gluster-users] Poor gluster performance on large files.
Hi gluster users,
I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing, and 300-400MB/s for at least 4 clients is the minimum (currently a single Windows client gets at least 700/700 MB/s over samba, peaking to 950 at times using the Blackmagic speed test). Gluster has been getting me as low as 200MB/s when the server can do well over 1000MB/s. I have really been counting on / touting Gluster as being the way of the future for us. However, I can't justify cutting our performance to a mere 13% of non-gluster speeds. I've started to reach a give-up point and really need some help/hope, otherwise I'll just have to migrate the data from server 1 to server 2 just like I've been doing for the last decade. :(

If anyone can please help me understand where I might be going wrong it would be absolutely wonderful!

Server 1:
Single E5-1620 v2
Ubuntu 14.04
glusterfs 3.10.5
16GB RAM
24-drive array on LSI RAID
Sustained >1.5GB/s to XFS (77TB)

Server 2:
Single E5-2620 v3
Ubuntu 16.04
glusterfs 3.10.5
32GB RAM
36-drive array on LSI RAID
Sustained >2.5GB/s to XFS (164TB)

Speed tests are done locally with a single thread (dd) or 4 threads (iozone), using my standard 64k I/O size, to 20G files for the local drives and 5G files for gluster.

Servers have Intel X520-DA2 dual-port 10Gbit NICs bonded together with an 802.3ad LAG to a Quanta LB6-M switch. Iperf throughput is >9000Mbit/s for a single stream.

Here is my current gluster performance:

Single brick on server 1 (server 2 was similar):
Fuse mount:
1000MB/s write
325MB/s read

Distributed only, servers 1+2:
Fuse mount on server 1:
900MB/s write (iozone, 4 streams)
320MB/s read (iozone, 4 streams)
Single-stream read: 91MB/s @64K, 141MB/s @1M
Simultaneous iozone, 4 streams, 5G files:
Server 1: 1200MB/s write, 200MB/s read
Server 2: 950MB/s write, 310MB/s read

I did some earlier single-brick tests with the samba VFS and 3 workstations and got up to 750MB/s write and 800MB/s read aggregate, but that's still not good.

These are the only volume settings tweaks I have made (after much single-box testing to find what actually made a difference):
performance.cache-size 1GB (default 23MB)
performance.client-io-threads on
performance.io-thread-count 64
performance.read-ahead-page-count 16
performance.stat-prefetch on
server.event-threads 8 (default?)
client.event-threads 8

Any help given is appreciated!
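(For reference, a minimal sketch of the kind of dd/iozone runs and volume tuning described above. The mount point /mnt/gv0, the volume name gv0, and the exact file names/counts are assumptions for illustration, not details from the post.)

    # 5G sequential write and read with dd at 64k I/O size (hypothetical fuse mount /mnt/gv0)
    dd if=/dev/zero of=/mnt/gv0/testfile bs=64k count=81920 conv=fdatasync
    echo 3 > /proc/sys/vm/drop_caches    # as root, drop page cache before the read pass
    dd if=/mnt/gv0/testfile of=/dev/null bs=64k

    # 4-stream sequential write (-i 0) and read (-i 1) with iozone, 64k records, 5G per file
    iozone -t 4 -r 64k -s 5g -i 0 -i 1 -F /mnt/gv0/f1 /mnt/gv0/f2 /mnt/gv0/f3 /mnt/gv0/f4

    # the volume options listed above, applied to the hypothetical volume gv0
    gluster volume set gv0 performance.cache-size 1GB
    gluster volume set gv0 performance.client-io-threads on
    gluster volume set gv0 performance.io-thread-count 64
    gluster volume set gv0 performance.read-ahead-page-count 16
    gluster volume set gv0 performance.stat-prefetch on
    gluster volume set gv0 server.event-threads 8
    gluster volume set gv0 client.event-threads 8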
Bartosz Zięba
2017-Oct-27 08:29 UTC
[Gluster-users] Poor gluster performance on large files.
Why don't you set the LSI to passthrough mode and use one brick per HDD?

Regards,
Bartosz
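(For anyone weighing that option, a rough sketch of what a one-brick-per-HDD distributed volume could look like. The volume name, hostnames, mount points, and number of disks shown are placeholders for illustration, not from this thread.)

    # each HDD formatted with XFS and mounted separately, e.g. /data/disk01 ... /data/diskNN
    gluster volume create gv0 \
        server1:/data/disk01/brick server1:/data/disk02/brick server1:/data/disk03/brick \
        server2:/data/disk01/brick server2:/data/disk02/brick server2:/data/disk03/brick
    gluster volume start gv0
    gluster volume info gv0

    # note: a plain distributed volume like this has no redundancy; losing one
    # disk loses the files stored on that brick, unlike the current hardware RAID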
Brandon Bates
2017-Oct-27 16:11 UTC
[Gluster-users] Poor gluster performance on large files.
Unfortunately I'm not in a position to try that now. The first server is (and has been) in production as the main file server; the second would have been a candidate for trying it, but I've had to start staging data there, so I can't now.

-----Original Message-----
From: Bartosz Zieba [mailto:kontakt at avatat.pl]
Sent: Friday, October 27, 2017 1:29 AM
To: Brandon Bates
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Poor gluster performance on large files.

Why don't you set the LSI to passthrough mode and use one brick per HDD?

Regards,
Bartosz
Karan Sandha
2017-Oct-30 10:44 UTC
[Gluster-users] Poor gluster performance on large files.
Hi Brandon,

Can you please turn OFF client-io-threads? We have seen performance degradation with io-threads ON for sequential and random reads/writes. server.event-threads is 1 and client.event-threads is 2 by default.

Thanks & Regards,
Karan Sandha
Quality Engineer, Red Hat
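(If it helps, a minimal sketch of how that suggestion could be applied; "gv0" is a placeholder volume name, not the poster's actual volume.)

    gluster volume set gv0 performance.client-io-threads off
    # drop the event-thread overrides back to their defaults (1 server / 2 client)
    gluster volume reset gv0 server.event-threads
    gluster volume reset gv0 client.event-threads
    gluster volume info gv0    # confirm which options remain reconfigured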
Serkan Çoban
2017-Oct-30 13:32 UTC
[Gluster-users] Poor gluster performance on large files.
> Can you please turn OFF client-io-threads as we have seen degradation of performance with io-threads ON on sequential read/writes, random read/writes.

May I ask in which version this degradation happened? I tested 3.10 vs 3.12 performance a while ago and saw a 2-3x performance loss with 3.12. Is it because of client-io-threads?
Brandon Bates
2017-Oct-30 15:58 UTC
[Gluster-users] Poor gluster performance on large files.
Client-io-threads ON, server.event-threads 8, client.event-threads 8:
900MB/s write, 320MB/s read

Client-io-threads OFF, server.event-threads 8, client.event-threads 8:
873MB/s write, 115MB/s read

Client-io-threads OFF, server.event-threads 1, client.event-threads 2:
876MB/s write, 267MB/s read

Client-io-threads ON, server.event-threads 1, client.event-threads 2:
943MB/s write, 275MB/s read
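(A sketch of how one of those comparison runs might be set up, so each combination is applied and verified before re-running the same iozone test. The volume name, mount point, and file names are placeholders.)

    # example: the "client-io-threads OFF, 1/2 event threads" combination
    gluster volume set gv0 performance.client-io-threads off
    gluster volume set gv0 server.event-threads 1
    gluster volume set gv0 client.event-threads 2
    gluster volume info gv0    # verify the reconfigured options before the run

    # optionally remount the fuse client to be sure a fresh client graph is in use
    umount /mnt/gv0 && mount -t glusterfs server1:/gv0 /mnt/gv0

    # same 4-stream, 64k, 5G iozone run as in the earlier tests
    iozone -t 4 -r 64k -s 5g -i 0 -i 1 -F /mnt/gv0/f1 /mnt/gv0/f2 /mnt/gv0/f3 /mnt/gv0/f4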