Hi all,

Just doing some initial testing on glusterfs (1.3.10, Debian packages), and I'm somewhat underwhelmed with the performance. I set up a test AFR and a test Unify config with two systems connected by a local, managed gigabit switch. My configs have POSIX locking, read-ahead, write-behind, and threaded i/o enabled (in that order) on the server side. I then compared bonnie output on the raw filesystems to the gluster output.

Machine 1: 72 MB/sec block write, 72 MB/sec block read, 29 MB/sec block rewrite.
Machine 2: 36 MB/sec block write, 72 MB/sec block read, 21 MB/sec block rewrite.
gluster-AFR: 22 MB/sec block write, 24 MB/sec block read, 9 MB/sec block rewrite.
gluster-Unify (ALU scheduler): 21 MB/sec block write, 20 MB/sec block read, 8.8 MB/sec block rewrite.

The file operation speeds on the raw filesystems were in the thousands to tens of thousands of operations a second. On both glusterfs configs they were in the hundreds of ops/sec. The client I was testing on was Machine 1, since it had the higher overall performance and was under less load.

Is this expected performance with gluster for a small number of nodes on TCP/IP? Or am I missing some critical piece of configuration? In particular, I thought that in an AFR config the client was supposed to automatically stripe read requests across available volumes, but the read performance doesn't seem to indicate that's happening, considering the requests it sends to itself should be able to get close to its normal ~70 MB/sec rate.

Any tips would be appreciated. :)

Thanks!
Graeme
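For concreteness, this is roughly the shape of the server-side stack I described (a sketch in 1.3-era volfile syntax; the export directory, thread count, and volume names here are illustrative, not my exact config):

```
# storage brick
volume posix
  type storage/posix
  option directory /data/export        # illustrative path
end-volume

# POSIX locking on top of the brick
volume locks
  type features/posix-locks
  subvolumes posix
end-volume

# performance translators, in the order mentioned above
volume readahead
  type performance/read-ahead
  subvolumes locks
end-volume

volume writebehind
  type performance/write-behind
  subvolumes readahead
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 4                # illustrative value
  subvolumes writebehind
end-volume

# export the top of the stack over TCP
volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes iothreads
  option auth.ip.iothreads.allow *     # open for testing only
end-volume
```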
On Thu, Sep 18, 2008 at 9:04 PM, Graeme <graeme at sudo.ca> wrote:

> Hi all,
>
> Just doing some initial testing on glusterfs (1.3.10, Debian packages),
> and I'm somewhat underwhelmed with the performance. I set up a test
> AFR and a test Unify config with two systems connected by a local,
> managed gigabit switch. My configs have POSIX locking, read-ahead,
> write-behind, and threaded i/o enabled (in that order) on the server
> side. I then compared bonnie output on the raw filesystems to the
> gluster output.

The Debian package is slightly outdated; 1.3.12 is the latest stable release, with fixes and improvements. It'd be good if you could benchmark 1.4.x as well, since it has lots of architectural changes/improvements (supposedly).

> In particular, I thought that in an AFR config the client was supposed to
> automatically stripe read requests across available volumes, but the
> read performance doesn't seem to indicate that's happening, considering
> the requests it sends to itself should be able to get close to its
> normal ~70 MB/sec rate.

1.3.x doesn't automatically stripe read requests, unfortunately. If I'm not wrong, you'd need to add Unify over AFR.

KwangErn
Hi!

2008/9/18 Graeme <graeme at sudo.ca>:

> Just doing some initial testing on glusterfs (1.3.10, Debian packages),
> and I'm somewhat underwhelmed with the performance. I set up a test
> AFR and a test Unify config with two systems connected by a local,
> managed gigabit switch. My configs have POSIX locking, read-ahead,
> write-behind, and threaded i/o enabled (in that order) on the server
> side. I then compared bonnie output on the raw filesystems to the
> gluster output.
>
> Machine 1: 72 MB/sec block write, 72 MB/sec block read, 29 MB/sec block rewrite.
> Machine 2: 36 MB/sec block write, 72 MB/sec block read, 21 MB/sec block rewrite.
> gluster-AFR: 22 MB/sec block write, 24 MB/sec block read, 9 MB/sec block rewrite.
> gluster-Unify (ALU scheduler): 21 MB/sec block write, 20 MB/sec block read, 8.8 MB/sec block rewrite.
>
> Is this expected performance with gluster for a small number of nodes on
> TCP/IP? Or am I missing some critical piece of configuration?
>
> Any tips would be appreciated. :)

And I have some for you...
Considering my limited experience with GlusterFS, I hope that at least some of my suggestions apply to your problem: :)

- Telling GlusterFS to always read from the local volume:
  http://www.gluster.org/docs/index.php/GlusterFS_Translators_v1.3#Automatic_File_Replication_Translator_.28AFR.29
  "# option read-subvolume brick2 # by default reads are scheduled from all subvolumes"

- Making sure it's not your network that kills performance: do you know the maximum throughput of your network connection? Might the bottleneck be there? I'd try iperf (http://dast.nlanr.net/projects/Iperf/) or something like that, and/or run some tests with bonnie on an NFS or CIFS share. Do you have a machine with two local disks available? In that case I'd try a configuration completely without a physical network, too.

- Compiling the latest version from source for testing: GlusterFS 1.4 is supposed to bring big improvements for small files:
  http://www.gluster.org/docs/index.php/GlusterFS_Roadmap#GlusterFS_1.4_-_Small_File_Performance
  "# binary protocol - bit level protocol headers (huge improvement in performance for small files)"

Harald Stürzebecher
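To illustrate the first suggestion, in a 1.3-era client volfile the `read-subvolume` option sits on the AFR volume, something like this (a sketch; the hostnames and volume names are made up, and `remote-subvolume` must match whatever the servers actually export):

```
volume local
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1         # this machine's own server
  option remote-subvolume iothreads    # assumed export name
end-volume

volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.2       # the other machine
  option remote-subvolume iothreads
end-volume

volume afr
  type cluster/afr
  subvolumes local remote
  # prefer the brick on this machine for reads instead of
  # scheduling reads across all subvolumes (the default)
  option read-subvolume local
end-volume
```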
At 12:04 PM 9/18/2008, Graeme wrote:

> Is this expected performance with gluster for a small number of nodes on
> TCP/IP? Or am I missing some critical piece of configuration? In
> particular, I thought that in an AFR config the client was supposed to
> automatically stripe read requests across available volumes, but the
> read performance doesn't seem to indicate that's happening, considering
> the requests it sends to itself should be able to get close to its
> normal ~70 MB/sec rate.

This isn't exactly true. For large files, you should get close to local speed, provided you specify a local read volume in your AFR config. However, there's overhead on every file access: gluster has to check the other AFR servers to make sure it has the latest version, and only then can it read from the local volume. So for small files this overhead becomes significant, but for huge files it should be minimal.

> Any tips would be appreciated. :)

My network bandwidth between 2 AFR'ed servers is about 50% less with 1.4 than it was with 1.3, and response time seems to be about 20% faster. So I'd look to build 1.4 and see if that improves the situation.

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
Keith Freedman wrote:

> However, there's overhead on every file access--gluster has to check
> the other AFR servers to make sure it has the latest version, then it
> can read from the local volume. so for small files this overhead
> becomes significant but for huge files it should be minimal.

Ok, well, bonnie benchmarks with eight 1GB files, but I'm still seeing a 50% performance reduction in reads. 1GB files fall into the "huge" category for me.

> my network bandwidth between 2 AFR'ed servers is about 50% less with
> 1.4 than it was with 1.3, also response time seems to be about 20%
> faster.
>
> so I'd look to build 1.4 and see if that improves the situation

Alright, sounds like the general word is to just use the 1.4 branch, and don't roll it out until 1.4 goes stable. I'll try to get some 1.4 packaging and benchmarking done tomorrow, and I'll see how it goes.

G
> Machine 1: 72 MB/sec block write, 72 MB/sec block read, 29 MB/sec block rewrite.
> Machine 2: 36 MB/sec block write, 72 MB/sec block read, 21 MB/sec block rewrite.
> gluster-AFR: 22 MB/sec block write, 24 MB/sec block read, 9 MB/sec block rewrite.
> gluster-Unify (ALU scheduler): 21 MB/sec block write, 20 MB/sec block read, 8.8 MB/sec block rewrite.
>
> Is this expected performance with gluster for a small number of nodes on
> TCP/IP? Or am I missing some critical piece of configuration? In
> particular, I thought that in an AFR config the client was supposed to
> automatically stripe read requests across available volumes, but the
> read performance doesn't seem to indicate that's happening, considering
> the requests it sends to itself should be able to get close to its
> normal ~70 MB/sec rate.

Are you using write-behind in the client volume spec? write-behind affects write performance significantly.

AFR spreads different files to be read from different subvolumes, not parts of a single file.
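For reference, loading write-behind client-side means stacking it above the AFR volume in the client volfile, roughly like this (a sketch; the AFR volume name and the aggregate-size value are illustrative):

```
# assumes an AFR volume named "afr" is defined earlier in this volfile

volume writebehind
  type performance/write-behind
  option aggregate-size 1MB      # batch writes up to this size before flushing
  subvolumes afr
end-volume
```

The application then writes into small kernel-sized chunks that get aggregated into larger network writes, which is where most of the write-path gain comes from.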