I constructed a 2-server / 1-client GlusterFS setup over gigabit Ethernet, but got very poor benchmark results. Is there anything I can tune?

[@65.64 ~]# for ((i=0;i<17;++i)) ; do dd if=/dev/zero of=/mnt/yyy$i bs=4M count=2 ; done
2+0 records in
2+0 records out
8388608 bytes (8.4 MB) copied, 0.770213 seconds, 10.9 MB/s
2+0 records in
2+0 records out
8388608 bytes (8.4 MB) copied, 0.771131 seconds, 10.9 MB/s
...

[@123.21 glusterfs]# cat glusterfs-server.vol
volume brick1
  type storage/posix
  option directory /exports/disk1
end-volume

volume brick2
  type storage/posix
  option directory /exports/disk2
end-volume

volume brick-ns
  type storage/posix
  option directory /exports/ns
end-volume

### Add network serving capability to the above bricks.
volume server
  type protocol/server
  option transport-type tcp/server        # For TCP/IP transport
  subvolumes brick1 brick2 brick-ns
  option auth.ip.brick1.allow 10.10.*     # Allow access to "brick1" volume
  option auth.ip.brick2.allow 10.10.*     # Allow access to "brick2" volume
  option auth.ip.brick-ns.allow 10.10.*   # Allow access to "brick-ns" volume
end-volume

[@123.21 glusterfs]# cat glusterfs-client.vol
volume remote-brick1_1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.123.21
  option remote-subvolume brick1
end-volume

volume remote-brick1_2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.123.21
  option remote-subvolume brick2
end-volume

volume remote-brick2_1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.123.22
  option remote-subvolume brick1
end-volume

volume remote-brick2_2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.123.22
  option remote-subvolume brick2
end-volume

volume brick-afr1_2
  type cluster/afr
  subvolumes remote-brick1_1 remote-brick2_2
end-volume

volume brick-afr2_1
  type cluster/afr
  subvolumes remote-brick1_2 remote-brick2_1
end-volume

volume remote-ns1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.123.21
  option remote-subvolume brick-ns
end-volume

volume remote-ns2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.123.22
  option remote-subvolume brick-ns
end-volume

volume ns-afr0
  type cluster/afr
  subvolumes remote-ns1 remote-ns2
end-volume

volume unify0
  type cluster/unify
  option scheduler alu
  option alu.limits.min-free-disk 10%
  option alu.order disk-usage
  option namespace ns-afr0
  subvolumes brick-afr1_2 brick-afr2_1
end-volume
Hello Kirby,

Please check that every involved device is actually negotiating gigabit speed, and test with at least 100 MB of data so the numbers are meaningful.

Glenn

Kirby Zhou wrote:
> I constructed a 2-server / 1-client GlusterFS setup over gigabit Ethernet, but got very poor benchmark results.
> Is there anything I can tune?
> [...]
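As a concrete starting point, the two checks Glenn suggests might look like this (a sketch, assuming the interface is eth0 and ethtool is installed):

  # Verify negotiated link speed and duplex on both servers and the client
  ethtool eth0 | grep -iE 'speed|duplex'

  # Write one larger file (512 MB) for a steadier throughput figure
  dd if=/dev/zero of=/mnt/bigtest bs=4M count=128

If any hop reports 100 Mb/s or half duplex, that alone would cap throughput near the numbers seen above.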
BTW: My OS is RHEL-5.2/x86_64, with:

fuse.x86_64-2.7.4-1.el5.rf from dag.wieers.com
dkms-fuse.noarch-2.7.4-1.nodist.rf from dag.wieers.com
dkms.noarch-2.0.20.4-1.el5.rf from dag.wieers.com
glusterfs.x86_64-1.3.10-1 from glusterfs.org

-----Original Message-----
From: Kirby Zhou
Sent: Friday, December 05, 2008 11:34 PM
To: gluster-users at gluster.org
Subject: [Gluster-users] Why so bad performance?
[...]
I have tested using scp:

[@123.25 /]# scp /opt/xxx 10.10.123.22:/opt/
xxx                                  100%  256MB  51.2MB/s   00:05

[@123.25 /]# dd if=/opt/xxx of=/mnt/xxx bs=2M
128+0 records in
128+0 records out
268435456 bytes (268 MB) copied, 23.0106 seconds, 11.7 MB/s

So you can see how slow my GlusterFS mount is: scp copies the same 256 MB file at 51.2 MB/s, while writing it through the mount manages only 11.7 MB/s. What can I do to improve the performance?

-----Original Message-----
From: RedShift
Sent: Friday, December 05, 2008 11:45 PM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] Why so bad performance?
[...]
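To take disk and encryption out of the comparison entirely, raw TCP throughput between client and server can be measured as well (a sketch, assuming iperf is installed on both hosts):

  # on the server (10.10.123.22)
  iperf -s

  # on the client
  iperf -c 10.10.123.22 -t 10

A healthy gigabit link should report roughly 940 Mbit/s (about 112 MB/s). Since scp already sustains 51.2 MB/s despite being CPU-bound by encryption, the network itself is probably not the bottleneck here.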
Interesting... did you mention what your HW is on the GlusterFS server side? Can you post the complete specs of your config?

- HW: processor speed/type, RAM, etc.
- Where have you installed the GlusterFS server and the client?

- a.

On Fri, Dec 5, 2008 at 8:30 AM, Kirby Zhou <kirbyzhou at sohu-rd.com> wrote:
> I have tested using scp:
>
> [@123.25 /]# scp /opt/xxx 10.10.123.22:/opt/
> xxx                                  100%  256MB  51.2MB/s   00:05
>
> [@123.25 /]# dd if=/opt/xxx of=/mnt/xxx bs=2M
> 128+0 records in
> 128+0 records out
> 268435456 bytes (268 MB) copied, 23.0106 seconds, 11.7 MB/s
> [...]
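Collecting those details takes only a few commands (a sketch; paths and device names will differ per host):

  grep 'model name' /proc/cpuinfo | sort -u     # CPU type and speed
  free -m                                       # installed RAM
  df -h /exports/disk1 /exports/disk2           # backing disks for the bricks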
> fuse.x86_64-2.7.4-1.el5.rf from dag.wieers.com
> dkms-fuse.noarch-2.7.4-1.nodist.rf from dag.wieers.com

Can you try a different fuse version (kernel module included)? There have been reports of 2.7.4 performing badly for writes on certain kernel versions. Can you try 2.7.3?

thanks,
avati
Are you sure? I have tried, but nothing changed:

[@123.25 ~]# rmmod fuse
[@123.25 ~]# yum install fuse-2.7.3-1.el5.rf dkms-fuse-2.7.3-1.nodist.rf glusterfs
[@123.25 ~]# dd if=/opt/xxx of=/mnt/xxx4 bs=16M
16+0 records in
16+0 records out
268435456 bytes (268 MB) copied, 25.4561 seconds, 10.5 MB/s

[@123.25 ~]# modinfo fuse
filename:       /lib/modules/2.6.18-92.el5/extra/fuse.ko
alias:          char-major-10-229
license:        GPL
description:    Filesystem in Userspace
author:         Miklos Szeredi <miklos at szeredi.hu>
srcversion:     BA591606954B6B1F7AA2660
depends:
vermagic:       2.6.18-92.el5 SMP mod_unload gcc-4.1

[@123.25 ~]# ll /lib/modules/2.6.18-92.el5/extra/fuse.ko
-rw-r--r-- 1 root root 101384 Dec  7 22:55 /lib/modules/2.6.18-92.el5/extra/fuse.ko

-----Original Message-----
From: Anand Avati
Sent: Sunday, December 07, 2008 3:02 AM
To: Kirby Zhou
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Why so bad performance?
[...]
> [@123.25 ~]# rmmod fuse
> [@123.25 ~]# yum install fuse-2.7.3-1.el5.rf dkms-fuse-2.7.3-1.nodist.rf glusterfs
> [@123.25 ~]# dd if=/opt/xxx of=/mnt/xxx4 bs=16M
> 16+0 records in
> 16+0 records out
> 268435456 bytes (268 MB) copied, 25.4561 seconds, 10.5 MB/s

Can you try two things?

1. Reconfigure your setup to use write-behind + protocol/client on the client side and protocol/server + storage/posix on the server side, and run the dd tests again.
2. Upgrade to 1.4.0rc1.

thanks,
avati
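A minimal client volfile for step 1 might look like this (a sketch in 1.3.x volfile syntax; it talks to a single brick only, and the aggregate-size value is illustrative rather than a tested recommendation):

volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.123.21
  option remote-subvolume brick1
end-volume

# Write-behind batches small writes into larger network requests,
# usually the biggest win for dd-style streaming writes.
volume writebehind
  type performance/write-behind
  option aggregate-size 1MB      # illustrative value; check the defaults for your release
  subvolumes client1
end-volume

The point of stripping out AFR and unify for this test is isolation: if plain protocol/client plus write-behind gets close to wire speed, the overhead lives in the replication/unify stack; if it stays near 10 MB/s, the problem is lower down (fuse, network, or disk).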