Tobias Andersson
2012-Jan-25 12:35 UTC
[Gluster-users] Performance issues on GlusterFS with KVM/qcow2
Hi,

I am currently doing some performance tests with GlusterFS to see if it
would be possible to replace our current storage infrastructure with
SATA/RAID10 servers in a GlusterFS replica. Everything looks good on
paper and the setup was smooth.

However, when it comes to write speed it's not really what I was hoping
for. Here are some stats:

---------

Write test 1, VM-HOST (Debian 6, GlusterFS on /vm_storage):

storage1:/cloud on /vm_storage type fuse.glusterfs
(rw,allow_other,default_permissions,max_read=131072)

echo 3 > /proc/sys/vm/drop_caches
time dd if=/dev/zero of=./bigfile bs=1M count=5000

Result:

5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 101.464 s, 51.7 MB/s

real    1m41.725s
user    0m0.004s
sys     0m3.104s

---------

Write test 2, VM (Debian 6, VirtIO, stored as qcow2 on /vm_storage):

/dev/vda2 on / type ext4 (rw,errors=remount-ro)

echo 3 > /proc/sys/vm/drop_caches
time dd if=/dev/zero of=./bigfile bs=1M count=5000

Result:

5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 796.309 s, 6.6 MB/s

real    13m16.626s
user    0m0.000s
sys     0m3.700s

---------

Conclusion:

51 MB/s on the KVM host is as expected; my uplink is almost saturated,
at around 400-450 Mbps to each GlusterFS storage node.

The problem appears only when benchmarking inside the KVM virtual
machine. Write speeds are really poor (6.6 MB/s); read, however, is fine.

I've tried both 3.2.5 and 3.3 beta2 with no luck.

Is this a known issue? Have I missed something? Some performance tweaks
maybe?

Best regards,

-- 
Tobias Andersson

------
Please avoid sending me Word, Excel or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
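[Editor's note: one caveat with the dd methodology above — dd by default
reports the speed of writes absorbed by the page cache, not data flushed
to the backing store. A minimal sketch (not from the thread; path and size
are illustrative) using conv=fdatasync, which makes dd fsync before
reporting, so host and guest figures are comparable:]

```shell
# Illustrative sketch: conv=fdatasync forces dd to flush the file to the
# backing store before reporting throughput, avoiding cache-inflated
# numbers. The path and a small 50 MB size are placeholders.
dd if=/dev/zero of=/tmp/bigfile_test bs=1M count=50 conv=fdatasync
# Sanity check: report the final on-disk size of the test file.
stat -c %s /tmp/bigfile_test
```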
Giovanni Toraldo
2012-Jan-25 13:24 UTC
[Gluster-users] Performance issues on GlusterFS with KVM/qcow2
Hi Tobias,

2012/1/25 Tobias Andersson <tobias at tobiasa.se>:
> Write test 2, VM (Debian 6, VirtIO, stored as qcow2 on /vm_storage):

qcow2 performance depends heavily on the qcow2 preallocation feature.
You may also want to test with a RAW image type and report back.

-- 
Giovanni Toraldo - LiberSoft
http://www.libersoft.it
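[Editor's note: a hedged sketch of what Giovanni suggests, assuming
qemu-img (from the qemu-utils package) is available; paths and sizes are
illustrative, and the commands are guarded in case qemu-img is absent:]

```shell
# Sketch only: compare a metadata-preallocated qcow2 image with a raw
# image. With preallocation=metadata the qcow2 L1/L2 tables are written
# up front, so first writes to a cluster avoid extra metadata updates.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f qcow2 -o preallocation=metadata /tmp/test.qcow2 1G
    # Raw image for comparison (no qcow2 allocation overhead at all):
    qemu-img create -f raw /tmp/test.raw 1G
    qemu-img info /tmp/test.qcow2
else
    echo "qemu-img not installed; install qemu-utils to run this test"
fi
```

Re-running the in-VM dd benchmark against each image type would show how much of the 6.6 MB/s figure is qcow2 allocation overhead versus GlusterFS itself.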
Andreas Kurz
2012-Jan-25 13:44 UTC
[Gluster-users] Performance issues on GlusterFS with KVM/qcow2
On 01/25/2012 01:35 PM, Tobias Andersson wrote:
> Hi,
>
> I am currently doing some performance tests with GlusterFS to see if it
> would be possible to replace our current storage infrastructure with
> SATA/RAID10 servers in a GlusterFS replica. Everything looks good on the
> paper and the setup was smooth.
>
> However when it comes to write speed its not really what I was hoping
> for. Here are some stats:
>
> ---------
>
> Write test 1, VM-HOST (Debian 6, GlusterFS on /vm_storage):
>
> storage1:/cloud on /vm_storage type fuse.glusterfs
> (rw,allow_other,default_permissions,max_read=131072)
>
> echo 3 > /proc/sys/vm/drop_caches
> time dd if=/dev/zero of=./bigfile bs=1M count=5000
>
> Result:
>
> 5000+0 records in
> 5000+0 records out
> 5242880000 bytes (5.2 GB) copied, 101.464 s, 51.7 MB/s
>
> real    1m41.725s
> user    0m0.004s
> sys     0m3.104s
>
> ---------
>
> Write test 2, VM (Debian 6, VirtIO, stored as qcow2 on /vm_storage):

This might be interesting regarding qcow2 performance:
http://www.linux-kvm.org/page/Qcow2

Regards,
Andreas

-- 
Need help with High Availability clustering?
http://www.hastexo.com/now

> /dev/vda2 on / type ext4 (rw,errors=remount-ro)
>
> echo 3 > /proc/sys/vm/drop_caches
> time dd if=/dev/zero of=./bigfile bs=1M count=5000
>
> Result:
>
> 5000+0 records in
> 5000+0 records out
> 5242880000 bytes (5.2 GB) copied, 796.309 s, 6.6 MB/s
>
> real    13m16.626s
> user    0m0.000s
> sys     0m3.700s
>
> ---------
>
> Conclusion:
>
> 51 MB/s on KVM-HOST is as expected, my uplink goes almost full with
> around 400-450Mbps to each GlusterFS storage node.
>
> The problem appears only when doing benchmark within the KVM Virtual
> Machine. Write speeds are really poor (6.6 MB/s), however read is fine.
>
> I've tried both 3.2.5 and 3.3 beta2 with no luck.
>
> Is this a known issue? Have I missed something? Some performance tweaks
> maybe?
>
> Best regards,
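[Editor's note: besides the image format, the page Andreas links also
discusses guest disk cache modes, which strongly affect write throughput
on a file-backed store. A hypothetical libvirt disk definition (not from
the thread; device name and path are illustrative) showing where the
cache attribute lives:]

```xml
<disk type='file' device='disk'>
  <!-- cache mode is worth benchmarking: 'none' bypasses the host page
       cache via O_DIRECT (not always supported on FUSE mounts like
       GlusterFS), 'writeback' uses it -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/vm_storage/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```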
John Lauro
2012-Jan-25 13:48 UTC
[Gluster-users] Performance issues on GlusterFS with KVM/qcow2
> ---------
>
> Write test 2, VM (Debian 6, VirtIO, stored as qcow2 on /vm_storage):
>
> /dev/vda2 on / type ext4 (rw,errors=remount-ro)
>
> echo 3 > /proc/sys/vm/drop_caches
> time dd if=/dev/zero of=./bigfile bs=1M count=5000
>
> Result:
>
> 5000+0 records in
> 5000+0 records out
> 5242880000 bytes (5.2 GB) copied, 796.309 s, 6.6 MB/s
>
> real    13m16.626s
> user    0m0.000s
> sys     0m3.700s
>
> ---------

Not that this helps, but I duplicated your test on my test setup as a
reference point. Your performance does seem a little worse than expected.

[root at glustertestc1 v1]# time dd if=/dev/zero of=./bigfile bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 173.744 s, 30.2 MB/s

real    2m53.800s
user    0m0.022s
sys     0m8.496s

This is a replicated Gluster volume spread over 2 servers, and the
physical disks are on a gigabit ethernet iSCSI SAN with 10K drives, so
not exactly high speed.

I find in my testing that performance for large files is tolerable, but
writing many small files is terrible with Gluster.

For reference, here is the timing on a virtual Gluster server:

[root at glustertests1 data1]# time dd if=/dev/zero of=./bigfile2 bs=1M count=5000
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 87.2035 s, 60.1 MB/s

real    1m27.237s
user    0m0.006s
sys     0m5.872s

(My test clients and servers are all ESXi 4.1 VMs running Scientific
Linux 6.1.)

How is your network latency between servers and clients?

[root at glustertestc1 v1]# ping 10.0.12.141 -c 1000 -q -A -s 8000
PING 10.0.12.141 (10.0.12.141) 8000(8028) bytes of data.

--- 10.0.12.141 ping statistics ---
1000 packets transmitted, 1000 received, 0% packet loss, time 163ms
rtt min/avg/max/mdev = 0.114/0.132/1.638/0.061 ms, ipg/ewma 0.163/0.124 ms

(My test clients and servers are all on the same physical box, so the
virtualized link between them may well be faster than gigabit.)
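[Editor's note: John's small-file observation can be quantified with a
quick sketch like this (not from the thread; the directory path and file
count are illustrative). On a FUSE-mounted replica, each file create
costs round trips to every brick, so this loop is typically far slower
than writing the same total bytes as one large file:]

```shell
#!/bin/sh
# Hypothetical small-file write benchmark: create 500 files of 4 KB each
# (about 2 MB total). To test Gluster, point TESTDIR at the volume mount.
TESTDIR=/tmp/smallfile_test   # illustrative; use your Gluster mount
mkdir -p "$TESTDIR"
i=1
while [ "$i" -le 500 ]; do
    dd if=/dev/zero of="$TESTDIR/f$i" bs=4k count=1 2>/dev/null
    i=$((i + 1))
done
# Sanity check: count the files created.
ls "$TESTDIR" | wc -l
```

Running the script under `time` and comparing against a single dd of the same aggregate size makes the per-file metadata overhead visible.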