Gandalf Corvotempesta
2016-Jul-11 17:23 UTC
[Gluster-users] New cluster - first experience
2016-07-11 18:52 GMT+02:00, Alastair Neil <ajneil.tech at gmail.com>:
> what performance do you see with the dd directly to the bricks?

# echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/zero of=test bs=1M count=1000 conv=fsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.20727 s, 128 MB/s

# echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/zero of=test bs=1M count=1000 conv=fsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.24893 s, 127 MB/s

# echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/zero of=test bs=1M count=1000 conv=fsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.64035 s, 121 MB/s

# echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/zero of=test bs=1M count=1000 conv=fsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.29078 s, 126 MB/s

# echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/zero of=test bs=1M count=1000 conv=fsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.28962 s, 126 MB/s
Gandalf Corvotempesta
2016-Jul-11 17:31 UTC
[Gluster-users] New cluster - first experience
2016-07-11 19:23 GMT+02:00, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com>:
> 2016-07-11 18:52 GMT+02:00, Alastair Neil <ajneil.tech at gmail.com>:
>> what performance do you see with the dd directly to the bricks?
>
> # echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/zero of=test bs=1M count=1000 conv=fsync
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1.0 GB) copied, 8.20727 s, 128 MB/s
>
> [...four more runs, all 121-128 MB/s...]

This is the same test, but from the client through the Gluster volume:

# echo 3 > /proc/sys/vm/drop_caches; dd if=/dev/zero of=test bs=1M count=1000 conv=fsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 111.786 s, 9.4 MB/s

9.4 MB/s = 75 Mbit/s. As I'm using replica 3, that means 75 Mbit/s * 3 = 225 Mbit/s of aggregate bandwidth, which I think is too low on a gigabit network. Each disk on each node is able to saturate the network, so I would expect about 950 Mbit/s when writing in parallel to the three nodes. I'm reaching 1/4 of the available speed.
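The arithmetic above can be checked with a quick awk one-liner (the numbers are taken from the dd output in this thread; the replica-3 write ceiling at the end is my own derivation, assuming the client's single gigabit link is the bottleneck, since the client must push every write to all three bricks):

```shell
#!/bin/sh
# Replica-3 bandwidth arithmetic, using the figures from the mail:
# 9.4 MB/s observed from the client, replica count 3, ~950 Mbit/s
# of usable payload on a gigabit link.
awk 'BEGIN {
    mb_s = 9.4        # client-side dd result, MB/s
    replicas = 3      # replica 3: the client sends each write three times
    link = 950        # approx. usable Mbit/s on gigabit Ethernet

    stream = mb_s * 8            # useful data rate in Mbit/s
    wire   = stream * replicas   # traffic actually leaving the client NIC
    printf "stream: %.1f Mbit/s, on the wire: %.1f Mbit/s (%.0f%% of link)\n",
           stream, wire, 100 * wire / link

    # Upper bound for a single replica-3 write stream on one gigabit NIC:
    printf "replica-3 write ceiling: %.1f MB/s\n", link / replicas / 8
}'
```

The last line shows why even a perfect run could not reach the brick-local 128 MB/s: with client-side replication, one gigabit uplink caps a replica-3 write at roughly 40 MB/s.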