Punit Dambiwal
2015-Feb-13 06:58 UTC
[Gluster-users] Gluster performance on the small files
Hi,

I have seen that Gluster performance is dead slow on small files, even though I am using SSDs. It is very bad; I actually get better performance from my SAN with ordinary SATA disks.

I am using a distributed replicated GlusterFS volume with replica count = 2, and all bricks are on SSD disks.

root@vm3:~# dd bs=64k count=4k if=/dev/zero of=test oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 57.3145 s, 4.7 MB/s

root@vm3:~# dd bs=64k count=4k if=/dev/zero of=test conv=fdatasync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 1.80093 s, 149 MB/s

Thanks,
Punit
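The two dd runs above differ only in when data is forced to stable storage: with oflag=dsync every 64 KiB block must be committed (and on a replica-2 volume, acknowledged by both bricks) before the next write starts, so throughput is bounded by per-write round-trip latency rather than raw SSD speed; conv=fdatasync syncs only once at the end. A minimal sketch of that effect, using a temporary local directory and illustrative sizes:

```shell
# Block-size sweep under oflag=dsync: larger blocks amortize the
# per-write sync latency, so throughput rises with block size even
# though the device is the same. Paths are temporary placeholders.
TMP=$(mktemp -d)
for bs in 4k 64k 1M; do
    echo "block size $bs:"
    dd bs="$bs" count=16 if=/dev/zero of="$TMP/probe" oflag=dsync 2>&1 | tail -n 1
done
rm -rf "$TMP"
```

If the dsync numbers climb sharply with block size, the bottleneck is per-write latency (network round trips on a replicated volume), not disk bandwidth.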
Michaël Couren
2015-Feb-13 10:41 UTC
[Gluster-users] Gluster performance on the small files
> > 268435456 bytes (268 MB) copied, 57.3145 s, 4.7 MB/s

Hi,

I ran the same test on various replicated volumes we have on 3 KVM virtual machines:

* Gluster 3.6.0, bricks on the same disk as the system (ext4 + LVM): 1.0 MB/s (1.6 MB/s for the system disk)
* Gluster 3.6.2, bricks on the same disk as the system (XFS): 2.4 MB/s (2.4 MB/s for the system disk)
* Gluster 3.6.2, bricks on separate SAN LUNs (XFS): 14 MB/s (22 MB/s for the system disk)

Remark: the same test on various machines without Gluster:

* 5 VMs, writing through the VirtIO driver to SAN LUNs: 37, 57, 49, 55 and 56 MB/s
* 3 bare-metal servers (RAID 5 on SAS disks): 105, 128 and 100 MB/s
* 1 bare-metal server (SSD drive): 141 MB/s
* my recent personal Ubuntu laptop (XFS): 16 MB/s on the SSD drive; 1.6 MB/s on the encrypted /home :(

Conclusions: a roughly 40% decrease is observed in some cases, but not all. Overall performance mostly tracks that of the machines hosting GlusterFS (VMs on the SAN reach about 50% of bare-metal local-disk performance).

--
Cordialement / Best regards,
Michaël Couren, ABES, Montpellier, France.
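A comparison like the one above is easier to keep honest if the exact same test runs against every target. A small sketch of such a helper (the mountpoint paths in the example call are placeholders; substitute your own Gluster mounts and local disks):

```shell
# Run the thread's dsync dd test against one target directory and print
# a single result line. Defining it as a function keeps the test
# identical across all targets being compared.
run_ddtest() {
    # $1 = target directory, $2 = block size, $3 = block count
    out=$(dd bs="$2" count="$3" if=/dev/zero of="$1/ddtest" oflag=dsync 2>&1 | tail -n 1)
    rm -f "$1/ddtest"
    printf '%s: %s\n' "$1" "$out"
}

# Same parameters as the thread (256 MB in 64 KiB blocks), e.g.:
#   for mnt in /mnt/gluster-xfs /mnt/san-lun /tmp; do run_ddtest "$mnt" 64k 4k; done
```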
For those interested, here are the results of my tests using Gluster 3.5.2. Nothing much better here either...

shell$ dd bs=64k count=4k if=/dev/zero of=test oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 51.9808 s, 5.2 MB/s

shell$ dd bs=64k count=4k if=/dev/zero of=test2 conv=fdatasync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 3.01334 s, 89.1 MB/s

On Friday, February 13, 2015 7:58 AM, Punit Dambiwal <hypunit at gmail.com> wrote:
> I have seen the gluster performance is dead slow on the small files...even i am using the SSD...
> I am using distributed replicated glusterfs with replica count=2...i have all SSD disks on the brick...

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
Samuli Heinonen
2015-Feb-14 08:08 UTC
[Gluster-users] Gluster performance on the small files
Hi!

What image type are you using to store the virtual machines? For example, sparse QCOW2 images are much slower than preallocated RAW images. Performance with QCOW2 should improve once the image file has grown large enough that resizing the sparse image is no longer necessary.

Best regards,
Samuli Heinonen

On 13.2.2015, at 8.58, Punit Dambiwal <hypunit at gmail.com> wrote:
> I have seen the gluster performance is dead slow on the small files...even i am using the SSD...
> I am using distributed replicated glusterfs with replica count=2...i have all SSD disks on the brick...
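A minimal sketch of preallocating an image up front, with a placeholder path and a deliberately small demo size: with a preallocated image the guest never pays the block-allocation cost at write time, which is the overhead described for sparse QCOW2.

```shell
# Preallocate a raw image with fallocate so all blocks exist before the
# guest's first write. Path and size are illustrative placeholders.
IMG=$(mktemp -u)                  # placeholder image path
fallocate -l 64M "$IMG"           # preallocated raw image (64 MiB demo size)
ls -ls "$IMG"                     # first column: blocks actually allocated
rm -f "$IMG"

# The qemu-img equivalents for a real 20 GB image would be roughly:
#   qemu-img create -f raw   -o preallocation=falloc vm3.img 20G
#   qemu-img create -f qcow2 -o preallocation=full   vm3.img 20G
```

(qcow2 `preallocation=full` is an assumption about your QEMU version; older releases may only support `preallocation=metadata`.)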
On 02/13/2015 12:28 PM, Punit Dambiwal wrote:
> I am using distributed replicated glusterfs with replica count=2...i have all SSD disks on the brick...
> 268435456 bytes (268 MB) copied, 57.3145 s, 4.7 MB/s

Can you please specify your volume configuration?

-Vijay
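The commands one would typically run to answer that question (the volume name "ssdvol" is a placeholder; these need a live Gluster cluster, so the sketch only prints them):

```shell
# Print the usual information-gathering commands: volume layout and
# options, per-brick status, and the profiler for a per-operation
# latency breakdown while the dd test reruns.
VOL=ssdvol
for cmd in \
    "gluster volume info $VOL" \
    "gluster volume status $VOL detail" \
    "gluster volume profile $VOL start" \
    "gluster volume profile $VOL info"; do
    echo "$cmd"
done
```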
On 02/12/2015 10:58 PM, Punit Dambiwal wrote:
> I have seen the gluster performance is dead slow on the small files...even i am using the SSD...
> I am using distributed replicated glusterfs with replica count=2...i have all SSD disks on the brick...

How small is your VM image? The image is the file that GlusterFS is serving, not the small files within it. Perhaps the filesystem you're using within your VM is inefficient in how it handles disk writes.

I believe your concept of "small file" performance is misunderstood, as is often the case with this phrase. The "small file" issue has to do with the overhead of finding and checking the validity of any file; that overhead is roughly constant per file, so with a small file the percentage of time spent on those checks is proportionally greater. With your VM image, that file is already open. No self-heal checks or lookups are happening during your tests, so that overhead is not the problem.
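A workload that actually exercises the "small file" path described above looks quite different from the single-file dd test: each file needs its own lookup, create, and close, so on a Gluster mount the per-file overhead dominates even though very little data is written in total. A sketch (DIR is a placeholder; point it at a Gluster mount to compare against the dd numbers):

```shell
# Write 1024 separate 4 KiB files -- only 4 MiB of data, but 1024x the
# metadata traffic of a single 4 MiB file. DIR defaults to a temporary
# local directory as a placeholder.
DIR="${DIR:-$(mktemp -d)}"
start=$(date +%s)
i=0
while [ "$i" -lt 1024 ]; do
    dd if=/dev/zero of="$DIR/f$i" bs=4k count=1 2>/dev/null
    i=$((i + 1))
done
echo "wrote 1024 x 4 KiB files in $(( $(date +%s) - start ))s"
```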