Hello!

I have a GlusterFS installation with the following parameters:

- 4 servers, connected by a 1Gbit/s network (760-800 Mbit/s by iperf)
- A distributed-replicated volume with 4 bricks (2 x 2 formula)
- A replicated volume with 2 bricks (1 x 2 formula)

I have run into a problem: copying a large number of files (94,000 files, 3GB total) takes a terribly long time (20 to 40 minutes). I ran some tests, and the results are:

Directly to storage (single 2TB HDD): 158MB/s
Directly to storage (RAID1 of 2 HDDs): 190MB/s
To the replicated gluster volume: 89MB/s
To the distributed-replicated gluster volume: 49MB/s

The test command is:

sync && echo 3 > /proc/sys/vm/drop_caches && dd if=/dev/zero of=gluster.test.bin bs=1G count=1

Switching direct-io on and off has no effect, and neither does playing with the glusterfs options. What can I do about performance?

My volumes:

Volume Name: nginx
Type: Replicate
Volume ID: e3306431-e01d-41f8-8b2d-86a61837b0b2
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: control1:/storage/nginx
Brick2: control2:/storage/nginx

Volume Name: instances
Type: Distributed-Replicate
Volume ID: d32363fc-4b53-433c-87b7-ad51acfa4125
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: control1:/storage/instances
Brick2: control2:/storage/instances
Brick3: compute1:/storage/instances
Brick4: compute2:/storage/instances
Options Reconfigured:
cluster.self-heal-window-size: 1
cluster.data-self-heal-algorithm: diff
performance.stat-prefetch: 1
features.quota-timeout: 3600
performance.write-behind-window-size: 512MB
performance.cache-size: 1GB
performance.io-thread-count: 64
performance.flush-behind: on
performance.cache-min-file-size: 0
performance.write-behind: on

The volumes are mounted with default options via Gluster-FUSE.

--
With best regards,
differentlocal (www.differentlocal.ru | differentlocal at gmail.com),
System administrator.
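The test command above can be wrapped in a small script so the same measurement is repeatable against each storage path. This is only a sketch (the `bench` helper name and the example target are made up for illustration); `conv=fdatasync` is added so dd only reports a rate once the data has actually been flushed, which keeps the client's page cache from inflating the numbers:

```shell
#!/bin/sh
# Repeatable single-stream write benchmark (sketch).
# Usage: bench <directory> [bs] [count]
bench() {
    dir="$1"; bs="${2:-1G}"; count="${3:-1}"
    sync
    # Dropping the page cache needs root; skip it silently otherwise.
    if [ "$(id -u)" -eq 0 ]; then
        echo 3 > /proc/sys/vm/drop_caches
    fi
    # conv=fdatasync makes dd flush before reporting, so the figure
    # reflects the storage path rather than client RAM.
    dd if=/dev/zero of="$dir/gluster.test.bin" bs="$bs" count="$count" conv=fdatasync
    rm -f "$dir/gluster.test.bin"
}

# Example (hypothetical mount point):
# bench /mnt/instances
```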
On Wed, 27 Feb 2013 15:37:53 +0600, Nikita A Kardashin wrote:
> I have a GlusterFS installation with the following parameters:
>
> - 4 servers, connected by a 1Gbit/s network (760-800 Mbit/s by iperf)
> - Distributed-replicated volume with 4 bricks.
> - Replicated volume with 2 bricks.
(...)
> What can I do about performance?

What version are you running? I'd suggest 3.3.0, since it has given me the best I/O rates.
On 27.02.2013 09:37, Nikita A Kardashin wrote:
> To Replicated gluster volume: 89MB/s
> To Distributed-replicated gluster volume: 49MB/s
>
> Test command is: sync && echo 3 > /proc/sys/vm/drop_caches && dd
> if=/dev/zero of=gluster.test.bin bs=1G count=1

Hello Nikita,

To me that sounds just about right; it's the kind of speed I get as well. If you think about it, what happens in the background is that the 1GB file is written not only from you to one of the servers, but also from that server to the other servers, so it gets properly replicated/distributed. So depending on your setup, you are not writing the 1GB file once, but 3-4 times, hence the drop in speed.

You could squeeze a bit more out of it if you can create the volumes over the servers' secondary NICs, so server-to-server traffic goes through there.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
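The arithmetic behind that drop can be sketched with a back-of-the-envelope estimate (assumptions, not measurements: ~780 Mbit/s of usable link per the iperf figure, and the client pushing one copy per replica over the same NIC):

```shell
#!/bin/sh
# Estimate usable client write speed: link rate divided by 8
# (Mbit -> MB) and by the number of copies sent over the wire.
link_mbit=780
for copies in 1 2 3; do
    echo "copies=$copies: ~$(( link_mbit / 8 / copies )) MB/s"
done
```

The two-copy estimate (~48 MB/s) lands right on the observed 49MB/s for the distributed-replicated volume; the 89MB/s on the plain replicated volume is higher than this naive estimate, which the 512MB write-behind window could plausibly explain.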
On 02/27/2013 07:34 AM, Michael Cronenworth wrote:
>
> What are your volume settings? Have you adjusted the cache sizes?

Sorry.. I see your original post and the settings now.
I know. But I would expect each file, even written four times, to be written at storage speed (190MB/s, if the network is not overloaded), with overall performance remaining at a usable level.

On two identical (by hardware) servers in replicated, not distributed, mode with the scheme "2 bricks, 2 servers, 1x2 redundancy", I got about 100MB/s write performance without any tuning (default settings). Why do I get only 50MB/s on 4 servers with the 2x2 formula, whatever tuning settings I try?

What speed will I get in the planned deployment - 9 servers in distributed-replicated mode with a 3x3 formula? 5MB/s?

Maybe some system and/or gluster tweaks can help me, or am I going about this the wrong way?

My use-case is simple: distributed, redundant shared storage for an OpenStack cloud. Initially we plan to use 9 servers in a 3x3 scheme, and many more in the future.

2013/2/27 Michael Cronenworth <mike at cchtml.com>:
> On 02/27/2013 07:34 AM, Michael Cronenworth wrote:
>> What are your volume settings? Have you adjusted the cache sizes?
>
> Sorry.. I see your original post and the settings now.

--
With best regards,
differentlocal (www.differentlocal.ru | differentlocal at gmail.com),
System administrator.
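For reference, a 3x3 layout (three replica-3 sets, distributed) would be created along these lines; the hostnames and brick paths here are placeholders, and bricks are grouped into replica sets in the order listed:

```shell
# Hypothetical 9-server, 3x3 volume: bricks 1-3 form one replica set,
# 4-6 the next, 7-9 the last; files are distributed across the sets.
gluster volume create instances replica 3 \
    server1:/storage/instances server2:/storage/instances server3:/storage/instances \
    server4:/storage/instances server5:/storage/instances server6:/storage/instances \
    server7:/storage/instances server8:/storage/instances server9:/storage/instances
gluster volume start instances
```

Note that distribution does not add extra copies; only the replica count does. So with client-side replication the naive expectation for replica 3 on a ~1Gbit/s link would be roughly a third of the link rate (on the order of 30MB/s), not 5MB/s.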