How are you doing the read/write tests on the FUSE/GlusterFS mountpoint?
Many small files will be slow, because most of the time goes into
coordinating locks rather than moving data.
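
If you haven't already, it's worth ruling small-file overhead out with a
single large sequential transfer directly on the mount. A rough sketch (the
mountpoint /mnt/shared and the file size are just assumptions, adjust to
your setup; run as root so the cache drop works):

    # sequential write of a 10 GB file, forcing data out at the end
    dd if=/dev/zero of=/mnt/shared/ddtest bs=1M count=10240 conv=fdatasync

    # drop the client page cache so the read actually goes over the wire
    echo 3 > /proc/sys/vm/drop_caches

    # sequential read of the same file
    dd if=/mnt/shared/ddtest of=/dev/null bs=1M

If a single big dd read is also stuck around 70-80MB/s, the problem isn't
small-file lock traffic, and I'd look at read-ahead or the network instead.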
On Wed, Feb 27, 2013 at 9:31 AM, Thomas Wakefield <twake at
cola.iges.org> wrote:
> Help please-
>
>
> I am running 3.3.1 on CentOS over a 10GbE network. I get reasonable write
> speeds, although I think they could be faster. But my read speeds are
> REALLY slow.
>
> Executive summary:
>
> On gluster client-
> Writes average about 700-800MB/s
> Reads average about 70-80MB/s
>
> On server-
> Writes average about 1-1.5GB/s
> Reads average about 2-3GB/s
>
> Any thoughts?
>
>
>
> Here are some additional details:
>
> Nothing interesting in any of the log files; everything is very quiet.
> The servers have no other load, and all clients perform the same way.
>
>
> Volume Name: shared
> Type: Distribute
> Volume ID: de11cc19-0085-41c3-881e-995cca244620
> Status: Started
> Number of Bricks: 26
> Transport-type: tcp
> Bricks:
> Brick1: fs-disk2:/storage/disk2a
> Brick2: fs-disk2:/storage/disk2b
> Brick3: fs-disk2:/storage/disk2d
> Brick4: fs-disk2:/storage/disk2e
> Brick5: fs-disk2:/storage/disk2f
> Brick6: fs-disk2:/storage/disk2g
> Brick7: fs-disk2:/storage/disk2h
> Brick8: fs-disk2:/storage/disk2i
> Brick9: fs-disk2:/storage/disk2j
> Brick10: fs-disk2:/storage/disk2k
> Brick11: fs-disk2:/storage/disk2l
> Brick12: fs-disk2:/storage/disk2m
> Brick13: fs-disk2:/storage/disk2n
> Brick14: fs-disk2:/storage/disk2o
> Brick15: fs-disk2:/storage/disk2p
> Brick16: fs-disk2:/storage/disk2q
> Brick17: fs-disk2:/storage/disk2r
> Brick18: fs-disk2:/storage/disk2s
> Brick19: fs-disk2:/storage/disk2t
> Brick20: fs-disk2:/storage/disk2u
> Brick21: fs-disk2:/storage/disk2v
> Brick22: fs-disk2:/storage/disk2w
> Brick23: fs-disk2:/storage/disk2x
> Brick24: fs-disk3:/storage/disk3a
> Brick25: fs-disk3:/storage/disk3b
> Brick26: fs-disk3:/storage/disk3c
> Options Reconfigured:
> performance.write-behind: on
> performance.read-ahead: on
> performance.io-cache: on
> performance.stat-prefetch: on
> performance.quick-read: on
> cluster.min-free-disk: 500GB
> nfs.disable: off
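
As an aside: io-cache and quick-read mostly help small-file workloads,
and read-ahead behaviour matters a lot for big sequential reads. One cheap
experiment, assuming the volume name "shared" from your output (these are
standard volume set options, but double-check them against the 3.3.1 docs):

    # temporarily turn off the small-file-oriented translators
    gluster volume set shared performance.io-cache off
    gluster volume set shared performance.quick-read off

    # re-run the read test; re-enable them if nothing changes
    gluster volume set shared performance.io-cache on
    gluster volume set shared performance.quick-read on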
>
>
> sysctl.conf settings for 10GbE
> # increase TCP max buffer size settable using setsockopt()
> net.core.rmem_max = 67108864
> net.core.wmem_max = 67108864
> # increase Linux autotuning TCP buffer limit
> net.ipv4.tcp_rmem = 4096 87380 67108864
> net.ipv4.tcp_wmem = 4096 65536 67108864
> # increase the length of the processor input queue
> net.core.netdev_max_backlog = 250000
> # recommended default congestion control is htcp
> net.ipv4.tcp_congestion_control=htcp
> # recommended for hosts with jumbo frames enabled
> net.ipv4.tcp_mtu_probing=1
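
It's also worth confirming the raw TCP path between client and server
before tuning anything else. A minimal sketch with iperf (the hostname is
just an example taken from your brick list):

    # on one of the servers
    iperf -s

    # on the client: 4 parallel streams for 30 seconds
    iperf -c fs-disk2 -P 4 -t 30

If iperf can push close to line rate in both directions, the network and
these sysctl settings are probably fine, and the read/write asymmetry is
in GlusterFS itself.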
>
> Thomas W.
> Sr. Systems Administrator COLA/IGES
> twake at cola.iges.org
> Affiliate Computer Scientist GMU
>