Kaamesh Kamalaaharan
2016-Aug-03 06:40 UTC
[Gluster-users] Gluster not saturating 10gb network
Hi,

I have gluster 3.6.2 installed on my server network. Due to internal issues we are not allowed to upgrade the gluster version. All the clients are on the same version of gluster. When transferring files to/from the clients or between my nodes over the 10 Gb network, the transfer rate is capped at 450 MB/s. Is there any way to increase the transfer speeds for gluster mounts?

Our server setup is as follows:

2 gluster servers - gfs1 and gfs2
volume name: gfsvolume
3 clients - hpc1, hpc2, hpc3
gluster volume mounted on /export/gfsmount/

These are the average results of what I have tried so far:

1) tested bandwidth with iperf between all machines - 9.4 Gbit/s
2) tested write speed with dd:
dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1
result = 399 MB/s
3) tested read speed with dd:
dd if=/export/gfsmount/testfile of=/dev/zero bs=1G count=1
result = 284 MB/s

My gluster volume configuration:

Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
performance.quick-read: off
network.ping-timeout: 30
network.frame-timeout: 90
performance.cache-max-file-size: 2MB
cluster.server-quorum-type: none
nfs.addr-namelookup: off
nfs.trusted-write: off
performance.write-behind-window-size: 4MB
cluster.data-self-heal-algorithm: diff
performance.cache-refresh-timeout: 60
performance.cache-size: 1GB
cluster.quorum-type: fixed
auth.allow: 172.*
cluster.quorum-count: 1
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.server-quorum-ratio: 50%

Any help would be appreciated.

Thanks,
Kaamesh
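A note on the dd figures above: without conv=fdatasync or oflag=direct, dd from /dev/zero partly measures caching rather than disk and network throughput, so those numbers may be optimistic. A variant along these lines (bs and count here are only illustrative) forces the data out before dd reports its rate:

dd if=/dev/zero of=/export/gfsmount/testfile bs=1M count=1024 conv=fdatasync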
Your 10G NIC is capable; the problem is the disk speed. Fix your disk speed first:
use SSD, SSHD or 15k SAS drives, in RAID 0 or RAID 5/6 with at least four disks.
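To check whether the bricks themselves are the limit, something like the following, run directly on gfs1/gfs2 against the brick's filesystem (the ddtest path is only an example; keep it outside the brick directory itself), can be compared with the ~400 MB/s seen through the mount:

dd if=/dev/zero of=/export/sda/ddtest bs=1M count=4096 oflag=direct

If the raw array also lands in the 400-500 MB/s range, the disks are the cap rather than gluster or the network.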
Дмитрий Глушенок
2016-Aug-09 17:24 UTC
[Gluster-users] Gluster not saturating 10gb network
Hi,
Same problem on 3.8.1, even over the loopback interface (traffic never leaves the gluster node):
Writing locally to a replica 2 volume (each brick is a separate local RAID6): 613 MB/sec
Writing locally to a 1-brick volume: 877 MB/sec
Writing locally to the brick itself (directly to XFS): 1400 MB/sec
Tests were performed using fio with the following settings:
bs=4096k
ioengine=libaio
iodepth=32
direct=0
runtime=600
directory=/R1
numjobs=1
rw=write
size=40g
Even with direct=1 the brick itself gives 1400 MB/sec.
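For reference, a job file with those settings would look roughly like this (the job name is arbitrary, and whether it was run as a single job file is my assumption), run as "fio seq-write.fio":

[seq-write]
ioengine=libaio
iodepth=32
rw=write
bs=4096k
direct=0
size=40g
numjobs=1
runtime=600
directory=/R1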
1-brick volume profiling below:
# gluster volume profile test-data-03 info
Brick: gluster-01:/R1/test-data-03
-----------------------------------------------
Cumulative Stats:
Block Size: 131072b+ 262144b+
No. of Reads: 0 0
No. of Writes: 889072 20
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 RELEASE
100.00 122.96 us 67.00 us 42493.00 us 208598 WRITE
Duration: 1605 seconds
Data Read: 0 bytes
Data Written: 116537688064 bytes
Interval 0 Stats:
Block Size: 131072b+ 262144b+
No. of Reads: 0 0
No. of Writes: 889072 20
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 RELEASE
100.00 122.96 us 67.00 us 42493.00 us 208598 WRITE
Duration: 1605 seconds
Data Read: 0 bytes
Data Written: 116537688064 bytes
#
As you can see, all writes are performed using a 128 KB block size, and that looks like the bottleneck. This was discussed previously, by the way:
http://www.gluster.org/pipermail/gluster-devel/2013-March/038821.html
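A back-of-the-envelope check, taking the profile at face value: with requests issued one after another, the per-call latency alone caps throughput at about 131072 bytes / 122.96 us ≈ 1.07 GB/sec for 128 KiB writes, well below the ~1400 MB/sec the brick can do directly and in line with the ~877 MB/sec measured through the mount.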
Using GFAPI to access the volume gives better speed, but still far from the raw brick. fio tests with ioengine=gfapi give the following:
Writing locally to a replica 2 volume (each brick is a separate local RAID6): 680 MB/sec
Writing locally to a 1-brick volume: 960 MB/sec
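In case anyone wants to reproduce this: the gfapi job differed from the FUSE one only in the engine and target lines, roughly as below (fio must be built with gfapi support, and if I recall the engine options correctly, brick= takes the server hostname), with the rest of the settings unchanged:

ioengine=gfapi
volume=tzk-data-03
brick=j-gluster-01.vcod.jet.su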
According to the 1-brick volume profile, 128 KB blocks are no longer used:
# gluster volume profile tzk-data-03 info
Brick: j-gluster-01.vcod.jet.su:/R1/tzk-data-03
-----------------------------------------------
Cumulative Stats:
Block Size: 4194304b+
No. of Reads: 0
No. of Writes: 9211
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
100.00 2237.67 us 1880.00 us 5785.00 us 8701 WRITE
Duration: 49 seconds
Data Read: 0 bytes
Data Written: 38633734144 bytes
Interval 0 Stats:
Block Size: 4194304b+
No. of Reads: 0
No. of Writes: 9211
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
100.00 2237.67 us 1880.00 us 5785.00 us 8701 WRITE
Duration: 49 seconds
Data Read: 0 bytes
Data Written: 38633734144 bytes
[root at j-gluster-01 ~]#
So it may be worth trying NFS-Ganesha with the GFAPI plugin.
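A minimal export for that would look something like the following, assuming a stock NFS-Ganesha with FSAL_GLUSTER and reusing this setup's hostname and volume (Export_Id, Path and Pseudo here are only placeholders):

EXPORT {
    Export_Id = 1;
    Path = "/tzk-data-03";
    Pseudo = "/tzk-data-03";
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;
        Hostname = "j-gluster-01.vcod.jet.su";
        Volume = "tzk-data-03";
    }
}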
--
Dmitry Glushenok
Jet Infosystems