Kaamesh Kamalaaharan
2016-Aug-05 03:56 UTC
[Gluster-users] Gluster not saturating 10gb network
Hi, I was mistaken about my server specs. My actual specs for each server
are 12 x 4.0TB 3.5" LFF NL-SAS 6G, 128MB, 7.2K rpm HDDs (set up as RAID 6
for 36.0TB of usable storage per data store), not the WD Red drives I
mentioned earlier. I would expect a higher transfer rate with these drives;
400 MB/s seems too slow. Any help would be greatly appreciated, as I'm not
sure where to start debugging this issue.

On Fri, Aug 5, 2016 at 2:44 AM, Leno Vo <lenovolastname at yahoo.com> wrote:

> I got 1.2 GB/s on Seagate SSHD ST1000LX001 RAID 5 x3 (but with the dreaded
> cache array on) and 1.1 GB/s on Samsung Pro SSD 1TB x3 RAID 5 (no array
> caching on, as it's not compatible with the ProLiant -- not an enterprise
> SSD).
>
> On Thursday, August 4, 2016 5:23 AM, Kaamesh Kamalaaharan
> <kaamesh at novocraft.com> wrote:
>
> > Hi,
> > Thanks for the reply. I have hardware RAID 5 storage servers with 4TB WD
> > Red drives. I think they are capable of 6Gb/s transfers, so it shouldn't
> > be a drive speed issue. Just for testing, I did a dd test directly into
> > the brick mounted from the storage server itself and got around 800MB/s,
> > which is double what I get when the brick is mounted on the client. Are
> > there any other options or tests I can perform to find the root cause of
> > my problem, as I have exhausted most Google searches and tests?
> >
> > Kaamesh
> >
> > On Wed, Aug 3, 2016 at 10:58 PM, Leno Vo <lenovolastname at yahoo.com>
> > wrote:
> >
> > > Your 10G NIC is capable; the problem is the disk speed. Fix your disk
> > > speed first: use SSD, SSHD, or 15K SAS drives in RAID 0 or RAID 5/6,
> > > at least four of them.
> > >
> > > On Wednesday, August 3, 2016 2:40 AM, Kaamesh Kamalaaharan
> > > <kaamesh at novocraft.com> wrote:
> > >
> > > > Hi,
> > > > I have gluster 3.6.2 installed on my server network. Due to internal
> > > > issues we are not allowed to upgrade the gluster version. All the
> > > > clients are on the same version of gluster. When transferring files
> > > > to/from the clients or between my nodes over the 10Gb network, the
> > > > transfer rate is capped at 450MB/s. Is there any way to increase the
> > > > transfer speeds for gluster mounts?
> > > >
> > > > Our server setup is as follows:
> > > >
> > > > 2 gluster servers: gfs1 and gfs2
> > > > volume name: gfsvolume
> > > > 3 clients: hpc1, hpc2, hpc3
> > > > gluster volume mounted on /export/gfsmount/
> > > >
> > > > The following are the average results of what I have done so far:
> > > >
> > > > 1) Test bandwidth with iperf between all machines: 9.4 Gb/s
> > > >
> > > > 2) Test write speed with dd:
> > > > dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1
> > > > result = 399MB/s
> > > >
> > > > 3) Test read speed with dd:
> > > > dd if=/export/gfsmount/testfile of=/dev/null bs=1G count=1
> > > > result = 284MB/s
> > > >
> > > > My gluster volume configuration:
> > > >
> > > > Volume Name: gfsvolume
> > > > Type: Replicate
> > > > Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
> > > > Status: Started
> > > > Number of Bricks: 1 x 2 = 2
> > > > Transport-type: tcp
> > > > Bricks:
> > > > Brick1: gfs1:/export/sda/brick
> > > > Brick2: gfs2:/export/sda/brick
> > > > Options Reconfigured:
> > > > performance.quick-read: off
> > > > network.ping-timeout: 30
> > > > network.frame-timeout: 90
> > > > performance.cache-max-file-size: 2MB
> > > > cluster.server-quorum-type: none
> > > > nfs.addr-namelookup: off
> > > > nfs.trusted-write: off
> > > > performance.write-behind-window-size: 4MB
> > > > cluster.data-self-heal-algorithm: diff
> > > > performance.cache-refresh-timeout: 60
> > > > performance.cache-size: 1GB
> > > > cluster.quorum-type: fixed
> > > > auth.allow: 172.*
> > > > cluster.quorum-count: 1
> > > > diagnostics.latency-measurement: on
> > > > diagnostics.count-fop-hits: on
> > > > cluster.server-quorum-ratio: 50%
> > > >
> > > > Any help would be appreciated.
> > > >
> > > > Thanks,
> > > > Kaamesh
> > > >
> > > > _______________________________________________
> > > > Gluster-users mailing list
> > > > Gluster-users at gluster.org
> > > > http://www.gluster.org/mailman/listinfo/gluster-users
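[Editor's note] The dd figures quoted above can be skewed by the client page
cache. Below is a sketch of cache-honest variants; MOUNT, the file name, and
the sizes are placeholder values (the thread used /export/gfsmount and
bs=1G count=1):

```shell
#!/bin/sh
# Sketch: dd throughput tests that keep the page cache out of the result.
# MOUNT and SIZE_MB are example values -- point MOUNT at the gluster
# client mount (e.g. /export/gfsmount) and use a file larger than RAM
# for a serious run.
MOUNT=${MOUNT:-/tmp}
SIZE_MB=${SIZE_MB:-64}

# Write test: conv=fdatasync makes dd flush to stable storage before it
# reports throughput, so buffered-but-unwritten data cannot inflate it.
dd if=/dev/zero of="$MOUNT/dd-testfile" bs=1M count="$SIZE_MB" conv=fdatasync

# Read test: drop the cache first (root only), otherwise the file may be
# served from local RAM instead of the bricks over the 10GbE network.
# sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if="$MOUNT/dd-testfile" of=/dev/null bs=1M

rm -f "$MOUNT/dd-testfile"
```

Since diagnostics.latency-measurement and diagnostics.count-fop-hits are
already enabled on this volume, `gluster volume profile gfsvolume start`
followed by `gluster volume profile gfsvolume info` during a transfer can
show which file operations carry the latency.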
Gandalf Corvotempesta
2016-Aug-05 06:42 UTC
[Gluster-users] Gluster not saturating 10gb network
On 5 Aug 2016 at 5:57 AM, "Kaamesh Kamalaaharan" <kaamesh at novocraft.com> wrote:

> 12 x 4.0TB 3.5" LFF NL-SAS 6G, 128MB, 7.2K rpm HDDs (as Data Store set as
> RAID 6 achieve 36.0TB usable storage)

12 x 4TB disks in a single RAID 6? What about rebuild time? At that size you
are almost certain to hit a URE (unrecoverable read error) during a rebuild.
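[Editor's note] To put a rough number on the rebuild risk raised above: after
one disk fails in a 12 x 4TB set, a rebuild must read the 11 surviving drives
end to end. The sketch below assumes a URE rate of 1 error per 1e14 bits read,
a typical spec-sheet figure for consumer and NL-SAS drives (check the actual
datasheet; enterprise drives are often rated 1 in 1e15):

```shell
# Expected number of unrecoverable read errors during a RAID 6 rebuild.
# Assumptions: 11 surviving 4TB (decimal) drives, URE rate 1e-14 per bit.
awk 'BEGIN {
  drives_read = 11                  # surviving drives read during rebuild
  bytes = drives_read * 4 * 10^12   # 4 TB per drive
  ure_per_bit = 1e-14               # assumed spec-sheet URE rate
  printf "expected UREs during rebuild: %.1f\n", bytes * 8 * ure_per_bit
}'
# prints "expected UREs during rebuild: 3.5"
```

Note that RAID 6 keeps a second parity stripe, so a single URE during a
one-disk rebuild can still be corrected; that margin is gone if a second
drive fails before the rebuild completes.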