Hi Ben,
Yes, I am using 2*10G (LACP bonding)...
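(A quick way to confirm the bond really is running LACP, assuming the bond interface is named bond0, which is a guess for your setup:)

cat /proc/net/bonding/bond0
# look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"
# and check that both slave NICs show "MII Status: up"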
[root@cpu02 ~]# /usr/bin/iperf3 -c 10.10.0.10
Connecting to host 10.10.0.10, port 5201
[ 4] local 10.10.0.11 port 45135 connected to 10.10.0.10 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.15 GBytes 9.85 Gbits/sec 13 1.30 MBytes
[ 4] 1.00-2.00 sec 1.15 GBytes 9.91 Gbits/sec 0 1.89 MBytes
[ 4] 2.00-3.00 sec 1.15 GBytes 9.90 Gbits/sec 0 2.33 MBytes
[ 4] 3.00-4.00 sec 1.15 GBytes 9.89 Gbits/sec 1 2.41 MBytes
[ 4] 4.00-5.00 sec 1.15 GBytes 9.90 Gbits/sec 0 2.42 MBytes
[ 4] 5.00-6.00 sec 1.15 GBytes 9.90 Gbits/sec 0 2.53 MBytes
[ 4] 6.00-7.00 sec 1.15 GBytes 9.90 Gbits/sec 1 2.53 MBytes
[ 4] 7.00-8.00 sec 1.15 GBytes 9.90 Gbits/sec 0 2.68 MBytes
[ 4] 8.00-9.00 sec 1.15 GBytes 9.90 Gbits/sec 0 2.76 MBytes
[ 4] 9.00-10.00 sec 1.15 GBytes 9.90 Gbits/sec 0 2.88 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 11.5 GBytes 9.89 Gbits/sec 15 sender
[ 4] 0.00-10.00 sec 11.5 GBytes 9.89 Gbits/sec    receiver
[root@cpu01 ~]# time `dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.10.0.14\:_ds01/myfile bs=1024k count=1000; sync`
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.70556 s, 388 MB/s
real 0m2.815s
user 0m0.002s
sys 0m0.822s
[root@cpu01 ~]# time `dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.10.0.14\:_ds01/e732a82f-bae9-4368-8b98-dedc1c3814de/images/myfile bs=1024k count=1000; sync`
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.93056 s, 543 MB/s
real 0m2.077s
user 0m0.002s
sys 0m0.795s
[root@cpu01 ~]#
Ben, how will I check those things? (See the sketch after this list.)
A couple things to check with your SSDs:
-Scheduler {noop or deadline }
-No read ahead!
-No RAID!
-Make sure the kernel sees them as SSDs
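For reference, here is a rough sketch of how each of those items can be checked on a brick device (assuming a device name like /dev/sdb as a placeholder; substitute the actual brick disks and repeat on every node):

# 1. I/O scheduler: the active one is shown in brackets; noop or deadline is preferred for SSDs
cat /sys/block/sdb/queue/scheduler

# 2. Read-ahead: 0 means read-ahead is disabled
blockdev --getra /dev/sdb

# 3. No RAID: bricks should be plain disks, not md/dm RAID devices
lsblk -o NAME,TYPE,ROTA /dev/sdb

# 4. The kernel sees the disk as an SSD: rotational must be 0
cat /sys/block/sdb/queue/rotational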
Thanks,
Punit
On Thu, Apr 9, 2015 at 2:55 AM, Ben Turner <bturner@redhat.com> wrote:
> ----- Original Message -----
> > From: "Vijay Bellur" <vbellur@redhat.com>
> > To: "Punit Dambiwal" <hypunit@gmail.com>, gluster-users@gluster.org
> > Sent: Wednesday, April 8, 2015 6:44:42 AM
> > Subject: Re: [Gluster-users] Glusterfs performance tweaks
> >
> > On 04/08/2015 02:57 PM, Punit Dambiwal wrote:
> > > Hi,
> > >
> > > I am getting very slow throughput with GlusterFS (dead slow... even
> > > SATA is better)... I am using all SSDs in my environment.
> > >
> > > I have the following setup :-
> > > A. 4* host machines with CentOS 7 (GlusterFS 3.6.2 | Distributed
> > > Replicated | replica=2)
> > > B. Each server has 24 SSDs as bricks (without HW RAID | JBOD)
> > > C. Each server has 2 additional SSDs for the OS
> > > D. Network 2*10G with bonding (2*E5 CPUs and 64GB RAM)
> > >
> > > Note: performance/throughput is slower than normal SATA 7200 RPM,
> > > even though I am using all SSDs in my env.
> > >
> > > Gluster Volume options :-
> > >
> > > +++++++++++++++
> > > Options Reconfigured:
> > > performance.nfs.write-behind-window-size: 1024MB
> > > performance.io-thread-count: 32
> > > performance.cache-size: 1024MB
> > > cluster.quorum-type: auto
> > > cluster.server-quorum-type: server
> > > diagnostics.count-fop-hits: on
> > > diagnostics.latency-measurement: on
> > > nfs.disable: on
> > > user.cifs: enable
> > > auth.allow: *
> > > performance.quick-read: off
> > > performance.read-ahead: off
> > > performance.io-cache: off
> > > performance.stat-prefetch: off
> > > cluster.eager-lock: enable
> > > network.remote-dio: enable
> > > storage.owner-uid: 36
> > > storage.owner-gid: 36
> > > server.allow-insecure: on
> > > network.ping-timeout: 0
> > > diagnostics.brick-log-level: INFO
> > > +++++++++++++++++++
> > >
> > > Test with SATA and GlusterFS SSD:
> > > --------
> > > Dell EQL (SATA disk 7200 RPM)
> > > --------
> > > [root@mirror ~]#
> > > 4096+0 records in
> > > 4096+0 records out
> > > 268435456 bytes (268 MB) copied, 20.7763 s, 12.9 MB/s
> > > [root@mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> > > 4096+0 records in
> > > 4096+0 records out
> > > 268435456 bytes (268 MB) copied, 23.5947 s, 11.4 MB/s
> > >
> > > GlusterFS SSD
> > > --------
> > > [root@sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> > > 4096+0 records in
> > > 4096+0 records out
> > > 268435456 bytes (268 MB) copied, 66.2572 s, 4.1 MB/s
> > > [root@sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> > > 4096+0 records in
> > > 4096+0 records out
> > > 268435456 bytes (268 MB) copied, 62.6922 s, 4.3 MB/s
> > > --------
> > >
> > > Please let me know what I should do to improve the performance of
> > > my GlusterFS setup.
> >
> >
> > What is the throughput that you get when you run these commands on the
> > disks directly without gluster in the picture?
> >
> > By running dd with dsync you are ensuring that there is no buffering
> > anywhere in the stack, and that is the reason why low throughput is
> > being observed.
>
> This is slow for the env you described. Are you sure you are using your
> 10G NICs? What do you see with iperf between the client and server? In my
> env with 12 spinning disks in a RAID 6 + single 10G NIC I get:
>
> [root@gqac025 gluster-mount]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> 4096+0 records in
> 4096+0 records out
> 268435456 bytes (268 MB) copied, 9.88752 s, 27.1 MB/s
>
> A couple things to check with your SSDs:
>
> -Scheduler {noop or deadline }
> -No read ahead!
> -No RAID!
> -Make sure the kernel sees them as SSDs
>
> As Vijay said you will see WAY better throughput if you get rid of the
> dsync flag. Maybe try something like:
>
> $ time `dd if=/dev/zero of=/gluster-mount/myfile bs=1024k count=1000; sync`
>
> That will give you an idea of what it takes to write to RAM then sync the
> dirty pages to disk.
>
> -b
>
> > -Vijay
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>