On 04/08/2015 02:57 PM, Punit Dambiwal wrote:
> Hi,
>
> I am getting very slow throughput in glusterfs (dead slow... even
> SATA is better)... I am using all SSDs in my environment.
>
> I have the following setup:
> A. 4x host machines with CentOS 7 (GlusterFS 3.6.2 | Distributed
>    Replicated | replica=2)
> B. Each server has 24 SSDs as bricks (without HW RAID | JBOD)
> C. Each server has 2 additional SSDs for the OS
> D. Network 2x10G with bonding (2x E5 CPUs and 64GB RAM)
>
> Note: performance/throughput is slower than a normal SATA 7200 RPM
> disk, even though I am using all SSDs in my environment.
>
> Gluster volume options:
>
> +++++++++++++++
> Options Reconfigured:
> performance.nfs.write-behind-window-size: 1024MB
> performance.io-thread-count: 32
> performance.cache-size: 1024MB
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> nfs.disable: on
> user.cifs: enable
> auth.allow: *
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> storage.owner-uid: 36
> storage.owner-gid: 36
> server.allow-insecure: on
> network.ping-timeout: 0
> diagnostics.brick-log-level: INFO
> +++++++++++++++++++
>
> Test with SATA and GlusterFS SSD:
>
> Dell EQL (SATA disk 7200 RPM)
> [root@mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> 4096+0 records in
> 4096+0 records out
> 268435456 bytes (268 MB) copied, 20.7763 s, 12.9 MB/s
> [root@mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> 4096+0 records in
> 4096+0 records out
> 268435456 bytes (268 MB) copied, 23.5947 s, 11.4 MB/s
>
> GlusterFS SSD
> [root@sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> 4096+0 records in
> 4096+0 records out
> 268435456 bytes (268 MB) copied, 66.2572 s, 4.1 MB/s
> [root@sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
> 4096+0 records in
> 4096+0 records out
> 268435456 bytes (268 MB) copied, 62.6922 s, 4.3 MB/s
>
> Please let me know what I should do to improve the performance of my
> glusterfs?

What is the throughput that you get when you run these commands on the
disks directly, without gluster in the picture?

By running dd with dsync you are ensuring that there is no buffering
anywhere in the stack, and that is the reason why low throughput is
being observed.

-Vijay
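(To answer Vijay's question, a minimal sketch of the same test run
directly against a brick, bypassing gluster entirely -- /bricks/1 is the
brick mount point that appears later in this thread; adjust the path for
your layout:)

# Run on one of the storage nodes, inside a brick filesystem,
# with the same flags used in the tests above:
cd /bricks/1
dd if=/dev/zero of=ddtest bs=64k count=4k oflag=dsync
rm -f ddtest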
----- Original Message -----
> From: "Vijay Bellur" <vbellur@redhat.com>
> To: "Punit Dambiwal" <hypunit@gmail.com>, gluster-users@gluster.org
> Sent: Wednesday, April 8, 2015 6:44:42 AM
> Subject: Re: [Gluster-users] Glusterfs performance tweaks
>
> [quote of the original message and volume options trimmed]
>
> What is the throughput that you get when you run these commands on the
> disks directly, without gluster in the picture?
>
> By running dd with dsync you are ensuring that there is no buffering
> anywhere in the stack, and that is the reason why low throughput is
> being observed.

This is slow for the env you described. Are you sure you are using your
10G NICs? What do you see with iperf between the client and server?

In my env with 12 spinning disks in a RAID 6 + a single 10G NIC I get:

[root@gqac025 gluster-mount]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 9.88752 s, 27.1 MB/s

A couple of things to check with your SSDs (a sketch follows below):
- Scheduler: noop or deadline
- No read-ahead!
- No RAID!
- Make sure the kernel sees them as SSDs

As Vijay said, you will see WAY better throughput if you get rid of the
dsync flag.
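(A hedged sketch of those checks on a CentOS 7 box -- sdX and
gluster-server below are placeholders, not device or host names from
this thread:)

# Show the active I/O scheduler (the bracketed entry is current):
cat /sys/block/sdX/queue/scheduler

# Switch the SSD to noop (deadline also works):
echo noop > /sys/block/sdX/queue/scheduler

# Disable read-ahead on the device:
blockdev --setra 0 /dev/sdX

# Confirm the kernel sees the device as non-rotational (0 = SSD):
cat /sys/block/sdX/queue/rotational

# Check the 10G link: run "iperf -s" on the server, then on the client:
iperf -c gluster-server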
Maybe try something like:

$ time ( dd if=/dev/zero of=/gluster-mount/myfile bs=1024k count=1000; sync )

That will give you an idea of what it takes to write to RAM and then
sync the dirty pages to disk.

-b

> -Vijay
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
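(If you want to watch those dirty pages drain while the dd and sync run,
an illustrative aside -- this is standard /proc/meminfo, not something
from the original reply:)

# In a second terminal: Dirty grows while dd fills the page cache,
# then both counters fall toward zero as sync flushes it to disk.
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'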
Hi Vijay,

If I run the same command directly on the brick...

[root@cpu01 1]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 16.8022 s, 16.0 MB/s
[root@cpu01 1]# pwd
/bricks/1
[root@cpu01 1]#

[image: Inline image 1]

On Wed, Apr 8, 2015 at 6:44 PM, Vijay Bellur <vbellur@redhat.com> wrote:

> [quote of the earlier messages in this thread trimmed]