Hi,

I am getting very slow throughput on GlusterFS (dead slow... even SATA
is better), and I am using all SSDs in my environment.

I have the following setup:
A. 4x host machines with CentOS 7 (GlusterFS 3.6.2 | Distributed
   Replicated | replica=2)
B. Each server has 24 SSDs as bricks (without HW RAID | JBOD)
C. Each server has 2 additional SSDs for the OS
D. Network 2x 10G with bonding (2x E5 CPUs and 64GB RAM)

Note: Throughput is slower than a normal 7200 RPM SATA disk, even
though I am using all SSDs in my environment.

Gluster volume options:

+++++++++++++++
Options Reconfigured:
performance.nfs.write-behind-window-size: 1024MB
performance.io-thread-count: 32
performance.cache-size: 1024MB
cluster.quorum-type: auto
cluster.server-quorum-type: server
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.disable: on
user.cifs: enable
auth.allow: *
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
storage.owner-uid: 36
storage.owner-gid: 36
server.allow-insecure: on
network.ping-timeout: 0
diagnostics.brick-log-level: INFO
+++++++++++++++++++

Test with SATA and GlusterFS SSD:

Dell EQL (SATA disk, 7200 RPM)

[root@mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 20.7763 s, 12.9 MB/s
[root@mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 23.5947 s, 11.4 MB/s

GlusterFS SSD

[root@sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 66.2572 s, 4.1 MB/s
[root@sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 62.6922 s, 4.3 MB/s

Please let me know what I should do to improve the performance of my
GlusterFS volume.

Thanks,
Punit Dambiwal
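For reference, non-default options like the ones listed above are set
per volume with the gluster CLI. A minimal sketch, assuming a volume
named ssd-vol (the volume name is hypothetical; the original post does
not name it):

  # Each option in the "Options Reconfigured" list is applied with
  # "gluster volume set <VOLNAME> <OPTION> <VALUE>", for example:
  gluster volume set ssd-vol performance.io-thread-count 32
  gluster volume set ssd-vol performance.cache-size 1024MB
  gluster volume set ssd-vol network.remote-dio enable

  # "gluster volume info" then prints the resulting
  # "Options Reconfigured" section as shown in the post:
  gluster volume info ssd-vol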
On 04/08/2015 02:57 PM, Punit Dambiwal wrote:
> Hi,
>
> I am getting very slow throughput on GlusterFS (dead slow... even
> SATA is better), and I am using all SSDs in my environment.
>
> [...]
>
> Please let me know what I should do to improve the performance of my
> GlusterFS volume.

What is the throughput that you get when you run these commands on the
disks directly, without gluster in the picture? By running dd with
dsync you are ensuring that there is no buffering anywhere in the
stack, and that is the reason why low throughput is being observed.

-Vijay
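To make that check concrete: run the same test straight against a
brick filesystem, bypassing gluster entirely. A minimal sketch,
assuming a brick is mounted at /bricks/brick1 (the path is
hypothetical; substitute an actual brick path):

  # Synchronous writes on the raw brick -- isolates the SSD itself
  dd if=/dev/zero of=/bricks/brick1/ddtest bs=64k count=4k oflag=dsync

  # Same size without per-write dsync, flushed once at the end --
  # shows how much write buffering normally hides
  dd if=/dev/zero of=/bricks/brick1/ddtest bs=64k count=4k conv=fdatasync

  rm /bricks/brick1/ddtest

If the dsync number on the raw brick is also low, the SSDs (or their
handling of cache-flush requests) are the bottleneck rather than
gluster; if it is high, the gap points at the replication path, since
with replica=2 every synchronous write must also complete on a second
server across the network before dd can issue the next one.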