Hi,

I ran some short tests with GlusterFS and bonding, but I am seeing performance issues.

Environment:

- bonding mode=4 (with switch support) or mode=6
- CentOS 7
- VLANs
- two servers with 4 NICs per node: one NIC on the internet (this is the default route) and 3 NICs in a bonded interface
- MTU 9000 on all interfaces (bonds, VLANs, eths, etc.), MTU 9216 on the switch ports
- each host's VLANs can ping every other host on both the VLAN and non-VLAN subnets
- the volume uses the bonded VLANs as bricks

[root at node1 lock]# gluster vol info

Volume Name: meta
Type: Replicate
Volume ID: f4d026e7-3edd-442f-9207-f0a849acebf5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gs00.itsmart.cloud:/gluster/meta0
Brick2: gs01.itsmart.cloud:/gluster/meta1

I ran this test:

[root at node0 lock]# dd if=/dev/zero of=/mnt/lock/disk bs=1M count=1000 conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB) copied, 10,3035 s, 102 MB/s

I compared it with a local HDD speed test:

[root at node0 lock]# dd if=/dev/zero of=/home/disk bs=1M count=1000 conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1,0 GB) copied, 3,04411 s, 344 MB/s

I understand this is a network-based solution, but 100 MB/s should be possible with a single NIC too. I'm wondering whether my bonding is working correctly. What do you think, is this normal? Port utilization is minimal; only two ports show any significant traffic.

Thanks in advance.

Tibor
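On the "maybe my bonding isn't working fine" question: on CentOS 7 the kernel exposes bond state under /proc/net/bonding/. A small helper along these lines can count how many links report as up (the function name and the bond0 interface name are illustrative, not from the thread):

```shell
# count_links_up: count "MII Status: up" lines in a bonding status dump.
# The file contains one such line for the bond device itself plus one
# per slave, so a healthy bond with 3 slaves should report 4.
count_links_up() {
    grep -c 'MII Status: up' "$1"
}

# On a live system (bond0 is an assumed interface name):
#   count_links_up /proc/net/bonding/bond0
# For mode=4, the same file should also contain the line
# "Bonding Mode: IEEE 802.3ad Dynamic link aggregation".
```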
Adding more physical interfaces to a bonded NIC won't get you more speed for single-stream operations. To get more performance out of your bonded NIC, you need to run multiple instances of the "dd" command.

A snippet from a good reference article:

----------------------------------------------------------
Most administrators assume that bonding multiple network cards together instantly results in double the bandwidth and high-availability in case a link goes down. Unfortunately, this is not true.
----------------------------------------------------------
http://www.enterprisenetworkingplanet.com/linux_unix/article.php/3850636/Understanding-NIC-Bonding-with-Linux.htm

-Ron

On Mon, Sep 29, 2014 at 9:54 AM, Demeter Tibor <tdemeter at itsmart.hu> wrote:
> Hi,
>
> I made short tests with glusterfs and bonding, but I have performance
> issues. [...]

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
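Ron's suggestion can be sketched as a small script. Everything here is illustrative (the function name, the 100 MB per-stream size); point the directory argument at the GlusterFS mount from the thread (/mnt/lock) when trying it:

```shell
# parallel_write DIR N: launch N concurrent dd writers into DIR so the
# bond's transmit hash sees several distinct flows instead of one.
parallel_write() {
    dir=$1
    streams=$2
    i=1
    while [ "$i" -le "$streams" ]; do
        dd if=/dev/zero of="$dir/disk$i" bs=1M count=100 conv=fdatasync &
        i=$((i + 1))
    done
    wait    # block until every stream has finished its fdatasync
}

# Example against the mount from the thread:
#   parallel_write /mnt/lock 4
```

Summing the per-stream rates that dd prints gives the aggregate. With mode=4 and a layer3+4 transmit hash the streams can land on different slave NICs, though hashing gives no guarantee that every flow gets its own link.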
> Ok, I mean this is a network based solution, but I think the 100MB/sec is
> possible with one nic too.
> I just wondering, maybe my bonding isn't working fine.

You should test with multiple clients/dd streams.

http://serverfault.com/questions/569060/link-aggregation-lacp-802-3ad-max-throughput/

rr
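The linked answer's point can be put in rough numbers (assuming 1 GbE slaves, which the thread does not state explicitly): the transmit hash pins a single TCP flow to one slave, so one dd tops out near a single link's rate no matter how many NICs are in the bond:

```shell
# Back-of-envelope ceilings, ignoring TCP/IP overhead (real-world
# single-link throughput on gigabit is usually ~112-118 MB/s).
link_mbit=1000                        # one assumed GbE slave
one_flow=$((link_mbit / 8))           # MB/s ceiling for a single stream
three_flows=$((3 * link_mbit / 8))    # MB/s ceiling across 3 slaves
echo "single-stream ceiling: ${one_flow} MB/s"
echo "three-stream ceiling:  ${three_flows} MB/s"
```

So the ~102 MB/s single-dd result in the thread is roughly what a correctly working 3x1GbE bond should show for one stream. Note too that on a Replicate volume the FUSE client writes to both bricks, which further divides the client's available bandwidth.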
dd is a single-threaded operation. To test GlusterFS (or any clustered file system) to its fullest, you need a (highly) multi-threaded test.

-Dan

----------------
Dan Mons
Unbreaker of broken things
Cutting Edge
http://cuttingedge.com.au

On 29 September 2014 23:54, Demeter Tibor <tdemeter at itsmart.hu> wrote:
> Hi,
>
> I made short tests with glusterfs and bonding, but I have performance
> issues. [...]