I have been playing around with Gluster on and off for the last 6 years or
so. Most of the things that have kept me from using it have been related to
latency.

In the past I have used 10 gig InfiniBand or 10 gig Ethernet; recently the
price of 40 gig Ethernet has fallen quite a bit with vendors like Arista.

My question is: is this worth it at all for something like Gluster? The
port-to-port latency looks impressive at under 4 microseconds, but I don't
yet know what total system-to-system latency would look like assuming QSFP+
copper cables and the Linux network stack.

--
><>
Nathan Stratton                            Founder, CTO
Exario Networks, Inc.
nathan at robotics.net                     nathan at exarionetworks.com
http://www.robotics.net                    http://www.exarionetworks.com/

Building the WebRTC solutions today that your customers will demand
tomorrow.
I'm using 40G InfiniBand with IPoIB for Gluster. Here are some ping times
(from host 172.16.1.10):

[root at node0.cloud ~]# ping -c 10 172.16.1.11
PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 172.16.1.11: icmp_seq=2 ttl=64 time=0.113 ms
64 bytes from 172.16.1.11: icmp_seq=3 ttl=64 time=0.163 ms
64 bytes from 172.16.1.11: icmp_seq=4 ttl=64 time=0.125 ms
64 bytes from 172.16.1.11: icmp_seq=5 ttl=64 time=0.125 ms
64 bytes from 172.16.1.11: icmp_seq=6 ttl=64 time=0.125 ms
64 bytes from 172.16.1.11: icmp_seq=7 ttl=64 time=0.198 ms
64 bytes from 172.16.1.11: icmp_seq=8 ttl=64 time=0.171 ms
64 bytes from 172.16.1.11: icmp_seq=9 ttl=64 time=0.194 ms
64 bytes from 172.16.1.11: icmp_seq=10 ttl=64 time=0.115 ms

--- 172.16.1.11 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8999ms
rtt min/avg/max/mdev = 0.093/0.142/0.198/0.035 ms

On Fri, Jun 14, 2013 at 7:03 AM, Nathan Stratton <nathan at robotics.net> wrote:
> [...]
> My question is: is this worth it at all for something like Gluster? The
> port-to-port latency looks impressive at under 4 microseconds, but I don't
> yet know what total system-to-system latency would look like assuming QSFP+
> copper cables and the Linux network stack.
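One caveat on those numbers: ICMP ping is a kernel-to-kernel round trip and
can understate the latency a real socket sees. For a cross-check closer to
what a Gluster RPC actually pays, qperf (packaged in Fedora/EPEL) measures
application-level TCP and native IB round-trip latency; a minimal run against
the same pair of hosts would look roughly like this (addresses reused from
the example above, adjust to your setup):

  # on 172.16.1.11, just start the listener:
  qperf

  # on 172.16.1.10:
  qperf 172.16.1.11 tcp_lat tcp_bw    # TCP over the IPoIB interface
  qperf 172.16.1.11 rc_lat rc_bw      # native IB verbs (RC), for comparison

Comparing tcp_lat against rc_lat gives a feel for how much the IPoIB/TCP
stack is costing on top of the fabric itself.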
Most of the 40Gb gear is designed for East/West traffic, since that tends to
be the majority of traffic in the datacenter these days. All the big vendors
make platforms that can keep port-to-port latency across the platform in the
4-7 microsecond range.

40Gb pricing has not fallen so far that it isn't still a decent-sized
investment to do right, and as someone who keeps trying to fit Gluster into
production, I have found that other storage platforms always beat out Gluster
on top-end hardware. When 40Gb is no longer high end and 100Gb starts to take
market share, Gluster may work in some environments, but when running
top-of-the-line network and server gear, the TCO of a commercial storage
product (and the support that comes with it) always wins, at least for me.

On a side note, the native Linux drivers have not really kept up with the
40Gb cards; Linux still has issues with some 10Gb cards. If you are going
40Gb, talk to the people that license the DNA driver. They are in Paramus, NJ
and do a lot of higher-end networks with proper Linux drivers.

The media (DAC cables or OM4 MTP) doesn't seem to affect performance much,
as long as you don't push DAC longer than 3-5 meters.

Salvatore "Popsikle" Poliandro

Sent from my mobile, please excuse any typos. One day we will have mobile
devices where we don't need this footer :)

On Jun 14, 2013 10:04 AM, "Nathan Stratton" <nathan at robotics.net> wrote:
> [...]
> My question is: is this worth it at all for something like Gluster? The
> port-to-port latency looks impressive at under 4 microseconds, but I don't
> yet know what total system-to-system latency would look like assuming QSFP+
> copper cables and the Linux network stack.
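If you do end up on the stock in-kernel drivers, it's at least worth
confirming which driver and firmware the card is actually running, and
watching the drop counters under load. A quick sanity check with standard
tools (the interface name here is just an example):

  ethtool -i eth2                                  # driver, version, firmware
  ethtool -S eth2 | grep -iE 'drop|discard|err'    # NIC-level drop/error counters
  ip -s link show eth2                             # kernel-level RX/TX errors and drops

Rising drop or discard counts on an otherwise idle-looking link are usually
the first symptom of the driver problems described above.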
For whoever is interested, I've published a short Puppet module that might be
useful for anyone using LSI hardware RAID, particularly with Gluster, though
it doesn't depend on it. Ingard gets to be test user #2. [1]

I wrote a short article about this here:

https://ttboj.wordpress.com/2013/06/17/puppet-lsi-hardware-raid-module/

and the code is available here:

https://github.com/purpleidea/puppet-lsi

I haven't tested this in a little while, but it should work. Feel free to
send patches or hardware. Maybe it will be useful for someone. Primarily this
adds monitoring of your RAID, and helps install all the LSI tooling.

Cheers,
James

[1] If we don't hear from him in a while, I guess it all went wrong :P

On Mon, 2013-06-17 at 09:56 +0200, Ingard Mevåg wrote:
> James, I'd be interested in that puppet module :)
> megacli drives me nuts from time to time as well.
>
> Regards
> Ingard
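For anyone who hasn't fought with megacli before: without speaking for what
this particular module does internally, the MegaCli health checks that RAID
monitoring like this typically wraps look roughly like the following (the
binary path is the usual LSI install location, and adapter/unit numbers vary
per machine):

  # virtual drive state (Optimal vs Degraded) on all adapters
  /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aAll

  # physical drives: firmware state, media errors, predictive failures
  /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll \
      | grep -E 'Firmware state|Media Error|Predictive'

  # BBU status, since a dead battery silently disables write-back cache
  /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll

Alerting when any of those go bad covers most of what megacli gets used for
day to day on a Gluster brick server.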
On 21 Jun 2013, at 14:00, Shawn Nock <nock at nocko.se> wrote:
> I had to keep a stock of spares in-house until I migrated to 3ware (now
> LSI). I haven't had any trouble with these cards in several years (and
> haven't needed to RMA or contact support).

I've got a 3ware 9650SE-8LPML SATA RAID controller that's been a bit
troublesome. It was working fine but died on a scheduled reboot, in such a
way that the machine's BIOS wouldn't even POST! 3ware were good about
replacing it, but the replacement they sent was DOA; the second one worked
OK. I still find reboots on this machine very stressful!

And why do makers of RAID cards make it so hard to update firmware? They
persist in requiring DOS, Java or even Windows; I almost always have to
resort to some unsupported hack in order to get updates done on Linux.

Marcus
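For the archives, the usual "unsupported hack" amounts to booting FreeDOS
from a USB stick and running the vendor's DOS flasher from it. A rough
recipe, where freedos-boot.img stands in for whatever bootable image you
grab and fw/ for the vendor's flash utility plus firmware (double-check
/dev/sdX first, dd will destroy the stick's contents):

  # write a bootable FreeDOS image to the USB stick
  dd if=freedos-boot.img of=/dev/sdX bs=1M conv=fsync

  # mount the stick's first partition and copy the DOS flasher onto it
  mount /dev/sdX1 /mnt
  cp fw/* /mnt/
  umount /mnt

Then boot the server from the stick and run the flasher at the DOS prompt.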
Marcus wrote:
> And why do makers of RAID cards make it so hard to update firmware? They
> persist in requiring DOS, Java or even Windows; I almost always have to
> resort to some unsupported hack in order to get updates done on Linux.

I'm pretty sure that with the 3ware controllers (or at least most of the
newer ones from the 9xxx series) you can flash under Linux with a CLI
utility. If I remember correctly, one of our 3ware SAS controllers even had
an update button in the 3DM2 web panel.
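For anyone hunting through the archives later: the CLI utility in question
is tw_cli, and the status side of it at least is painless under Linux
(controller and unit numbers below are the usual single-card defaults;
whether your particular firmware image can be flashed from the CLI or needs
LSI's separate updater is worth checking in the release notes):

  tw_cli show              # list controllers
  tw_cli /c0 show          # units, ports and drive status on controller 0
  tw_cli /c0/u0 show       # unit 0 detail: RAID level, state, rebuild progress
  tw_cli /c0 show alarms   # controller event log / AENs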