Niklas - Thanks! Yeah, your GitHub issue was very useful for me in understanding what is probably causing our issue (the syscall chain done on every UDP packet). Very interesting that you're able to see around 90% of a gigabit line on bare metal. Were you ever able to make any further progress on adjusting Tinc based on the investigation in https://github.com/gsliepen/tinc/issues/110 ?

Martin - Yeah, I was looking into WireGuard too. However, we're pretty embedded with Tinc right now (we've been running it for a few years), so I'd love to be able to adjust our existing setup and not take such a performance hit.

- Jared

On Wed, May 17, 2017, at 10:26 AM, Niklas Hambüchen wrote:
> I once filed this issue and did an investigation on high CPU load on
> cloud instances that might be relevant to this topic:
>
> https://github.com/gsliepen/tinc/issues/110
>
> If I remember correctly I found that AWS EC2 instances have this problem
> less than DigitalOcean instances.
>
> On bare metal machines with tinc 1.0 and aes-128-cbc, I can get 90% of
> gigabit line speed over tinc.
>
> On 17/05/17 19:17, Martin Eskdale Moen wrote:
> > Hi Jared,
> > I've seen the same while testing on DigitalOcean; I think it's the
> > context switching that happens when sending a packet.
> > I've done some testing with WireGuard and that has a lot better
> > performance, but it's still changing quite a lot and only does a subset
> > of what tinc does, so probably not a stable solution.
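For anyone wanting to reproduce the bare-metal numbers above: in tinc 1.0.x the cipher is chosen per peer in that peer's host configuration file rather than in tinc.conf. A minimal sketch; the netname, node name, and addresses below are purely illustrative:

    # /etc/tinc/vpn0/hosts/node01   (netname "vpn0" and node name are placeholders)
    Address = 203.0.113.10        # public address other nodes connect to
    Subnet  = 10.10.0.1/32        # VPN address owned by this node
    Cipher  = aes-128-cbc         # cipher used for UDP packets sent to this node
    Digest  = sha1                # tinc 1.0's default digest, shown for completeness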
On 17/05/17 21:50, Jared Ledvina wrote:
> Were you ever able to make any further
> progress on adjusting Tinc based on the investigation in
> https://github.com/gsliepen/tinc/issues/110 ?

Hi Jared,

No, not yet.

I list a few ways for potential improvements in the ticket, but the one that I suspect would do most on the type of virtualisation that DigitalOcean does is to add a feature to the Linux kernel to send the data for multiple UDP packets in one syscall, as mentioned in comment https://github.com/gsliepen/tinc/issues/110#issuecomment-201949838.

In the last message of that kernel code review, Alex Gartrell says "Sounds good to me. I'll get a patch turned around soon." I don't know if they ever got around to it. It might be worth shooting them an email to ask! It would be great to have that feature.

For me personally the issue became less important when I realised that the syscall overhead is more specific to the DigitalOcean virtualisation and less prominent with the virtualisation that AWS uses. I also currently use mainly bare metal, so this issue affects me even less. I would still love to see it fixed, or to fix it myself (if I need it, or I have some free time, or if I find somebody who wants me to do it for them).

Niklas
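For reference, the general mechanism for handing several UDP datagrams to the kernel in a single syscall already exists as sendmmsg(2); whether that kernel review was about extending it further, I can't say. A minimal illustrative sketch of the batching idea (not tinc code; the peer address and port below are placeholders):

    /* Illustrative only: send BATCH UDP datagrams with one syscall via sendmmsg(2),
     * instead of one sendto() syscall per packet. */
    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <stdio.h>

    #define BATCH 8

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) return 1;

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(655);                     /* tinc's default UDP port */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder peer address */

        char payload[BATCH][64];
        struct iovec iov[BATCH];
        struct mmsghdr msgs[BATCH];
        memset(msgs, 0, sizeof(msgs));

        for (int i = 0; i < BATCH; i++) {
            snprintf(payload[i], sizeof(payload[i]), "datagram %d", i);
            iov[i].iov_base             = payload[i];
            iov[i].iov_len              = strlen(payload[i]);
            msgs[i].msg_hdr.msg_iov     = &iov[i];
            msgs[i].msg_hdr.msg_iovlen  = 1;
            msgs[i].msg_hdr.msg_name    = &dst;
            msgs[i].msg_hdr.msg_namelen = sizeof(dst);
        }

        /* One syscall pushes all BATCH datagrams to the kernel. */
        int sent = sendmmsg(fd, msgs, BATCH, 0);
        printf("sent %d datagrams in one syscall\n", sent);
        return 0;
    }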
Niklas - Okay cool, totally understandable. Thanks for all the insight; I'll take a look into that patch and see if we can revive the effort.

I did another round of perf testing with ProcessPriority set to high in tinc.conf. With that, we're getting an average of +3% across our Tinc nodes. That's still under 50% of the raw network bandwidth over Tinc, which was my original goal.

If anyone else has ideas on settings I should try to tweak, I'm more than happy to give it a shot and retest our performance.

Thanks again,
Jared

On Wed, May 17, 2017, at 03:10 PM, Niklas Hambüchen wrote:
> On 17/05/17 21:50, Jared Ledvina wrote:
> > Were you ever able to make any further
> > progress on adjusting Tinc based on the investigation in
> > https://github.com/gsliepen/tinc/issues/110 ?
>
> Hi Jared,
>
> No, not yet.
>
> I list a few ways for potential improvements in the ticket, but the one
> that I suspect would do most on the type of virtualisation that
> DigitalOcean does is to add a feature to the Linux kernel to send the
> data for multiple UDP packets in one syscall, as mentioned in comment
> https://github.com/gsliepen/tinc/issues/110#issuecomment-201949838.
>
> In the last message of that kernel code review, Alex Gartrell says
> "Sounds good to me. I'll get a patch turned around soon." I don't know
> if they ever got around to it. It might be worth shooting them an email
> to ask! It would be great to have that feature.
>
> For me personally the issue became less important when I realised that
> the syscall overhead is more specific to the DigitalOcean virtualisation
> and less prominent with the virtualisation that AWS uses.
> I also currently use mainly bare metal, so this issue affects me even
> less. I would still love to see it fixed, or to fix it myself (if I need
> it, or I have some free time, or if I find somebody who wants me to do
> it for them).
>
> Niklas
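For anyone who wants to try the same tweak: ProcessPriority is a server option set in tinc.conf (accepted values are low, normal, and high). A minimal sketch, assuming the usual /etc/tinc/<netname>/ layout; the netname and node name are illustrative:

    # /etc/tinc/vpn0/tinc.conf   (netname "vpn0" and Name are placeholders)
    Name = node01
    ProcessPriority = high    # run tincd at a higher scheduling priority
                              # (accepted values: low, normal, high)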
I noticed a large performance boost, both on bare metal and in VPS instances, by turning on kernel routing in the tinc config, and using full host declarations for routes rather than dumping things to the tun interface ambiguously.

"Forwarding = kernel"

ip route add 1.2.3.4 via 4.3.2.1 dev tun

instead of

ip route add 1.2.3.4 dev tun

On May 17, 2017 3:10 PM, "Niklas Hambüchen" <mail at nh2.me> wrote:
> On 17/05/17 21:50, Jared Ledvina wrote:
> > Were you ever able to make any further
> > progress on adjusting Tinc based on the investigation in
> > https://github.com/gsliepen/tinc/issues/110 ?
>
> Hi Jared,
>
> No, not yet.
>
> I list a few ways for potential improvements in the ticket, but the one
> that I suspect would do most on the type of virtualisation that
> DigitalOcean does is to add a feature to the Linux kernel to send the
> data for multiple UDP packets in one syscall, as mentioned in comment
> https://github.com/gsliepen/tinc/issues/110#issuecomment-201949838.
>
> In the last message of that kernel code review, Alex Gartrell says
> "Sounds good to me. I'll get a patch turned around soon." I don't know
> if they ever got around to it. It might be worth shooting them an email
> to ask! It would be great to have that feature.
>
> For me personally the issue became less important when I realised that
> the syscall overhead is more specific to the DigitalOcean virtualisation
> and less prominent with the virtualisation that AWS uses.
> I also currently use mainly bare metal, so this issue affects me even
> less. I would still love to see it fixed, or to fix it myself (if I need
> it, or I have some free time, or if I find somebody who wants me to do
> it for them).
>
> Niklas
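Putting those two pieces together, a hedged sketch of what such a setup might look like; the netname, node names, and all addresses are placeholders, and this is an illustration of the idea above rather than a verified configuration:

    # /etc/tinc/vpn0/tinc.conf   (netname "vpn0" and Name are placeholders)
    Name = node01
    Forwarding = kernel        # let the kernel forward packets between peers
                               # instead of tincd forwarding them internally

    # /etc/tinc/vpn0/tinc-up   (script tincd runs when the interface comes up;
    #                           tincd sets $INTERFACE for it)
    #!/bin/sh
    ip link set "$INTERFACE" up
    ip addr add 10.10.0.1/24 dev "$INTERFACE"
    # explicit per-host route with a "via" next hop, as suggested above,
    # instead of an ambiguous bare "dev" route:
    ip route add 192.0.2.10/32 via 10.10.0.2 dev "$INTERFACE"
    # rather than:
    # ip route add 192.0.2.10/32 dev "$INTERFACE"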