Hi Jared,
I've seen the same while testing on DigitalOcean; I think it's the
context switching that happens when sending a packet.
I've done some testing with WireGuard, which has much better
performance, but it's still changing quite a lot and only does a subset
of what tinc does, so it's probably not a stable solution.
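If you want to sanity-check the context switching theory, something
like this (assuming the sysstat package is installed) shows tincd's
context switch rate:

pidstat -w -p $(pidof tincd) 1

If cswch/s climbs roughly in step with your packets/sec, the per-packet
tun read/write round trip is the likely bottleneck.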
Martin
On Wed, 17 May 2017 at 18:05 Jared Ledvina <jared at techsmix.net> wrote:
> Hi,
>
> Terribly sorry about the duplicated message.
>
> I've completed the upgrade to Tinc 1.0.31 but have not seen much of a
> performance increase. The change looks to be similar to switching to
> aes-256-cbc w/ sha256 (which are now the defaults, so that makes
> sense).
> Our tinc.conf is reasonably simple:
> Name = $hostname_for_node
> Device = /dev/net/tun
> PingTimeout = 60
> ReplayWindow = 625
> ConnectTo = $remote_node_name_here
> ConnectTo = $remote_node2_name_here
> ConnectTo = $remote_node3_name_here
> ConnectTo = $remote_node4_name_here
> ConnectTo = $remote_node5_name_here
> ConnectTo = $remote_node6_name_here
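>
> (In case it helps anyone reproduce: Cipher and Digest are host
> configuration variables in tinc 1.0, so the aes-256-cbc/sha256
> experiments go in the per-node host files, e.g. in
> hosts/$hostname_for_node:
>
> Cipher = aes-256-cbc
> Digest = sha256
>
> rather than in tinc.conf itself.)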
>
> Sadly, I'm out of ideas on how to improve the performance here. I've
> tried to change the following sysctl settings:
> net.core.somaxconn='4096'
> net.core.rmem_max='16777216'
> net.core.wmem_max='16777216'
> as was recommended by one of our peers. That seems to have actually
> decreased performance across the board by around 2% (but that might
> just be due to fluctuations in the network; I'm not entirely sure).
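>
> (For reference, settings like those can be applied on the fly with
> e.g. sysctl -w net.core.rmem_max='16777216', or persisted via
> /etc/sysctl.conf.)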
>
> One of my iperf3 tests is between 2 c3.large AWS EC2 instances in
> us-east. I'm able to get ~556Mb/s over the internet. Going solely over
> the Tinc network, though, that drops to 158Mb/s. For instances in the
> same region (I'm testing in Ireland, Oregon, and Virginia), I'm
> averaging around 26% of the available internet bandwidth when going
> over the tinc network.
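>
> For anyone who wants to reproduce the test setup, it was roughly
> (exact flags from memory, so treat as approximate):
>
> iperf3 -s # on the receiving node
> iperf3 -c $remote_node_tinc_ip -t 30 # client, over the tinc network
> iperf3 -c $remote_node_public_ip -t 30 # client, over the internet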
>
> Is that % overhead expected or have we just poorly configured something?
>
> Totally willing to grab any numbers/stats that might assist here.
>
> Thanks again,
> Jared
>
>
> --
> Jared Ledvina
> jared at techsmix.net
>
> On Tue, May 16, 2017, at 05:08 PM, Jared Ledvina wrote:
> > Hi,
> >
> > We've been running tinc for a while now but have started hitting a
> > bottleneck where the number of packets/sec our Tinc nodes can
> > process maxes out around 4,000 packets/sec.
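> >
> > (That rate is easy to double-check by sampling the tun interface
> > counters twice, e.g. ip -s link show <tun-interface>, a second
> > apart and diffing the packet counts.)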
> >
> > Right now, we are using the default cipher and digest settings (so,
> > blowfish and sha1). I've been testing using aes-256-cbc for the
> > cipher and seeing ~5% increases across the board. Each Tinc node
> > does have AES-NI.
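> >
> > (AES-NI support can be confirmed with grep aes /proc/cpuinfo, and
> > openssl speed -evp aes-256-cbc gives a rough upper bound for what
> > the cipher alone can do on these instances.)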
> >
> > I've also read through
> > https://github.com/gsliepen/tinc/issues/110, which is very
> > interesting.
> >
> > The Tinc nodes are all c3.large AWS EC2 instances w/ EIPs, running
> > CentOS 6. I've been testing with iperf3 and am able to get around
> > 510Mb/s on the raw network. Over the tun interface/Tinc network, I'm
> > only able to max it out to around 120Mb/s.
> >
> > Anyone have any suggestions on settings or system changes that
> > might help here? I'm also curious whether upgrading to 1.0.31 would
> > help and plan on testing that tomorrow.
> >
> > Happy to provide any other information that might be useful.
> >
> > Thanks,
> > Jared