Hi everyone,

I am new to tinc and currently trying to set up a full IPv6 mesh between four servers of mine. Setting it up went smoothly, all of the tinc daemons connect properly, and routing through the network works fine as well. However, there is a large amount of management traffic, which I assume should not be the case.

Here is a quick snapshot using "tinc -n netname top", showing cumulative values:

    Tinc sites                     Nodes: 4    Sort: name    Cumulative
    Node        IN pkts      IN bytes    OUT pkts     OUT bytes
    node01     98749248   67445198848    98746920   67443404800
    node02     37877112   25869768704    37878860   25870893056
    node03     34607168   23636463616    34608260   23637114880
    node04     26262640   17937174528    26262956   17937287168

That is 67 GB for node01 in approximately 1.5 hours. Needless to say, this amount of traffic makes tinc unusable for me. I have read a few messages on this mailing list, but the only related reports I could find were about traffic spikes, likely caused by other processes using the same devices as tinc, or by the tinc daemon running for a very long time. The latter does not apply, as the entire network had only been up for an hour and a half. I do not think the former applies either, as all of my machines use freshly created tun/tap devices for this network. The problem also appeared after a fresh restart, with no other noteworthy processes running besides tinc.

Any help would be appreciated.

Kind regards
Christopher
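P.S. For reference, this is roughly how I start the daemon and how I captured the snapshot above (a sketch; the net name is "sites" and I am using the tinc 1.1 CLI):

    # start the daemon for the "sites" net
    tinc -n sites start

    # watch cumulative per-node packet/byte counters (the output shown above)
    tinc -n sites top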
Sounds like your tinc daemons are talking to each other over and over. Could you share your tinc.conf and the tinc host files? We need to see the IP address configuration you are using for tinc.

On Wed, May 1, 2019, 10:46 AM Christopher Klinge <Christ.Klinge at web.de> wrote:
> [original message quoted in full - snipped]
Hi,

I'll post just a few of my configuration files, if that is okay. Each host uses a network interface called "vpn0" and a static IPv6 address "1111:1::X". In addition, each host is responsible for a subnet "1111:1:X::/64". These examples are from node01.

/usr/local/etc/tinc/sites/tinc.conf

    Name = node01
    AddressFamily = ipv6
    Mode = switch
    Interface = vpn0

/usr/local/etc/tinc/sites/tinc-up

    #!/bin/bash
    # interface and local subnet
    ip -6 link set $INTERFACE up mtu 1280 txqueuelen 1000
    ip -6 addr add 1111:1::1 dev $INTERFACE
    ip -6 route add 1111:1::/48 dev $INTERFACE

/usr/local/etc/tinc/sites/hosts/node02

    Address = <public ipv6 of node02>
    Subnet = 1111:1:1::/64

    # RSA and Ed25519 keys

/usr/local/etc/tinc/sites/hosts/node02-up

    ip -6 route add 1111:1:2::/64 via 1111:1::2 metric 512
    ip -6 route add <public ipv6 of node02>/64 via 1111:1::2 metric 512

Kind regards
Christopher
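P.S. A quick sanity check I run after starting tinc, to confirm that tinc-up and the host-up scripts did what I expect (sketch from node01):

    # address and MTU on the tinc interface
    ip -6 addr show dev vpn0

    # routes installed by tinc-up and the hosts/nodeXX-up scripts
    ip -6 route show | grep '1111:1:'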
Hello Christopher,

On Wed, 1 May 2019 12:37:33 +0200, "Christopher Klinge" <Christ.Klinge at web.de> wrote:

> There is however a large amount of management traffic which I assume
> should not be the case.

Indeed - I have never noticed an unreasonable amount of tinc management traffic with any of my setups.

How exactly did you verify that tinc meta traffic is really the culprit? Did you compare the traffic over your uplink interface with the traffic over the tinc interface? Maybe there is just a huge amount of payload traffic being exchanged between the nodes over the tinc VPN. Since you are using "switch" mode, this could even be broadcast traffic.

Cheers,
Lars
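P.S. A quick sketch of the comparison I have in mind (assuming eth0 is your uplink interface; adjust the name as needed):

    # kernel byte/packet counters on the uplink vs. the tinc interface
    ip -s link show dev eth0
    ip -s link show dev vpn0

    # watch them grow for a while
    watch -n 5 "ip -s link show dev eth0; ip -s link show dev vpn0"

    # and, since you use switch mode, check for broadcast/multicast frames on the VPN
    tcpdump -ni vpn0 'ether multicast or ether broadcast'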
Good evening,

all of my servers were set up fresh, with no other applications running besides tinc and my SSH sessions. I just double-checked, and those are the only two processes on my machines that have active sockets. Additionally, the SSH sessions do not go through the VPN but connect directly to the machines. Does tinc provide a way to differentiate between meta and payload traffic?

Kind regards and thanks for your time,
Christopher
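P.S. In the meantime I will try to separate the two kinds of traffic on the wire myself. A sketch of what I have in mind (assumptions: the default Port = 655 is in use, eth0 is the uplink interface, and - as far as I understand tinc - meta connections run over TCP while tunnelled payload normally goes over UDP):

    # meta/control traffic between the tinc daemons
    tcpdump -ni eth0 'tcp port 655'

    # tunnelled payload traffic
    tcpdump -ni eth0 'udp port 655'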