similar to: Point-to-Point persistent connection on Tinc 1.1pre14

Displaying 20 results from an estimated 4000 matches similar to: "Point-to-Point persistent connection on Tinc 1.1pre14"

2018 Apr 24
2
Upgrading 1.1pre14 nodes to 1.1pre15 in an existing mesh
Hi, I have a Tinc cluster of about 100 nodes, and they are all running tinc 1.1pre14. I'd like to upgrade to tinc 1.1pre15. Is there a suggested mechanism to do this while keeping the cluster up? For instance, can I simply automate the installation of tinc 1.1pre15 on each node and reload the existing configuration using 'tinc reload'? Will the temporary state of having a mixed set of
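A minimal sketch of the rolling upgrade being asked about, assuming a netname of 'vpn' and a placeholder install step (both hypothetical; the thread does not confirm this procedure):

    # hypothetical rolling upgrade, one node at a time
    for node in node001 node002 node003; do
        ssh "$node" 'install-tinc-1.1pre15 && tinc -n vpn reload'   # install step is a placeholder
    done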
2017 Aug 22
3
using both ConnectTo and AutoConnect to avoid network partitions
Hi Guus, thanks for clarifying. Some follow-up questions:
- How do we patch 1.1pre14 with this fix? Or will there be a 1.1pre15 to upgrade to?
- What is the workaround until we patch with this fix? Using a combination of AutoConnect and ConnectTo?
- When we use ConnectTo, is it mandatory to have a cert file in the hosts/* dir with an IP to ConnectTo?
-nirmal
On Tue, Aug 22, 2017 at 12:10
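For reference, a ConnectTo entry is normally paired with a host config file; a minimal sketch of that shape (node name, address and paths are hypothetical, and this is not meant as the answer to the question above):

    # /etc/tinc/vpn/tinc.conf
    ConnectTo = gateway1

    # /etc/tinc/vpn/hosts/gateway1  (also contains that node's public key)
    Address = 192.0.2.10
    Port = 655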
2017 Aug 31
2
using both ConnectTo and AutoConnect to avoid network partitions
Thanks Guus, some comments and questions:
> If you make the yellow nodes ConnectTo all other nodes, and not have AutoConnect = yes, and the other nodes just have AutoConnect = yes but no ConnectTo's, then you will get the desired graph.
The reason this approach is not desirable is because it fails at automation. It requires us to add a new line of AutoConnect = <new node that
2017 Aug 22
2
using both ConnectTo and AutoConnect to avoid network partitions
Hi, today our Tinc network saw a network partition when we took one tinc node down. We knew there was a network partition since the graph showed a split. This graph is not very helpful but it's what I have at the moment: http://i.imgur.com/XP2PSWc.png
- (ignore the node labeled "ignore", since it's a dead node anyway)
- node R was shut down for maintenance
- We saw a network split
- we brought node R
2017 Aug 31
2
using both ConnectTo and AutoConnect to avoid network partitions
Hi Guus, following your suggestion we reconfigured our tinc network as follows. Here is a new graph and below is our updated configuration: http://imgur.com/a/n6ksh
- 2 Tinc nodes (yellow labels) have a public external IP and port 655 open. They both have ConnectTo's to each other and AutoConnect = yes
- The remaining tinc nodes (blue labels) have their tinc.conf set up as follows:
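A sketch of the described split, with 'gw1'/'gw2' standing in for the yellow nodes and 'node01' for a blue node (all names hypothetical; the excerpt does not show the actual files):

    # yellow node gw1 (public IP, port 655 open)
    Name = gw1
    ConnectTo = gw2
    AutoConnect = yes

    # blue node node01
    Name = node01
    AutoConnect = yes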
2017 Aug 24
1
using both ConnectTo and AutoConnect to avoid network partitions
Thanks Guus, I have one more question.
- We see several log messages that we don't currently understand.
- Can you comment on what they mean and if they are concerning?
I've obfuscated IPs and node names, so please ignore those. Our tinc daemon command is: tincd -n <vpn name>
-- Received short packet
-- Got REQ_KEY from node003 while we already started a SPTPS session!
-- Invalid
2017 Aug 31
0
using both ConnectTo and AutoConnect to avoid network partitions
On Thu, Aug 31, 2017 at 01:37:28PM -0700, Nirmal Thacker wrote:
> > If you make the yellow nodes ConnectTo all other nodes, and not have
> > AutoConnect = yes, and the other nodes just have AutoConnect = yes but
> > no ConnectTo's, then you will get the desired graph.
>
> The reason this approach is not desirable is because it fails at
> automation. It requires us to
2015 Jan 12
2
tinc connectTo cleanup
I have a use case where my tinc.conf ConnectTo can go up to 20+ hosts. I am planning to automate a periodic cleanup of ConnectTo in the tinc.conf file; the issue is that I am not able to figure out which ConnectTo entries are being used and which are stale, say not used in the last 2 to 3 days. I want to remove those ConnectTo entries which are no longer actively used. Is it possible to find which ConnectTo entries are not used?
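One thing the tinc 1.1 CLI can show is which meta-connections are currently open, which may help spot stale ConnectTo entries; a sketch (the netname 'vpn' is hypothetical, and this only reflects the current moment, not the last 2-3 days):

    # meta-connections the daemon currently has open
    tinc -n vpn dump connections

    # only the nodes that are currently reachable
    tinc -n vpn dump reachable nodes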
2017 Aug 23
0
using both ConnectTo and AutoConnect to avoid network partitions
On Tue, Aug 22, 2017 at 03:19:18PM -0700, Nirmal Thacker wrote:
> - How do we patch 1.1pre14 with this fix? Or will there be a 1.1pre15 to upgrade to?

There will be a 1.1pre15, but if you want you can apply the following commit: https://tinc-vpn.org/git/browse?p=tinc;a=commitdiff;h=92fdabc439bdb5e16f64a4bf2ed1deda54f7c544

> - What is the workaround until we patch with this fix?
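A sketch of one way to apply that single commit on top of a 1.1pre14 source tree (the mirror URL and tag name are assumptions, and the cherry-pick may need manual conflict resolution):

    git clone https://github.com/gsliepen/tinc.git
    cd tinc
    git checkout release-1.1pre14
    git cherry-pick 92fdabc439bdb5e16f64a4bf2ed1deda54f7c544
    autoreconf -fsi && ./configure && make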
2018 Dec 11
3
subnet flooded with lots of ADD_EDGE request
Hello, we're suffering from sporadic network blockage (read: unable to ping other nodes) with 1.1-pre17. Before upgrading to the 1.1-pre release, the same network blockage also manifested itself in a pure 1.0.33 network. The log shows that there are a lot of "Got ADD_EDGE from nodeX (192.168.0.1 port 655) which does not match existing entry" and it turns out that the mismatches
2017 Aug 22
0
using both ConnectTo and AutoConnect to avoid network partitions
On Mon, Aug 21, 2017 at 05:37:06PM -0700, Nirmal Thacker wrote:
> Today our Tinc network saw a network partition when we took one tinc node down.
>
> We knew there was a network partition since the graph showed a split. This graph is not very helpful but it's what I have at the moment:
>
> http://i.imgur.com/XP2PSWc.png

The graph is very clear.

> Some questions:
2015 Jan 13
2
tinc connectTo cleanup
Thanks Guus for the quick response. I am using tinc 1.1. If I use AutoConnect = yes, will it automatically remove connections that are no longer in use? What are the security issues with 'AutoConnect = yes' that I should be worried about? For my use case I might go up to 20 to 30+ tinc hosts connected to a single tinc box. As per the doc AutoConnect = yes is experimental; I am using it in our
2017 Aug 31
0
using both ConnectTo and AutoConnect to avoid network partitions
On Thu, Aug 31, 2017 at 10:40:39AM -0700, Nirmal Thacker wrote:
> Following your suggestion we reconfigured our tinc network as follows.
> Here is a new graph and below is our updated configuration:
> http://imgur.com/a/n6ksh
[...]
> We are concerned that:
> - We still don't see edges in the graph that show connections between every blue labeled node to both the yellow labeled
2014 Dec 29
2
tinc reload not establishing new connections
I have a use case where I have to add a new "ConnectTo=host" in tinc.conf and reload tinc. This is to make sure existing connections do not get disconnected. I use ... /usr/local/sbin/tinc --pidfile /var/run/tinc.vpn.pid -n vpn reload. This works for the most part; however, I am now seeing instances where I have to do a restart instead of a reload. The new connection works after a restart. Is there a
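A sketch of the add-then-reload sequence being described, using the tinc 1.1 CLI (the netname 'vpn' comes from the excerpt; the host name is hypothetical, and appending the line by editing tinc.conf directly would serve the same purpose):

    # append the new ConnectTo line, then ask the running daemon to re-read its config
    tinc -n vpn add ConnectTo newhost
    tinc --pidfile /var/run/tinc.vpn.pid -n vpn reload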
2017 Sep 02
2
[Announcement] Tinc versions 1.0.32 and 1.1pre15 released
With pleasure we announce the release of tinc versions 1.0.32 and 1.1pre15. Here is a summary of the changes in tinc 1.0.32:
* Fix segmentation fault when using Cipher = none.
* Fix Proxy = exec.
* Support PriorityInheritance for IPv6 packets.
* Fixes for Solaris tun/tap support.
* Bind outgoing TCP sockets when ListenAddress is used.
Thanks to Vittorio Gambaletta for his contribution to this
2016 Jun 21
2
Metadata flooding
Hi, we use a tinc network of about 400 nodes, all of them Linux servers, partly in different datacenters (but generally low latency). Usually this is working very well (for weeks without a problem). From time to time the whole network goes down though. This happened when we restarted a larger number of servers or when there was a connectivity issue between datacenters or some (short)
2017 Sep 13
2
purge doesn't remove dead nodes
> Maybe I should allow the reachable keyword for the dump graph command as
> well, so you can do:
>
> tincctl -n <netname> dump reachable graph
>
> ...and not see any nodes which are unreachable. Is that what you want?

This would help since dead nodes do not clutter the visual representation. What are the effects, if any, of dead nodes in the hosts/ dir? Thanks
2016 Jun 22
1
Metadata flooding
Thank you for the helpful advice. We will try to group the servers with different ConnectTo servers first. If this does not help we will look at the TunnelServer solution. Just to make sure we understand TunnelServer correctly: do you need to specify as ConnectTo every host that the host should be able to communicate with, or is it sufficient to just provide the host files? Thanks, Hendrik
2015 Jun 11
2
tinc as layer 2 switch doesn't automatically mesh with other nodes
We have a handful of nodes set up. Some are NAT'd but a few have direct access to the Internet. Sample confs:

HostA:
  Name = HostA
  AddressFamily = any
  Interface = tap0
  Mode = switch
  Connectto = HostB
  GraphDumpFile = /tmp/mesh

HostB:
  Name = HostB
  AddressFamily = any
  Interface = tap0
  Mode = switch
  Connectto = HostA
  GraphDumpFile = /tmp/mesh

And so on. If I use HostA as the main meta server.