Displaying 20 results from an estimated 10000 matches similar to: "Tinc routing question"
2018 Jan 15
0
Tinc routing question
On Wed, Jan 10, 2018 at 11:29:22AM +0100, cr0n wrote:
> Meta connection graph:
>
> A – B – C – D
> │           │
> └─ E ───────┘
>
> Node configuration:
>
> • StrictSubnets = yes
> • AutoConnect = yes
> • B has Forwarding = internal, all other nodes have Forwarding = off
>
> All nodes can reach each other directly with UDP *except* A and D.
> Packets
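A minimal sketch of what node B's tinc.conf could look like under the quoted description (the node names and the listed options come from the mail; everything else, including the ConnectTo lines, is assumed):

# /etc/tinc/<netname>/tinc.conf on B (sketch, not the poster's actual file)
Name = B
AutoConnect = yes
StrictSubnets = yes
Forwarding = internal   # B itself forwards packets travelling between other nodes
ConnectTo = A
ConnectTo = C

# All other nodes use the same options, but with Forwarding = off.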
2018 Oct 10
1
Tinc invite options
Dear All,
We are trying Tinc invites to let nodes join the network.
This works as described, but we also want to push some configuration for
some nodes, and that does not seem to be working.
What does work is the following invite:
Name = test_invite
NetName = test_VPN
ConnectTo = test_hub01
Ifconfig = 172.16.1.4/24
Subnet = 172.16.1.4
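For reference, the tinc 1.1 flow around such an invitation file is roughly the following (netname and node name taken from the quote, the URL is only a placeholder); the extra lines such as Ifconfig and Subnet are the part that gets pushed to the joining node:

# On the inviting node: creates the invitation file and prints its URL
tinc -n test_VPN invite test_invite

# On the joining node, using the printed URL
tinc join <server-address>/<invitation-key>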
2015 Nov 22
5
Authenticating VPN addresses: a proposal
TL;DR: a proposal for a new tinc feature that allows nodes to filter
ADD_SUBNET messages based on the metaconnection on which they are
received, so that nodes can't impersonate each other's VPN Subnets.
Similar to StrictSubnets in spirit, but way more flexible.
BACKGROUND: THE ISSUE OF TRUST IN A TINC NETWORK
In terms of metaconnections (I'm not discussing data tunnels here),
one of
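For comparison, the StrictSubnets mechanism the proposal builds on works roughly like this (node name and subnet are hypothetical):

# tinc.conf on every node
StrictSubnets = yes

# hosts/nodeX, distributed to all members out of band
Subnet = 10.0.1.0/24
# With StrictSubnets enabled, ADD_SUBNET announcements that do not match the
# Subnet lines in the local hosts/ files are ignored, so nodeX cannot claim
# additional Subnets at runtime.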
2013 Jan 24
3
Conflicting Default Values. A trusts B. B trusts EvilNode. Does that mean A trusts EvilNode?
*You should repeat this for all nodes you ConnectTo, or which ConnectTo
you. However, remember that you do not need to ConnectTo all nodes in the
VPN; it is only necessary to create one or a few meta-connections, after
the connections are made tinc will learn about all the other nodes in the
VPN, and will automatically make other connections as necessary. *
The above is from the docs. Assuming
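A concrete illustration of that paragraph, using a hypothetical three-node VPN in which only one hub is configured explicitly:

# tinc.conf on A (sketch)
Name = A
ConnectTo = B

# tinc.conf on C (sketch)
Name = C
ConnectTo = B

# B needs no ConnectTo of its own; once the A-B and C-B meta-connections are
# up, A and C learn about each other through B.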
2015 Jan 12
2
tinc connectTo cleanup
I have a use case where the ConnectTo list in my tinc.conf can grow to 20+ hosts.
I am planning to automate a periodic cleanup of ConnectTo entries in the
tinc.conf file; the issue is that I cannot figure out which ConnectTo entries
are being used and which are stale, say not used in the last 2 to 3 days.
I want to remove those ConnectTo entries which are no longer actively used.
Is it possible to find which ConnectTo entries are not used?
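Assuming tinc 1.1, the established meta-connections can at least be inspected by hand; a sketch, with the netname as a placeholder:

# Show the meta-connections that are currently established
tinc -n <netname> dump connections

# Show every node tinc knows about (reachable or not), for comparison
tinc -n <netname> dump nodes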
2015 May 04
3
Isolating a subnet on demand
On 4 May 2015 at 20:53, Anne-Gwenn Kettunen <anwen at asphodelium.eu> wrote:
> We started to take a look at that, and apparently the IP
> in the public key is taken into account when a client connects to a gateway.
> Spoofing at that level doesn't seem easy, because the IP address seems to be
> part of the authentication process.
I'm having trouble
2014 Dec 29
2
tinc reload not establishing new connections
I have a use case where I have to add a new "ConnectTo = host" line to tinc.conf
and reload tinc; the reload is to make sure existing connections do not get
disconnected.
I use ...
/usr/local/sbin/tinc --pidfile /var/run/tinc.vpn.pid -n vpn reload
This works for the most part; however, I am now seeing instances where I have to
do a restart instead of a reload. The new connection only works after a restart.
Is there a
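Assuming the tinc 1.1 CLI used above, two commands that may help narrow this down (pidfile and netname copied from the quoted command):

# Check whether the new ConnectTo shows up as an outgoing connection after the reload
/usr/local/sbin/tinc --pidfile /var/run/tinc.vpn.pid -n vpn dump connections

# Ask the running daemon to retry all outgoing connections immediately
/usr/local/sbin/tinc --pidfile /var/run/tinc.vpn.pid -n vpn retry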
2015 Jan 13
2
tinc connectTo cleanup
Thanks Guus for the quick response.
I am using tinc 1.1.
If I use AutoConnect = yes, will it automatically remove connections
that are no longer in use?
What are the security issues with 'AutoConnect = yes' that I should be worried
about?
For my use case I might go up to 20 to 30+ tinc hosts connected to a single
tinc box.
As per the docs, AutoConnect = yes is experimental; I am using it in our
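A sketch of the kind of tinc.conf this implies on one of those 20 to 30 hosts (names are hypothetical):

# tinc.conf on a leaf host (sketch)
Name = leaf01
AutoConnect = yes   # tinc 1.1 then tries to keep a few meta-connections to known nodes that have an Address
ConnectTo = hub01   # one bootstrap connection to the central box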
2013 Jan 13
1
Understanding tinc edge connections and re-routing
Hi,
I have successfully set up a tinc network between five hosts (in switch
mode). Two of the hosts have static and known IP addresses (S1 and
S2). The other hosts (H3-H5) connect to one (or both) of them. The traffic flows
nicely between all hosts.
The initial edges (ConnectTo configuration directives) in my test network
are:
S1<->S2
H3 -> S1 and S2
H4 -> S1
H5 -> S2
As far as I have
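A sketch of what the corresponding files on H3 might look like (the edges are taken from the quoted list; the hostname is hypothetical):

# tinc.conf on H3 (sketch)
Name = H3
Mode = switch
ConnectTo = S1
ConnectTo = S2

# hosts/S1 as distributed to every node (sketch); the static, known address is
# what allows the other hosts to dial in:
Address = s1.example.org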
2017 Aug 22
2
using both ConnectTo and AutoConnect to avoid network partitions
Hi
Today our Tinc network saw a network partition when we took one tinc node
down.
We knew there was a network partition since the graph showed a split. This
graph is not very helpful, but it's what I have at the moment:
http://i.imgur.com/XP2PSWc.png
- (ignore the node labeled ignore, since it's a dead node anyway)
- node R was shut down for maintenance
- We saw a network split
- we brought node R
2016 Nov 10
1
static configuration
Hello,
I am trying to create a tinc VPN for ~1000 nodes and was wondering why meta
connections are needed at all if I only need a static configuration where every
node knows the addresses of the other hosts. Due to the amount of traffic, any
indirect connections will not work, so DirectOnly = yes is a must, and then
passing around routing information is not needed, right? Currently I have 10 nodes
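A sketch of the fully static style of configuration being described (the node name is hypothetical; every node would also carry pre-generated hosts/* files with the Address, Subnet and public key of every other node):

# tinc.conf (sketch)
Name = node0001
StrictSubnets = yes   # trust only the Subnets listed in the local hosts/ files
DirectOnly = yes      # drop packets that would have to be relayed via another node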
2017 Aug 31
2
using both ConnectTo and AutoConnect to avoid network partitions
Hi Guus
Following your suggestion we reconfigured our tinc network as follows.
Here is a new graph and below is our updated configuration:
http://imgur.com/a/n6ksh
- 2 Tinc nodes (yellow labels) have a public external IP and port 655 open.
They both have ConnectTo's to each other and AutoConnect = yes
- The remaining tinc nodes (blue labels) have their tinc.conf set up as
follows:
2017 May 02
4
Multiple default gateway from tinc node
Hi, Lars
Thanks for your suggestion, will give it a try later to see how it performs.
But yesterday I did the test below:
A ConnectTo B and C, B ConnectTo D, C ConnectTo D; all nodes turned
"IndirectData" on in their host configuration, so the tunnels only follow the
meta connections instead of connecting directly.
D announced a default route by having the Subnet = 0.0.0.0/0 statement in its host
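A sketch of D's host configuration as described in that test (the address is a placeholder):

# hosts/D (sketch)
Address = d.example.net   # placeholder
IndirectData = yes        # as above: traffic follows the meta connections
Subnet = 0.0.0.0/0        # D announces a default route into the VPN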
2017 May 01
2
Multiple default gateway from tinc node
Hi, Tinc experts
If there are multiple tinc nodes announcing a default route in their host
configuration with Subnet = 0.0.0.0/0, how do the remaining nodes select the
best route to get out?
All of them participate in the same tinc net.
I did some tests, with A as the branch and B, C, D as the nodes announcing the
default route; when all are up, A selects B, but if B goes down, A will go to C;
if C goes down, A will
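One knob that influences this choice is the optional weight suffix on Subnet statements, where a lower weight is preferred; a sketch with hypothetical weights:

# hosts/B: preferred default gateway
Subnet = 0.0.0.0/0#10

# hosts/C: backup default gateway, used while B's subnet is unreachable
Subnet = 0.0.0.0/0#20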
2018 Apr 24
1
Point-to-Point persistent connection on Tinc 1.1pre14
Hi
I'd like to build a Point-to-Point connection in Tinc 1.1pre14. My question,
specifically, is how one configures the conf file to achieve this.
Here's a simplified example:
1. There are 10 clients and 2 server nodes
2. All 10 clients have a Point-to-Point connection with the 2 server nodes
3. The 2 server nodes have a Point-to-Point connection with all 10 clients.
4. In some ways this
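A sketch of the client side of such a setup (node names are hypothetical):

# tinc.conf on one of the 10 clients (sketch)
Name = client01
ConnectTo = server01
ConnectTo = server02

# The two servers need no ConnectTo of their own, only a reachable Address in
# hosts/server01 and hosts/server02 so that all clients can dial in.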
2017 Aug 31
2
using both ConnectTo and AutoConnect to avoid network partitions
Thanks Guus, some comments and questions:
> If you make the yellow nodes ConnectTo all other nodes, and not have
> AutoConnect = yes, and the other nodes just have AutoConnect = yes but
> no ConnectTo's, then you will get the desired graph.
The reason this approach is not desirable is that it fails at
automation. It requires us to add a new line of AutoConnect = <new node
that
2017 Aug 22
3
using both ConnectTo and AutoConnect to avoid network partitions
Hi Guus
Thanks for clarifying. Some follow-up questions:
- How do we patch 1.1pre14 with this fix? Or will there be a 1.1pre15 to
upgrade to?
- What is the workaround until we patch with this fix? Using a combination
of AutoConnect and ConnectTo?
- When we use ConnectTo, is it mandatory to have a cert file in the hosts/*
dir with an IP to ConnectTo?
-nirmal
On Tue, Aug 22, 2017 at 12:10
2017 Sep 12
2
purge doesn't remove dead nodes
Hi
We have several stale nodes in our tinc network and I'd like to remove
these.
These nodes show up in graph dumps as red nodes, indicating they are
unreachable.
We run: tinc -n <vpn-name> purge
Nothing happens. If we tail the logs at /var/log/syslog, we don't see an ack
or any message concerning the purge either. The dead nodes still show up in the
graphs and their certs are still
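A sketch of one commonly suggested cleanup, assuming the dead node's host config file is still present on the remaining members (<vpn-name> and deadnode are placeholders):

# Remove the dead node's host config file and then purge; repeat on every
# member, otherwise the others keep re-announcing the dead node:
rm /etc/tinc/<vpn-name>/hosts/deadnode
tinc -n <vpn-name> purge

# Verify afterwards:
tinc -n <vpn-name> dump nodes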
2018 Dec 11
3
subnet flooded with lots of ADD_EDGE request
Hello,
We're suffering from sporadic network blockage (read: unable to ping
other nodes) with 1.1-pre17. Before upgrading to the 1.1-pre release,
the same network blockage also manifested itself in a pure 1.0.33
network.
The log shows that there are a lot of "Got ADD_EDGE from nodeX
(192.168.0.1 port 655) which does not match existing entry" and it
turns out that the mismatches
2015 Jun 11
2
tinc as layer 2 switch doesn't automatically mesh with other nodes
We have a handful of nodes set up. Some are NAT'd but a few have direct
access to the Internet.
Sample confs:
HostA:
Name = HostA
AddressFamily = any
Interface = tap0
Mode = switch
ConnectTo = HostB
GraphDumpFile = /tmp/mesh
HostB:
Name = HostB
AddressFamily = any
Interface = tap0
Mode = switch
ConnectTo = HostA
GraphDumpFile = /tmp/mesh
And so on. If I use HostA as the main meta server.
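One detail that matters for automatic meshing is that the NAT'd nodes can only dial peers whose host file carries an Address; a sketch of hosts/HostA as distributed to the other nodes (the address itself is hypothetical):

# hosts/HostA (sketch)
Address = hosta.example.net   # public endpoint the NAT'd nodes can reach
Port = 655
# (followed by HostA's public key)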