Hi all,

I am currently deploying tinc as an alternative to OpenVPN.

My setup includes a lot of nodes, and some of them sit together behind the
same router on the same network segment (e.g. connected to the same switch).

I noticed that those nodes never talk directly to each other via their
private IP addresses; instead they use the NATed address they got from the
router. Furthermore, some talk only via a third node that sits outside the
LAN.

==== Example ====

Router1:
    Public IP       1.1.1.1

Local LAN behind said router:
    Subnet          192.168.0.x/24

Tinc VPN:
    Subnet          172.25.3.0/24

Node1:
    Behind Router1
    NAT-UDP         1.1.1.1:1001
    LAN-IP          192.168.0.101
    Tinc-IP         172.25.3.101

Node2:
    Behind Router1
    NAT-UDP         1.1.1.1:1002
    LAN-IP          192.168.0.102
    Tinc-IP         172.25.3.102

Node3:
    Public IP       2.2.2.2
    Tinc-IP         172.25.3.1

Node1 connects to Node3.
Node2 connects to Node3.
Both nodes can ping Node3's tinc IP.

But the two nodes (1 & 2) do not get a direct connection; they only talk via
Node3. So pinging Node2 from Node1 results in a packet from Node1 to Node3
and from Node3 to Node2's NATed UDP port at the router. Sometimes it results
in a "direct" packet from Node1 to Node2's public UDP port.

It seems to me as if tinc is unable to see that Node1 and Node2 are sitting
"right next to each other", and only considers the publicly visible UDP port
when sending data.

Can anyone confirm this, or do I have some misunderstanding regarding tinc?

Additional information:

Every node has every other node's public key. The host configuration is
always the same:

    Port           = 1655
    IndirectData   = no
    PMTUDiscovery  = yes
    Compression    = 10

Only Node3 has an Address set. This node acts somewhat like a "server" that
all other nodes connect to. I plan to add more "server-like" nodes in the
near future that provide a fixed address.

The config file (tinc.conf) looks like this:

    Name        = NodeX
    ConnectTo   = Node3 (this line is of course missing on Node3)
    Device      = {.. Windows UUID.. }
    DeviceType  = tap
    Mode        = switch

Node addresses are assigned using a DHCP server on Node3.

I'd be happy to hear from you guys.

Best regards

Daniel Schall
On Thu, May 06, 2010 at 03:47:57PM +0200, Daniel Schall wrote:

> I am currently deploying tinc as an alternative to OpenVPN.
>
> My setup includes a lot of nodes, and some of them sit together behind the
> same router on the same network segment (e.g. connected to the same
> switch).
>
> I noticed that those nodes never talk directly to each other via their
> private IP addresses; instead they use the NATed address they got from the
> router. Furthermore, some talk only via a third node that sits outside the
> LAN.

[...]

If you only have the nodes behind the NAT ConnectTo the node with a public
IP address, they will never be able to discover that they are on the same
LAN. However, if you add a "ConnectTo = Node2" to Node1's tinc.conf, and add
"Address = 192.168.0.102" to Node1's hosts/Node2 file, then it will make a
direct connection.

-- 
Met vriendelijke groet / with kind regards,
     Guus Sliepen <guus at tinc-vpn.org>
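For what it's worth, a minimal sketch of what that suggestion would look
like on Node1, using the addresses and port from the example earlier in the
thread (illustrative only; hosts/Node2 would still contain Node2's public
key as before):

    # Node1's tinc.conf
    Name        = Node1
    ConnectTo   = Node3
    ConnectTo   = Node2
    Device      = {.. Windows UUID.. }
    DeviceType  = tap
    Mode        = switch

    # Node1's hosts/Node2 (in addition to Node2's public key)
    Address     = 192.168.0.102
    Port        = 1655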
> Node1 connects to Node3.
> Node2 connects to Node3.
>
> Both nodes can ping Node3's tinc-ip.
> But both nodes (1 & 2) do not get a direct connection, they only talk via
> Node3.

Tinc does not automatically connect nodes to each other; it only connects
where you set a "ConnectTo" line in tinc.conf. But tinc does mesh after
connecting, so all connections will be announced and packets will find
their destination automatically.

ALBI...
Thank you guys for your answers.

> If you only have the nodes behind the NAT ConnectTo the node with a public
> IP address, they will never be able to discover that they are on the same
> LAN. However, if you add a "ConnectTo = Node2" to Node1's tinc.conf, and
> add "Address = 192.168.0.102" to Node1's hosts/Node2 file, then it will
> make a direct connection.

Unfortunately, the nodes get their IP by DHCP, so a fixed address would not
help. Setting up a local node behind each router with a static IP is also
not possible, since my nodes are often on-the-go in foreign networks, where
I am unable to set up static addresses myself.

> Tinc does not automatically connect nodes to each other; it only connects
> where you set a "ConnectTo" line in tinc.conf. But tinc does mesh after
> connecting, so all connections will be announced and packets will find
> their destination automatically.

As far as I understood the documentation, "ConnectTo" is used to connect
nodes to each other in order to exchange meta-information about where each
node is located (IP:PORT). The actual "meshing" always occurs directly
between all nodes, unless this is impossible due to firewalls etc.

So shouldn't it be enough to connect all nodes to a centralized one (Node3),
where they can exchange their address details in order to connect directly
to those addresses afterwards?

I have attached a sketch of my issue. The upper half shows the physical
setup with 3 nodes, one static and two behind a router. The bottom half
shows the communication flow between Node1 and Node2. The packets follow the
line to the NAT port, get passed to the other NAT port and back to the
target node. The dotted line shows the desired flow, which would be much
shorter than going over the router.

In my opinion, tinc does not support multiple endpoints, hence Node3 saves
only the publicly visible (NATed) endpoints for Node1 and Node2. The
privately visible endpoints in the LAN are not saved and announced back.
Therefore, Node1 and Node2 never know they are on the same network.

Do you have any advice for me on how to achieve the desired behavior?

I'd suggest that each node announces its local endpoint to other nodes on
ConnectTo, and the other node saves this endpoint together with the publicly
visible one where it sees the packets coming from. That would enable each
node to select the "best" endpoint to connect to the other node. This
selection could either be algorithmic, by calculating the shortest distance
to the other endpoint, or by trying them out and selecting the one with the
lowest round trip time.

Best

Daniel

(Attachment: TINC.jpg, image/jpeg, 22744 bytes)
URL: <http://www.tinc-vpn.org/pipermail/tinc/attachments/20100507/cca3b129/attachment-0001.jpg>
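As a tiny illustration of the "lowest round trip time" selection mentioned
above, a node could probe each candidate endpoint and keep the fastest one.
This Python sketch is purely illustrative: real tinc probes would be
authenticated VPN packets rather than bare UDP datagrams, and the endpoint
list is taken from the thread's example:

    import socket, time

    def lowest_rtt_endpoint(endpoints, payload=b"probe", timeout=1.0):
        """Send a UDP probe to each candidate endpoint and return the one
        that answers fastest, or None if none answer."""
        best, best_rtt = None, None
        for host, port in endpoints:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.settimeout(timeout)
            try:
                start = time.monotonic()
                s.sendto(payload, (host, port))
                s.recvfrom(1024)              # wait for any reply from the peer
                rtt = time.monotonic() - start
                if best_rtt is None or rtt < best_rtt:
                    best, best_rtt = (host, port), rtt
            except socket.timeout:
                pass                          # endpoint unreachable from here
            finally:
                s.close()
        return best

    # Node1 choosing between Node2's LAN endpoint and its NATed endpoint:
    print(lowest_rtt_endpoint([("192.168.0.102", 1655), ("1.1.1.1", 1002)]))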
> Thanks for the diagram - what did you use to create it?

The diagram was made using Microsoft Visio 2007.

> First, what version of tinc are you using on your nodes - is it 1.0.13?

I am using tinc 1.0.13, the pre-compiled version from the website. All the
nodes are running Windows.

> A third option that might work is when doing PMTU discovery after
> exchanging session keys between Node1 and Node2 (via their meta
> connections with Node3 of course), that they also send some MTU probes to
> the broadcast address. The receiving node will update the known address of
> the peer when it receives a valid UDP packet, wherever it came from.
>
> I think the third option is easiest to implement, I don't know if it will
> work though. I'm a little busy this month so if you or someone else wants
> to try to implement it, please go ahead :)

I am curious to implement it, but I am also rather busy. Currently, I am
studying the sources to evaluate what to implement:

a) a broadcast discovery algorithm to find all nodes in the same network
   segment, OR

b) making each node send its endpoints to all other nodes, letting them
   choose which endpoint they want to use to contact the node

In the meantime, I've found out why the nodes do not communicate over their
public NAT addresses in some circumstances. It's the router that blocks UDP
packets from sources other than the one the connection was originally
established with, especially packets from "behind" the router that get
passed to a port on its public interface. That will also prevent other nodes
from contacting the ones behind the router, since the packets they send come
from endpoints other than the one the internal node connected to in the
first place.

Best

Daniel
On Thu, May 6, 2010 at 8:47 AM, Daniel Schall <Daniel-Schall at web.de> wrote:

[...]

> Additional information:
>
> Every node has every other node's public key. The host configuration is
> always the same:
>
> Port           = 1655
> IndirectData   = no

I assume you tried IndirectData both ways and it did not help. Not sure
which way it should be, but I would think you would need other tinc daemons
to make a direct connection to you even if they are not in the ConnectTo
list.

> PMTUDiscovery  = yes
> Compression    = 10
>
> Only Node3 has an Address set. This node acts somewhat like a "server"
> that all other nodes connect to. I plan to add more "server-like" nodes in
> the near future that provide a fixed address.
>
> The config file (tinc.conf) looks like this:
>
> Name        = NodeX
> ConnectTo   = Node3 (this line is of course missing on Node3)
> Device      = {.. Windows UUID.. }
> DeviceType  = tap
> Mode        = switch
>
> Node addresses are assigned using a DHCP server on Node3.

Are you saying your tinc addresses are already received via DHCP? I am
interested, please explain.
> Are you saying your tinc addresses are already received via DHCP? I am
> interested, please explain.

The internal addresses (of the tun/tap interface) get set using DHCP.

> I don't know if I totally understand what you are saying, but "disabling
> loopback" on the router or disabling wireless-to-wireless connections
> would be detrimental.

I do not have control over the router. It's a third-party network where the
nodes are just guests. Therefore, I can't change settings on the router.

Best

Daniel
Hey guys,

just wanted to let you know what I plan to implement in the near future.

To make nodes on the same LAN discover each other, we've got two options, as
already mentioned (and discussed) before:

1) Sending broadcasts that other nodes pick up, thus noticing there is
another node on the same LAN.

Although this is straightforward to implement and a valid solution to my
original issue, this approach raises another problem. We would have to agree
on a common broadcast port, which is not that easy when thinking about
multiple tinc daemons on one machine. Only one daemon would get the
broadcast, since only one daemon can listen on the port the broadcasts are
sent to. Of course one could argue that each tinc network specifies a
different broadcast port, but this would have to be a network-specific
setting, and in tinc there seems to be no room for this type of setting.
Therefore, I will stick with option 2.

2) Each node publishes its own private endpoints in the tinc meta-layer,
like it publishes information about its edges (ADD_ENDPOINT, DEL_ENDPOINT).

Other nodes will get this information and check whether they have an
endpoint on the same network (IP / netmask combination). If so, they will
try to connect to the announced endpoint. On success, they will drop the
direct connection to the public endpoint of the host they just connected to
and will use the local endpoint instead for forwarding data packets.
Otherwise, if the connection fails, they will retry later, with increasing
intervals.

Changes in endpoint availability will be sent to the meta-layer, and each
node adapts its own connections by reading this information. So if an
endpoint loses its connectivity, the public address of the node will be used
instead, as if a local endpoint had never existed at all. The
meta-connection will not be affected by this approach; only the routing for
data packets will use the local endpoints.

For now, I was thinking about a few configuration switches for each host:

LocalAddressAnnounce [yes | no]
    whether a host will announce its local endpoints or not

LocalAddressConnect [yes | no]
    whether a host will connect to local endpoints announced by others

LocalAddressPriority [yes | no]
    whether the local address should always be preferred, or only used when
    connections to the public endpoint fail

LocalAddressCheckInterval [seconds]
    how often local interfaces should be checked for changes in
    connectivity / endpoint address

The details of this approach are yet to be worked out, and I am quite busy
with other stuff. But I hope to get it implemented in a few weeks from now.
Any comments are welcome.

Best,

Daniel
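As an illustration only, a host file using the proposed switches might look
something like the following. Note that these LocalAddress* options are part
of the proposal above and do not exist in tinc 1.0.13; the values simply
reuse the host configuration quoted earlier in the thread:

    # hosts/Node1 -- proposed options, not valid in released tinc
    Port                       = 1655
    IndirectData               = no
    PMTUDiscovery              = yes
    Compression                = 10
    LocalAddressAnnounce       = yes
    LocalAddressConnect        = yes
    LocalAddressPriority       = yes
    LocalAddressCheckInterval  = 60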
Hi,

On 25.05.2010 15:55, Daniel Schall wrote:

> 2) On success, they will drop the direct connection to the public endpoint
> of the host they just connected to and will use the local endpoint instead
> for forwarding data packets.

Imagine C -- A -- B, where A is the public endpoint and B, C are in the same
subnet. Now let C and B discover that they are in the same subnet. Then they
will partition the network into B -- C, A, where B, C cannot reach A and
vice versa.

> The details of this approach are yet to be worked out, and I am quite busy
> with other stuff. But I hope to get it implemented in a few weeks from
> now. Any comments are welcome.

Regards,
 M. Braun
Hi Michael,

On Tue May 25 20:13:51 CEST 2010, Michael Braun wrote:

> > 2) On success, they will drop the direct connection to the public
> > endpoint of the host they just connected to and will use the local
> > endpoint instead for forwarding data packets.
>
> Imagine C -- A -- B, where A is the public endpoint and B, C are in the
> same subnet. Now let C and B discover that they are in the same subnet.
> Then they will partition the network into B -- C, A, where B, C cannot
> reach A and vice versa.

The "public endpoint" refers to the local UDP endpoint of each node. For
example, a node with two physical LANs connected, with IP addresses
192.168.0.10 and 10.0.0.12, which also has a NATed connection to the outside
world, would have three UDP endpoints:

192.168.0.10:655, 10.0.0.12:655 and pu.bl.ic.ip:1337 (an endpoint on the NAT
router)

Regarding your example, this would mean for the three nodes C -- A -- B,
with B and C on the same LAN, that after a short while the connection
between B and C should be established directly.

Let's make an example and annotate UDP endpoints for the nodes:

A: 1.1.1.1:123      (public)
B: 192.168.0.1:123  (LAN)   2.2.2.2:1123 (public, NATed)
C: 192.168.0.2:123  (LAN)   2.2.2.2:2123 (public, NATed)

First, the situation would look like this:

C (2.2.2.2:2123) -- (1.1.1.1:123) A (1.1.1.1:123) -- (2.2.2.2:1123) B

After C and B have detected that they are on the same LAN, they will connect
directly:

C (2.2.2.2:2123) -- (1.1.1.1:123) A (1.1.1.1:123) -- (2.2.2.2:1123) B
  ++++
C (192.168.0.2:123) -- (192.168.0.1:123) B

The connection to A is not affected by this.

Best,

Daniel
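To make the "same subnet" check described here concrete, below is a small
Python sketch of how a node could compare an endpoint announced by a peer
against its own interfaces. The function name and the interface list are
invented for the example; this is not code from tinc itself:

    import ipaddress

    def find_matching_local_endpoint(announced_ip, local_interfaces):
        """Return the first local (ip, prefixlen) whose subnet contains the
        endpoint another node announced, or None if there is no match."""
        announced = ipaddress.ip_address(announced_ip)
        for ip, prefixlen in local_interfaces:
            network = ipaddress.ip_interface(f"{ip}/{prefixlen}").network
            if announced in network:
                return ip, prefixlen
        return None

    # Example with the addresses used in this thread: Node1 checking an
    # endpoint announced by Node2.
    local_interfaces = [("192.168.0.101", 24)]   # Node1's LAN interface
    print(find_matching_local_endpoint("192.168.0.102", local_interfaces))
    # -> ('192.168.0.101', 24): same subnet, so try the LAN endpoint directly
    print(find_matching_local_endpoint("2.2.2.2", local_interfaces))
    # -> None: only the public (NATed) endpoint is usable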
> 1) Sending broadcasts that other nodes pick up, thus noticing there is
> another node on the same LAN.
>
> Although this is straightforward to implement and a valid solution to my
> original issue, this approach raises another problem. We would have to
> agree on a common broadcast port, which is not that easy when thinking
> about multiple tinc daemons on one machine. Only one daemon would get the
> broadcast, since only one daemon can listen on the port the broadcasts are
> sent to. Of course one could argue that each tinc network specifies a
> different broadcast port, but this would have to be a network-specific
> setting, and in tinc there seems to be no room for this type of setting.
> Therefore, I will stick with option 2.

Wouldn't it be easier to use mDNS messages via the Avahi library? It uses
multicast on UDP port 5353. It is really the way to go for service
discovery, which is what you are doing.

Saverio
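As a rough sketch of the mDNS idea, the following uses the python-zeroconf
library instead of Avahi's C API, just to keep the example short. The
service type "_tinc._udp.local.", the node name, the "netname" property and
the announced address are made up for illustration and are not part of tinc:

    import socket
    from zeroconf import Zeroconf, ServiceInfo, ServiceBrowser

    SERVICE_TYPE = "_tinc._udp.local."   # hypothetical service type

    class TincListener:
        """Prints tinc-like peers as they appear on the local network."""
        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            if info:
                addrs = [socket.inet_ntoa(a) for a in info.addresses]
                print(f"found {name} at {addrs} port {info.port}")

        def remove_service(self, zc, type_, name):
            print(f"{name} went away")

        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()

    # Announce this node's local endpoint (address from the thread's example).
    info = ServiceInfo(
        SERVICE_TYPE,
        "Node1." + SERVICE_TYPE,
        addresses=[socket.inet_aton("192.168.0.101")],
        port=1655,
        properties={"netname": "myvpn"},
    )
    zc.register_service(info)

    # Browse for other nodes announcing the same service type.
    browser = ServiceBrowser(zc, SERVICE_TYPE, TincListener())

    try:
        input("Press enter to exit...\n")
    finally:
        zc.unregister_service(info)
        zc.close()

Service discovery over multicast would sidestep the "common broadcast port"
problem, since all daemons share port 5353 and distinguish networks by the
announced properties instead of by port number.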