Is there some information somewhere that explains just how exactly the "full mesh routing" works? It seems to me that if Node B is supposed to be able to send stuff directly to Node C (instead of Node A), Node B needs to know where Node C is. Does this mean that each Node has to have a complete network map? This seems to become very unwieldy with larger deployments. How far does tinc reasonably scale up, in terms of nodes? Also, what happens if some of those Nodes keep changing their IP address, e.g. because they are on laptops moving from WiFi hotspot to hotspot, and the like? (I'm new to this; I'm trying to figure out whether tinc is right for a project …) Thanks, Johannes.
On 4 June 2015 at 22:36, Johannes Ernst <johannes.ernst at gmail.com> wrote:

> It seems to me that if Node B is supposed to be able to send stuff directly to Node C (instead of Node A), Node B needs to know where Node C is. Does this mean that each Node has to have a complete network map? This seems to become very unwieldy with larger deployments.

Yes, each node knows about the entire graph and the physical address of every node.

> How far does tinc reasonably scale up, in terms of nodes?

I've not done any large-scale tinc deployment, but I would guess it should be able to handle at least 100 nodes. Maybe 1000, but that's less clear. It also depends on how much "activity" there is on the graph (e.g. nodes connecting and disconnecting), because such activity is broadcast to every single node.

> Also, what happens if some of those Nodes keep changing their IP address, e.g. because they are on laptops moving from WiFi hotspot to hotspot, and the like?

tinc should be able to handle that just fine; it is designed to be used in hostile network environments such as these. Just make sure you set aggressive timeouts so that tinc notices quickly when the network environment changes. That said, as mentioned above, such events do generate messages that are broadcast to the entire graph, which can become expensive if they happen frequently and the graph contains a large number of nodes.
This is very helpful, thanks. In my scenario, most nodes will run behind firewalls. Some nodes (but not many) will reside behind the same firewall (without port forwarding). Is there a way of configuring tinc so that all traffic gets routed via the "server" node on the public internet, except for the communication between two nodes behind the same firewall? And if so, is there a way of cutting down on the graph updates, given that they are mostly pointless in this scenario? Thanks, Johannes.
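One way to approximate this setup, sketched below with hypothetical node names (worth verifying against the tinc manual for your version): give every client a single ConnectTo pointing at the public server, enable LocalDiscovery so peers behind the same firewall can find each other directly, and set IndirectData in each client's host file so that other nodes relay traffic to it over the meta-connection (i.e. via the server) instead of attempting a direct connection.

```
# /etc/tinc/<netname>/tinc.conf on each client node
Name = nodeb               # hypothetical node name
ConnectTo = server         # sole meta-connection: the public "server" node
LocalDiscovery = yes       # detect peers on the same local network and talk to them directly

# /etc/tinc/<netname>/hosts/nodeb (this client's host file, distributed to all nodes)
IndirectData = yes         # others should not try to reach this node directly;
                           # packets are relayed over the meta-connection
```

As for suppressing graph updates, I'm not aware of a direct knob for that; the TunnelServer option limits how much graph information a node forwards and accepts, but it changes other behavior as well, so that part deserves testing before relying on it.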