On Fri, Feb 22, 2013 at 02:58:09PM +0000, Mike C wrote:
> I am looking networking together about 1000-2000 sites across the
> country. I've been looking through these mailing lists. Saw the thread
> from the person who had 1000+ running on Amazon, and how they
> essentially stripped all security out of it. Also know that the
> ChaosVPN uses tinc, for at least 130+ sites although I'm a bit fuzzy
> on the details for it.
Currently ChaosVPN has 200+ registered nodes, but not all of them are always
online.
> Are there any other cases of very large VPNs? Are there any pitfalls,
> tricks or special settings required for such large numbers? I was
> thinking of simulating an environment using Amazon, but looks like
> Amazon may not be the best to use for a test bed.
Tinc requires roughly 250 bytes of memory for each node in the VPN, so with
2000 sites it will use about 500 kilobytes. This is not really an issue
unless you run it on devices with very little memory. Tinc makes connections
on demand, and uses connectionless UDP for most of them, so there is not much
overhead there, except for the hub nodes.
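As a quick sanity check, the memory figure above works out like this (a
back-of-the-envelope sketch, not tinc's actual internals):

```python
# Rough per-node memory cost quoted above (~250 bytes/node).
BYTES_PER_NODE = 250
nodes = 2000

total_kb = nodes * BYTES_PER_NODE / 1000
print(f"{total_kb:.0f} kB")  # about 500 kB for 2000 nodes
```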
The largest overhead is that tinc daemons need to exchange information with
each other. This happens when a tinc daemon just starts and connects to the
hub, from which it has to learn information about the other nodes. Also, each
time a node joins or leaves the VPN this will be broadcast to all other nodes.
Roughly 100 bytes of information needs to be exchanged per node, so in a 2000
node network a new node that connects to the hub will receive about 200
kilobytes from the hub, and 100 bytes are broadcast to all other nodes to
inform them of the newly connected node. You can estimate the load based on
how many nodes you expect to join or leave the VPN every second.
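To make that concrete, here is a sketch of the load estimate, using the ~100
bytes/node figure above; the churn rate is a hypothetical number you would
substitute with your own expectation:

```python
# Approximate per-node metadata exchanged on the wire (~100 bytes/node).
BYTES_PER_NODE_INFO = 100
nodes = 2000

# A freshly connecting node downloads info about the whole network from
# the hub: roughly 200 kB in a 2000-node network.
initial_sync_bytes = nodes * BYTES_PER_NODE_INFO

# Each join/leave broadcasts ~100 bytes to every other node.
churn_per_second = 5  # hypothetical: joins/leaves per second
broadcast_bytes_per_second = churn_per_second * nodes * BYTES_PER_NODE_INFO

print(initial_sync_bytes, broadcast_bytes_per_second)
```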
> I have read elsewhere on this list that tincd isn't multi-threaded and
> to get the most out of a multi-core server you should split the VPN
> into smaller VPNs. Is this still the case and if so, are there any
> reasons that would prevent it being made multi-threaded?
The reason it is not multi-threaded is that it would make the code more
complex, especially because it would require locking for many data structures.
Also, the CPU (assuming it is a decent one) becomes the bottleneck only if you
have a network faster than 100 Mbit/s. But indeed, if you have that, and you
have a hub-and-spoke model anyway, then you can run multiple daemons on the
hub.
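For example, the hub could run one tincd per network name, one per core; a
sketch of the layout (the netnames here are made up):

```
# /etc/tinc/spokes-a/tinc.conf
Name = hub_a

# /etc/tinc/spokes-b/tinc.conf
Name = hub_b

# Each netname gets its own daemon process:
#   tincd -n spokes-a
#   tincd -n spokes-b
```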
> In my case, the majority of the traffic will ultimately reach 1
> location/datacentre. So hub-and-spoke model. There's no need for
> meshing between sites, except for maybe between the hub(s) themselves.
> IPSec doesn't work so well, given problems with NAT (even with NAT-T)
> - which is where tinc comes in. The hub itself is unfortunately NAT'd,
> and so are most of the remote sites, so I am trying to think of
> alternative approaches. Thinking tinc could be used as an intermediary
> between the dc and the remote sites. E.g.
>
> Datacentre <--> intermediary tinc server on non-NAT public IP <--> remote sites.
>
> In fact I was thinking of running multiple intermediary tinc servers,
> to provide some form of redundancy if one failed (using the Subnet
> #weight setting).
Using multiple intermediary servers is a good idea. However, you don't have to
assign Subnets to them at all; they can just be there to help the datacentre
and remote sites punch holes through their NATs.
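A minimal sketch of such an intermediary (the node name and address are
hypothetical): it has a public Address but no Subnet lines, and the other
nodes simply ConnectTo it.

```
# hosts/relay1 (distributed to all nodes; no Subnet assigned)
Address = relay1.example.org
Port = 655

# tinc.conf on the datacentre and on each remote site
ConnectTo = relay1
```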
--
Met vriendelijke groet / with kind regards,
Guus Sliepen <guus at tinc-vpn.org>