raul
2014-Sep-23 10:55 UTC
Using Tinc to create overlay network for VMs or LXC containers?
Hi,

I am trying to understand tinc better. I have gone through the documentation and the recent mailing lists. What I am trying to do is see whether tinc can be an alternative to using OVS with GRE tunnels to connect VMs on 2 subnets, only in this case I am using LXC containers.

LXC creates a NAT network called lxcbr0, typically on a 10.0.3.0 subnet (similar in functionality to virbr0 for KVM), that connects the containers to each other and to the internet. So take a scenario of 2 LXC hosts, Host A and Host B, across the Internet, with let's say 5 containers on each host connected to their respective lxcbr0 networks. Containers have IPs 10.0.3.2 etc. Host A and Host B have public IPs, e.g. 1.2.3.4 and 3.4.5.6. With that setup all containers use their respective lxcbr0 to connect to the internet.

Now if I add an OVS bridge to Host A and Host B and connect the 2 OVS bridges with a GRE tunnel, I can connect the containers to the OVS bridge on each side and have them on the same network. (This is a second network interface for the containers, so they have 2 IPs: one on lxcbr0 and one on the new OVS bridge.)

Is it possible to replicate the OVS bridge with tinc? The advantage of using tinc is that you don't have to set up an OVS bridge on the host, which limits dependencies on the host.

I have currently installed and set up tinc on one of the containers on each side and can get them to ping each other. As both containers are behind a NAT with private IPs, I had to port forward 655 udp/tcp on one host (it can be either Host A or Host B) to have the tinc containers connect to each other. With this setup container A on Host A can ping container B on Host B and vice versa, but this is a one-to-one connection. I could install tinc in all the containers on both sides, use 'ConnectTo' and create a mesh, which would probably work, but is there a better or more efficient way to do this?

Thanks a ton to all the users on this list! Some of this stuff can get really complex to get your head around. And thanks to Guus for a fantastic program!
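[Editor's note: to make the question concrete, here is a minimal sketch of what "replicating the OVS bridge with tinc" could look like. It runs tinc on the hosts (not in the containers) in switch mode, which makes tinc act as a layer-2 switch, and attaches tinc's tap interface to an ordinary Linux bridge that the containers' second interfaces also plug into. The network name "overlay", node names "hosta"/"hostb", and bridge name "br-tinc" are all illustrative, not from the thread.]

    # /etc/tinc/overlay/tinc.conf on Host A (names are illustrative)
    Name = hosta
    Mode = switch            # layer-2 switching, comparable to a bridge/OVS
    ConnectTo = hostb

    # /etc/tinc/overlay/tinc-up -- tincd runs this when it brings the tap up,
    # with the tap's name in the $INTERFACE environment variable
    #!/bin/sh
    ip link set "$INTERFACE" up
    ip link set "$INTERFACE" master br-tinc   # attach the tap to the bridge
                                              # the containers' veths plug into

With a setup along these lines only the two hosts run tinc; each container just gets a second veth attached to br-tinc, and the overlay behaves much like the two GRE-linked OVS bridges.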
Etienne Dechamps
2014-Sep-24 15:55 UTC
Using Tinc to create overlay network for VMs or LXC containers?
On Tue, Sep 23, 2014 at 11:55 AM, raul <raulbe at gmail.com> wrote:
> I could install tinc in all the containers on both sides, use 'ConnectTo'
> and create a mesh, which would probably work, but is there a better or
> more efficient way to do this?

I don't see why it wouldn't work. Note that you don't need a full mesh; just make sure all tinc nodes are part of the same graph and you should be good to go. tinc will automatically figure out the topology and establish direct UDP connections between nodes (using UDP hole punching to circumvent NATs if necessary), even if they don't have a direct "ConnectTo" declaration for each other. In other words, tinc will always use the most efficient route (i.e. direct UDP communication) whenever it's technically feasible over the underlying network.
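[Editor's note: a minimal sketch of the sparse topology Etienne describes, assuming one node ("hub") sits behind the host with 655 udp/tcp forwarded and every other node makes a single metaconnection to it; the node names and key placeholder are illustrative.]

    # /etc/tinc/overlay/tinc.conf on each of the other nodes
    Name = containerA1
    Mode = switch
    ConnectTo = hub          # one metaconnection is enough to join the graph

    # /etc/tinc/overlay/hosts/hub -- this host file is copied to every node
    Address = 1.2.3.4        # the public IP with port 655 forwarded
    Port = 655
    -----BEGIN RSA PUBLIC KEY-----
    ...
    -----END RSA PUBLIC KEY-----

Once a node has a metaconnection into the graph, tinc learns the other nodes' keys and addresses over it and attempts direct UDP between any pair of nodes that exchange traffic, falling back to forwarding via the hub only when hole punching fails.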
Saverio Proto
2014-Sep-24 21:29 UTC
Using Tinc to create overlay network for VMs or LXC containers?
> Is it possible to replicate the OVS bridge with tinc? The advantage of
> using tinc is that you don't have to set up an OVS bridge on the host,
> which limits dependencies on the host.

Yes, you can. Keep in mind that OVS is a solution running completely in kernel space, while tinc uses tun/tap devices, so you will see different performance.

Saverio
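[Editor's note: one quick way to quantify the kernel-space vs. tun/tap difference Saverio mentions is to run the same throughput test over both overlays. A sketch with iperf3, assuming overlay addresses 10.0.4.2 and 10.0.4.3 for two containers (addresses illustrative):]

    # on a container on Host B
    iperf3 -s

    # on a container on Host A, once over the tinc overlay and once over GRE
    iperf3 -c 10.0.4.2 -t 30

Expect tinc to show lower throughput and higher CPU usage than kernel GRE, since every packet crosses the kernel/userspace boundary twice and is encrypted by default.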