Guus: I guess my primary point of confusion is that the non-VPN LAN IP
addresses are duplicated in each cluster. So within a cluster, the LAN
addresses are unique. But when you look at 2 clusters, 2 different
servers share the 10.99.0.11 address.

That is why I created a VPN for inside the cluster on the LAN interfaces
using the private 10.0.1.xx range. Then I created a separate VPN on the
WAN interfaces using publicly visible IP addresses. This VPN exists
solely to process cross-cluster traffic.

So at the end of the day, every server has a real IP on eth0, a private
IP on eth1, a tinc VPN LAN IP on 10.0.1.x, and a tinc VPN WAN IP on
10.1.x.x.

I would love to understand how to make the next jump and get a single
tincd to keep all of this working. I think the key is the ifconfig and
ip commands issued in tinc-up that allow another tunX interface to be
created and given a WAN VPN IP address. The tinc VPN LAN address was
assigned in tinc-up:

ifconfig $INTERFACE 10.0.1.11 netmask 255.255.255.0

md

On 12/15/2014 5:12 PM, md at rpzdesign.com wrote:
> Guus:
>
> Ok, I accept your challenge.
>
> But I am clueless in terms of getting the routing table correct.
>
> So each server has a dual identity, both a LAN private identity with a
> PRIVATE IP address and a WAN public identity with a PUBLIC IP address.
>
> And how do I get 2 different tun devices to show up in "ifconfig -a",
> so that a LAN IP address can be assigned to tun0 and a WAN IP address
> can be assigned to tun1?
>
> When I run 2 tincd daemons, I keep both "networks" separate.
>
> Your expert judgement is needed here to realize your statement about
> only needing a single tincd daemon.
>
> md
>
> On 12/14/2014 7:14 AM, Guus Sliepen wrote:
>> On Fri, Dec 12, 2014 at 02:21:08AM -0500, md at rpzdesign.com wrote:
>>
>>> Oops, I got it to work only after putting the WAN on port 656 so it
>>> did not interfere with port 655 for the LAN.
>>
>> You should not need to have two tinc daemons just because you have a WAN
>> and a LAN interface. By default (i.e., if you don't specify BindToAddress
>> and/or BindToInterface), tinc listens on all interfaces, and the
>> kernel should normally take care of selecting which outgoing interface
>> to use for tinc's packets.
>>
>> _______________________________________________
>> tinc-devel mailing list
>> tinc-devel at tinc-vpn.org
>> http://www.tinc-vpn.org/cgi-bin/mailman/listinfo/tinc-devel
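For illustration, a minimal single-daemon tinc.conf of the kind described in the quoted advice might look like this (the network name "netname" and the node names are assumptions taken from the examples later in the thread, not from an actual configuration):

```
# /etc/tinc/netname/tinc.conf on Server #A ("netname" and the node
# names are placeholders). No BindToAddress or BindToInterface, so
# tincd listens on all interfaces and the kernel picks the outgoing
# interface per destination.
Name = ServerA
ConnectTo = ServerB
ConnectTo = ServerD
```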
On Mon, Dec 15, 2014 at 05:29:16PM -0500, md at rpzdesign.com wrote:

> I guess my primary point of confusion is that the non-vpn LAN ip
> addresses are duplicated in each cluster. So within a cluster, the LAN
> addresses are unique.
>
> But when you look at 2 clusters, 2 different servers share the
> 10.99.0.11 address.

That should not be a problem for tinc. As long as both nodes in each
cluster connect to another node in the other cluster, all nodes will
know each other's WAN addresses and they can all talk to each other.
For the connections between two nodes in a cluster, just provide them
with their LAN address.

So on Server #A, in hosts/ServerB, you put:

Address = 145.61.252.81

And in hosts/ServerD you put:

Address = 10.99.0.12

On Server #C, in hosts/ServerB you put:

Address = 10.99.0.11 (assuming that's its LAN IP address in Data Center #2)

And in hosts/ServerD you put:

Address = 105.61.252.21

So when Server #A makes a connection to Server #D, it knows and will
use the LAN address, and when Server #C makes a connection to #D, it
uses its WAN address.

Note that you might also be able to add routes so that traffic from
105.61.252.20 to 105.61.252.21 will go via the LAN interface, for
example using this command on Server #A:

ip route add 105.61.252.21 via 10.99.0.12

That way you don't have to worry about what tinc is doing at all.

> So that is why I created a VPN for inside the cluster on the LAN
> interfaces using the private 10.0.1.xx range. Then I created a
> separate VPN on the WAN interfaces using publicly visible IP addresses.
> This VPN exists solely to process cross-cluster traffic.
>
> So at the end of the day, every server has a Real IP on eth0, a Private
> IP on eth1, and then a TINC VPN LAN IP on 10.0.1.x and a TINC VPN WAN on
> 10.1.x.x.

The question is whether you really want to have two separate VPNs with
their own network interface and address range?
I think you just want one VPN with one range, so just set up only one
tinc daemon on each node. With the above configuration tinc should
choose the right addresses for traffic between the nodes.

-- 
Met vriendelijke groet / with kind regards,
     Guus Sliepen <guus at tinc-vpn.org>
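The per-node Address entries Guus describes can be sketched as follows, a minimal illustration of Server #A's host files. The files are written under a local tinc-example/ directory purely so the sketch can run anywhere; a real setup would use /etc/tinc/&lt;netname&gt;/hosts/.

```shell
#!/bin/sh
# Sketch of Server #A's host files from the example above. ServerB is
# in the other data center, ServerD is on the same LAN; same file
# names, different Address lines depending on which node the files
# live on. Written to ./tinc-example/ for illustration only.
set -e
mkdir -p tinc-example/hosts

# ServerB sits in the other data center: use its public WAN address.
cat > tinc-example/hosts/ServerB <<'EOF'
Address = 145.61.252.81
EOF

# ServerD is on the same LAN: use its private LAN address, so the
# meta-connection inside the cluster never leaves the LAN.
cat > tinc-example/hosts/ServerD <<'EOF'
Address = 10.99.0.12
EOF
```

On Server #C the same two files would instead carry ServerB's LAN address (10.99.0.11) and ServerD's WAN address (105.61.252.21), as in the message above.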
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Guus:

Ok, I gave it a real good try today to decipher the single-daemon
setup. But there are just too many details, routing, firewalls, etc. to
get it sorted out. Plus, the setup is NOT OBVIOUS.

And there are risks of incorrect routing:

tinc -c /tinc/conf info 10.0.1.20

There might be multiple matches, like in your case with two identical
/24 Subnets; in that case, again, the first match will be used.

So I will just stay with my existing double-VPN setup because it is
intuitive and obvious: local traffic on the local LAN VPN on a given
range of IPs (routable also!), inter-cluster traffic on the
inter-cluster WAN VPN.

Hope this does not let you down, Guus!

Cheers,

md

On 12/15/2014 6:03 PM, Guus Sliepen wrote:
> On Mon, Dec 15, 2014 at 05:29:16PM -0500, md at rpzdesign.com wrote:
>
>> I guess my primary point of confusion is that the non-vpn LAN ip
>> addresses are duplicated in each cluster. So within a cluster,
>> the LAN addresses are unique.
>>
>> But when you look at 2 clusters, 2 different servers share the
>> 10.99.0.11 address.
>
> That should not be a problem for tinc. As long as both nodes in
> each cluster connect to another node in the other cluster, all
> nodes will know each other's WAN addresses and they can all talk to
> each other. For the connections between two nodes in a cluster,
> just provide them with their LAN address.
>
> So on Server #A, in hosts/ServerB, you put:
>
> Address = 145.61.252.81
>
> And in hosts/ServerD you put:
>
> Address = 10.99.0.12
>
> On Server #C, in hosts/ServerB you put:
>
> Address = 10.99.0.11 (assuming that's its LAN IP address in
> Data Center #2)
>
> And in hosts/ServerD you put:
>
> Address = 105.61.252.21
>
> So when Server #A makes a connection to Server #D, it knows and
> will use the LAN address, and when Server #C makes a connection to
> #D, it uses its WAN address.
>
> Note that you might also be able to add routes so that traffic
> from 105.61.252.20 to 105.61.252.21 will go via the LAN interface,
> for example using this command on Server #A:
>
> ip route add 105.61.252.21 via 10.99.0.12
>
> That way you don't have to worry about what tinc is doing at all.
>
>> So that is why I created a VPN for inside the cluster on the LAN
>> interfaces using the private 10.0.1.xx range. Then I created a
>> separate VPN on the WAN interfaces using publicly visible IP
>> addresses. This VPN exists solely to process cross-cluster
>> traffic.
>>
>> So at the end of the day, every server has a Real IP on eth0, a
>> Private IP on eth1, and then a TINC VPN LAN IP on 10.0.1.x and a
>> TINC VPN WAN on 10.1.x.x.
>
> The question is whether you really want to have two separate VPNs
> with their own network interface and address range? I think you
> just want one VPN with one range, so just set up only one tinc
> daemon on each node. With the above configuration tinc should
> choose the right addresses for traffic between the nodes.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (MingW32)

iQEcBAEBAgAGBQJUkiYKAAoJEPo4S5nQw5H/9ZoH/RTC9CDmdQzdQFLixboXxEJ3
RrtwNC2Y5Mu2kkIDty+PcpojzNmWaZc85iVfbUeTWN5PO8WRrXG8vVyiJY47X5nk
2vVxKW/c4vIaA+C8GhYZFDEs6dMNwX6yUOjzl8J07MXgMyc+MxuFDmRsFExAGkJU
G1xVuC71mK86zXBMFw8+Pzhu+mG58HjzInMXF6c5pwzOHVil4MGpXfnNBXPNx7MX
23hDBjFkcnbumV3qhuJLHSdHrxAalK6DRovNLJw2cewnLqU2X7H5L2ndFJWObvXz
C9zbt10RCC0cKkwijZdyYQXfzNSaHjI5/yocQJlgI/zpX4XZ44LbDIjFB2eVHT8=
=/0mW
-----END PGP SIGNATURE-----