kolesnikov at infonetwork.ru
2011-Feb-22 01:00 UTC
Direct connections between nodes in the same LAN (behind a common NAT)
Hi,

I'm trying to implement a scheme in which the nodes have a direct UDP tunnel to each other. First, all nodes connect to one public node, and then they make connections with each other.

I've run into the following problem: remotely located nodes can establish a direct UDP connection, but nodes that are in the same local network cannot, and all their traffic goes through the public node. In the log files I see that the nodes cannot agree on an MTU:

1298030480 tinc.vpn[4056]: No response to MTU probes from client_01

I understand this to mean that the local nodes cannot receive each other's MTU probes, although they do receive these probes from the remote nodes without problems.

Can you please tell me how to solve this problem?

Additional information:

I have 4 nodes:
1) VPNGATE - public node; all the other nodes connect to it.
2) CLIENT_01, CLIENT_02 - nodes located in the same LAN.
3) CLIENT_03 - remotely located node.

=== VPNGATE ===
tinc/vpn/hosts/vpngate
tinc/vpn/hosts/client_01
tinc/vpn/hosts/client_02
tinc/vpn/hosts/client_03

... tinc.conf:
AddressFamily = ipv4
BindToAddress = x.x.x.x (public IP address)
BindToInterface = eth0
Name = vpngate
Device = /dev/net/tun
PrivateKeyFile = /etc/tinc/vpn/rsa_key.priv
Mode = switch

=== CLIENT_0X ===
tinc/vpn/hosts/vpngate
tinc/vpn/hosts/client_0X

... tinc.conf:
AddressFamily = ipv4
Name = client_0X
ConnectTo = vpngate
Interface = tinc.vpn
PrivateKeyFile = C:\Program Files\tinc\vpn\rsa_key.priv
Mode = switch

=== HOST FILES ===
VPNGATE:
Compression = 9
Address = x.x.x.x (public IP address)
Subnet = 192.168.10.0/24
Port = 655
-----BEGIN RSA PUBLIC KEY-----

CLIENT_0X:
Compression = 9
Subnet = 192.168.10.X/32
-----BEGIN RSA PUBLIC KEY-----

And when I have full connectivity, ping times are:

ping CLIENT_01 ---> VPNGATE = 150 ms
ping CLIENT_01 ---> CLIENT_03 = 15 ms
ping CLIENT_01 ---> CLIENT_02 = 300 ms

Best regards,
Dmitry Kolesnikov
Donald Pearson
2011-Feb-22 04:48 UTC
Direct connections between nodes in the same LAN (behind a common NAT)
I think this is what "indirectdata = yes" is used for in the host files?

On Mon, Feb 21, 2011 at 8:00 PM, <kolesnikov at infonetwork.ru> wrote:

> Remotely located nodes can establish a direct UDP connection, but the nodes
> that are in the same local network can not, and all traffic goes through
> the public node.
> [...]
> Tell me please, how can I solve this problem?
Guus Sliepen
2011-Feb-22 07:23 UTC
Direct connections between nodes in the same LAN (behind a common NAT)
On Tue, Feb 22, 2011 at 04:00:00AM +0300, kolesnikov at infonetwork.ru wrote:

> Remotely located nodes can establish a direct UDP connection, but the nodes
> that are in the same local network can not, and all traffic goes through
> the public node.
> [...]
> Tell me please, how can I solve this problem?

The easiest way is to add "ConnectTo = client_02" to client_01's tinc.conf,
and "Address = <LAN IP address>" to client_01's hosts/client_02.

The problem with your setup is that since client_01 and client_02 are behind
a NAT, and both only connect to vpngate, they will never learn each other's
LAN IP addresses, only the public address they got from the NAT device.

Daniel Schall is working on a patch to have tinc daemons on the same LAN
autodetect each other, so in the future tinc may solve this problem
automatically.

--
Met vriendelijke groet / with kind regards,
Guus Sliepen <guus at tinc-vpn.org>
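For concreteness, a minimal sketch of the change Guus suggests, applied to
client_01's configuration. The LAN address 192.168.1.102 is a placeholder
for client_02's real address on the local network:

# client_01's tinc.conf: keep the meta-connection to vpngate and
# also connect directly to the LAN peer
AddressFamily = ipv4
Name = client_01
ConnectTo = vpngate
ConnectTo = client_02
Interface = tinc.vpn
Mode = switch

# client_01's hosts/client_02: add the peer's LAN address so tinc
# knows where to reach it directly (192.168.1.102 is a placeholder)
Address = 192.168.1.102
Compression = 9
Subnet = 192.168.10.2/32

With the extra ConnectTo, client_01 opens a meta-connection straight to
client_02's LAN address, and the two nodes can then negotiate a direct UDP
tunnel between themselves instead of relaying through vpngate.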
Guus Sliepen
2011-Feb-22 07:25 UTC
Direct connections between nodes in the same LAN (behind a common NAT)
On Mon, Feb 21, 2011 at 11:48:52PM -0500, Donald Pearson wrote:

> I think this is what "indirectdata = yes" is used for in the host files?

No, setting that option will actually make sure client_01 and client_02 never
talk to each other directly.

--
Met vriendelijke groet / with kind regards,
Guus Sliepen <guus at tinc-vpn.org>
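For reference, the option Donald mentions goes in a host file and, per Guus's
answer, does the opposite of what is wanted here:

# hosts/client_02: tells other nodes not to contact this node
# directly, so all its traffic stays on the meta-connections
IndirectData = yes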
Donald Pearson
2011-Feb-22 13:50 UTC
Direct connections between nodes in the same LAN (behind a common NAT)
Ah, thanks.
ZioPRoTo (Saverio Proto)
2011-Feb-23 15:52 UTC
Direct connections between nodes in the same LAN (behind a common NAT)
> Daniel Schall is working on a patch to have tinc daemons on the same LAN
> autodetect each other, so in the future tinc may solve this problem
> automatically.

Please consider using multicast packets for discovery instead of broadcast.
With multicast, tincd will be able to discover other tinc daemons across the
subnets of the multicast domain. This is very common in wireless communities.

Moreover, by using D-Bus and libavahi you can reuse existing service
discovery protocols and solutions without recoding everything from scratch.

Saverio
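To illustrate the mechanism Saverio describes, here is a minimal sketch of
IPv4 multicast peer discovery in C. This is not tinc code: the group address
239.255.70.66, the port 6556, and the announcement payload are all invented
for the example, and a real implementation would have to authenticate
announcements before trusting them.

/* Minimal sketch of LAN peer discovery over IPv4 multicast (C99).
 * Group, port, and payload are placeholders, not anything tinc uses. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define DISCOVERY_GROUP "239.255.70.66"  /* placeholder group, local scope */
#define DISCOVERY_PORT  6556             /* placeholder port */

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Let several daemons on one host share the discovery port. */
    int reuse = 1;
    setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof reuse);

    /* Bind to the discovery port and join the multicast group, so we
     * hear announcements from other daemons in the multicast domain. */
    struct sockaddr_in local = { .sin_family = AF_INET,
                                 .sin_port = htons(DISCOVERY_PORT),
                                 .sin_addr.s_addr = htonl(INADDR_ANY) };
    if (bind(sock, (struct sockaddr *)&local, sizeof local) < 0) {
        perror("bind"); return 1;
    }
    struct ip_mreq mreq = { 0 };
    mreq.imr_multiaddr.s_addr = inet_addr(DISCOVERY_GROUP);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    /* Announce ourselves: node name plus the UDP port we tunnel on. */
    const char announce[] = "tinc-discover client_01 655";
    struct sockaddr_in group = { .sin_family = AF_INET,
                                 .sin_port = htons(DISCOVERY_PORT),
                                 .sin_addr.s_addr = inet_addr(DISCOVERY_GROUP) };
    sendto(sock, announce, sizeof announce - 1, 0,
           (struct sockaddr *)&group, sizeof group);

    /* Wait for one announcement. The sender's source address is the LAN
     * address a direct connection could be attempted to. Multicast
     * loopback is on by default, so a real implementation must also
     * recognize and skip its own announcements. */
    char buf[512];
    struct sockaddr_in peer;
    socklen_t peerlen = sizeof peer;
    ssize_t n = recvfrom(sock, buf, sizeof buf - 1, 0,
                         (struct sockaddr *)&peer, &peerlen);
    if (n >= 0) {
        buf[n] = '\0';
        printf("peer %s announced: %s\n", inet_ntoa(peer.sin_addr), buf);
    }
    close(sock);
    return 0;
}

Avahi would hide most of this socket plumbing behind its own announce and
browse API, which is the point of Saverio's D-Bus/libavahi suggestion.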