Hi Guus,

On Friday, 25.09.2015 at 09:36 +0200, Guus Sliepen wrote:
> On Fri, Sep 25, 2015 at 08:41:06AM +0200, Marcus Schopen wrote:
>
> > I'm running some tinc clients behind a NAT (masquerading, Cisco router),
> > connecting to a host outside on a public IP in a different network. The
> > tunnels become unstable every few minutes, and I see packet loss when
> > pinging the clients on their internal tunnel IPs from the host side.
> > Before putting the tinc clients behind the NAT, they were also running
> > on public IPs (clients and host in different networks) and the tunnels
> > were rock solid without any problems. As a workaround(?) I added
> > "TCPOnly = yes" [1] to the host's config file, and since then all
> > tunnels seem to be stable again, but I can't explain this to myself,
> > as the NAT should handle UDP connections. Any ideas?
>
> Maybe the timeout for UDP NAT mappings is a bit short on your Cisco. Try
> adding PingInterval = 30 to the tinc.conf on those clients, perhaps that
> will help.

Thanks for pointing me in the right direction. I disabled "TCPOnly =
yes" on the host and started with "PingInterval = 30" on each client
behind the NAT. The tunnels were still unstable from the host side until
I reduced PingInterval to 10 seconds, which seems to work fine for the
moment. I checked the manual of the Cisco NAT for any TCP/UDP timeout
settings, but there is no way to modify anything like "keep TCP/UDP
connections alive".

So should I keep this UDP configuration, or would you go back to
TCPOnly?

Another thing that has come up since the clients (all in the same
subnet) have been running behind the NAT: the traffic between the
clients now runs through the host instead of locally/directly, which
means higher latency and more outgoing traffic. I don't see any blocked
packets in the clients' firewalls. Is there a way to let them talk
directly again?

Ciao
Marcus
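For reference, the two settings discussed above go into tinc.conf (a
minimal sketch; the /etc/tinc/mytunnel path follows the layout mentioned
later in the thread, and only PingInterval and TCPOnly come from the
discussion itself):

---------------
# /etc/tinc/mytunnel/tinc.conf on a client behind the NAT:
# send keepalive pings often enough that the router does not
# expire the UDP NAT mapping between packets
PingInterval = 10

# /etc/tinc/mytunnel/tinc.conf on the host (the earlier workaround,
# now disabled): carry all data over the TCP meta-connection instead
# of UDP, sidestepping the UDP mapping timeout entirely
TCPOnly = yes
---------------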
On Fri, Sep 25, 2015 at 04:51:22PM +0200, Marcus Schopen wrote:

> > Maybe the timeout for UDP NAT mappings is a bit short on your Cisco. Try
> > adding PingInterval = 30 to the tinc.conf on those clients, perhaps that
> > will help.
>
> Thanks for pointing me in the right direction. I disabled "TCPOnly =
> yes" on the host and started with "PingInterval = 30" on each client
> behind the NAT. The tunnels were still unstable from the host side until
> I reduced PingInterval to 10 seconds, which seems to work fine for the
> moment.

OK, that means the UDP NAT timeout on the Cisco is extremely short by
default.

> I checked the manual of the Cisco NAT for any TCP/UDP timeout
> settings, but there is no way to modify anything like "keep TCP/UDP
> connections alive".

It wouldn't be called something like that, rather "NAT translation
timeout" or something similar.

> So should I keep this UDP configuration, or would you go back to
> TCPOnly?

I'd keep the UDP setting. It does generate more background traffic,
though; if you have to pay for bandwidth, you could consider going back
to TCPOnly.

> Another thing that has come up since the clients (all in the same
> subnet) have been running behind the NAT: the traffic between the
> clients now runs through the host instead of locally/directly, which
> means higher latency and more outgoing traffic. I don't see any blocked
> packets in the clients' firewalls. Is there a way to let them talk
> directly again?

This is probably because the Cisco doesn't support hairpin routing. Add
LocalDiscovery = yes to the tinc.conf on the clients; that way they can
detect each other's LAN address and exchange traffic directly again.

-- 
Met vriendelijke groet / with kind regards,
     Guus Sliepen <guus at tinc-vpn.org>
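For what it's worth, on an IOS-based Cisco router the knob Guus
describes would typically be the NAT translation timeout (a sketch
assuming classic IOS; the exact router model and syntax were not given
in the thread, and other Cisco platforms name this differently):

---------------
! Inspect current NAT mappings and their remaining timers:
show ip nat translations verbose

! Raise the UDP translation timeout to 120 seconds:
configure terminal
 ip nat translation udp-timeout 120
end
---------------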
Hi Guus,

On Friday, 25.09.2015 at 17:04 +0200, Guus Sliepen wrote:
> OK, that means the UDP NAT timeout on the Cisco is extremely short by
> default.
>
> > I checked the manual of the Cisco NAT for any TCP/UDP timeout
> > settings, but there is no way to modify anything like "keep TCP/UDP
> > connections alive".
>
> It wouldn't be called something like that, rather "NAT translation
> timeout" or something similar.

Shame on me. Deep in the configuration of the NAT I found that the UDP
timeout is set to 30 seconds by default. I increased the value to 120
seconds and disabled PingInterval completely on the clients behind the
NAT. The tunnels became unstable again. Then I put "PingInterval = 30"
back into the clients' configs (before, it was set to 10 seconds), and
this seems to work.

> > So should I keep this UDP configuration, or would you go back to
> > TCPOnly?
>
> I'd keep the UDP setting. It does generate more background traffic,
> though; if you have to pay for bandwidth, you could consider going back
> to TCPOnly.

Good. The current setup is "PingInterval = 30" on the clients and a
120-second UDP timeout on the Cisco NAT.

> > Another thing that has come up since the clients (all in the same
> > subnet) have been running behind the NAT: the traffic between the
> > clients now runs through the host instead of locally/directly, which
> > means higher latency and more outgoing traffic. I don't see any blocked
> > packets in the clients' firewalls. Is there a way to let them talk
> > directly again?
>
> This is probably because the Cisco doesn't support hairpin routing. Add
> LocalDiscovery = yes to the tinc.conf on the clients; that way they can
> detect each other's LAN address and exchange traffic directly again.

Hmmm ... I've already tried "LocalDiscovery = yes" in
/etc/tinc/mytunnel/tinc.conf, but that didn't help. The config on
client A is:

---------------
Name = clienta
AddressFamily = ipv4
Interface = tun0
ConnectTo = host
PingInterval = 30
LocalDiscovery = yes
---------------

Ciao!
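One way to check whether the clients actually start exchanging packets
directly once LocalDiscovery kicks in (a diagnostic sketch; the
interface name eth0 and client B's LAN address 192.168.1.11 are
placeholders, and 655 is tinc's default UDP port):

---------------
# On client A, watch for tinc UDP packets arriving straight from
# client B's LAN address instead of via the host:
tcpdump -ni eth0 udp port 655 and host 192.168.1.11
---------------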