Hi Guus,

On Friday, 25.09.2015, at 17:04 +0200, Guus Sliepen wrote:
> Ok, that means by default the UDP NAT timeout on the Cisco is extremely
> short.
>
> > I checked the manual of the Cisco NAT for any TCP/UDP
> > timeout settings, but there is no way to modify anything like "keep
> > TCP/UDP connections alive".
>
> It wouldn't be called something like that, rather a "nat translation
> timeout" or something similar.

Shame on me. Deep in the configuration of the NAT I found that the UDP
timeout is set to 30 seconds by default. I increased the value to 120
seconds and disabled PingInterval completely on the clients behind the
NAT. The tunnels got unstable again. Then I put "PingInterval = 30"
back into the clients' config (before it was set to 10 seconds), and
this seems to work.

> > So should I keep this UDP configuration or would you go back to
> > TCPOnly?
>
> I'd keep the UDP setting. It does generate more background traffic
> though; if you have to pay for bandwidth you could consider going back
> to TCPOnly.

Good. The current setup is "PingInterval = 30" on the clients and a
120-second UDP timeout on the Cisco NAT.

> > And another thing which came up since the clients (all in the same
> > subnet) are running behind the NAT: the traffic between the clients
> > runs through the host and not locally/directly anymore, which means
> > higher latency and outgoing traffic. I don't see any blocked packets
> > on the clients' firewall. Is there a way to let them talk directly
> > again?
>
> This is probably because the Cisco doesn't support hairpin routing. Add
> LocalDiscovery = yes to tinc.conf on the clients, that way they can
> detect each other's LAN address and do direct traffic again.

Hmmm ... I've tried "LocalDiscovery = yes"
in /etc/tinc/mytunnel/tinc.conf already, but that didn't help. The
config on client A is:

---------------
Name = clienta
AddressFamily = ipv4
Interface = tun0
ConnectTo = host
PingInterval = 30
LocalDiscovery = yes
---------------

Ciao!
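For reference, on Cisco IOS the UDP NAT translation timeout can be
raised with a single global configuration command. A minimal sketch,
assuming an IOS-based device (the exact syntax can differ per platform
and software release):

---------
! Raise the UDP NAT translation timeout from the 30-second
! default to 120 seconds (global configuration mode)
ip nat translation udp-timeout 120
---------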
Hi Guus,

On Friday, 25.09.2015, at 17:46 +0200, Marcus Schopen wrote:
> Hmmm ... I've tried "LocalDiscovery = yes"
> in /etc/tinc/mytunnel/tinc.conf already, but that didn't help. The
> config on client A is:
>
> ---------------
> Name = clienta
> AddressFamily = ipv4
> Interface = tun0
> ConnectTo = host
> PingInterval = 30
> LocalDiscovery = yes
> ---------------

I think I figured the problem out. The clients behind the local NAT
connect to the host, and all traffic runs through the host, which
itself acts as a NAT for accessing the internet (internet
proxy/gateway).

On each client this script is executed when starting the tunnel
connection to the host:

---------
#!/bin/sh

VPN_GATEWAY=10.20.0.1
ORIGINAL_GATEWAY=`ip route show | grep ^default | cut -d ' ' -f 2-5`

ip route add $REMOTEADDRESS $ORIGINAL_GATEWAY
ip route add $VPN_GATEWAY dev $INTERFACE
ip route add 0.0.0.0/1 via $VPN_GATEWAY dev $INTERFACE
ip route add 128.0.0.0/1 via $VPN_GATEWAY dev $INTERFACE
---------

If I disable the above routing rules, the clients behind the NAT can
talk directly to each other. But how do I have to configure the ip
route rules so that all "internet" traffic goes through the external
tinc host while, at the same time, the tinc clients behind the NAT talk
directly? On the local eth0 interface each client can ping or connect
to services on every other client in the local network. What did I miss
to configure here?

Ciao
Marcus
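One possible angle: tinc's LocalDiscovery works by broadcasting, and if
those broadcasts go to the limited-broadcast address 255.255.255.255
(an assumption about this tinc version, not confirmed in the thread),
the 128.0.0.0/1 redirect route would steer them into the tunnel instead
of onto the LAN. A sketch of a more specific exemption route, added
alongside the redirect routes:

---------
#!/bin/sh
# Sketch, not a confirmed fix: if tinc's LocalDiscovery broadcasts
# go to 255.255.255.255 (an assumption), the 128.0.0.0/1 redirect
# route would pull them into the tunnel. A /32 route is more
# specific than /1, so this keeps them on the LAN.
# "eth0" is a placeholder for the real LAN interface.
ip route add 255.255.255.255/32 dev eth0
---------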
On Friday, 25.09.2015, at 22:45 +0200, Marcus Schopen wrote:
> Hi Guus,
>
> On Friday, 25.09.2015, at 17:46 +0200, Marcus Schopen wrote:
> > Hmmm ... I've tried "LocalDiscovery = yes"
> > in /etc/tinc/mytunnel/tinc.conf already, but that didn't help. The
> > config on client A is:
> >
> > ---------------
> > Name = clienta
> > AddressFamily = ipv4
> > Interface = tun0
> > ConnectTo = host
> > PingInterval = 30
> > LocalDiscovery = yes
> > ---------------
>
> I think I figured the problem out. The clients behind the local NAT
> connect to the host, and all traffic runs through the host, which
> itself acts as a NAT for accessing the internet (internet
> proxy/gateway).
>
> On each client this script is executed when starting the tunnel
> connection to the host:
>
> ---------
> #!/bin/sh
>
> VPN_GATEWAY=10.20.0.1
> ORIGINAL_GATEWAY=`ip route show | grep ^default | cut -d ' ' -f 2-5`
>
> ip route add $REMOTEADDRESS $ORIGINAL_GATEWAY
> ip route add $VPN_GATEWAY dev $INTERFACE
> ip route add 0.0.0.0/1 via $VPN_GATEWAY dev $INTERFACE
> ip route add 128.0.0.0/1 via $VPN_GATEWAY dev $INTERFACE
> ---------
>
> If I disable the above routing rules, the clients behind the NAT can
> talk directly to each other. But how do I have to configure the ip
> route rules so that all "internet" traffic goes through the external
> tinc host while, at the same time, the tinc clients behind the NAT
> talk directly? On the local eth0 interface each client can ping or
> connect to services on every other client in the local network. What
> did I miss to configure here?

The problem seems to be this routing rule, which I took from the
"Redirecting the default gateway to a host on the VPN" howto [1]:

ip route add 128.0.0.0/1 via $VPN_GATEWAY dev $INTERFACE

Without this route, the clients can reach each other directly.

Hmmm ... The last remaining problem seems to be the local UFW firewall
on the clients, which seems to block the broadcasts needed for
"LocalDiscovery = yes". I need to check the logs here.

Ciao!
Marcus

[1] http://www.tinc-vpn.org/examples/redirect-gateway/
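If UFW is indeed dropping the LocalDiscovery broadcasts, allowing
tinc's UDP port on the LAN interface would be the usual remedy. A
sketch, assuming tinc's default port 655 and a LAN interface named
eth0 (both assumptions, not taken from the thread):

---------
# Allow incoming tinc UDP traffic (default port 655) on the LAN
# interface so LocalDiscovery broadcasts are not dropped by UFW.
# "eth0" and port 655 are assumptions; adjust to the actual setup.
ufw allow in on eth0 to any port 655 proto udp
---------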