Hi,
Can you help me explain some behaviour, please? I have two tinc clients
that happen to be on the same network, behind the same NAT gateway.
They had been working for ages, then stopped without anything changing
at my end. They died one after the other while I was actively connected
to them over SSH sessions.
When I check the logs of another tinc node they connect to, I see the
following (IP and other details sanitised):
Oct 22 10:47:45 jaipur tinc.myvpn[2222]: Sending ID to <unknown> (1.2.3.4 port 55651): 0 jaipur 17
Oct 22 10:47:45 jaipur tinc.myvpn[2222]: Sending 12 bytes of metadata to <unknown> (1.2.3.4 port 55651)
Oct 22 10:47:45 jaipur tinc.myvpn[2222]: Flushing 12 bytes to <unknown> (1.2.3.4 port 55651)
Oct 22 10:47:45 jaipur tinc.myvpn[2222]: Got ID from <unknown> (1.2.3.4 port 55651): 0 aws 17
Oct 22 10:47:45 jaipur tinc.myvpn[2222]: Sending METAKEY to aws (1.2.3.4 port 55651): 1 94 64 0 0 3D17BECA9017183B8F7AB3360--blah--0079AC05C5EA9ED
Oct 22 10:47:45 jaipur tinc.myvpn[2222]: Sending 525 bytes of metadata to aws (1.2.3.4 port 55651)
Oct 22 10:47:45 jaipur tinc.myvpn[2222]: Flushing 525 bytes to aws (1.2.3.4 port 55651)
Oct 22 10:47:50 jaipur tinc.myvpn[2222]: Timeout from aws (1.2.3.4 port 55651) during authentication
Oct 22 10:47:50 jaipur tinc.myvpn[2222]: Closing connection with aws (1.2.3.4 port 55651)
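
For reference, the verbose output above came from running that node in
the foreground at a raised debug level, roughly like this (netname
"myvpn" as in the logs; debug level 5 is the maximum and includes the
metadata handshake messages):

    # Run tincd in the foreground (-D) with debug level 5 (-d5)
    # so the metadata/authentication exchange is logged.
    tincd -n myvpn -D -d5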
When the first node stopped working, I could still reach it over the
ordinary network from the second, still-working node, but later that
one suffered the same fate.
What could cause this?
Thanks for all your help.
Chris.