Jörg Weske
2010-Jun-04 11:44 UTC
Tinc crashes when node with identical configuration is present twice
Hello list,

we have been running tinc to connect multiple nodes without problems for quite some time now. Thanks for this great piece of software!

Our configuration is as follows: two "supernodes" A and B running the tinc daemon are publicly reachable from the internet. Node A is running Linux and has a static public IP address. Node B is running Windows behind a firewall with port forwarding and a DynDNS address. All our tinc nodes connect to both supernodes. We are running tinc 1.0.13 in switch mode, IPv4-only on all nodes. The tinc VPN network is stable and fast.

Here are the contents of the config file we are using for our roaming Windows nodes:

    Interface=TAP-VPN
    AddressFamily=ipv4
    MaxTimeout=60
    Mode=switch
    Name=ROAMING_NODE_X
    ConnectTo=SUPERNODE_A
    ConnectTo=SUPERNODE_B

Today we observed an interesting scenario. By mistake, one node with identical configuration was present twice inside our tinc network, logged in from a different dialup connection. (This happened after a migration from an old PC to a new one, as the tinc directory was simply copied to the new PC.)

This led to an almost immediate crash of the tinc daemons on both supernodes within a few dozen seconds of the second PC with the identical configuration coming online. Any attempt to restart the daemons would lead to another crash within a few minutes. The following log entries appear before the crash:

    1275643910 tinc[31243]: Ready
    1275644063 tinc[31243]: Error while translating addresses: ai_family not supported
    1275644063 tinc[31243]: Got unexpected signal 8 (Floating point exception)

I found the following message in the tinc list archives mentioning the same error and crash, though apparently for a different reason:
http://www.mail-archive.com/tinc at tinc-vpn.org/msg00538.html

Although I understand that having the same node twice inside the network is clearly a hefty configuration error, maybe there is a way to make tinc a bit more robust against such a situation?
After all, our mistake may accidentally occur in other setups as well. We were only able to get our tinc network back online after identifying the culprit and disabling one of the two nodes causing the trouble.

Thank you!

--
Best regards,
Jörg Weske
Guus Sliepen
2010-Jun-04 12:35 UTC
Tinc crashes when node with identical configuration is present twice
On Fri, Jun 04, 2010 at 01:44:15PM +0200, Jörg Weske wrote:

> Today we observed an interesting scenario. By mistake, one node with
> identical configuration was present twice inside our tinc network, logged in
> from a different dialup connection. (This happened after a migration from an
> old PC to a new one, as the tinc directory was simply copied to the new PC.)
>
> This led to an almost immediate crash of the tinc daemons on both supernodes
> within a few dozen seconds of the second PC with the identical configuration
> coming online. Any attempt to restart the daemons would lead to another crash
> within a few minutes.
>
> The following log entries appear before the crash:
>
> 1275643910 tinc[31243]: Ready
> 1275644063 tinc[31243]: Error while translating addresses: ai_family not supported
> 1275644063 tinc[31243]: Got unexpected signal 8 (Floating point exception)

That is definitely not supposed to happen. It would help if you could run tinc with a higher debug level (-d5), try to reproduce it, and send me the log.

> I found the following message in the tinc list archives mentioning
> the same error and crash, though apparently for a different reason:
> http://www.mail-archive.com/tinc at tinc-vpn.org/msg00538.html

Yes. The crash is forced by the code when it is unable to parse an address. I could let tinc continue, but it would probably not work properly anymore anyway. I do not know yet why it has a problem parsing an address when you have two identical nodes... it is probably because of an unrelated bug somewhere else in the code.

> Although I understand that having the same node twice inside the network
> is clearly a hefty configuration error, maybe there is a way to make
> tinc a bit more robust against such a situation? After all, our
> mistake may accidentally occur in other setups as well.
> We were only able to get our tinc network back online after identifying the
> culprit and disabling one of the two nodes causing the trouble.

I'll see if I can make tinc detect such a situation.

--
Met vriendelijke groet / with kind regards,
Guus Sliepen <guus at tinc-vpn.org>