bugzilla-daemon at mindrot.org
2025-Jul-15 19:32 UTC
[Bug 3850] New: concurrent runs of ssh corrupts the known_hosts file
https://bugzilla.mindrot.org/show_bug.cgi?id=3850

            Bug ID: 3850
           Summary: concurrent runs of ssh corrupts the known_hosts file
           Product: Portable OpenSSH
           Version: 10.0p2
          Hardware: amd64
                OS: Linux
            Status: NEW
          Severity: minor
          Priority: P5
         Component: ssh
          Assignee: unassigned-bugs at mindrot.org
          Reporter: toralf.foerster at gmx.de

If I run a few (about 16) ssh commands in parallel, as seen in [1], then from
time to time the known_hosts file gets corrupted. In that case I have to
delete the lines where two parallel ssh commands wrote into the same line.

I wonder whether ssh supports concurrent runs on the same machine (for new
systems whose ssh host key is not yet known) or not.

[1] https://github.com/toralf/tor-relays/blob/main/bin/trust-host-ssh-key.sh#L16

-- 
You are receiving this mail because:
You are watching the assignee of the bug.
bugzilla-daemon at mindrot.org
2025-Jul-15 22:37 UTC
[Bug 3850] concurrent runs of ssh corrupts the known_hosts file
https://bugzilla.mindrot.org/show_bug.cgi?id=3850

Darren Tucker <dtucker at dtucker.net> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |dtucker at dtucker.net

--- Comment #1 from Darren Tucker <dtucker at dtucker.net> ---
I'm not sure ssh is behaving unreasonably here: you explicitly told multiple
parallel instances of it to modify the same file:

> while ! xargs -r -P ${jobs} -I '{}' ssh -4 -n -o StrictHostKeyChecking=accept-new -o ConnectTimeout=2 {}

One way to avoid this is to use the TOKEN expansion for UserKnownHostsFile
(which was added in v8.4) to put each host into its own file, based on
hostname:

    UserKnownHostsFile ~/.ssh/known_hosts.d/%h

or host key:

    UserKnownHostsFile ~/.ssh/known_hosts.d/%k

either in your ~/.ssh/config or, in your use case, more likely as an -o
option to ssh in the script.
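Concretely, the script's pipeline could pass the token as an -o option. A
sketch only, assuming the ${jobs} variable and the host list on stdin from
the script in [1] (the remote command is elided here):

```shell
# Per-host known_hosts files via the %h token (OpenSSH >= 8.4).
# ${jobs} and the host list on stdin are assumed from the script in [1].
mkdir -p ~/.ssh/known_hosts.d

xargs -r -P "${jobs}" -I '{}' \
    ssh -4 -n \
        -o StrictHostKeyChecking=accept-new \
        -o ConnectTimeout=2 \
        -o 'UserKnownHostsFile=~/.ssh/known_hosts.d/%h' \
        '{}'
```

Since each host then gets its own file, no two concurrent ssh instances ever
append to the same known_hosts file.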
bugzilla-daemon at mindrot.org
2025-Jul-16 03:27 UTC
[Bug 3850] concurrent runs of ssh corrupts the known_hosts file
https://bugzilla.mindrot.org/show_bug.cgi?id=3850

Damien Miller <djm at mindrot.org> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
             Status|NEW     |RESOLVED
         Resolution|---     |FIXED
                 CC|        |djm at mindrot.org

--- Comment #2 from Damien Miller <djm at mindrot.org> ---
FYI this was committed after openssh-10.0 and should help your case. It will
be in openssh-10.1, due in the next few months.

commit e048230106fb3f5e7cc07abc311c6feb5f52fd05
Author: djm at openbsd.org <djm at openbsd.org>
Date:   Wed Apr 30 05:26:15 2025 +0000

    upstream: make writing known_hosts lines more atomic, by writing the
    entire line in one operation and using unbuffered stdio. Usually
    writes to this file are serialised on the "Are you sure you want to
    continue connecting?" prompt, but if host key checking is disabled and
    connections were being made with high concurrency then interleaved
    writes might have been possible.

    feedback/ok deraadt@ millert@

    OpenBSD-Commit-ID: d11222b49dabe5cfe0937b49cb439ba3d4847b08
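The effect of "one line per write" can be seen with a toy demonstration in
plain shell (illustrative only, not OpenSSH's code): POSIX requires that a
file opened with O_APPEND has its offset moved atomically per write(2), so
as long as each line is emitted in a single short write, concurrent
appenders cannot interleave in the middle of a line.

```shell
# Toy sketch of "one line per write" appends; not OpenSSH's actual code.
# Each `printf ... >>` opens the file with O_APPEND, and a short line is
# emitted in a single write(2), so concurrent writers cannot interleave
# within a line -- which is what the commit arranges for known_hosts.
out=$(mktemp)
for id in A B C D; do
  (
    i=1
    while [ "$i" -le 200 ]; do
      printf 'host-%s key-%s-%s\n' "$id" "$id" "$i" >> "$out"
      i=$((i + 1))
    done
  ) &
done
wait
# All 800 lines (4 writers x 200 lines) should be intact, one writer each.
grep -c -E '^host-[A-D] key-[A-D]-[0-9]+$' "$out"
```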
bugzilla-daemon at mindrot.org
2025-Jul-16 06:54 UTC
[Bug 3850] concurrent runs of ssh corrupts the known_hosts file
https://bugzilla.mindrot.org/show_bug.cgi?id=3850

--- Comment #3 from Toralf Förster <toralf.foerster at gmx.de> ---
Oh cool, that commit could avoid the breakage here.

OTOH, may I ask whether ssh uses write file locking? It is my understanding
that with a write lock more than one process can attempt to write to the
same file without clashes, right?
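On the locking question: ssh takes no lock on known_hosts, and POSIX/BSD
file locks are advisory anyway, so they only help when every writer
cooperates. A calling script that needs full read-modify-write access to a
shared file can serialize itself with flock(1) from util-linux. A sketch
with made-up file names, not something ssh does itself; the read-modify-write
below would lose updates without the lock:

```shell
# Advisory-locking sketch with flock(1) from util-linux; ssh does not
# do this. Read-modify-write of a shared file is unsafe without a lock.
counter=$(mktemp)
echo 0 > "$counter"
lock="$counter.lock"

for w in 1 2 3 4; do
  (
    i=1
    while [ "$i" -le 50 ]; do
      # flock blocks until it holds an exclusive lock on $lock, then
      # runs the command, so only one writer updates the file at a time.
      flock "$lock" sh -c "n=\$(cat '$counter'); echo \$((n + 1)) > '$counter'"
      i=$((i + 1))
    done
  ) &
done
wait
cat "$counter"   # 200 with the lock; lost updates would make it smaller
```

Note this serializes the critical section across all writers, which is
exactly the cost the per-host-file approach avoids.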