Hi,

As far as I can tell, when working in an environment with many servers,
there seem to be several ways for your client to authenticate the
HostKeys of each:

1) Set StrictHostKeyChecking=no, and hope you don't get MITM'd the
   first time you connect to a server.

2) Use SSHFP records (which generally requires you to have DNSSEC fully
   deployed to be meaningful compared to #1, I think?).

3) Build a massive /etc/ssh/ssh_known_hosts file with $N * $M entries,
   for $N servers using $M hostkey algorithms.

4) Use ssh-keygen to create a single "host certificate" key, and have a
   secure process for signing the host keys on all of your servers.
   Then, put that certificate in /etc/ssh/ssh_known_hosts on all your
   servers. (A sketch of this option follows below.)

5) Use the same HostKeys everywhere, and just put those keys in
   /etc/ssh/ssh_known_hosts using a wildcard for your whole domain
   (e.g. "*.example.com ssh-rsa AAAAA....."). This makes revocation
   very difficult, since you then need to securely re-key all of your
   servers.

I also saw some discussion recently on this list about storing hostkeys
in specialized security hardware. I'm not familiar with how "that
stuff" works, but I assume it doesn't scale very well when you get up
to thousands of servers without a significant increase in cost?

----

So, my questions are:

1) Is there some other option that I'm missing above?

2) Are there any good resources on "best practices" for any of the
   above?

3) Are there any tools that help make maintaining one of the above
   not-super-tedious?

4) Many of the above options seem to make revocation somewhat difficult
   (especially #5), but I think that in most other cases,
   "@revoked * ssh-rsa AAAAA...." should work to revoke a stolen key
   for a specific host?

5) How do "cloud" people handle host keys? As far as I understand, they
   often spawn and destroy many instances of servers over time, but I
   assume they still want to be sure they're reaching the right host
   somehow?

For any answers to the above, a solution requiring minimal or no human
interaction is basically a requirement. It's assumed that, say, public
keys would be set up via some "other channel" to allow whatever access
is required.

Any advice would be appreciated.

--
Mike Kelly
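As an illustration of option 4 above, the flow looks roughly like the
sketch below. The key paths, hostnames, and truncated key blobs are
placeholders, not taken from any real setup:

  # On a dedicated signing host: create a CA key, then sign each
  # server's host public key with it
  ssh-keygen -f ca_host_key -C "host CA for example.com"
  ssh-keygen -s ca_host_key -h -I server1.example.com \
      -n server1.example.com /etc/ssh/ssh_host_rsa_key.pub

  # On each server: tell sshd to present the resulting certificate
  # (in sshd_config)
  HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub

  # On each client: one line in /etc/ssh/ssh_known_hosts trusts any
  # host key signed by that CA
  @cert-authority *.example.com ssh-rsa AAAAB3Nza...

  # Per question 4, an individual plain (non-certificate) key can
  # still be revoked with a known_hosts marker
  @revoked * ssh-rsa AAAAB3Nza...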
On 01/16/2013 10:40 AM, Mike Kelly wrote:
> 1) Is there some other option that I'm missing above?

The monkeysphere is another option: http://web.monkeysphere.info/

The monkeysphere wraps your host key in an OpenPGP certificate, which
allows you (and anyone) to certify the key's association with a given
host; it also allows anyone to verify those certifications directly.
For servers that the general public interacts with (e.g. pair's shared
servers), this could be useful even beyond the in-house utility of
known key distribution.

The only human interaction needed would be to supply some credential to
grant access to the certifying key when a machine is newly created (if
you have an automated installation mechanism, this could be included
directly).

The monkeysphere also allows you to take care of revocations using
standard OpenPGP revocation mechanisms, and to distribute those
revocations via the same OpenPGP keyserver network that is already in
use. If you don't want the keys public, you can also distribute them
via a private keyserver (or a private keyserver network), which is
pretty straightforward to install.

hth,

--dkg

PS if it's not clear, i'm a contributor to the monkeysphere project.
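Roughly, the monkeysphere workflow looks like the sketch below. The
command names are as I recall them from the monkeysphere documentation
and the hostname is a placeholder, so check the current docs before
relying on the exact invocations:

  # On the server: wrap the existing host key in an OpenPGP certificate
  # and publish it to the keyserver network (or a private keyserver)
  monkeysphere-host import-key /etc/ssh/ssh_host_rsa_key ssh://server1.example.com
  monkeysphere-host publish-key

  # On the client: let ssh consult the monkeysphere for host keys via
  # the provided proxy command (in ~/.ssh/config) ...
  ProxyCommand monkeysphere ssh-proxycommand %h %p

  # ... or refresh known_hosts explicitly
  monkeysphere update-known_hosts server1.example.com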
On 01/16/2013 10:40 AM, Mike Kelly wrote:
> As far as I can tell, when working in an environment with many servers,
> there seem to be several ways for your client to authenticate the
> HostKeys of each:
>
> [...]

Two people replied to me off-list to mention GSSAPIKeyExchange, which
seems to be part of some patches that aren't in the main OpenSSH
distribution (yet?), with this being their source, as far as Google can
tell me:

http://www.sxw.org.uk/computing/patches/openssh.html

Those don't seem to have been updated for versions 5.8, 5.9, 6.0, or
6.1, though... so I guess the patches have been abandoned?

Also, as far as I'm aware (though maybe I've just not learned enough
about Kerberos), using Kerberos basically requires someone to
interactively (and somewhat regularly) run kinit to get fresh
credentials. For a situation where you want various servers to talk to
each other over an SSH channel without any direct human intervention
(e.g. cron jobs, etc.), it seems that would rule out Kerberos
completely? But maybe there's "something" I'm missing that would allow
Kerberos to be used the way public keys can be now?

--
Mike Kelly
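For reference, with those GSSAPI key-exchange patches applied, the
client side looks roughly like the sketch below (the host pattern,
principal, and keytab path are illustrative). Unattended jobs normally
obtain their Kerberos tickets from a keytab rather than an interactive
kinit:

  # Client configuration (~/.ssh/config or /etc/ssh/ssh_config)
  Host *.example.com
      GSSAPIAuthentication yes
      GSSAPIKeyExchange yes        # only available with the patches applied
      GSSAPIDelegateCredentials no

  # For a cron job or other unattended client, get a ticket from a
  # keytab instead of typing a password interactively
  kinit -k -t /etc/krb5.keytab host/client1.example.com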