Christian Weisgerber <naddy at mips.inka.de> writes:

> On 2020-01-12, Dustin Lundquist <dustin at null-ptr.net> wrote:
>
>> I think the intended application is to proxy through a proxy host
>> provided by the service provider. If SSH had an SNI-like feature where
>> a host identifier was passed in plain text during the initial
>> connection, the user would just need to register their host identifier
>> and IPv6 address (e.g. via AAAA DNS records), and the service provider
>> wouldn't need to maintain a list of allowed users. The proxy would have
>> no more access to the contents of the SSH connection than any other
>> intervening stateful firewall.
>
> You can do this with a jump host, see ProxyJump in ssh_config(5).

That is correct, but requires client configuration. This only works if
you can communicate with each and every user.

The problem I am trying to solve is: there are thousands of users on
IPv4-only networks whom I cannot all communicate with, and they need to
access resources on IPv6-only systems.

The typical jump host / proxy command approach surely works, but only
for a small percentage of the users. The majority actually reach out to
support and have severe problems if they cannot just use "plain ssh"
(i.e. need to configure ssh or don't land on the target host
immediately).

I hope the motivation and scenario are understandable, and it would be
very much appreciated if there were a way to dispatch to multiple end
hosts with ssh directly. Whether that's via SNI or another mechanism, I
don't have a strong opinion.

Best regards,

Nico

--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
On 2020/01/13 11:10, Nico Schottelius wrote:

> That is correct, but requires client configuration. This only works if
> you can communicate with each and every user.
>
> The problem I am trying to solve is: there are thousands of users on
> IPv4 only networks who I cannot all communicate with. And they need to
> access resources on IPv6 only systems.
>
> The typical jump host / proxy command approach surely works, but only
> for a small percentage of the users. The big part actually reaches out
> to the support and has severe problems if they cannot just use "plain
> ssh" (i.e. need to configure ssh or don't land on the target host
> immediately).

Even if such a mechanism were added, you would be waiting a long time
before new enough OpenSSH versions filtered through to the usual client
OSes, and for other clients to gain support. It wouldn't be an easy way
out for your problem.

> I hope the motivation and scenario is understandable and it would be
> very much appreciated if there was any way to dispatch to multiple end
> hosts with ssh directly. Whether that's via SNI or another mechanism, I
> don't have a strong opinion on.
>
> Best regards,
>
> Nico
>
> --
> Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch

If you have users who want to access a v6-only system but are themselves
unable to set up their own v6 access, the easiest way is probably
web-based ssh (via a dual-stack host). If they want more, it's not so
hard to set up v6 via a tunnel/VPN.
On 01/13/2020 11:10 AM, Nico Schottelius wrote:

> The problem I am trying to solve is: there are thousands of users on
> IPv4 only networks who I cannot all communicate with. And they need to
> access resources on IPv6 only systems.
>
> The typical jump host / proxy command approach surely works, but only
> for a small percentage of the users. The big part actually reaches out
> to the support and has severe problems if they cannot just use "plain
> ssh" (i.e. need to configure ssh or don't land on the target host
> immediately).

Out of interest:

1. If an extended mechanism were to be implemented, which server pubkey
   do you expect to be seen/stored/verified by the client? The proxy's
   / v4 middlebox's, or the v6 backend's? Or would you require that all
   server-side machines use the *same* host keypairs?

2. Are there any clients *with* v6 accessing the same backends? Via
   generic v6? How is the distinction made - FQDNs given in the public
   DNS with the proxy's v4 and the backend's v6 IP, leaving the
   selection to the client? Could client machines *switch* between both
   modes, short of an all-out reconfig by the sysadmins' hands?

   Proxy pubkey (≠ backend pubkey) for v4, and clients can switch
   between v4 and v6 ==> users get MitM alerts after every switch.

   Backend pubkey (≠ proxy pubkey) for v4 ==> any user using the
   ssh-keyscan tool will probably stuff his known_hosts file with the
   *wrong* one(s).

   Etcetera.

Regards,
--
Jochen Bern
Systemingenieur

Binect GmbH
Robert-Koch-Straße 9
64331 Weiterstadt
Hey Jochen,

Jochen Bern <Jochen.Bern at binect.de> writes:

> On 01/13/2020 11:10 AM, Nico Schottelius wrote:
>> The problem I am trying to solve is: there are thousands of users on
>> IPv4 only networks who I cannot all communicate with. And they need to
>> access resources on IPv6 only systems.
>>
>> The typical jump host / proxy command approach surely works, but only
>> for a small percentage of the users. The big part actually reaches out
>> to the support and has severe problems if they cannot just use "plain
>> ssh" (i.e. need to configure ssh or don't land on the target host
>> immediately).
>
> Out of interest:
> 1. If an extended mechanism were to be implemented, which server pubkey
>    do you expect to be seen/stored/verified by the client? The proxy's
>    / v4 middlebox's, or the v6 backend's? Or would you require that all
>    server-side machines use the *same* host keypairs?
> 2. Are there any clients *with* v6 accessing the same backends? Via
>    generic v6? How is the distinction made, FQDNs given in the public
>    DNS with the proxy's v4 and the backend's v6 IP and leave the
>    selection to the client? Could client machines *switch* between both
>    modes, short of an all-out reconfig by the sysadmins' hands?

I love those two questions. I'll answer them with how TLS proxying
works, because that works quite nicely:

- The proxy does not have *any* certificate for our HTTPS servers.
- The proxy reads the first few bytes, picks up the name in the TLS
  handshake, and then sends the buffered request to the correct backend.

In terms of DNS management, this is how it is done for HTTPS:

- AAAA -> goes directly to the target host
- A    -> goes to the proxy

The proxy itself is also stateless: if it is configured for example.com,
it looks up the AAAA entry and forwards the packets without needing to
know anything special about the IPv6 backend.

So to answer your question: the client sees the same pubkey (the
backend's) for both proxied and non-proxied connections.
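[Editorial note: the SNI peeking that Nico describes can be sketched in
a few lines. This is not code from the thread or from HAProxy; it is a
minimal illustration, in Python, of how a TCP proxy can pull the
server_name out of a raw TLS ClientHello without holding any
certificates itself. Offsets follow RFC 5246 / RFC 6066.]

```python
import struct

def extract_sni(data):
    """Parse the SNI hostname out of a raw TLS ClientHello, if present.

    Returns the hostname as str, or None if the data is not a
    ClientHello or carries no server_name extension.
    """
    try:
        # TLS record header: type (0x16 = handshake), version, 2-byte length
        if data[0] != 0x16:
            return None
        pos = 5
        # Handshake header: type (0x01 = ClientHello), 3-byte length
        if data[pos] != 0x01:
            return None
        pos += 4
        pos += 2 + 32                       # client_version + random
        pos += 1 + data[pos]                # session_id (1-byte length)
        (cs_len,) = struct.unpack_from(">H", data, pos)
        pos += 2 + cs_len                   # cipher_suites
        pos += 1 + data[pos]                # compression_methods
        (ext_total,) = struct.unpack_from(">H", data, pos)
        pos += 2
        end = pos + ext_total
        while pos + 4 <= end:
            ext_type, ext_len = struct.unpack_from(">HH", data, pos)
            pos += 4
            if ext_type == 0x0000:          # server_name extension
                # server_name_list: 2-byte list length, then entries of
                # (1-byte type, 2-byte length, name); type 0 = host_name
                (name_len,) = struct.unpack_from(">H", data, pos + 3)
                return data[pos + 5:pos + 5 + name_len].decode("ascii")
            pos += ext_len
    except (IndexError, struct.error):
        pass
    return None
```

A dispatcher would buffer the ClientHello, call something like this, do
an AAAA lookup on the result, and then splice the buffered bytes plus
both TCP streams together - which is essentially what the
`req_ssl_sni` matcher does inside HAProxy.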
You can test that for yourself by curl'ing https://webmail.ungleich.ch.
The host providing that service is only reachable via IPv6, and IPv4 is
handled by the proxy.

For SSH this could work very similarly; it does not need full-blown
X.509 certificates or a rich protocol like HTTP to support this. A
simple "Host: target-host"-like greeting from the client would suffice.

Cheers,

Nico

p.s.: HAProxy, which we use, can even forward the original client IP to
the end host using the "proxy protocol".

pps: The whole haproxy configuration for it looks as follows. It
supports smtps, imaps, https and http at the moment.

    # ipv4 https frontend
    frontend httpsipv4
        bind ipv4@:443
        mode tcp
        option tcplog

        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }

        default_backend httpsipv4

    backend httpsipv4
        mode tcp
        use-server webmail.ungleich.ch if { req_ssl_sni -i webmail.ungleich.ch }
        server webmail.ungleich.ch ipv6@webmail.ungleich.ch ...

--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
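[Editorial note: no SSH implementation speaks the "Host: target-host"
greeting Nico proposes; it is purely hypothetical. As a thought
experiment, a dispatcher's routing decision on the client's first line
could look like the Python sketch below. The greeting syntax and the
fallback name are invented here; an unmodified client that opens with
its normal RFC 4253 identification string is routed to a fixed default,
which is one way to keep old clients working during a rollout.]

```python
DEFAULT_BACKEND = "fallback.example.com"   # hypothetical fixed backend

def classify_first_line(line):
    """Decide where to route a connection from the client's first line.

    A hypothetical extended client sends b'Host: <target>\r\n' before
    its normal identification string; an unmodified client opens with
    b'SSH-2.0-...' and is passed through to a fixed default backend.
    """
    if line.startswith(b"Host: "):
        target = line[len(b"Host: "):].strip().decode("ascii")
        return ("dispatch", target)
    if line.startswith(b"SSH-"):
        return ("passthrough", DEFAULT_BACKEND)
    return ("reject", None)
```

For example, `classify_first_line(b"Host: webmail.ungleich.ch\r\n")`
yields `("dispatch", "webmail.ungleich.ch")`, after which the proxy
would resolve the target's AAAA record and splice the streams, exactly
as the HAProxy setup above does for TLS.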
Hi,

On Mon, Jan 13, 2020 at 03:16:00PM +0000, Jochen Bern wrote:

> Out of interest:
> 1. If an extended mechanism were to be implemented, which server pubkey
>    do you expect to be seen/stored/verified by the client? The proxy's
>    / v4 middlebox's, or the v6 backend's? Or would you require that all
>    server-side machines use the *same* host keypairs?

I'd do the "SNI" part before exchanging server host keys ("just as it is
done in https, for good reason"). That way, every backend can have its
own key. The "middle box" would see some unencrypted handshake, but
afterwards would have no more knowledge of the connection than any IP
router or proxy in the path.

Actually *doing* it sounds like you need a protocol change (more than
"just an option after the key handshake"), as the server sends its
"SSH-2.0..." message first, which would have to be deferred until the
client tells it where it wants to connect to... so, not really trivial
(and back to square one: it might take longer to roll out upgraded
clients than to roll out v6 to those clients).

gert

--
"If was one thing all people took for granted, was conviction that if
 you feed honest figures into a computer, honest figures come out. Never
 doubted it myself till I met a computer with a sense of humor."
                     Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany               gert at greenie.muc.de
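[Editorial note: Gert's "server speaks first" observation is easy to see
on the wire. RFC 4253 section 4.2 has both sides send an identification
string, and in practice sshd writes its banner as soon as the TCP
connection is up, before the client has sent a single byte - so a
middlebox cannot wait for a client-side "SNI" without changing the
protocol. A minimal Python reader for that identification line, usable
against any socket-like peer:]

```python
import socket

def read_identification(sock, limit=255):
    """Read the peer's SSH identification string ("SSH-2.0-...\r\n").

    RFC 4253 caps the identification line at 255 bytes including CRLF.
    We read byte by byte so we never consume data past the line, since
    binary key-exchange packets follow immediately after it.
    """
    buf = b""
    while not buf.endswith(b"\n") and len(buf) < limit:
        chunk = sock.recv(1)
        if not chunk:        # peer closed before finishing the line
            break
        buf += chunk
    return buf.decode("ascii", "replace").strip()
```

Pointing this at any sshd (e.g. `socket.create_connection(("host",
22))`) returns the banner without sending anything first - which is
precisely why SNI-style dispatch would have to be negotiated before this
message, i.e. a real protocol change.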