Hey Jochen,

Jochen Bern <Jochen.Bern at binect.de> writes:

> On 01/13/2020 11:10 AM, Nico Schottelius wrote:
>> The problem I am trying to solve is: there are thousands of users on
>> IPv4 only networks who I cannot all communicate with. And they need to
>> access resources on IPv6 only systems.
>>
>> The typical jump host / proxy command approach surely works, but only
>> for a small percentage of the users. The big part actually reaches out
>> to the support and has severe problems if they cannot just use "plain
>> ssh" (i.e. need to configure ssh or don't land on the target host
>> immediately).
>
> Out of interest:
> 1. If an extended mechanism were to be implemented, which server pubkey
>    do you expect to be seen/stored/verified by the client? The proxy's
>    / v4 middlebox's, or the v6 backend's? Or would you require that all
>    server-side machines use the *same* host keypairs?
> 2. Are there any clients *with* v6 accessing the same backends? Via
>    generic v6? How is the distinction made, FQDNs given in the public
>    DNS with the proxy's v4 and the backend's v6 IP and leave the
>    selection to the client? Could client machines *switch* between both
>    modes, short of an all-out reconfig by the sysadmins' hands?

I love those two questions. I'll answer them by describing how our TLS
proxying works, because that works quite nicely:

- The proxy does not hold *any* certificate for our HTTPS servers
- The proxy reads the first few bytes, picks the server name out of the
  TLS handshake (SNI) and then sends the buffered request to the correct
  backend

In terms of DNS management, this is how it is done for HTTPS:

- AAAA -> goes directly to the target host
- A -> goes to the proxy

The proxy itself is also stateless. If it is configured for example.com,
it will look up the AAAA entry and forward the traffic without needing to
know anything special about the IPv6 backend.

So to answer your question: the client sees the same pubkey for both
proxied and non-proxied connections. You can test that for yourself by
curl'ing https://webmail.ungleich.ch. The host providing that service is
only reachable via IPv6; IPv4 is handled by the proxy.

For SSH this could work very similarly; it would not need full-blown
X.509 certificates or a rich protocol like HTTP to support this. A simple
"Host: target-host"-like greeting from the client would suffice.

Cheers,

Nico

p.s.: HAProxy, which we use, can even forward the original client IP to
the end host using the "proxy protocol".

pps: The whole haproxy configuration for it looks as follows. It supports
smtps, imaps, https and http at the moment.

# ipv4 https frontend
frontend httpsipv4
    bind ipv4@:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    default_backend httpsipv4

backend httpsipv4
    mode tcp
    use-server webmail.ungleich.ch if { req_ssl_sni -i webmail.ungleich.ch }
    server webmail.ungleich.ch ipv6@webmail.ungleich.ch
    ...

--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
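To make the "Host: target-host" dispatch idea above concrete, here is a
minimal sketch of a stateless IPv4 front end in Python. It assumes a
hypothetical plain-text greeting line sent by the client before the SSH
banner; such a greeting is not part of SSH or OpenSSH today, and the
listen port, backend port and names are placeholders.

#!/usr/bin/env python3
"""Sketch of a name-based SSH dispatcher for a HYPOTHETICAL greeting line
("Host: <name>\r\n") sent before the SSH banner. Not an existing protocol;
ports and names are assumptions."""

import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 2222)  # IPv4-facing side; port is a placeholder
BACKEND_PORT = 22                # assumed sshd port on the IPv6-only backends


def pump(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass


def handle(client):
    with client:
        # Read the hypothetical greeting, e.g. b"Host: webmail.example.com\r\n".
        greeting = b""
        while not greeting.endswith(b"\n") and len(greeting) < 256:
            chunk = client.recv(1)
            if not chunk:
                return
            greeting += chunk
        if not greeting.lower().startswith(b"host:"):
            return  # no dispatch hint, drop the connection
        name = greeting.split(b":", 1)[1].strip().decode()

        # Stateless dispatch: look up the name's AAAA record and splice the
        # connection through to the IPv6-only backend.
        info = socket.getaddrinfo(name, BACKEND_PORT,
                                  family=socket.AF_INET6,
                                  type=socket.SOCK_STREAM)[0]
        backend = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        backend.connect(info[4])
        with backend:
            t = threading.Thread(target=pump, args=(backend, client), daemon=True)
            t.start()
            pump(client, backend)
            t.join()


def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()


if __name__ == "__main__":
    main()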
On Mon, Jan 13, 2020 at 05:14:02PM +0100, Nico Schottelius wrote:
> p.s.: HAProxy, which we use, can even forward the original client IP to
> the end host using the "proxy protocol".
>
> pps: The whole haproxy configuration for it looks as follows. It
> supports smtps, imaps, https and http at the moment.
>
> # ipv4 https frontend
> frontend httpsipv4
>     bind ipv4@:443
>     mode tcp
>     option tcplog
>     tcp-request inspect-delay 5s
>     tcp-request content accept if { req_ssl_hello_type 1 }
>     default_backend httpsipv4
>
> backend httpsipv4
>     mode tcp
>     use-server webmail.ungleich.ch if { req_ssl_sni -i webmail.ungleich.ch }
>     server webmail.ungleich.ch ipv6@webmail.ungleich.ch
>     ...

Neat. I do something similar: in order to circumvent obnoxious airport /
coffee shop firewalls that block non-HTTPS traffic, I configured haproxy
to offer 'SSH over HTTPS'. haproxy terminates the HTTPS connection (which
is SNI-aware) while sshd on the target machine terminates the tunneled
SSH connection.

In ssh_config, I use ProxyCommand to invoke gnutls-cli to create the
HTTPS connection.

You've indicated that you don't want to compel your users to make
significant changes to ssh_config, but others in this thread have noted
that an SNI option for OpenSSH will take some time to propagate from
ideation through development through widespread* deployment.

Would this SSH-over-HTTPS option be worth considering for your use case
while the SNI-aware OpenSSH gets more backers? (I think I might be one,
now. You may wish to ask for Proxy-Protocol support, also.)

* sufficiently widespread that your users can get packages from distros

--
Luca Filipozzi
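A minimal client-side sketch of the ssh_config approach described above,
assuming a front end named ssh.example.net that terminates TLS on port
443 and hands the decrypted stream to sshd; the host name is a
placeholder, and the exact gnutls-cli invocation may need adjusting for
your GnuTLS version.

Host ssh.example.net
    # gnutls-cli keeps the TLS byte stream on stdin/stdout; --logfile
    # redirects its informational output so it does not corrupt the SSH
    # stream. Name and flags are illustrative assumptions.
    ProxyCommand gnutls-cli --logfile=/dev/null --port 443 %h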
Ciao Luca,

Luca Filipozzi <lfilipoz at emyr.net> writes:

>> [ ... ]
> Neat. I do something similar: in order to circumvent obnoxious airport /
> coffee shop firewalls that block non-HTTPS traffic, I configured haproxy
> to offer 'SSH over HTTPS'. haproxy terminates the HTTPS connection
> (which is SNI-aware) while sshd on the target machine terminates the
> tunneled SSH connection.
>
> In ssh_config, I use ProxyCommand to invoke gnutls-cli to create the
> HTTPS connection.

Quite nice as well!

> You've indicated that you don't want to compel your users to make
> significant changes to ssh_config, but others in this thread have noted
> that an SNI option for OpenSSH will take some time to propagate from
> ideation through development through widespread* deployment.

I perfectly understand that. At the moment we give out a wireguard IPv6
VPN for free to all users, which also has the nice side effect of giving
anyone anywhere (even behind CGNAT) IPv6 connectivity. Surprisingly,
adding a totally new program with totally different characteristics has
so far turned out to be easier than having users edit their ssh config.

> Would this SSH-over-HTTPS option be worth considering for your use case
> while the SNI-aware OpenSSH gets more backers? (I think I might be one,
> now. You may wish to ask for Proxy-Protocol support, also.)
>
> * sufficiently widespread that your users can get packages from distros

I might have mixed up two cases in my previous mails a bit, which share a
lot of properties:

a) enabling IPv4-only users to reach IPv6-only services
b) enabling load balancing for multiple clusters

The (b) case has one name per cluster, each name serving multiple nodes
behind it. (b) is currently solved using round-robin DNS with a 60s TTL.
And yes, indeed all those nodes have the same host keys, and it needs one
public IPv4 address per cluster.

Both cases would profit significantly from the ability to dispatch by
name or intent, not only for us, but also for other organisations we work
with.

So I am fine with taking some time to find a good solution that can be
agreed on and waiting for all the ripple effects, because I literally see
the potential of making life easier for thousands of people.

Best regards,

Nico

--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
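A hypothetical zone fragment illustrating case (b) above: one name per
cluster with a 60 second TTL, round-robin AAAA records for the nodes
(which all carry the same host keys), and the single public IPv4 address
each cluster currently needs. All names and addresses are placeholders.

cluster1.example.com.  60  IN  AAAA  2001:db8::11
cluster1.example.com.  60  IN  AAAA  2001:db8::12
cluster1.example.com.  60  IN  AAAA  2001:db8::13
cluster1.example.com.  60  IN  A     192.0.2.10   ; single public IPv4 per cluster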