The goal is to host 4 or 5 websites for friends. All low traffic, so a single box should be fine. 16 cores, 32G RAM, 1 NIC, 1 public IP. Hostname: prox.

Each site gets a VM, created manually (they all get Debian). Add friends' ssh keys and let them ssh in and do whatever they want in their VM, and be able to ansible over ssh like ansible does.

Hostnames vm1, vm2... Friends all manage their own domain name registration / DNS, and point their www's at my IP.

I would like to keep ports all standard: 22 for ssh, 80/443 for http/s, etc., and route to the VM based on hostname. ssh user@prox gets the host, ssh user@vm1 gets vm1. curl http://vm1 gets vm1.

There are lots of ways to do this; I'm trying to work out a config that makes it easy on their end. Telling them all to use ProxyJump isn't out of the question, but I'm hoping there are other options.

I don't mind a separate solution for ssh and http. Like for http I can run an nginx on the public IP with:

    server_name vm1;
    location / { proxy_pass http://10.0.0.1;

-- 
Carl K
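A fleshed-out version of that nginx fragment might look like the sketch below, one server block per VM. The public hostname vm1.example.com and the internal address 10.0.0.1 are placeholders for whatever the friends' DNS and the internal network actually use:

```nginx
# Sketch only: name-based routing of HTTP to one VM.
# Repeat the server block per VM.
server {
    listen 80;
    server_name vm1.example.com;

    location / {
        proxy_pass http://10.0.0.1;
        # Pass the original Host and client address through to the VM.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

For HTTPS you would either terminate TLS on prox (which means managing the friends' certificates) or pass the TLS stream through to the VM based on SNI.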
On Wed., 21 Sept 2022 at 23:08, Carl Karsten <carl at nextdayvideo.com> wrote:
> single box should be fine. 16 cores, 32g ram, 1 nic, 1 public IP.

IPv6 exists.

Best
Martin
Of course, the ideal solution would be to use IPv6 :-) Given that's not ubiquitous, you still need some sort of gateway on IPv4.

What I do is to use a SOCKS5 server, configured to allow inbound unauthenticated access (on IPv4) and make outbound connections to local network port 22 only (on IPv6). Then on the ssh client side:

    Host *.yourdomain.com
        ProxyCommand sh -c 'nc %h %p || ncat --proxy-type socks5 --proxy 192.0.2.2:22080 %h %p'

Notes:

* 192.0.2.2 is the machine where the SOCKS5 server is running, listening on port 22080.
* The "nc %h %p ||" bit is so that it tries a direct connection first - assuming you have an AAAA record in the DNS - and falls back to using SOCKS only if that fails to connect. You can remove this if you *only* support access via the SOCKS5 proxy.

The SOCKS5 server I'm using is danted, and I'll paste the sanitised danted.conf below. It's back-to-front compared to normal deployments: the "internal" interface is the outside public IP, and the "external" interface is the one which originates connections to your local network.

I've found this works pretty well. The main downside is that you lose visibility of the original source IPv4 address in your sshd logs. One way I've thought of handling that is to have the SOCKS5 server bind to a /96 block of IPv6 addresses, and embed the client's IPv4 address in the low 32 bits - but that's more code hacking than I had time for.

I can think of variations on this theme. For example, you could choose to require authentication on the SOCKS5 connection, and each user could have different credentials that allow them access only to "their" target server. But in my case, I trust sshd: I allow port 22 from the world directly via IPv6 anyway, so I have no problem also allowing inbound SOCKS5 proxying to port 22.

HTH,

Brian.

===== 8< ====

logoutput: stderr

# Accept connections from the outside interface. Use a non-default
# port to be slightly less susceptible to port scanning
internal.protocol: ipv4
internal: 192.0.2.2 port = 22080

# "Outgoing" connections use IPv6 only
external.protocol: ipv6
external: eth0

# methods for client-rules (on initial connection)
clientmethod: none
# methods for socks-rules (during negotiation)
socksmethod: username none

user.privileged: root
user.notprivileged: proxy
user.libwrap: proxy

# The rules prefixed with "client" are checked first and say who is allowed
# and who is not allowed to speak/connect to the server
#
# The "to:" in the "client" context gives the address the connection
# is accepted on, i.e. the address the socks server is listening on, or
# just "0.0.0.0/0" for any address the server is listening on.
client pass {
        from: 0.0.0.0/0 port 1-65535 to: 0.0.0.0/0
}

# you probably don't want people connecting to loopback addresses,
# who knows what could happen then.
socks block {
        from: 0.0.0.0/0 to: 127.0.0.0/8
        log: connect error
}

# unless you need it, you could block any bind requests.
socks block {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bind
        log: connect error
}

# Allow inbound by hostname to home network SSH (unauthenticated)
socks pass {
        from: 0.0.0.0/0 to: .yourdomain.com port=22
        protocol: tcp udp
        #socksmethod: username none
        #user: proxy
}

# Allow inbound by IP to home network SSH (unauthenticated)
socks pass {
        from: 0.0.0.0/0 to: 2001:db8::/64 port=22
        protocol: tcp udp
        #socksmethod: username none
        #user: proxy
}

#socks pass {
#        from: 0.0.0.0/0 to: 10.0.0.0/8 port=22
#        protocol: tcp udp
#        #socksmethod: username none
#        #user: proxy
#}

# last line, block everyone else. This is the default but if you provide
# one yourself you can specify your own logging/actions
socks block {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        log: connect error
}
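Before wiring the ProxyCommand into ssh_config, the proxy path can be sanity-checked by hand with the same ncat invocation, using the example addresses from the config above (vm1.yourdomain.com is a placeholder for a real target name):

```
ncat --proxy-type socks5 --proxy 192.0.2.2:22080 vm1.yourdomain.com 22
```

Seeing the target's "SSH-2.0-..." banner confirms that danted is relaying connections through to port 22 on the inside network.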
On Thu, 22 Sept 2022 at 07:02, Carl Karsten <carl at nextdayvideo.com> wrote:
> Telling them all to use ProxyJump isn't out of the question, but I'm
> hoping there are other options.

ProxyJump is likely the easiest solution. It requires an account on the host, but it can be authenticated by your friends' keys and restricted to only allow port forwarding. It does not require any additional software beyond OpenSSH.

> I don't mind a separate solution for ssh and http. like for http I
> can run an nginx on the public IP with
>
> server_name vm1;
> location / { proxy_pass http://10.0.0.1;

Other possible solutions:

- configure nginx on port 80 to allow an HTTP CONNECT to the VMs on port 22, then use a ProxyCommand like netcat that supports HTTP CONNECT. (Make sure you *only* allow connections to your VMs, lest you become an open proxy.)
- maybe you could cook up a config using an SSH+SSL demultiplexer like sslh, although from a quick glance at the man page it's not obvious whether that would even be possible.

-- 
Darren Tucker (dtucker at dtucker.net)
GPG key 11EAA6FA / A86E 3E07 5B19 5880 E860 37F4 9357 ECEF 11EA A6FA (new)
    Good judgement comes with experience. Unfortunately, the experience
usually comes from bad judgement.
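The restricted ProxyJump account could be sketched like this. The account name "jump", the host names, and the nologin path are assumptions, not anything from the thread:

```
# On prox, in /etc/ssh/sshd_config: an account that can only forward.
Match User jump
    AllowTcpForwarding yes
    PermitTTY no
    X11Forwarding no
    AllowAgentForwarding no
    ForceCommand /usr/sbin/nologin

# On a friend's machine, in ~/.ssh/config:
Host vm1
    HostName vm1
    ProxyJump jump@prox.example.com
```

With that in place, plain `ssh user@vm1` and Ansible over ssh both work unchanged, since ssh and Ansible honour ~/.ssh/config. Each friend's public key goes into the jump account's authorized_keys, optionally with the `restrict,port-forwarding` options for defence in depth.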
Hi,

I'm not sure if I understand your problem correctly, but if so I would recommend this:

From that one public IP you can use a tool called sniproxy (it's a pretty awesome tool, I used it for quite a long time) and that would enable you to send even https traffic to the correct hosts based on hostnames.

Unfortunately I don't have a similar solution for ssh (I think you could make some kind of weird server with Paramiko, but it would not be trivial).

But I would suggest you do something different. If I were your clients, I would just install the Cloudflare client on their VMs and use a Cloudflare Tunnel for both http/s and ssh; it works great. (If you want to hear more, or want help with setting it up, feel free to ping me.) This is the solution that I'm currently using on some projects.

-- 
Kind regards,
b.

On 9/21/22 22:59, Carl Karsten wrote:
> The goal is to host 4 or 5 websites for friends. all low traffic, so a
> single box should be fine. 16 cores, 32g ram, 1 nic, 1 public IP.
> [...]
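For reference, the sniproxy side of that setup might look roughly like the sketch below. The hostnames and internal addresses are invented, and the exact directive syntax should be checked against sniproxy's own documentation:

```
# Sketch: route incoming TLS connections by SNI name, without
# terminating TLS, so each friend keeps their own certificates.
listener 0.0.0.0 443 {
    proto tls
    table https_hosts
}

table https_hosts {
    vm1\.example\.com 10.0.0.1:443
    vm2\.example\.com 10.0.0.2:443
}
```

Because the TLS stream is passed through rather than terminated, prox never needs to hold the friends' private keys - each VM runs its own web server and certificates as if it were directly on the Internet.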
Hi Carl.

On 21.09.22 22:59, Carl Karsten wrote:
> I would like to keep ports all standard: 22 for ssh, 80/443 for
> http/s, etc. and route to the VM based on hostname.
> [...]

Another option could be to use `openssl s_client ...` with the `ProxyCommand`:

```
ssh -o ProxyCommand="openssl s_client -quiet -connect 172.16.0.10:2222 -servername 192.168.0.201" dummyName1
```

Some more good examples, with routing via HAProxy, can be found in this blog post:

https://www.haproxy.com/blog/route-ssh-connections-with-haproxy/

Hth
Alex
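The HAProxy side of that blog's approach might look roughly like the following sketch. The addresses, the choice of port 2222, and the certificate path are assumptions; the client connects with an `openssl s_client` ProxyCommand as shown in the blog, and HAProxy picks a backend from the SNI name the client sent:

```
# Sketch: terminate TLS on the gateway and route the inner SSH
# stream to a VM chosen by the TLS SNI servername.
frontend ssh_tls_in
    bind :2222 ssl crt /etc/haproxy/gateway.pem
    mode tcp
    use_backend vm1_ssh if { ssl_fc_sni -i vm1.example.com }
    use_backend vm2_ssh if { ssl_fc_sni -i vm2.example.com }

backend vm1_ssh
    mode tcp
    server vm1 10.0.0.1:22

backend vm2_ssh
    mode tcp
    server vm2 10.0.0.2:22
```

The trade-off is that every friend needs the TLS-wrapping ProxyCommand in their ssh_config, so this is no simpler on the client side than ProxyJump.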
Hi,

On 21/09/2022 at 22:59, Carl Karsten wrote:
> There are lots of ways to do this, I'm trying to work out a config
> that makes it easy on their end.
> [...]

You can use sshproxy, which I'm maintaining: https://github.com/cea-hpc/sshproxy

With sshproxy's routing system, you can proxy each user to their respective VM, without them having a shell on the gateway.

-- 
Cyril
On 21.09.22 22:59, Carl Karsten wrote:
> I would like to keep ports all standard: 22 for ssh, 80/443 for
> http/s, etc. and route to the VM based on hostname.

Unlike the Host: header in HTTP (since 1.1) and the SNI extension of TLS, the SSH protocol AFAICT does not provide a means for the client to tell the server the original/requested server name, much less doing so *before* the server starts talking (and thus effectively identifies itself). Hence, this can only be done by opaquely wrapping SSH in another protocol layer, at which point you may make certain (non-OpenSSH) client implementations difficult or impossible to use.

On the other hand, while sticking to the standard ports has advantages with web servers (the ability to use https://www.ssllabs.com/ssltest/ , or an ACME client with HTTP challenge-response against Let's Encrypt, for example), nonstandard ports for SSH are more common, if not even recommended, for Internet-facing systems (less noise in the logfiles, at least).

Thus, my recommendation would be to give each VM its own port (which AFAIK all the usual SSH clients support), rather than to try to come up with some in-band trickery and then find out how portable it is IRL. :-3

Regards,
-- 
Jochen Bern
Systemingenieur

Binect GmbH
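The port-per-VM variant needs nothing but NAT on prox and a Host stanza on each friend's side. The ports 2201/2202 and the internal addresses below are made-up examples:

```
# On prox: one external port per VM, DNAT'd to that VM's sshd.
iptables -t nat -A PREROUTING -p tcp --dport 2201 -j DNAT --to-destination 10.0.0.1:22
iptables -t nat -A PREROUTING -p tcp --dport 2202 -j DNAT --to-destination 10.0.0.2:22

# On a friend's machine, in ~/.ssh/config:
Host vm1
    HostName prox.example.com
    Port 2201
```

Plain `ssh vm1` then reaches the right VM directly, and 80/443 can still be handled separately by a name-based HTTP reverse proxy as in the original post.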