Forward HTTPS to another server

MichaelC

Ars Legatus Legionis
33,907
Subscriptor++
So I have a need to forward HTTPS to another server. I cannot run HTTPS on that end server for reasons.

So the traffic would look like this:

internet -> forwarder (with certificate) 1.1.1.1 -> http server 1.1.1.2 -> internet

or to maintain encryption? would it need to be:

internet -> forwarder (with certificate) 1.1.1.1 -> http server 1.1.1.2 -> forwarder -> internet

I will be using either CentOS or Ubuntu for the forwarder. The forwarder would host the certificate and would be on a different server than the HTTP server. I was thinking I should be able to do this with a Linux server running apache or nginx. But I'm not entirely sure how to go about setting this up. Is there a good guide out there somewhere?

The 1.1.1.2 server can have a certificate, it's just that it's old and does not support TLS 1.2. And it's not going to be easy to move. So until we can get the application rebuilt on a new server, I'm looking for a way to keep it usable.

thank you.
 
D

Deleted member 43669

Guest
btw, nginx can act as a reverse proxy too.

Reverse proxy is the term you should Google for. Good ol' Apache HTTPD can do it too (I use it a lot for that).

The cool feature in this era is automatic Let's Encrypt certificate handling. For Internet-exposed reverse proxies, it is possible to automagically configure and renew a certificate with basically zero effort. I think all the new kids (Caddy, Traefik, whatever) come with this by default. I'm quite sure there are plugins that add this feature to Apache HTTPD and Nginx at the very least, and maybe to HAProxy as well.
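As a hedged sketch, the zero-effort version with Caddy (v2 syntax; the hostname is a placeholder, and 1.1.1.2 is the old HTTP box from the original post) is about two lines:

```
example.com {
    # Terminate TLS here, pass plain HTTP to the legacy box
    reverse_proxy 1.1.1.2:80
}
```

Caddy obtains and renews the Let's Encrypt certificate for example.com on its own, as long as the DNS name resolves to the forwarder and ports 80/443 are reachable from the Internet.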

(If you want to do this for non-Internet-exposed reverse proxies, it's possible, but you need something fancy like a proper DNS setup and an API-friendly DNS server).
 

malor

Ars Legatus Legionis
16,093
This is one of the most frequent uses for nginx, btw. The syntax can be a little awkward, but there's tons of examples online.

The more advanced solutions are usually about providing a single front end to a server farm and maintaining connection state, so that any given client keeps talking to the same server until it's been gone awhile. This lets you add and remove machines from a pool behind the proxy easily, without clients knowing or caring.

All that complexity is probably more than you need. A simple 1:1 reverse proxy would be easily handled by either apache or nginx. If you're going to expand to multiple servers in the future, however, doing something like HAProxy would probably be a better idea.
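For illustration, a minimal 1:1 nginx reverse proxy for the setup in the original post might look like this sketch (the hostname and certificate paths are placeholders; 1.1.1.2 is the old HTTP-only server):

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder hostname

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://1.1.1.2:80;  # forward decrypted traffic to the old box
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

For the same-machine variant described in the edit below, you would point proxy_pass at http://127.0.0.1:80 instead.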

edit: note that you could probably run the proxy on the same machine, so that the proxy is listening on 443, and then connecting to 127.0.0.1:80 in the background. You'd have to configure the other webserver to listen on 127.0.0.1 as well as its existing IP.

Note, however, that this would leave port 80 exposed to the Internet, which might be a bad idea. You could have the existing server listen only on 127.0.0.1, and then use nginx to mirror that to public ip:443, to convert to an HTTPS-only site while barely changing the existing config, and without using another server.

A second server running the proxy is likely to be more up to date, however, and you can configure it with all best practices. That might not be possible on the old machine. That would give you a little more insulation against bad guys, as long as the existing server can't be directly contacted by the Internet at large.
 
D

Deleted member 43669

Guest
That's not entirely accurate.

If something is bound to 127.0.0.1, it should only be reachable from the same host, so it wouldn't be exposed to the Internet.

Although I'm not sure how useful it is nowadays, I'd recommend having a publicly accessible web server on :80 which just redirects everything to https.

I basically do this using Apache HTTPD to proxy http://internal.example.com/ as https://public.example.com/

Code:
<VirtualHost *:80>
  ServerName public.example.com
  DocumentRoot "/bogus"

  <Directory "/bogus">
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Require all granted
  </Directory>

  Redirect permanent / https://public.example.com/
</VirtualHost>

<VirtualHost *:443>
  ServerName public.example.com
  DocumentRoot "/bogus"

  <Directory "/bogus">
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Require all granted
  </Directory>

  ProxyRequests Off
  ProxyPreserveHost On
  ProxyPass        / http://internal.example.com/
  ProxyPassReverse / http://internal.example.com/

  SSLEngine on
  SSLCertificateFile      "/etc/letsencrypt/live/public.example.com/cert.pem"
  SSLCertificateKeyFile   "/etc/letsencrypt/live/public.example.com/privkey.pem"
  SSLCertificateChainFile "/etc/letsencrypt/live/public.example.com/chain.pem"
</VirtualHost>

With https://httpd.apache.org/docs/trunk/mod/mod_md.html, it would be even simpler.
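A hedged sketch of what the mod_md version might look like (domain names are placeholders; mod_md manages the certificate itself, so the SSLCertificateFile lines above go away):

```apache
# mod_md and mod_ssl must be loaded; MDomain tells httpd to
# obtain and renew this certificate via ACME (Let's Encrypt)
MDomain public.example.com
MDCertificateAgreement accepted
ServerAdmin admin@example.com

<VirtualHost *:443>
  ServerName public.example.com
  SSLEngine on

  ProxyRequests Off
  ProxyPreserveHost On
  ProxyPass        / http://internal.example.com/
  ProxyPassReverse / http://internal.example.com/
</VirtualHost>
```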
 

malor

Ars Legatus Legionis
16,093
That's not entirely accurate.

If something is bound to 127.0.0.1, it should only be reachable from the same host, so it wouldn't be exposed to the Internet.

The machine itself is exposed to the Internet, and all the other services the box runs are as well. A lot of the time, when you're stuck on really old software for one service, everything on the box is old, and putting any part of it on a publicly routable IP address is risky.

They might be fortunate enough to be able to run modern, patched software everywhere except for the actual web server code, in which case just binding it to 127.0.0.1 and running a local proxy would fix the problem fine.
 

Coleman

Ars Legatus Legionis
16,790
Subscriptor
I used Traefik for exactly that; https://docs.traefik.io/

I'm also a Traefik user for reverse proxying. I really like the built-in, automatic Let's Encrypt certificate support. Though you can run it standalone, as a solution it really is intended for a Docker deployment, sitting in front of other stuff being hosted on Docker. It's so great to be able to just create a DNS record, then spin up the container assigned to that DNS name, and Traefik handles the rest of the cert stuff. It can even do the DNS part, if you are on a supported registrar (I'm not).
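A sketch of that pattern as a docker-compose file (domain, email, and service names are placeholders; assumes Traefik v2 with the TLS-ALPN challenge):

```yaml
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  app:
    image: nginx  # stand-in for the service being fronted
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`app.example.com`)
      - traefik.http.routers.app.entrypoints=websecure
      - traefik.http.routers.app.tls.certresolver=le
```

Once this is up, adding a new service is just a new container with its own router labels; Traefik picks it up and requests the cert.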
 

doornail

Ars Scholae Palatinae
1,238
I used Traefik for exactly that; https://docs.traefik.io/

I'm also a Traefik user for reverse proxying. I really like the built-in, automatic Let's Encrypt certificate support. Though you can run it standalone, as a solution it really is intended for a Docker deployment, sitting in front of other stuff being hosted on Docker. It's so great to be able to just create a DNS record, then spin up the container assigned to that DNS name, and Traefik handles the rest of the cert stuff. It can even do the DNS part, if you are on a supported registrar (I'm not).

Agreed. The cert handling is fantastic. Running it stand-alone is not. I've sat down a couple times to migrate from 1.0 to 2.0 and it's kicking my butt. To be honest, the new configuration and greater abstraction might not be for me. The documentation is frustrating because there are so many options that are configurable in radically different ways that everything is kinda breezed over and 2.0 hasn't been out long enough for third-party tutorials to fill in the gap.

I'm looking at switching over to HAProxy + Certbot.
 

Coleman

Ars Legatus Legionis
16,790
Subscriptor
I've sat down a couple times to migrate from 1.0 to 2.0 and it's kicking my butt. To be honest, the new configuration and greater abstraction might not be for me. The documentation is frustrating because there are so many options that are configurable in radically different ways that everything is kinda breezed over and 2.0 hasn't been out long enough for third-party tutorials to fill in the gap.

I'm looking at switching over to HAProxy + Certbot.

Yeah, it took me a few days to get my setup upgraded from v1 to v2. Docker was really nice for that though, letting me switch between the two versions easily, so I didn't have to just be offline forever until I migrated successfully.

I don't know that I think of it as any more abstract than v1 was, but they conceptually changed around how they approached things, and inserted a third piece in the pipeline as well. It makes way more sense and makes a lot more things easier once you get over to it, but you are best off if you throw away anything you knew from v1 as a frame of reference. It's better to treat v2 as a completely different product that does the same thing.

And yeah, their documentation is kinda... unique? It's very strangely structured, and inconsistent in how it uses examples; sometimes the example _is_ the documentation on a thing. Once I figured that out, that helped me get past a couple of things.

It's almost worth it IMO just for the dashboard you get in v2. It's a bit nicer than the v1 dash, and is helpful for finding where you are having flow failures when adding a new service, or a problem cropping up in an existing one.

The subreddit is pretty helpful (I've been sporadically active there), but I found their official forums to not be really useful. I've still got one mysterious problem that I never figured out, and no one apparently had any idea why it happened. I ended up just reverting to my previous solution, which honestly was less complicated, so it is probably better in the big picture.

If you wanna drop me a PM, I can try and help you convert to v2, though if you are running standalone you may have things going on I'm not going to know anything about. If you are running Traefik standalone, it wouldn't surprise me if HAProxy + Certbot is a better/easier solution.
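For reference, a minimal HAProxy TLS-termination sketch of the kind being discussed (paths, names, and the 1.1.1.2 backend address are placeholders drawn from the original post):

```
frontend https_in
    bind :443 ssl crt /etc/haproxy/certs/example.com.pem
    http-request set-header X-Forwarded-Proto https
    default_backend legacy_http

backend legacy_http
    server old_box 1.1.1.2:80 check
```

One wrinkle: HAProxy wants the certificate and private key concatenated into a single PEM file, so a Certbot deploy hook typically has to cat fullchain.pem and privkey.pem together into that crt path on each renewal.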
 

drogin

Ars Tribunus Angusticlavius
7,973
Subscriptor++
Just wanted to chime in that if you're in a cloud environment, you can also just use something like an Application Load Balancer (in AWS) to handle the SSL termination, and just have the "target group" accepting traffic on port 80. This works fine and is simpler, unless you are really using nginx for complex routing or mutation of requests/responses.

Pretty high-scale way of doing this is:

CDN (e.g. CloudFlare) (443) -> Application Load Balancer (443) -> Target Group (80)

This lets the SSL termination happen as far out on the edge (which scales super wide) as you can. Failing that, the ALB is going to scale dynamically based on traffic.
 

malor

Ars Legatus Legionis
16,093
Just wanted to chime in that if you're in a cloud environment, you can also just use something like an Application Load Balancer (in AWS) to handle the SSL termination, and just have the "target group" accepting traffic on port 80. This works fine and is simpler, unless you are really using nginx for complex routing or mutation of requests/responses.

Pretty high-scale way of doing this is:

CDN (e.g. CloudFlare) (443) -> Application Load Balancer (443) -> Target Group (80)

This lets the SSL termination happen as far out on the edge (which scales super wide) as you can. Failing that, the ALB is going to scale dynamically based on traffic.

Is there some reason you can't also do TLS internally? It seems to me that having an unencrypted link between the client and the server defeats the purpose of having encryption in the first place, no?
 
SSL/TLS makes traffic inspection harder. Presumably, once the traffic is already on your network, you are responsible for its provenance anyway, so there's no issue if you inspect it. You may want to inspect it for a huge variety of reasons, such as IDS or troubleshooting runaway processes or whatever. SSL outside your network protects your users on the path over the Internet to your network. A lot of internal traffic isn't user data anyway; it's traffic between your various services for stuff like business logic.

This philosophy would apply whether your network is virtual on AWS/Azure/GCP or physical on-prem.
 

drogin

Ars Tribunus Angusticlavius
7,973
Subscriptor++
Just wanted to chime in that if you're in a cloud environment, you can also just use something like an Application Load Balancer (in AWS) to handle the SSL termination, and just have the "target group" accepting traffic on port 80. This works fine and is simpler, unless you are really using nginx for complex routing or mutation of requests/responses.

Pretty high-scale way of doing this is:

CDN (e.g. CloudFlare) (443) -> Application Load Balancer (443) -> Target Group (80)

This lets the SSL termination happen as far out on the edge (which scales super wide) as you can. Failing that, the ALB is going to scale dynamically based on traffic.

Is there some reason you can't also do TLS internally? It seems to me that having an unencrypted link between the client and the server defeats the purpose of having encryption in the first place, no?

Well, it won't be unencrypted on a public network.

You typically have your Load Balancer exposed publicly, but you don't allow anything outside of (at least) your network to talk to the actual servers, which are unencrypted.

One thing is that it does take time and CPU overhead to encrypt/decrypt traffic. So you are saving some cycles by not doing that between links you know to be private anyway.

So for example:

CloudFlare/CDN (obviously on a public network) -> Load Balancer (exposing 443 on a public IP) -> Servers (on a private network the LB can reach, exposing port 80).

In AWS, I use what's called a Security Group so that the Load Balancer can talk to the servers. Nothing else can really talk to them.