
Web Served: How to make your site all-HTTPS, all the time, for everyone

Adding in SSL termination and HSTS compliance because it’s the right thing to do.

Lee Hutchinson

Welcome to a supplemental edition of our “Web Served” series, a DIY guide on tackling the challenges of setting up and running a Web server for fun. It’s been a while since we last published an entry—so long in fact, that at some point very soon, I’ll be going back through the series and bringing everything up to date with current versions and commands. But after spending the last weekend tinkering with shifting my personal site over to all-HTTPS, it was just too much fun not to share.

Note that if you’re not the kind of person who thinks screwing around with the command line is fun, this probably isn’t a guide you’re going to be interested in.

Encrypt all the things

The unencrypted Web is on the way out, and that’s a good thing. We’re still making the switch here at Ars—subscriptors can use HTTPS today, but we’re still working out the mixed content kinks for everyone else (the main holdup is handling the ad networks. Since subscriptors don’t see ads, there’s no holdup there!). But if you’ve followed along with the previous Web Served pieces you’ve probably got a shiny Nginx instance happily serving up pages and an SSL/TLS certificate so that privacy-minded visitors have the option of using HTTPS on your site.

In this guide, we’re going to take things a step further and make everything HTTPS for everyone. At the same time we’re going to start participating in HSTS—that’s “HTTP Strict Transport Security,” a way to ensure that your site communicates to your visitors that not only do you support HTTPS, but that you insist on it.

How will we accomplish this? There are a lot of potential ways, but the way I did it with my personal site (and the way I’m going to describe herein) is by employing “HTTPS termination.” In other words, we’re going to stick a reverse-proxy application in front of the Web server to handle the HTTPS part. This winds up being a lot simpler and more flexible than trying to do all-HTTPS with just your Web server’s redirection abilities. So while it may seem a little counterintuitive that adding another app to the stack is the simpler way, trust us: it really, really is.

Dat stack

To start, we’re going to make the same assumptions about software that we’ve made in all the previous Web Served pieces: this is targeted at admins running a Linux-based system with Nginx as the Web server application. You might also have any number of components or applications in line behind Nginx, like php-fpm or WordPress or wilder things. That’s OK; they’ll all benefit.

At home, I’ve also got Varnish Cache sitting in front of my Nginx instance. Varnish is a fast Web caching application that has saved my poor little personal site from some crazy reddit- and Ars-driven traffic storms in the past, but most caching software won’t work with HTTPS traffic. Because the HTTPS negotiation happens between the end-user and Nginx—which sits below Varnish in the stack—all Varnish sees of HTTPS traffic is the encrypted side. You can’t cache what looks like an unending string of unique, encrypted nonsense.

The thing we're bolting on in front of every other application in our Web stack is a little application called HAProxy. HAProxy is best known as a powerful load balancer—you stick it in front of your website and use it to parcel out requests to a bunch of physical Web servers. A bunch of enormous sites on the Internet make heavy use of HAProxy's ability to spread out and manage traffic (like reddit, for example), but as of version 1.5, HAProxy has also gained the ability to do SSL termination. Now it can negotiate and establish HTTPS connections with remote clients on behalf of the actual Web server.

This is how I ended up doing this at home, with HAProxy on a separate server from my main Nginx instance. However, you can do this all on a single server if you like.

That’s the key: we’re going to install HAProxy, feed it our SSL/TLS certificates, tell it to redirect all HTTP requests to HTTPS, and then point it at our actual Web server as its back-end. Keeping Varnish in the mix is what motivated me to do this on my personal site, since this sidesteps almost all of the problems with caching encrypted content. However, even if you don’t have a cache layer (or if you’re using Nginx’s built-in static asset caching abilities, which are less flexible but also easier to deal with than Varnish’s config language), this guide will still work for you without any problems.

So—let’s begin.

Installing HAProxy

I elected to put HAProxy on its own physical server for simplicity's sake, but there's no reason you couldn't do this all on a single box if that's all you've got to work with. Ports 80 and 443 on the HAProxy server will be exposed to the Internet and get both HTTP and HTTPS traffic. All HTTP requests will be answered with a redirect response to the same URL but with an HTTPS scheme, and then requests will be forwarded to the back-end Web server (your Nginx instance) as plain HTTP. The HAProxy instance handles all of the SSL/TLS connections and Nginx sees everything as plain ol' HTTP.

The HAProxy packages included with Ubuntu 14.04 LTS aren’t anywhere remotely near current, and so we need to add the following PPA before we get going:

sudo add-apt-repository ppa:vbernat/haproxy-1.5

This PPA is provided by the Debian HAProxy team, so it’s OK to trust (the PPA here will get you the latest version, but there’s also an option for adding a backported stable repo if you’d prefer). After adding the PPA, update your sources and install haproxy:

sudo aptitude update
sudo aptitude install haproxy

The main HAProxy configuration file lives at /etc/haproxy/haproxy.cfg, so pop that open for editing. Here’s the configuration I’m using:

global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin
	stats timeout 30s
	user haproxy
	group haproxy
	daemon

	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private

	ssl-default-bind-ciphers  EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4
	ssl-default-bind-options no-sslv3 no-tlsv10
	tune.ssl.default-dh-param 4096

defaults
	log		global
	mode http
	option	httplog
	option	dontlognull
	option	forwardfor
	option	http-server-close
	timeout connect 5000
	timeout client  50000
	timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

frontend yourservername
	bind *:80
	bind *:443 ssl crt /etc/ssl/private/cert1.pem crt /etc/ssl/private/cert2.pem
	acl secure dst_port eq 443
	redirect scheme https if !{ ssl_fc }
	rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubDomains;\ preload
	rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure
	default_backend webservername

backend webservername
	http-request set-header X-Forwarded-Port %[dst_port]
	http-request add-header X-Forwarded-Proto https if { ssl_fc }
	server webservername 192.168.1.50:80

listen stats *:9999
	stats enable
	stats uri /

Nothing like a gigantic code block to get the blood pumping! Let's break this down. If you need a vim syntax highlighting config for HAProxy, a good one is easy to find online.

Global

We’re leaving the first chunk under the global section alone. This sets up HAProxy’s logging level and stats reporting config, along with the directory where HAProxy will be chroot’ed when it starts and the user/group for HAProxy to run under while daemonized.

We do care about the next bits, which tell HAProxy where our system's CA root store is and where your SSL/TLS certificates will be located. You might need to change ca-base and crt-base to match your system's root and cert store.

The ssl-default-bind-ciphers line determines which SSL/TLS ciphers HAProxy will use when establishing secure connections with clients. I’m using the recommended list from Qualys/SSL Labs for perfect forward secrecy across most browser/OS combos. I’ve also edited the ssl-default-bind-options line to disallow SSLv3 and TLS1.0 ciphers, to help protect against vulnerabilities and security holes. The last line in the global section, tune.ssl.default-dh-param, tells HAProxy to use a max of 4096 bits for its ephemeral Diffie-Hellman parameter when doing a DHE key exchange.

Defaults

We're leaving the first four options in the defaults section alone. They establish what HAProxy should and shouldn't log, and the mode directive tells HAProxy that in all cases we want it operating in HTTP (layer 7) mode rather than TCP (layer 4) mode. For this example we only care about HTTP/HTTPS requests; if you wanted to use HAProxy to load balance things other than HTTP/HTTPS, you'd switch the mode directive to tcp.
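For instance, if you someday wanted HAProxy to balance raw TCP traffic for something that isn't a Web server at all, a purely hypothetical fragment might look like this (the name, addresses, and ports here are made up for illustration):

```
listen mysql-lb
	mode tcp
	bind *:3306
	server db1 192.168.1.60:3306 check
	server db2 192.168.1.61:3306 check
```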

We're going to add a couple of things, though: option forwardfor and option http-server-close. We're using HAProxy as a reverse-proxy (the SSL termination is a subset of that functionality), and so HAProxy needs to be able to tell the upstream servers which client IP address each request originally came from. Otherwise, all the traffic to the Web server looks like it's coming from the HAProxy server. Rather than fiddling with manual header editing, option forwardfor tells HAProxy that it's operating as a reverse proxy and that it needs to add the appropriate X-Forwarded-For header to requests it passes on to the upstream server.

The http-server-close option is added for performance; it lets HAProxy be smart about either closing or reusing HTTP sessions as needed, while still supporting more advanced HTTP trickery like WebSockets.

Certificates

Now we get to the meat. The frontend section tells HAProxy what kind of traffic it should be listening for and where to send that traffic.

We're binding HAProxy to both ports 80 and 443, having it listen for incoming HTTP traffic on port 80 and incoming HTTPS traffic on port 443. For the HTTPS side, we're feeding HAProxy two different SSL/TLS certificates (I'm actually using three in production). HAProxy uses Server Name Indication (SNI) to match the hostname on incoming HTTPS requests against the right SSL/TLS certificate, so it knows which cert to use based on what the client is asking for. My own Web server hosts three main sites and uses three different wildcard certificates (*.bigdinosaur.org, *.chroniclesofgeorge.com, and *.bigsaur.us) and HAProxy has no problems picking out the right one for each request.

One thing you might have to do is concatenate your certificate and its decrypted private key file into a single .pem file, since that’s the format HAProxy supports. You can do this from the command line:

cat your-decrypted-ssl-key.key your-ssl-cert.crt > your-ssl-cert.pem

Note that as with your decrypted SSL key, you should ensure the concatenated .pem file is owned by the root user and group and that its permissions are set so that only root can read it:

sudo chown root:root your-ssl-cert.pem
sudo chmod 400 your-ssl-cert.pem
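If you want to double-check the concatenated file before handing it to HAProxy, you can compare the modulus of the certificate against the modulus of the key (this works for RSA keys, which is what most certs use). The two digests printed should be identical if the halves actually belong together:

```shell
# Both commands read the same combined .pem; openssl picks out the
# certificate block for the first and the key block for the second.
openssl x509 -noout -modulus -in your-ssl-cert.pem | openssl md5
openssl rsa  -noout -modulus -in your-ssl-cert.pem | openssl md5
```

If the two sums don't match, you've concatenated a key and a cert that don't go together.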

HTTP to HTTPS, with bonus HSTS

(Hat tips to rnewson and Nathan at Stupendous.net for this section’s code!)

Next, we very quickly define an ACL, or an access control list. This doesn't quite work the same way as an ACL in, say, Windows. With HAProxy, an ACL is a named test that matches traffic against certain criteria.

acl secure dst_port eq 443

We’ve created an ACL called secure that matches anything headed for destination TCP port 443. We’ll use this ACL in a minute to apply a few actions only to HTTPS traffic.

The next line in the config is where we’re actually performing the redirect of all traffic from HTTP to HTTPS. This one’s important.

redirect scheme https if !{ ssl_fc }

This tells HAProxy that if an incoming request is not HTTPS, it should answer with a redirect to the same resource with the HTTPS scheme instead of HTTP.
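One thing worth knowing: HAProxy's redirect directive sends a 302 by default. If you'd rather hand out a permanent 301 (arguably the better choice once you're confident in your all-HTTPS setup), you can say so explicitly in the same directive:

```
redirect scheme https code 301 if !{ ssl_fc }
```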

Remember the bit waaaaay back in the intro where I said we’d be turning on HTTP Strict Transport Security? Here’s where we actually make that happen, with a header sent to all users’ browsers telling the browsers that they should use only HTTPS for your website and for all other sites on the same parent domain:

rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubDomains;\ preload

Here we use the rspadd config option to append the HSTS header to every HTTP response. The contents of the header follow the HSTS header guidelines (and note that the spaces have to be escaped with backslashes or things will break!). This header tells any browser that supports HSTS that the site and all its subdomains prefer to be spoken to via HTTPS, and that this preference will stay in effect for one year (31,536,000 seconds). Further, appending the preload directive signals to the folks at Google that you're consenting to have your site added to their big list of HSTS-compliant websites—a list that's shared so that other browser developers can use it.

Using HSTS is the right thing to do. An end-to-end encrypted Web is a good thing, and responsible Web server administrators should do everything they can to broaden the use of SSL/TLS. If you intend to use HSTS, you should also go submit your domain to the HSTS preload list once you’re done getting everything here set up.

Speaking of security: because we’re doing this big HTTP-to-HTTPS redirect, we need to make sure that any cookies set by the Web server are appropriately modified to include a “secure” attribute (because as far as your back-end Web server is concerned, you’re only using HTTP). To do this, we use the rsprep option to substitute in that attribute based on a regex:

rsprep ^Set-Cookie:\ (.*) Set-Cookie:\ \1;\ Secure if secure

Note the if secure at the end. This ensures that we’re only modifying cookies that are actually passing back to the user over HTTPS (secure is the ACL we built a few lines back to match HTTPS traffic). Everything should be HTTPS, so this shouldn’t be necessary, but there’s nothing wrong with wearing suspenders with your belt. Actually, that’s not true—don’t do that in real life. But in your security configuration it’s fine.
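If regex-based header munging feels opaque, the rewrite that rsprep performs is essentially the same search-and-replace you could do with sed. Here's a quick illustration using a made-up cookie value:

```shell
# Mimic the rsprep rule: capture the whole Set-Cookie value and
# append "; Secure" to it. The cookie itself is hypothetical.
echo 'Set-Cookie: session=abc123; Path=/' \
  | sed -E 's/^Set-Cookie: (.*)$/Set-Cookie: \1; Secure/'
# prints: Set-Cookie: session=abc123; Path=/; Secure
```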

The last line in the section tells HAProxy where to send traffic. This is the default_backend line. If you had multiple web servers, you could build configs to send traffic to them for different reasons—like, if you had an app server, you could send app traffic to it while sending non-app traffic to other web servers. Or, if you had multiple backends and wanted to load balance between them, you could do that here as well.

Our needs are simple—one back-end server. You can name it whatever you’d like.

The back end

Because we’re only using a single backend server, this section is short. HAProxy’s role as an SSL/TLS terminator means we need to add a couple of headers so that the Web server understands that the actual client communication is happening over HTTPS, even if all it sees is HTTP:

http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }

The first sets the X-Forwarded-Port header so that Varnish and Nginx know that the client originally connected to port 443 (this can be important for returned responses, so that web applications know not to generate HTTP URLs back to clients and instead generate URLs with the proper scheme).

The second reinforces the first, setting X-Forwarded-Proto to https if the request came in over HTTPS—again, so that the actual Web server and its applications know what's going on and don't generate the wrong kinds of responses.

Then, we tell HAProxy where to actually send the requests—here you put the name, IP address, and port for your Web server. For me, this is a separate server, so I’m putting in a different IP address and port 80. If you’re doing this all on one box, you’d put localhost and then the port number where Varnish is listening (or just Nginx, if you’re not using Varnish).
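If you're doing the single-box version, the backend might instead look like this sketch (8080 here is just an assumed port for Varnish or Nginx to listen on; use whatever you've actually configured):

```
backend webservername
	http-request set-header X-Forwarded-Port %[dst_port]
	http-request add-header X-Forwarded-Proto https if { ssl_fc }
	server webservername 127.0.0.1:8080
```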

Stats if you like

The last section simply tells HAProxy to display its internal status page at the port you specify. You can include basic user name/password authentication by adding stats auth username:password below what’s there, or specify a different URL other than / if you wish.

Note that if you're using HSTS with the includeSubDomains option, you might not be able to reach the stats page by name, since your Web browser will attempt to load the HTTPS version of the page and HAProxy only serves the page out via HTTP. You can work around this by accessing the stats page by IP address instead (entering http://192.168.x.x:9999 in your browser's address bar instead of trying by your Web server's name).

Clean up time

Save your config file, but don’t restart the HAProxy service quite yet. If you’re running HAProxy on the same box as Varnish or Nginx and one of those two is already listening on ports 80 and 443, you’ll need to do some clean up first.

For Varnish, if you have it, that means editing /etc/default/varnish and changing the DAEMON_OPTS section. The listen port is set with the -a flag, so change that from -a :80 to whatever other port you’d like to use (it doesn’t really matter what, so long as it’s not in use and you’re pointing to it in the server line from your HAProxy backend config).
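As a sketch, the relevant bit of /etc/default/varnish might end up looking something like this (8080 is an arbitrary choice, and the other flags shown are just typical defaults; keep whatever your existing file already has):

```
DAEMON_OPTS="-a :8080 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
```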

For Nginx, we might need to do some more serious surgery. Because we'll no longer be serving HTTPS requests with Nginx at all, you can remove every one of your HTTPS server stanzas from every one of your Nginx virtual host files. This might feel, as Morpheus says, a little weird, but HAProxy is taking care of everything now. If you're not using Varnish, and Nginx is bound to port 80, you'll need to change all of those bindings as well.
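Once the HTTPS stanzas are gone, a stripped-down virtual host might be as simple as this sketch (the name, path, and port are placeholders; the port just has to match whatever HAProxy or Varnish is pointing at):

```
server {
	listen 8080;
	server_name yoursite.whatever;
	root /var/www/yoursite;
	index index.html;
}
```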

And that ought to do it. There aren't any Nginx- or Varnish-specific extras you need to add—though if you're not already doing it, you might want to set the following two lines in your main nginx.conf file, just to make sure Nginx knows to use the X-Forwarded-For header to replace the IP addresses on requests that come in from 127.0.0.1:

set_real_ip_from 127.0.0.1;
real_ip_header X-Forwarded-For;

This will only work if you’re using a version of Nginx built with ngx_http_realip_module, so check the output of nginx -V before adding this.

Restart the HAProxy service (service haproxy restart) and it will begin listening for requests.

Verifying it works, with or without caching

If you don’t have a caching layer, then verifying everything works is pretty simple: hit up your site with an http scheme. If the URL immediately flips to https and you see your site, then it’s working! You can also make sure the HSTS header is being properly sent using a quick curl command:

curl -s -D- https://yoursite.whatever/ | grep Strict

That ought to display the strict transport header received from your Web server.

If you’re using Varnish and you want to check and make sure everything’s working and your HTTPS requests are being properly cached, there’s a quick way to do this. First, make sure that Varnish is appending a cache hit header with its responses—that’s the easiest way to tell if a request is coming from cache or not. To do this, edit your Varnish vcl file (which will be /etc/varnish/default.vcl if you’re lazy like me) and add the following immediately at the start of the sub vcl_deliver section:

# Display hit/miss info
if (obj.hits > 0) {
	set resp.http.X-Cache = "HIT";
}
else {
	set resp.http.X-Cache = "MISS";
}
What you see if you look at the headers for my personal site—caching with HTTPS.

Restart Varnish, then clear your Web browser's cache and load up your site a couple of times to warm up Varnish. Once you've reloaded and clicked around a bit, check your headers. In Chrome, for example, you can pull up the Developer Tools console (More Tools > Developer Tools), click the "Network" tab, then reload the page and click on the individual page elements. Images and other static assets should come back showing cache hits in the Response Headers section:

And if that’s what you’re seeing, we’re done! Relax and bask in your Varnish-accelerated, all-HTTPS site.

Disclaimer: portions of this piece originally appeared on the author’s personal blog.
