At a tech training session, I wanted to get access to some of my class-related email on the training computer. But I didn’t want to log into my primary email on an open network, or on someone else’s computer at all. I have no idea what they’re logging, whether they’re doing SSL inspection, whether there’s a keylogger on it — probably not, but who knows?

Heck, I didn’t even want to use my own device on the hotel Wi-Fi without a VPN, and that was at least secured by WPA2! (then again…)

I ended up forwarding the extra class materials to a disposable email account and logging into that one. No risk to my other accounts if that login got sniffed or logged at any level.

But I remembered how we all used to get at email when traveling back in the early 2000s, before smartphones, and before every laptop and every Starbucks had Wi-Fi:

Internet Cafes.

We’d walk into a storefront and rent time on one of their computers. Then we’d go to our webmail site and type in our primary email login and password over plain HTTP, with no TLS to secure it.

I’d never do that today. Admittedly, I wouldn’t need to in most cases — I can access my email wirelessly from a device I own that I carry in my pocket. (Whether that’s a good thing remains up for debate.)

But more importantly, we know how easy it is for someone to break into that sort of setup. Even if your own devices are clean, someone else’s computer might have malware, a keylogger, or a bogus certificate authority installed in its browser to let them intercept HTTPS traffic. An HTTP website is wide open, no matter whose device you use. And an open network is easy to spoof.

So these days it’s defense in depth: If it needs a password, it had better be running on HTTPS. If I don’t trust the network, I use a VPN. And I really don’t want to enter my login info on somebody else’s device.

I’ve been checking in on redirected & dead links lately, a few minutes here and there, updating, replacing, and removing where appropriate. And I’m happy to see that a lot of sites have moved to HTTPS. News sites, online stores, social networks, personal sites, publishers…. Not everyone, of course, but it’s a lot easier than it used to be. Now more than half of all web traffic is protected from eavesdropping and alteration in transit, even across insecure networks.

The one that made me laugh, though: Badger Badger Badger. Now there’s a flashback!

It’s a silly animation loop that went viral back in 2003. The canonical site is still around…and even they upgraded!

That took a lot longer than I intended.

But I’ve finally made all of Hyperborea.org run over HTTPS.

It’s been possible to view the whole site over HTTPS ever since I turned it on for the admin area of this blog years ago, but I left HTTP as the canonical URL and didn’t redirect anything until I updated the Les Mis section, and later this blog. Now, any page you visit on this entire site should load over an encrypted connection.

(Well, any page except for the old Dillo RPMs page, since that minimalist web browser still only has experimental HTTPS support.)

The problem is that when you have decades of hand-crafted web pages to go through, it can take a while to make sure everything embeds only secure or same-origin content. Every image, every script, every video. I had to update lots of absolute links, remove some widgets and ads, and update other widgets, embedded videos, and metadata…a bit at a time, in my spare time.
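If you’re doing a similar cleanup, one possible safety net (not something I relied on here, and it assumes mod_headers is available) is a Content-Security-Policy header that asks modern browsers to upgrade any leftover http:// subresources on their own:

# Sketch: asks supporting browsers to fetch any remaining http:// images,
# scripts, etc. over https:// instead. Requires mod_headers.
<IfModule mod_headers.c>
    Header always set Content-Security-Policy "upgrade-insecure-requests"
</IfModule>

It doesn’t fix the underlying links, but it keeps stragglers from triggering mixed-content warnings while you track them down.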

Finally I switched on the redirects this morning. Even that took longer than expected, because I’d forgotten that mod_rewrite rules in a directory override any parent directory’s rules, so I had to copy the HTTP-to-HTTPS rewrite rule into each folder that had its own rewrite rules. Then I had to fix the interaction between mod_rewrite and ErrorDocument that was causing custom error pages to redirect to the error template instead of serving it behind the scenes.
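Here’s roughly what that ends up looking like in a subdirectory’s .htaccess. This is a sketch rather than my actual rules: the directory names, hostname, and error page path are placeholders.

RewriteEngine On
RewriteBase /photos/

# The parent directory's rules don't apply here, so the HTTP-to-HTTPS
# redirect has to be repeated ahead of this folder's own rules.
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://example.com/photos/$1 [R=301,L]

# This folder's own rewrite rules follow.
RewriteRule ^gallery/(.*)$ albums/$1 [L]

# Because the redirect above only fires when the connection isn't already
# HTTPS, the internal subrequest for the custom error page isn't caught
# and turned into an external redirect.
ErrorDocument 404 /photos/errors/404.html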

The cost of implementing HTTPS on your own site is a lot lower now than it used to be. For instance:

  • Let’s Encrypt offers free certificates for any site, and some web hosts have software integrations that make ordering, verifying, and installing a certificate as simple as checking a box and clicking a button. (I’m impressed with DreamHost. I turned on secure hosting for some of my smaller sites a few months ago by just clicking a checkbox. It generated and installed the certs within minutes, and it’s been renewing them automatically ever since.)
  • Amazon now has a certificate manager you can use for CloudFront and other AWS services that’s free (as long as you don’t need static IP addresses, anyway) and only takes a few minutes to set up.
  • CloudFlare is offering universal HTTPS even on its free tier. You still need a cert to encrypt the connection between your site and CloudFlare to do it properly, but they offer their own free certs for that. They’ll also let you use a self-signed certificate on the back end if you want. (It’s still not perfect, because it’s end-to-CloudFlare-to-end instead of end-to-end, but it’s better than plaintext.)

You may not need a unique IP address anymore. Server Name Indication (SNI) lets HTTPS work with multiple sites on the same IP address, and support is finally widespread enough to use in most cases. (Unless you need to support Internet Explorer on Windows XP, any version, or the stock browser on Android 2.x.)
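For example, with name-based virtual hosts in Apache, two HTTPS sites can share one IP address; the server picks the right certificate based on the hostname the browser sends in the SNI extension. (The hostnames and file paths here are placeholders.)

# Two HTTPS sites on the same IP, distinguished by SNI.
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.org.crt
    SSLCertificateKeyFile /etc/ssl/private/example.org.key
</VirtualHost>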

Now, if you want the certificate to validate your business/organization, or need compatibility with older systems, you may still want to buy a certificate from a commercial provider. (The free options above only validate whether you control the domain.) And depending on your host, or your chosen software stack if you’re running your own server, you may still have to generate a certificate signing request, buy the cert, complete the validation process, and install it yourself.
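The install step for a commercial cert usually means pointing the server at the certificate, the private key, and the CA’s intermediate chain. A rough Apache sketch, with placeholder paths:

<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile   /etc/ssl/private/example.com.key
    # Intermediate certificates supplied by the CA. On Apache 2.4.8 and
    # later you can skip this and append the chain to SSLCertificateFile.
    SSLCertificateChainFile /etc/ssl/certs/ca-intermediate.crt
</VirtualHost>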

But if all you want to do is make sure that your data, and your users’ data, can’t be intercepted or altered in transit when connecting to reasonably modern (2010+) software and devices, it’s a lot less pain than it was even a year ago.

The hard part: Updating all your old links and embedded content. (This is why I’m still working on converting Speed Force and the rest of hyperborea.org in my spare time, though this blog is finally 100% HTTPS.)

And of course dealing with third-party sources. If you connect to someone else’s site, or to an appliance that you don’t control, you have to convince them to update. That can certainly be a challenge.

Expanded from a comment on “Apple: iOS to Require HTTPS for Apps by January” at Naked Security.

The free TLS certificate provider Let’s Encrypt automates the request-and-setup process using the ACME protocol to verify domain ownership. Software on your server creates a file in a well-known location (under /.well-known/acme-challenge/), based on your request. The certificate authority checks that location, and if the contents match your request, it grants the certificate. (You can also validate using a DNS record, but not all implementations provide that. DreamHost, for instance, only uses the file-on-your-server method.)
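As a sketch of how that can look on an Apache server (an illustration, not necessarily how DreamHost or any other host sets it up): some ACME clients write their challenge files into a single shared directory, which you then expose at the expected path for every site.

# Serve ACME challenge files for every site from one shared directory.
Alias /.well-known/acme-challenge/ /var/www/acme/.well-known/acme-challenge/
<Directory /var/www/acme/.well-known/acme-challenge/>
    Require all granted
</Directory>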

That automation makes setup really simple for a site you want to run over HTTPS.

Redirected sites are trickier. If you redirect all traffic from Site A to Site B, Let’s Encrypt won’t find A’s challenge files on B, so it won’t issue (or renew!) the cert. You need to make an exception for the challenge path.

On the Let’s Encrypt forums, jmorahan suggests this for Apache:


# Redirect everything except the ACME challenge directory to the HTTPS site.
RedirectMatch 301 ^(?!/\.well-known/acme-challenge/).* https://example.com$0

That didn’t quite work for me since I wanted a bit more customization. So I used mod_rewrite instead. My rules are a little more complicated (see below), but the relevant part boils down to this:


RewriteEngine On
RewriteBase /

# Redirect all hits except for Let's Encrypt's ACME Challenge verification to example.com
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge
RewriteRule ^(.*) https://example.com/$1 [R=301,L]

These rules can go in your server config file if you run your own server, or the .htaccess for the domain if you don’t.
