mark nottingham

Trying out TLS for HTTP:// URLs

Monday, 17 March 2014

HTTP

The IETF now considers “pervasive monitoring” to be an attack. As Snowden points out, one of the more effective ways to combat it is to use encryption everywhere you can, and “opportunistic encryption” keeps on coming up as one way to help that.

I was asked to introduce the session on this topic at the recent STRINT workshop. There was a lot of disagreement about the terminology to use, as well as back-and-forth on whether it’s a good idea, but progress on opportunistic encryption is being made. I’ll try to set out where we’re at for HTTP below.

“Opportunistic Encryption” in HTTP

As I’ve mentioned before, some members of the HTTPbis Working Group have been interested in opportunistic encryption for some time. Late last year I wrote a draft outlining an approach, later splitting the actual mechanism into a separate draft called “HTTP Alternate Services” that Patrick McManus graciously offered to help with. We discussed it inconclusively in our Zurich interim meeting, and more recently in London. Between the two meetings, Patrick started to implement the drafts in Firefox and released a private build, which I tested during STRINT with both this Web site and redbot.org (it works! You can see the Alt-Svc header sent by this site here).

Based on Patrick’s implementation experience as well as the impetus of the STRINT discussion, people in the HTTPbis meeting thought that it would be good to document – but not require support for – opportunistic encryption in HTTP/2, based upon the Alternate Services approach.

In HTTP, we prefer to call this “TLS for http:// URLs”, because it more clearly conveys what’s happening. The idea is that an http:// URL will be upgraded to use TLS encryption in a completely transparent fashion; it won’t show a lock icon, and the security context of the page (e.g., for cookies) will be the same as if it weren’t encrypted.
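As a rough sketch of what this looks like on the wire, here’s a toy plain-HTTP server (Python standard library only) that advertises a TLS alternative via the Alt-Svc header; the protocol identifier, port, and “ma” lifetime below are illustrative placeholders, not values taken from the drafts.

```python
# A minimal sketch of an http:// origin hinting at a TLS alternative.
# The Alt-Svc value is illustrative; see the Alternate Services draft for
# the real syntax and protocol identifiers.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AltSvcHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello over plain HTTP\n"
        self.send_response(200)
        # Hint that this origin is also reachable over TLS on port 443 for
        # the next hour; a client that understands Alt-Svc may move future
        # requests there while still treating the resource as http://.
        self.send_header("Alt-Svc", 'h2=":443"; ma=3600')
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), AltSvcHandler).serve_forever()
```

A client that acts on the hint opens a TLS connection to the alternative for subsequent requests, while the address bar and the page’s security context continue to reflect the original http:// URL.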

This is important, because this approach to encryption only works against passive attackers – such as pervasive monitoring, sniffing attacks like Firesheep, and drive-by monitoring like that performed by Google Street View.

It doesn’t help at all with active attacks; for example, it’s trivial to downgrade this mechanism to plaintext by simply removing a header. Furthermore, Firefox currently doesn’t check the certificate when using TLS for http:// URLs, so an attacker can pretend to be the server and the browser will be none the wiser.
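To make the downgrade concrete, here’s a toy illustration (not any real proxy’s API) of everything an active on-path attacker needs to do:

```python
# Toy illustration of the downgrade: an active attacker on the path simply
# deletes the Alt-Svc hint from the response; the client never tries TLS,
# and nobody sees an error or warning.
def strip_alt_svc(response_headers: dict) -> dict:
    return {k: v for k, v in response_headers.items() if k.lower() != "alt-svc"}
```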

In other words, if you want real security, you still need to use “full” https://.

Why?

The mantra in many of the conversations about pervasive monitoring is “secure by default.” One way to do that would be to move the entire Web over to https://, giving the full security benefit of TLS to the Web. However, switching to https:// cannot happen overnight.

This is partly because there are lots of different parts of the Web security model that hang off of the difference between “http” and “https”, such as cookie scoping and content embedding rules; for some sites, this makes switching a big headache.

Furthermore, implementing https:// requires you to get a valid certificate in the Web PKI – a process that is, at a minimum, pretty painful and invasive. If you think about all of the various kinds of devices serving the Web, including embedded systems, “Internet of Things” devices, and small corporate Web servers, you quickly see the limits of the Web PKI. Getting certs onto all of those systems is hard.

Ignoring the certificate makes it super-easy to deploy TLS and frustrate passive attacks, because the server can automatically generate a self-signed certificate. Thus, we can transparently and easily upgrade the security of the Web just by deploying updated Web servers and browsers, without any of the mess of certificate management. Furthermore, if we don’t change http:// URIs when we do this, we don’t have to worry about changing the security model overall.
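For a sense of how little ceremony that involves, here’s a minimal sketch (assuming the third-party Python “cryptography” package; the hostname and lifetime are placeholders) of the kind of throwaway self-signed certificate a server could mint for itself at startup, with no CA, billing, or renewal workflow involved:

```python
# Minimal sketch: generate a throwaway self-signed certificate for
# opportunistic TLS. Nothing here touches the Web PKI, and no client is
# expected to validate the result against a trust store.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_self_signed_cert(hostname: str = "www.example.com"):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # issuer == subject: self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    return key, cert
```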

Why Not?

While encouraging more use of TLS is good, the second-order effects of TLS for HTTP:// URLs still aren’t clear.

For example, it could be that those currently performing passive attacks simply upgrade to active ones; as pointed out above, some active attacks are very simple. Some believe that this is a good thing: if passive attackers “go active”, they become easier to detect, and most attackers want to evade detection. On the other hand, pervasive monitoring isn’t exactly a secret, so forcing the attacks to become active could backfire, with people simply becoming accustomed to them. It may not even be your government doing it; it might be your ISP, so that it can do things like cache content or re-encode video for your mobile phone. If this becomes commonplace, everyone’s security suffers.

Additionally, if people – whether they’re using a browser or administering a Web server – start to believe they’re “secure” because of this approach to encryption, it can cause confusion and even slow deployment of “full” TLS using https:// URIs, which has obviously better security properties.

Moving Forward

So far, only Firefox has shown firm interest in implementing TLS for http:// URIs on the browser side, although others are paying attention. As the experiment progresses, I think we’ll get more information about how people perceive the security (or lack thereof) of this mechanism, and we’ll figure out what role it has to play in securing the Web.

Additionally, this is only one way to improve encryption. HTTP/2 is doing other things like raising the bar for how we use TLS, the TLS Working Group is working on TLS 1.3 to protect more information in the handshake, and people are talking about end-to-end encryption for various protocols. In parallel, there’s a lot of discussion on how to improve the experience of TLS – both on the server and in the browser – to make encryption even easier.

It’s interesting to me that there’s already some impact; today, Fairfax reports that the Australian Security Intelligence Organisation has noticed:

“Since the Snowden leaks, public reporting suggests the level of encryption on the internet has increased substantially,” ASIO said.

“In direct response to these leaks, the technology industry is driving the development of new internet standards with the goal of having all web activity encrypted, which will make the challenges of traditional telecommunications interception for necessary national security purposes far more complex.”

Well, yes; yes we are. The point is not to make enforcement of the law more difficult; legal intercept is a necessary part of living in a society. Casual retention of everyone’s data, ripe for misuse, however, is not, and that’s what the industry – from Google and Yahoo! to the IETF and Tim Berners-Lee – is pushing back on.