mark nottingham

"Why Don't You Just…"

Tuesday, 18 December 2012

HTTP

A proposal by John Graham-Cumming is currently doing the rounds:

HMURR (pronounced ‘hammer’) introduces a new pipelining mechanism with explicit identifiers used to match requests and responses sent on the same TCP connection so that out-of-order responses are possible. The current HTTP 1.1 pipelining mechanism requires that responses be returned in the same order as requests are made (FIFO) which itself introduces a head-of-line blocking problem.

This seems attractive at first glance; rather than starting a whole new protocol, why not just incrementally improve an existing one?
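
To make the out-of-order idea concrete, here is a minimal sketch of a client matching responses to requests by identifier rather than by arrival order. This is not HMURR's actual wire format; the class and field names are mine, purely for illustration:

    import itertools

    class TaggedConnection:
        def __init__(self):
            self._ids = itertools.count(1)
            self._pending = {}  # request id -> callback awaiting that response

        def send(self, request, on_response):
            req_id = next(self._ids)
            self._pending[req_id] = on_response
            # A real client would write the request to the TCP connection here,
            # carrying req_id in the framing so the server can echo it back.
            return req_id

        def receive(self, req_id, response):
            # Responses may arrive in any order; the identifier says which
            # outstanding request each one answers, so a slow response no
            # longer holds up the ones queued behind it.
            self._pending.pop(req_id)(response)

    conn = TaggedConnection()
    slow = conn.send("GET /report", print)
    fast = conn.send("GET /style.css", print)
    conn.receive(fast, "200 OK (style.css)")  # arrives first, delivered first
    conn.receive(slow, "200 OK (report)")

With FIFO pipelining, the second response could not be delivered until the first had been sent; the identifier is what removes that head-of-line blocking.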

It turns out that this isn’t the first time this has been suggested; Jeff Mogul proposed something similar more than ten years ago, with "Support for out-of-order responses in HTTP."

What stopped Jeff, and what makes this current proposal difficult, is that “small” backwards-incompatible changes to deployed protocols tend to bring out a lot of heretofore-unseen bugs in deployed software.

This is especially true when you change something as fundamental as the message parsing algorithm, or the underlying message exchange pattern of the protocol. For example, HTTP/1.1 added pipelining and the expect/100-continue pattern to HTTP/1.0, but neither was foreseen in that protocol. Support for pipelining in particular was indicated by the HTTP/1.1 version identifier; in theory, that should have signalled that it was OK to pipeline.
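
As a reminder of how the second of those changes the exchange pattern: with expect/100-continue, the client sends its headers, pauses for the server's interim 100 (Continue) response, and only then sends the body. A rough sketch over a raw socket follows; the host and path are placeholders, and a real client also needs a timeout, since many servers never send the interim response at all:

    import socket

    body = b"hello"
    head = (
        b"PUT /upload HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Content-Length: 5\r\n"
        b"Expect: 100-continue\r\n"
        b"\r\n"
    )

    sock = socket.create_connection(("example.com", 80))
    sock.sendall(head)                  # send headers only; hold the body back
    interim = sock.recv(4096)           # hope for "HTTP/1.1 100 Continue"
    if interim.startswith(b"HTTP/1.1 100"):
        sock.sendall(body)              # server agreed; now send the body
    final = sock.recv(4096)             # the final response
    sock.close()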

However, HTTP/1.0 implementers happily added many other things (such as Cache-Control and chunked encoding) to support HTTP/1.1 as it evolved. Pipelining and expect/100-continue, on the other hand, involved some pretty fundamental architectural changes in those implementations, so they weren’t nearly as well-supported. And, since some of those implementations weren’t expecting pipelined requests or Expect headers, they behaved in strange and sometimes dangerous ways.
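
For concreteness, chunked encoding frames a body of unknown length as a series of size-prefixed chunks, terminated by a zero-length chunk. A minimal encoder sketch (the function name is mine):

    def encode_chunked(parts):
        # Each chunk: size in hexadecimal, CRLF, the bytes, CRLF; a
        # zero-size chunk marks the end of the body.
        out = bytearray()
        for part in parts:
            out += b"%x\r\n" % len(part)
            out += part + b"\r\n"
        out += b"0\r\n\r\n"
        return bytes(out)

    # b'5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n'
    print(encode_chunked([b"hello", b" world"]))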

In theory, you could introduce new features like this using existing signalling mechanisms in the protocol (such as the HTTP version, as John suggests, or the Connection and Transfer-Encoding headers). In practice, however, if you make something look like HTTP/1.x, someone will assume it is HTTP/1.x, and ignore the new model.

This is why my proposal for improving pipelining (which included a response identifier) isn’t going very far either. It’s why the WebSockets work bent over backwards to avoid any possibility of looking like HTTP on the wire. It’s one of the things that makes our current discussion on HTTP upgrade so difficult.

There are a few other reasons why this is a difficult approach; Patrick covers many of them in his comments, which are worth a read. Losing the textual nature of HTTP is indeed unfortunate, but so far, consensus seems to be that it’s worth it. At one of the recent meetings, an IETF grey beard (apologies, I forgot who) said that human readability (i.e., protocol-as-text) is a nice-to-have, but that it shouldn’t be the deciding factor in protocol design; i.e., absent other arguments, make your protocol textual. So far, however, we have several good arguments for a binary protocol here, as Patrick covers.

All of that said, it’s great that people are thinking about alternative ways of achieving the goals of HTTP/2.0. I encourage John and everyone else to bring their ideas to the Working Group. For example, the idea of being able to update the status code as (or after) the message body is sent has already been brought up, and the reception seemed to be positive.