

During a discussion on a work mailing list about TCP performance across the Atlantic, one of my colleagues made some comments that were ridiculous even by his standards. (He's notorious for declaring ex cathedra that things are extremely difficult and no-one understands them any more except a few giants (like himself) from the age when dinosaurs ruled the machine room.) I felt compelled to correct his most egregiously wrong statements, and I'm posting my comments here since they seem to be of wider interest than just University IT staff. In the quoted sections below, when he talks about "products" he means software that games or breaks TCP's fairness and congestion control algorithms.

"I went to a very interesting talk by the only TCP/IP experts I have ever met who explained that the only reason the Internet works is that such products were NOT in widespread use."

The importance of congestion control has been common knowledge since the Internet's congestion collapse in the 1980s and Van Jacobson's subsequent TCP algorithm fixes. (It's taught to all CS undergrads, e.g. look at the Digital Communications 1 syllabus in CST part 1B.) However, software that games TCP's fairness model *is* in widespread use: practically all P2P software uses multiple TCP connections to get better throughput.
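To make that concrete, here's a rough Python sketch of the trick every download accelerator and P2P client plays: open several TCP connections (here via parallel HTTP range requests) so the client collects several congestion-control shares of a bottleneck link instead of one. The host, path and connection count are placeholders of my own, not anything from the discussion.

```python
# Hypothetical sketch of a "download accelerator": fetch byte ranges of one
# file over several TCP connections at once. Each connection runs its own
# congestion window, so the client claims N fair shares of a congested link.
import http.client
import threading

HOST = "example.com"       # placeholder server
PATH = "/large-file.bin"   # placeholder resource
N_CONNECTIONS = 4          # each one is a separate TCP flow

def fetch_range(start, end, parts, index):
    conn = http.client.HTTPConnection(HOST)
    conn.request("GET", PATH, headers={"Range": f"bytes={start}-{end}"})
    resp = conn.getresponse()
    parts[index] = resp.read()   # 206 Partial Content, if the server supports ranges
    conn.close()

def parallel_download(total_size):
    chunk = total_size // N_CONNECTIONS
    parts = [b""] * N_CONNECTIONS
    threads = []
    for i in range(N_CONNECTIONS):
        start = i * chunk
        end = total_size - 1 if i == N_CONNECTIONS - 1 else start + chunk - 1
        t = threading.Thread(target=fetch_range, args=(start, end, parts, i))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return b"".join(parts)
```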

In the mid-1990s the Internet was going to melt down because of all the web browsers making lots of connections that were too short for TCP's congestion control algorithms to be effective. So we got HTTP/1.1 with persistent connections and a limit on the concurrency that browsers use. However, these TCP and HTTP measures only work because practically all software on end-hosts co-operates in congestion control: there's nothing to enforce this co-operation.
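For comparison, a minimal sketch of what persistent connections buy you: several requests in sequence over one TCP connection, so there is a single congestion window that gets a chance to adapt, instead of a burst of short-lived flows that never leave slow start. The host and paths below are made up.

```python
# HTTP/1.1 keep-alive: one TCP connection carries several requests.
import http.client

conn = http.client.HTTPConnection("example.com")   # placeholder host
for path in ("/", "/style.css", "/logo.png"):       # placeholder resources
    conn.request("GET", path)        # reuses the same TCP connection
    resp = conn.getresponse()
    body = resp.read()               # must drain the body before the next request
    print(path, resp.status, len(body))
conn.close()
```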

The rise of P2P means that ISPs are increasingly interested in enforcing limits on bandwidth hogs, hence the arguments about "network neutrality" as the ISPs point fingers at YouTube and the BBC and the file sharers. They are also deploying traffic shapers in the network that manipulate TCP streams with varying degrees of cleverness. (The most sophisticated manipulate the stream of ACKs going back to the sender, which causes it to send at the network's desired rate; the stupid ones, like Comcast's, just kill connections they don't like.)
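As a toy illustration of the lever the cleverer shapers pull: a TCP sender can only transmit as fast as the returning ACKs and the advertised window allow, so whoever controls that return path controls the sending rate. The sketch below fakes it at the receiving end with a deliberately small receive buffer and slow reads; the port, buffer size and timings are arbitrary choices of mine. Point any bulk TCP sender at it and watch the transfer crawl.

```python
# A slow receiver throttles a sender without dropping anything: the small
# receive buffer keeps the advertised window small, and the slow reads mean
# the sender's ACK clock ticks at our pace, not the network's.
import socket
import time

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # small receive buffer
srv.bind(("127.0.0.1", 9999))   # arbitrary port for the demo
srv.listen(1)

conn, _ = srv.accept()
while True:
    data = conn.recv(1024)      # drain slowly: the window stays nearly closed
    if not data:
        break
    time.sleep(0.1)
conn.close()
srv.close()
```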

However, a better approach is to use a more realistic idea of fairness as the basis for congestion control instead of equal bandwidth per TCP connection: for ISPs, for example, balancing bandwidth per subscriber. Bob Briscoe's recent position paper makes this argument in readable fashion. RFC 2140 groped in the same direction 10 years ago, but was never adopted. Either way, enforcement at the network's edge (instead of trusting the hosts) is likely to be the future, whatever model of fairness is used.
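A toy calculation shows why the choice of model matters: with per-flow fairness a subscriber running ten flows gets ten times the bandwidth of one running a single flow, while per-subscriber fairness splits the link evenly regardless. The link capacity and flow counts below are invented numbers for illustration.

```python
# Per-flow vs per-subscriber fair shares of one congested link.
LINK_CAPACITY = 100.0                            # Mbit/s (made up)
flows_per_subscriber = {"alice": 1, "bob": 10}   # bob runs a P2P client

def per_flow_shares(subs):
    total_flows = sum(subs.values())
    return {s: LINK_CAPACITY * n / total_flows for s, n in subs.items()}

def per_subscriber_shares(subs):
    share = LINK_CAPACITY / len(subs)
    return {s: share for s in subs}

print(per_flow_shares(flows_per_subscriber))        # alice ~9, bob ~91
print(per_subscriber_shares(flows_per_subscriber))  # alice 50, bob 50
```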

Also interesting is the research on adapting TCP to high-bandwidth networks. There are many examples of what Briscoe argues against: great efforts of design and analysis to preserve TCP-style fairness in the new protocols. The GÉANT2 high-speed TCP page has lots of relevant links.
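The scale of the problem is easy to see from the standard steady-state TCP throughput model (Mathis et al., rate ≈ 1.22 · MSS / (RTT · √p)): plug in a transatlantic RTT and a 10 Gbit/s target and the tolerable packet loss rate comes out absurdly small. The figures below are back-of-envelope illustrations of mine, not measurements.

```python
# Invert the Mathis formula to find the loss rate p that standard TCP
# would need in order to sustain 10 Gbit/s across a 100 ms path.
from math import sqrt

MSS = 1460 * 8        # segment size in bits
RTT = 0.100           # seconds (roughly transatlantic)
target = 10e9         # 10 Gbit/s

p = (1.22 * MSS / (RTT * target)) ** 2
packets_per_sec = target / MSS
seconds_between_losses = 1 / (p * packets_per_sec)

print(f"loss rate p ~ {p:.1e}")                                   # about 2e-10
print(f"i.e. one loss every {seconds_between_losses/3600:.1f} hours")
```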

"Luckily, there are very few people who know enough about TCP/IP and related protocols to write an effective product because, as I implied, there are probably no more than half-a-dozen TCP/IP experts in the world still working (and may even be none)."

Tosh. There's a substantial market out there, both for managing congestion and for exploiting weaknesses in congestion control.