.@ Tony Finch – blog

A scalable system's size and complexity are not constrained by the capacity of its components.

By this definition the Internet is not scalable, since every router must have a large enough routing table to handle every network address prefix. Since the Internet has scaled impressively over the last 20 years, this might seem to be a theoretical problem, but I think it is a very practical problem and we have largely become inured to its consequences.

Consider your laptop or smartphone. It has three or four network interfaces - wifi, Ethernet, cellular, Bluetooth - but it can't roam from one uplink interface to another without resetting its network stack, nor balance the network load across interfaces, and its ability to provide connectivity to other devices is restricted. You can make these features work with special hackery (NAT, mobile IP, HIP) but they aren't supported by the architecture in the way that first-class multihoming is. The Internet cannot cope with a billion small and highly-mobile networks.

The cause of the scalability problem is the addressing scheme. Each packet contains a globally unique destination address that indicates where the packet should be delivered, without any information about how to get there. Therefore every router must maintain a map of the entire address space so it knows where to forward each packet. Routing tables must be very fast as well as very large - the packet arrival interval in 10Gbit Ethernet is as little as 75ns - so backbone routers require expensive custom silicon. Maintaining the routing tables requires all significant connectivity changes to be communicated to all routers over BGP, so communication overheads scale badly as well as routing table sizes.
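The lookup a router performs is a longest-prefix match: because the destination address carries no route information, the table must cover the whole address space, and the most specific matching prefix wins. A toy version in Python, using the standard `ipaddress` module (the prefixes and next-hop names are invented for illustration; real backbone routers do this in custom silicon):

```python
import ipaddress

def lookup(table, dst):
    """Return the next hop for the most specific prefix covering dst."""
    dst = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop in table.items():
        net = ipaddress.ip_network(prefix)
        # Longest-prefix match: prefer the entry with the longest mask.
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None

table = {
    "0.0.0.0/0": "upstream",       # default route
    "192.0.2.0/24": "peer-a",      # a customer prefix
    "192.0.2.128/25": "peer-b",    # a more-specific, de-aggregated route
}

print(lookup(table, "192.0.2.200"))   # → peer-b (longest prefix wins)
print(lookup(table, "198.51.100.1"))  # → upstream (falls to the default)
```

The linear scan here is the point of the example: a router that had to search its table this way could never keep up, which is why the hardware is expensive.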

The current strategy for controlling this problem relies on aggregation: the Internet registries try to allocate numerically adjacent address blocks to topologically adjacent networks, and hope that distant routers can use a single large routing table entry to span several blocks. But the need for multihoming and traffic engineering encourages de-aggregation despite the external costs this imposes. The regional Internet registries try to use their allocation policies to minimise the harm, but this has the side effect of slowing the growth of the Internet.
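The aggregation strategy can be illustrated with Python's `ipaddress` module: numerically adjacent blocks collapse into a single covering entry, but topologically scattered allocations cannot be summarised, so each costs every backbone router a separate entry (the addresses are documentation prefixes, chosen only for illustration):

```python
import ipaddress

# Adjacent allocations can be covered by one routing table entry...
adjacent = [ipaddress.ip_network("203.0.113.0/25"),
            ipaddress.ip_network("203.0.113.128/25")]
print(list(ipaddress.collapse_addresses(adjacent)))
# → [IPv4Network('203.0.113.0/24')]

# ...but scattered allocations cannot be collapsed: two entries remain.
scattered = [ipaddress.ip_network("203.0.113.0/25"),
             ipaddress.ip_network("198.51.100.0/25")]
print(list(ipaddress.collapse_addresses(scattered)))
```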

All network engineers are aware of this problem, and there has been a lot of work on developing better routing schemes. At the academic end there have been many "Future Internet" research programmes, which have funded a lot of projects that ignore any need for backwards compatibility, often with interesting results. At the more practical end has been the work in the IETF and IRTF, especially the Routing Research Group. RFC 6115 has an overview of their work.

The most notable proposals are HIP and LISP, which both follow the locator / identifier split plan. The idea is to add a topology-independent overlay which endpoints use to identify each other. For example, identifiers are used instead of IP addresses in TCP connection tuples. Multihoming and similar features are handled by a mapping layer that translates between identifiers and locators. The locators can then be allocated much more strictly along topological lines than is currently reasonable. HIP requires upgraded endpoints with little support from the network, whereas LISP supports existing endpoints with an upgraded network. HIP puts an extra end-to-end header in packets whereas LISP is closer to the 8+8 scheme of GSE and ILNP.
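The split can be sketched in a few lines: connection state is keyed by a stable identifier, and a separate mapping layer translates identifiers to whatever locator is current. This is a minimal illustration of the idea only, not of HIP or LISP wire formats, and all the names here are invented:

```python
# Mapping layer: identifier → current locator (invented values).
id_to_loc = {"host-abc": "198.51.100.7"}

# The transport connection is keyed by the identifier, not the locator,
# so it is unaffected by address changes.
connection_key = ("host-abc", 443)

def current_locator(ident):
    """Resolve an identifier to the locator packets should be sent to."""
    return id_to_loc[ident]

print(current_locator("host-abc"))

# The host moves to another network: only the mapping entry changes;
# the connection key, and hence the transport state, survives untouched.
id_to_loc["host-abc"] = "203.0.113.9"
print(current_locator("host-abc"))
```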

I am not convinced that the ID/loc split will be a great success. It keeps the same unscalable globally unique address scheme, but duplicated, with a complicated mapping layer in between. It may, at great expense, make widespread site multihoming feasible, but I doubt that will ever be a feature of consumer connectivity let alone edge device connections.

I believe a properly scalable replacement for the Internet should be cheaper and have more features than the current Internet. However it looks like it takes at least 20 years to go from prototype system to a plausible replacement for the old network - see for example the replacement of traditional telephony with VOIP, and IPv4 with IPv6. So we are unlikely in the foreseeable future to see anything come and sweep away the accumulating ugly complication of the Internet and replace it with an elegant clean slate.