ADDRESSING THE PROBLEMS OF IPV4 AND IPV6
Brian Candler
revision 2, 2004-11-19

In June 1996, I was a visiting instructor at the Internet Society's Network Training workshop in Montreal, Canada, for the first time. I remember clearly sitting next to another instructor one lunchtime, who was excitedly explaining to me how important the Internet Protocol (IP) version 6 was, and how it would be rolled out across the Internet within a year.

More than 8 years later, IPv6 still remains little more than a plaything. There are small islands where IPv6 runs natively, and tunnels linking them. One of the mirrors of www.freebsd.org is reachable via IPv6, for example. Operating systems (including Windows) have had IPv6 stacks for years; many applications have been modified to work with IPv6, and shown to work correctly. And yet despite this huge investment, IPv6 still hasn't taken off.

Why is this? It's my opinion that IPv6 is a solution in search of a problem: it doesn't solve any of the problems of IPv4, probably not even the obvious one of address-space depletion. In this article I'll discuss my thoughts on what the real problems of IPv4 are, which in turn leads on to what the solutions might look like.

IPv4 has been a huge success, and in some ways has become a victim of its own success as scalability issues have arisen, although to date these have been kept under control. So here is a list of the problems and limitations as I see them. Many of them are closely related, and so these points are not in any particular order of importance.

1. ROUTING TABLE EXPLOSION
==========================

Routing tables hold information which says, for a particular range of IP addresses, what are the possible 'next hops' you could take to move the packet closer to its destination. As more organisations join the Internet, the sizes of routing tables in core Internet routers have grown enormously, because those routers have to learn the routes to all possible destinations.
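To make the idea concrete, here is a minimal sketch in Python of a routing table with longest-prefix-match lookup. The prefixes and next-hop names are invented for illustration (using documentation address ranges), not taken from any real router:

```python
import ipaddress

# Toy routing table: (prefix, next hop). All values are made up.
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "upstream-a"),          # default route
    (ipaddress.ip_network("198.51.100.0/24"), "peer-b"),
    (ipaddress.ip_network("198.51.100.128/25"), "customer-c"),  # a more specific route
]

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific route covering dst wins."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net, _ in ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return dict(ROUTES)[best]

print(next_hop("198.51.100.200"))  # customer-c (the /25 beats the /24)
print(next_hop("203.0.113.9"))     # upstream-a (only the default matches)
```

Real routers do this lookup for every packet, in hardware, which is why growth in the table translates directly into hardware cost.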
In the early days of the Internet, every organisation was allocated its own IP address range, and would announce it to the whole Internet, creating one entry in every core router for every connected organisation. This clearly was not a sustainable model, as the number of connected organisations grew from thousands into millions.

This forced a new strategy for address allocation: aggregation. Each ISP is given a large contiguous range of IP addresses, and then splits it into smaller blocks to give to its customers. As a result, the ISP need only announce one route covering its whole range (the 'aggregate') to the Internet; that's sufficient for traffic for all its customers to find the ISP. Once it arrives there, the ISP can look in more detail at the destination address to route it to the correct customer.

Although you might imagine that there would now be one route per ISP, this is not the case. Although the explosion has been curtailed, core routers currently still have to carry around 150,000 routes, and tables of this size are reaching hardware limitations for existing equipment. There are a number of reasons why the number of routes in the Internet today is much larger than the number of networks:

(1.1) For legacy reasons an organisation may have a bunch of class C (/24) netblocks instead of a single larger netblock

(1.2) ISPs inherit customers who have provider-independent (PI) address space which still needs to be announced; customers are reluctant to renumber (see below), and even more reluctant to give up their precious PI space, which is no longer obtainable

(1.3) Conservative allocation strategies may mean that an ISP is given a medium-sized block of IP addresses initially, then another medium-sized block as it proves the first is full, then another medium-sized block later, and so on, rather than a single large allocation up-front. This is due to the address depletion problem (see below).
(1.4) For traffic-engineering purposes, such as balancing traffic flows between different links, an ISP may announce smaller fragments of its netblocks as separate routes

(1.5) Multi-homed customers (see below) have to have separate route announcements

(1.6) Overriding operational concerns may force an ISP to announce smaller fragments. For example, if someone else on the Internet is falsely announcing their address space in smaller fragments (see below), the only way they can compete is also to announce the smaller fragments

(1.7) Simple mismanagement and laziness in configuration, not helped by software which defaults to 'classful' behaviour

Does IPv6 help? In principle it might, but only because we could have a clean slate to work with. If everyone moved to IPv6, then clearly we would get rid of all the legacy allocations and routes; and if everyone got a big-enough allocation to start with, then they would only need one route each. (It would be much the same if we built a second IPv4 Internet with a fresh numbering plan, however.)

On the other hand, if the allocation of IPv6 addresses is handled in a haphazard way, as appears to be the case at the moment, then we could end up in as bad a situation as before. This is especially because of the way IPv6 is being rolled out as a series of 'islands', where the address space may come from an 'experimental' block or a friendly partner organisation, rather than from the (topologically sensible) upstream ISP, who may not be interested in IPv6 today. As a result, many address allocations do not match Internet topology at all. If the Internet ever did move to IPv6, then those early adopters would be disinclined to renumber out of their old allocations, and we would end up with a new set of legacy routes. And in any case, routes for multi-homed customers would still need to be announced.

2. THE RENUMBERING PROBLEM
==========================

There is a strong business need to be able to change easily from one provider to another - if the current provider is not giving good service, or is charging too much, then free market economics demand that you should be able to take your custom elsewhere. Unfortunately, there is a major problem here, and that's renumbering.

As mentioned above, in order to avoid having a route in the core Internet routers for every end-user, end-users are allocated IP addresses from a larger block belonging to their ISP; the ISP then announces just the larger block to the rest of the world. But the necessary consequence is that if you change ISP, you must get a different range of IP addresses and therefore need to renumber your entire network.

Renumbering is a serious issue for all but the smallest networks, and attempting it can cause weeks or even months of headaches and downtime. A smooth, well-planned operation by a technically competent person could proceed as follows. Firstly, connect the second link to the second ISP, and configure your networks to carry both old and new IP addresses simultaneously. Next, reduce the Time-To-Live in the DNS for all the services you announce to the outside world. Renumber client machines by changing the DHCP servers and then restarting the clients. If you are clever and have the patience, you may be able to give server machines two IP addresses concurrently and swing the DNS for a reasonably seamless switch; otherwise you may have to pick a time in the middle of the night when an outage is least inconvenient, change the IP address, and change the DNS simultaneously. In practice, systems are interlinked and it's very hard to renumber one machine without having unforeseen effects on others.
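The purely mechanical part of such a move - mapping each host to the address at the same offset inside the new ISP's block - is easy to sketch; it is everything around it that hurts. In this sketch the two /24s are documentation ranges standing in for hypothetical old and new assignments:

```python
import ipaddress

OLD_NET = ipaddress.ip_network("192.0.2.0/24")      # block from the old ISP (made up)
NEW_NET = ipaddress.ip_network("198.51.100.0/24")   # block from the new ISP (made up)

def renumber(addr: str) -> str:
    """Map an address to the same host offset inside the new block."""
    offset = int(ipaddress.ip_address(addr)) - int(OLD_NET.network_address)
    if not 0 <= offset < NEW_NET.num_addresses:
        raise ValueError(f"{addr} is not in {OLD_NET}")
    return str(ipaddress.ip_address(int(NEW_NET.network_address) + offset))

print(renumber("192.0.2.25"))   # 198.51.100.25
```

Generating the new addressing plan takes a dozen lines; finding and changing every place the old addresses are referenced is the weeks-long part.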
There is significant fallout from applications which have hard-coded IP addresses buried in their configuration files or even in binaries, which can take much effort to fix, and much frustration from the business with the corresponding downtime. A good network can have the ability to renumber as one of its design objectives (e.g. using DHCP for clients), but in practice this is never a primary concern when servers are being set up and configured to talk to each other, and a dependence on hard-coded IPs quickly builds up.

In fact, this is often done on purpose: to reduce the application's dependence on DNS servers which may or may not always be reachable, or because users do not wish to publish information about their internal network in the global DNS, nor set up a private internal DNS service. They may be communicating with a third party via an 'extranet' or VPN, which shares no common DNS infrastructure, so hard-coded IP addresses are the only option.

In the past, businesses for whom this was a concern were able to get provider-independent address space and get the ISP of their choice to announce it. Because of the routing table explosion, this option is simply not permitted any more.

Does IPv6 help here? Not much that I can see. IPv6 has the ability to auto-configure interfaces and learn the addresses of routers, similar to the service that DHCP provides with IPv4. I have seen proposals for automatic renumbering (RFC2894) and DNS extensions to aid this (RFC2874). But servers are typically configured statically, and nothing prevents sysadmins hard-coding IP addresses in files - and so they do, sometimes with good reason as outlined above, and sometimes just because of poor practice. Either way, renumbering a network of servers will always remain a painful and expensive operation, in my opinion, simply because renumbering is not just about changing interface configuration; it's about changing all the applications and their network interactions with each other.

3. THE MULTI-HOMING PROBLEM
===========================

The Internet is now a critical piece of infrastructure for many businesses, and will become more so as voice-over-IP threatens to take over from traditional telephony. There is therefore a business driver to become "multi-homed", that is, to have more than one Internet connection and for traffic to continue to flow seamlessly even if one link fails (albeit at reduced capacity, of course).

There are essentially two ways of multihoming: having multiple links to the same ISP, or having separate links to multiple ISPs. The first case is straightforward to implement technically, and has no impact on IP numbering or routing in the global Internet. However, the business is still at risk if the ISP itself suffers a catastrophic problem. Whilst the reliability of ISPs is improving, it's probably fair to say the frequency and duration of outages in an ISP link are still significantly higher than would be expected from a PSTN provider.

So, it's preferable from the end-business's point of view to link to two or more separate ISPs. However, in the current Internet infrastructure this causes a number of resource issues, which would become crippling if even a few tens of thousands of businesses decided to do it. The problems stem from the way the Internet is connected together and manages its topology; once the same entity is reachable through more than one part of the Internet, the whole of the Internet has to learn about the different ways to reach that entity.

(3.1) ROUTING. The multi-homed entity will announce its route through both ISP A and ISP B, resulting in a new route and two sets of network-layer reachability information propagating through all backbone routers in the Internet. This exacerbates the explosion in routing tables outlined above. Upgrading backbone routers everywhere has an unacceptably high cost, and in response people are starting to filter out routes they see as "unimportant".
The net result is loss of connectivity to parts of the Internet, especially to smaller ranges of IP addresses such as those of multi-homed entities.

(3.2) AUTONOMOUS SYSTEM NUMBERS. In order to build a map of the topology of the Internet, it is divided into "islands" called autonomous systems (AS's), each of which is an independent network with its own routing policy, identified by an autonomous system number. A multi-homed entity (when multihomed to two different AS's) needs its own AS number:

      AS 1234        AS 5678
       ISP A          ISP B
           \          /
            \        /
            AS 54321
          our business

AS numbers are 16 bits, thus giving a maximum of 65536 autonomous systems in the Internet (actually a bit less than this, since some ranges of AS numbers are reserved), of which over 18,000 are active already. Thus it only takes a few tens of thousands more multi-homed entities before we run out. Notice that our multi-homed business AS 54321 almost certainly does not carry any transit traffic - that is, no packets from ISP A to ISP B or vice versa are carried by the business. It's just an endpoint - but it still needs an AS number.

(3.3) IP NUMBERING. The multi-homed entity will either have to get address space from ISP A, and re-announce it through ISP B, or will have to get provider-independent address space and announce it through both.

(3.3.1) If the entity gets address space from one ISP, the same addresses will end up getting announced twice: once by the ISP announcing the aggregate route for the whole block, and once by the multi-homed entity announcing just its smaller portion. The existence of more-specific routes causes difficulties for route filtering (see later).

(3.3.2) If the entity gets provider-independent space, then that means potentially large numbers of 'small' routes being advertised. Any route smaller than a /24 or even a /22 is currently likely to be filtered out by backbone routers, to reduce bogus routes and routing table size.
As a result, an entity that only needs (say) 16 IP addresses may end up being allocated 1024 or more, just so that its route is not lost.

Does IPv6 help us here? No. It sits on top of the same Internet topology, learned by the same BGP protocol, with the same 16-bit AS numbers. It runs on the same hardware, with the same problems if too many routes are learned. Potentially there could be more scope for allocating provider-independent addresses (3.3.2), but current policy for IPv6 does not permit it, because of the routing table explosion problem.

Closely linked to multihoming is the idea of 'mobility' - decoupling the unique identity of a device from its location on the network, so that an end machine can move seamlessly between networks without affecting traffic flows. A good mobility solution might also solve the renumbering and multihoming problems. Architecturally, IPv6 is not really any different from IPv4 when it comes to mobility.

4. ADDRESS DEPLETION
====================

This is the most obvious of the problems with IPv4, and although in practice not as pressing as it once was, it will clearly become a problem in the end. There was a suggestion that an explosion in IP use from mobile telephones would be the driver which would force acceptance of IPv6; however, this has yet to occur.

IP addresses are 32-bit numbers, giving a maximum of 2^32 unique addresses in total, or just over 4 billion. But there are a number of facts of life which prevent all of this being used efficiently:

(4.1) Some addresses are reserved: e.g. multicast and experimental (224/3, which is 1/8th of the entire space), loopback (127/8) and other reserved blocks such as 0/8.

(4.2) In the early days of the Internet, enormous blocks of address space were allocated to organisations which needed only a fraction of that space, but it is proving very hard to reclaim it (mainly because it is so painful for them to renumber - see point (2) above)

(4.3) Each physical network (e.g.
a LAN) needs its own block of IP addresses, so that all machines on that network share the same 'prefix'; and these blocks must be allocated in sizes that are powers of two. Furthermore, people like to allocate bigger blocks than are strictly necessary, to allow for growth without the pain of renumbering later (see (2))

(4.4) The old addressing plan allocated blocks of /8 (class A), /16 (class B) and /24 (class C). Whilst this is no longer required - the buzzwords are Classless Interdomain Routing (CIDR) and Variable-Length Subnet Masks (VLSM) - it is often still done just for convenience. When working with IP addresses in their decimal form, e.g. 192.168.4.22, each of the dots is on an 8-bit boundary, and laziness means that people will allocate a /24 when a smaller block would do.

As a result, IP addresses are becoming increasingly hard to come by; any allocation from your ISP has to be justified, because the ISP in turn has to justify it to the Regional Internet Registry from which its larger blocks come. Failure to perform this process properly means an ISP risks not getting additional blocks of IP numbers when it needs them.

Now, IPv6 has 128-bit IP numbers. *Surely* this will never suffer from any address depletion problem? Don't be too sure. Start by reading the short document RFC 3587, which refers to the IPv6 allocation policy at http://www.ripe.net/ripe/docs/ipv6policy.html

Firstly, there is a hardcoded break in the IP address: 64 bits are for the network number (prefix), and 64 bits identify a machine on the network. This was done essentially for laziness in the IPv6 autoconfiguration mechanism; on many media types there is a unique key which can be used in the lower 64 bits, such as the 48-bit MAC address on an Ethernet card. So what you gain is the ability to construct a unique IP address just from the network prefix (which you learn by listening to a router) and your own MAC address; what you lose is half the length of the IP address.
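That MAC-derived lower half is the 'modified EUI-64' scheme: flip the universal/local bit in the MAC's first byte and insert 0xFFFE between its two halves. A rough Python sketch, where the prefix (from the 2001:db8::/32 documentation range) and the MAC are made-up examples:

```python
import ipaddress

def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64: flip the U/L bit, insert 0xFFFE between the MAC halves."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    b = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    return ":".join(f"{(b[i] << 8) | b[i + 1]:x}" for i in range(0, 8, 2))

prefix = "2001:db8:1:1"              # /64 prefix learned by listening to a router
mac = "00:11:22:33:44:55"            # the card's burned-in address
addr = ipaddress.ip_address(f"{prefix}:{eui64_interface_id(mac)}")
print(addr)                          # 2001:db8:1:1:211:22ff:fe33:4455
```

Note how the MAC survives essentially intact (apart from one flipped bit) in the bottom 64 bits of the address - which is also the source of the privacy problem discussed under NAT below.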
Secondly, because of this hard-coded break, you cannot realistically give an organisation even a /64 of address space; that would restrict it to having only one physical network (LAN), and hence it would be unable to have multiple LANs connected by routers. The current policy is to give a /48 to each end-user organisation, allowing it 2^16 networks. The RIPE policy does not make it clear what a dial-up ISP is supposed to do; I imagine that each person who dials in ought to be allocated a /48 too, otherwise they wouldn't be able to route a multi-LAN network from it without engaging in NAT trickery. But if you have to allocate a /48 for every leased-line customer, every DSL customer, and every dial-up port, then a medium-sized ISP is going to fill a /32 easily [the minimum ISP allocation size from the current ipv6policy] and will be coming back for more.

Now, even though we are still in early experimental IPv6 rollout, address allocation is clearly a mess. Nearly every week I see an announcement that a new /23 block has been allocated from ICANN to RIPE. Quite often, they will allocate eight separate /23's, but not adjacent ones which could be aggregated into a /20! Where is the sense in that? Why can't ICANN just allocate a /12 to RIPE and let them get on with it for a few years? Perhaps because if they did, and gave the same to all the other regional Internet registries, then the address depletion time-bomb would clearly be seen to be ticking again.

IPv6 addresses are a limited resource; the RIPE IPv6 allocation document stresses (para 3.5) the importance of conservation of addresses, the need for supporting documentation for allocations, and the avoidance of stockpiling unused addresses. So there will be the same bureaucracy for obtaining IPv6 addresses as exists for IPv4.

There is currently a proposal that the ITU should take over IPv6 address allocation, and manage it on a country-by-country basis.
That would mean slicing up the IPv6 address space even further, into hundreds of country-sized units. But unlike telephone numbers, which can be extended by a digit if you run out, this is a finite resource which can never be extended without redesigning the protocol. Hopefully this takeover won't happen, simply because the driver behind address allocation should be technical - and that's related primarily to routability, not geography. But if it does, it will be another threat to the longevity of IPv6.

5. NETWORK ADDRESS TRANSLATION
==============================

One of the ways Internet users have been getting around the address-depletion problem is to use NAT - Network Address Translation. Although it was never intended for this purpose, RFC1918 defines ranges of 'private' IP addresses which are guaranteed never to appear on the public Internet. As a result, you can build your own network using these addresses, and rewrite them dynamically as packets leave and enter your network, effectively 'sharing' one or a few IP addresses between all your users. As a side benefit, you get the ability to renumber your network externally at little cost. There is also a perceived security benefit from the stateful nature of NAT, which allows traffic 'out' but not 'in' (it's no better than a properly-designed stateful firewall, but people like it).

However, NAT has its downsides, and is generally viewed as evil in Internet engineering circles. The problems include:

(5.1) Some protocols, like FTP, carry IP addresses within them; if you want to perform NAT on these protocols, then the routing device must actually understand the contents of the packets, take them apart, modify and reassemble them. This is computationally expensive, it's not really what a router should be doing in the first place, and it is limited to whichever protocols the vendor has decided to implement.

(5.2) When two businesses merge, they generally want to combine their networks.
However, if they have both been allocating from RFC1918 address space independently, in many cases there are conflicts which force painful renumbering. The same can apply to 'extranets', where you wish to give an external entity direct access to your own network, e.g. through a VPN.

Surely, with so much address space available, IPv6 does not need NAT? Unfortunately, it does. There are people running NAT for IPv6, right now.

The main problem with IPv6 here is one of privacy: if the bottom 64 bits of your IPv6 address are your MAC address, then wherever you take your laptop and whichever cybercafe you plug it into, every server knows when it's talking to the same person just by looking at the packets you send. It's like a permanent and unerasable cookie sitting on your hard drive; but worse, because it's visible to every server you talk to and every network device your traffic passes through. (And once thieves realise this, and start altering MAC addresses to sell stolen PCs, the whole system of unique allocation of MAC addresses will break.) So people who care about privacy are using mechanisms to dynamically allocate or masquerade the lower 64 bits of IPv6 addresses.

Additionally, if dial-up ISPs decide they can only allocate /64 or /128 addresses to users on modems or mobile phones, then IPv6 NAT will become ubiquitous, if only to allow users with their own networks to have dial access to the Internet. (Notice, of course, that dial-up ISPs will only make allocations smaller than /48 if they perceive dial-up ports as contributing to the IPv6 address depletion problem!) And if you have a permanent connection with a /48, but want to use a dialup account for backup, then typically you'll get a *different* /48 when you dial in (unless, that is, the ISP provides you with a special dial-up account which interacts with the ISP's routing mechanism to give you the same block). At that point you either have to renumber or use NAT to make use of the link.
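For concreteness, the core of a source NAT - the same mechanism people are now reimplementing for IPv6 - is just a pair of lookup tables. This is a toy sketch with invented addresses and port numbers, ignoring timeouts and protocol details:

```python
import itertools

PUBLIC_IP = "203.0.113.1"            # the one shared public address (documentation range)
_next_port = itertools.count(40000)  # naive public-port allocator
nat_out = {}                         # (private ip, private port) -> public port
nat_in = {}                          # public port -> (private ip, private port)

def translate_out(src_ip, src_port):
    """Outbound packet: replace the private source with the shared public address."""
    key = (src_ip, src_port)
    if key not in nat_out:
        port = next(_next_port)
        nat_out[key] = port
        nat_in[port] = key
    return PUBLIC_IP, nat_out[key]

def translate_in(dst_port):
    """Inbound packet: only ports with existing state are let back 'in'."""
    return nat_in.get(dst_port)      # None means drop

print(translate_out("192.168.1.5", 51515))  # ('203.0.113.1', 40000)
print(translate_in(40000))                  # ('192.168.1.5', 51515)
print(translate_in(12345))                  # None - unsolicited inbound traffic dropped
```

The nat_in table is what produces NAT's 'out but not in' behaviour; it is also exactly the state that payload-embedded addresses (as in FTP) fail to traverse without protocol-aware fixups.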
The one place where IPv6 scores here is that if every organisation gets its own unique /48 prefix when it connects to the Internet, the problem of clashing RFC1918 addresses should be eliminated.

6. SMALL PROVIDERS, DEVELOPING COUNTRIES
========================================

Many of the problems above are especially acute for small ISPs in the developing world:

(6.1) they cannot get the address space they need, because their upstream ISP is afraid of giving out blocks as large as a /24 or /22 or whatever they say they need

(6.2) they in turn are afraid of giving out anything larger than single IP addresses to their customers, to avoid running out of IP addresses; in some cases they use NAT and don't even give their customers a real IP address

(6.3) renumbering is especially painful, because not only would all their own systems need to be renumbered, but all their customers would have to renumber as well

(6.4) they want to be multi-homed to more than one provider; typically they have to use satellite rather than fibre, making communications affected by weather conditions and orders of magnitude less reliable

(6.5) getting provider-independent address space costs money (you need to join a regional internet registry); in the early start-up phase this may not be financially viable, even though it is initial ISP infrastructure such as DNS caches which can be the hardest to renumber later

(6.6) typically, their upstream will not be one of the major transit network providers; it may be a smaller ISP which specialises in handling satellite connections, for example. Or their regulatory regime may force them to use a particular monopoly provider, because the government does not permit otherwise. Such upstreams may have neither the experience nor the muscle nor the incentive to get larger allocations on behalf of their customers.

Does IPv6 help here? Not really. If they get addresses from their upstream, they will probably only be allocated a /48.
That very much limits what the ISP can do (e.g. it could have 65536 end-users each with a /64, but we already said that a /64 is too small to give to an end-user). And they still have the same problems with having to renumber later. To get provider-independent space they will still have to join a registry and absorb the cost, just the same as if they got an IPv4 provider-independent block; the RIPE policy also requires that they have a plan for making at least 200 /48 assignments to other organisations within two years, which may not be achievable for a micro ISP in a developing country. And considering that in the world today no ISP can run on IPv6 alone, if they want to be able to talk to any significant percentage of the rest of the world, then they will need an IPv4 allocation as well anyway - in which case they are back to square one.

7. FALSE ROUTES AND INSTABILITY
===============================

There are a number of problems with IPv4 routing in the core of the Internet which can result in instability (e.g. blackholes or loops), meaning that parts of the Internet become unreachable for a period of time.

One of those is 'route flaps' - repeatedly changing announcements about the status of a network, forcing all the core routers to propagate this information and to recalculate their routing tables. 'Flap dampening' has proved to be a good solution to this problem: you ignore flapping routes and don't pass them on, thereby penalising the flapper by not routing traffic to him until he reverts to proper behaviour.

However, more insidious is the problem of false route announcements. There is little to stop a provider in one part of the world either accidentally or maliciously injecting routes for address space which belongs to another provider. This problem is made worse if the routes they announce are for smaller fragments (more specific prefixes) than the real announcement.
The rules of IPv4 routing say that the most specific route wins; so an ISP which is many AS hops away can end up sucking all the traffic away from a bigger ISP.

Route filtering is a partial solution: that is, you examine registry information to work out where a particular route *should* be announced from, and if it comes from somewhere else, you ignore it. However, it is currently only a partial solution. Firstly, there is no current way to say in a routing registry entry "you can expect to see the aggregate route, but no more-specifics". Secondly, there are legitimate reasons why you might be announcing more specific prefixes, such as a multi-homed customer using your address space, or traffic engineering, so not all more-specific routes are bad. Thirdly, filtering is most effective when done closest to the source of the problem. Unfortunately, there are thousands of AS's out there and not all of them adhere to the same high standards of network cleanliness. Once the bad routes have found their way into parts of the Internet, you are reliant on *everyone* else having suitable filters. Those who do not will lose your traffic. Additionally, filtering on routes after they have been through other AS hops means that you have to trust the intermediate AS's not to have tampered with them. A malicious router could easily inject or modify a route so that it *appears* to have the correct origin AS.

Does IPv6 help here? Not at all - it sits on the same BGP topology/route-learning infrastructure as IPv4.

8. SOURCE ADDRESS SPOOFING
==========================

How do you know who's knocking on your door? In the Internet world, every packet has a 'source IP address' declaring the IP address of the sender. Unfortunately, this address is not used as part of the process of delivering a datagram, and is often completely unchecked. It is trivial to send a packet with a 'spoofed' IP address which appears to come from someone else.
Whilst it may not achieve anything useful - any response packet would be sent to the wrong person - it can cause a lot of mischief, and it also means you cannot use the source address to infer anything about the true source of the packet, which you might want to do for policy reasons.

Network attacks which involve spoofed packets are notoriously hard to trace, as you have to follow them backwards through the network, link by link, ISP by ISP. It's a slow and laborious process, relies on cooperation and goodwill between ISPs, and often the attack stops before the true source has been identified.

Good-neighbour ISPs will perform anti-spoofing filtering against their customers, to prevent their customers injecting packets with spoofed source addresses. However, only a small proportion of ISPs do this, because the benefit accrues primarily to the rest of the Internet and not to themselves. Hence there is little self-interest driving it. Furthermore, it's technically difficult to roll out across every single port on every single access platform (dial-up, leased line, DSL etc).

Current Internet architecture makes it very difficult to detect spoofed packets at your borders. You can reject packets whose source is from a known bad range (e.g. RFC1918 private addresses), and often you can reject packets from another ISP whose source is one of your own addresses (although not for addresses used by multi-homed customers). But if the source address is not one of those, then routers generally cannot tell whether the packet could have legitimately originated from, or transited through, the peer you received it from. Some rules probably could be inferred from routing registries, but because of the processing costs, generally only route announcements are filtered in this way, not actual packets.

Does IPv6 help? No, it suffers exactly the same source address spoofing problems as IPv4.

9. ABUSE
========

The Internet is subject to a whole host of different forms of abuse, including:

- bandwidth flooding (e.g. streams of packets from "owned" machines), with or without spoofed source addresses
- SYN-flooding, with or without spoofed source addresses
- malformed packets triggering bugs in kernel implementations
- probing and port scanning
- attacks against O/S and applications; "rootkitting"
- reception of unwanted data, in particular "spam"

To my knowledge, IPv6 is no less susceptible to these than IPv4. These problems stem from the low cost of Internet bandwidth (making ping sweeping and spamming cheap pastimes); poor software; the rise of permanently-connected home machines running this poor software; spoofed source addresses; the difficulty of tracing attacks in real time; the inability to establish with certainty the credentials of the person you are talking to; and the connectionless nature of the Internet, resulting in an inability to reject a stream of traffic which you don't want.

The second network stack in IPv6-aware machines opens up a second potential set of security holes. I build my kernels with IPv6 removed completely for that reason.

10. FAIR RESOURCE SHARING AND "QUALITY OF SERVICE"
==================================================

This is a large area which is often poorly understood (by me included). However, I tend to think about it in two parts:

(10.1) In the core

Arguably, there is no problem with Quality of Service within the core of the Internet. Bandwidth is cheap, and is capped (or metered and charged) at the ingress points at the edges. If congestion occurs somewhere down the line, then that's poor management, but there's not much you can do about it because the traffic is aggregated. Somebody's packets are going to be dropped, and I certainly wouldn't want *my* packets to be dropped just because someone else decided to mark their packets as "high priority" and therefore somehow intrinsically more important than mine.
In that model, everyone would mark their packets "high priority" too, and we'd be back at square one.

Some people conceive of Quality of Service as negotiating an end-to-end service level for a particular stream of traffic (such as bandwidth available, latency, maximum packet loss) and then delivering, monitoring (and paying for) such an agreement. There may be a value proposition there, if the network really can deliver on its promise and the end-user can verify they are getting what they're paying for. However, it's emulating the public switched telephony network, and perhaps you should just be using that instead... besides which, the cost of implementation would be enormous, and few people are prepared to pay a sufficiently high premium when best-effort traffic is "good enough" for most purposes.

(10.2) At the edges

This is where congestion certainly does apply: people buy a pipe of 512K or 2M or whatever, and then throw a whole office full of users onto it. I've often experienced the problem where I'm behind (say) a 2 meg pipe, and someone else is downloading MP3s at 2 megabits per second. My attempt at an interactive ssh or web session, averaging a few kilobits per second, suffers huge latency because of the first-in-first-out nature of queues in networking devices. The sending machine sends as fast as it can, throttled back only by the TCP window, which limits the number of unacknowledged packets which can be 'in transit' at any one time. As a result, the queue always remains full by that amount, and my traffic ends up behind it in the queue.

It seems patently unfair: if a bunch of users are pulling 10K bits per second each, then you'd think the biggest user(s) should be throttled back incrementally to use only the spare bandwidth left over after the small users are satisfied, until such point as each user is getting the same. Defining a 'user' is problematic though (is it an end-point IP address? all the streams belonging to one application? a single stream?)
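The latency penalty described above can be estimated directly: a bulk transfer keeps a full TCP window's worth of unacknowledged data queued at the bottleneck, and every interactive packet must wait behind it. A back-of-the-envelope calculation, using a 64 KB window and a 2 Mbit/s pipe as example figures:

```python
# Rough illustration of FIFO queueing delay at a congested access link.
LINK_BPS = 2_000_000        # 2 Mbit/s access pipe (example figure)
WINDOW_BYTES = 64 * 1024    # a common TCP window size (assumption)

# Time to drain a full window's worth of queued data at link speed:
queue_delay = WINDOW_BYTES * 8 / LINK_BPS
print(f"Added latency for interactive traffic: {queue_delay * 1000:.0f} ms")
# prints: Added latency for interactive traffic: 262 ms
```

A quarter of a second added to every round trip is exactly the sluggishness an interactive ssh session exhibits behind a saturated pipe, and the figure grows linearly with the window size of the competing transfer.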
At the edge, traffic prioritisation is sometimes configured - so your "important" real-time voice and videoconferencing data can take precedence over your boring stock control and invoice files. This approach can have benefits, as long as you take account of several fundamental things:

- priority is a relative term - if everything ends up getting marked as "high" priority then there is no benefit
- once you fill your pipe with "high" priority voice or video, then any further traffic causes catastrophic degradation for everyone
- this traffic can starve out other traffic which needs to get through. In particular, I fail to see why bandwidth-wasteful services like videoconferencing should be treated as any more "important" to a business than web service to its customers, for example.

So it works best where the proportion of "high" priority traffic is low compared to the total traffic. But in any case, setting all these concerns aside for now, there doesn't seem to be anything new in IPv6 which improves matters compared to IPv4.

11. SECURITY
============

Public access networks are open to traffic being intercepted (sniffed) and modified in transit. Establishing beyond doubt the identity of the person you are communicating with is an important problem, as witnessed by the number of "phishing" scams used to steal money from unsuspecting punters.

Both IPv4 and IPv6 share a mechanism which can solve some of these problems at the network layer: IPSEC. However, without a universal mechanism for keying sessions, and a suitable trust model, IPSEC has been relegated to site-to-site VPN links and home workers. I'm unaware of any implementation which can dynamically establish a secure connection with an unknown site ('secure' including secure against man-in-the-middle attacks); if one exists, it has not been widely deployed. IPv6 mandates that IPSEC be provided as part of an implementation, but does not require that you turn it on, nor specify how to make use of it.
So in that regard it's no more secure than IPv4. For now, we continue to rely on end-to-end security at layer 4 and above, primarily SSL/TLS, using certificates from trusted Certificate Authorities to bind public keys to DNS hostnames.

12. ADDITIONAL PROBLEMS CAUSED BY IPV6
======================================

If IPv6 doesn't solve any problems of IPv4, it certainly introduces some short-term ones of its own.

(12.1) Compatibility. If you run IPv6, then you can only talk to other IPv6 machines. However, almost every useful server (e.g. web or mail) on the Internet talks exclusively IPv4 right now. So if you want to talk to them, then you must either have an IPv6-to-IPv4 translator or proxy somewhere in your network, or you must run dual stacks - IPv4 in parallel with IPv6. The first adds complexity to your network, and the second adds complexity to your client. There is little or no gain accrued to the site performing the rollout.

(12.2) Changing standards. Are DNS records AAAA (RFC 3596) or A6 (RFC 2874)? Make your mind up.

(12.3) Broken applications. Take the following example from my FreeBSD box:

    $ dig @ns.ripe.net. ripe.net. soa

    ; <<>> DiG 8.3 <<>> @ns.ripe.net. ripe.net. soa
    ; (2 servers found)
    ;; res options: init recurs defnam dnsrch
    ;; res_nsend: Protocol not supported

What's happened? ns.ripe.net has two addresses: IPv4 (193.0.0.193) and IPv6 (2001:610:240:0:53::193). The client has given precedence to the IPv6 address, found it doesn't work (I don't have IPv6 compiled into the kernel, nor any IPv6 routing), and given up rather than trying the IPv4 address. This is annoying. I guess applications have to give precedence to IPv6 addresses; if they gave precedence to IPv4, then the IPv6 stack would never get exercised at all.

(12.4) Changes to APIs. Existing code using the sockets API has to be rewritten to support IPv6.
Even though TCP and the higher-level protocols are unchanged, the API functions for mapping an IP address in character format into internal format and vice versa, and for resolving hostnames, are different. Whole clusters of new applications (e.g. ping6, telnet6), or new command-line flags to existing applications, have arisen.

(12.5) IPv6 addresses are a pain to type, and impossible to remember. (Some would argue this is an advantage, as it discourages hard-coding of IP addresses in places where DNS names should be used instead.)

(12.6) Learning curve. There are new practices to be learned for router discovery and interface autoconfiguration. The security aspects of IPv6, such as good practices for firewalls and packet filtering, are often not well understood.

(12.7) Privacy concerns, as outlined in point (5) above.

HOW DO THESE PROBLEMS RATE?
===========================

How pressing these problems are depends on your point of view. For end users with permanent connections, I believe the most pressing problems are network reliability (route instability) and abuse. Second to that is the need to renumber when changing provider. Third come the problems caused by RFC1918 addresses when merging businesses or interconnecting with third parties. The ability to multi-home easily would be a bonus.

For ISPs and transit providers, the biggest concerns are routing table explosion, route instability and spoofing, network abuse, and address depletion (or at least the tight allocation policies and bureaucracy which derive from it).

TOWARDS A SOLUTION
==================

If IPv6 is not the solution, then what would the solution look like? Here are a few thoughts about some desirable properties it might have.

* The end-user wants the network to be as dynamic as possible; for example, to be able to connect to ISP A over DSL, then make a dial-up call to ISP B and share the traffic, and then drop the DSL connection to ISP A, with no loss of connectivity.
Or to be able to hop seamlessly from one WiFi cell to another on a different provider. This level of mobility would be the ultimate solution to the renumbering and multi-homing issues. (But a user should also be able to move from one part of the Internet to another _without_ revealing that she is the same person, if she so wishes.)

On the other hand, the network provider wants the network to be as static as possible: no route flaps, no injections of false routing information, minimum routing table size for the topology of the Internet. A clever solution would meet both these apparently conflicting goals.

* Network addresses would be extensible, so there is no lifetime limit on the total address space available. (Telephone numbers can gain an extra digit when depletion looks likely; why not the same with IP addresses?) And for community networks and developing countries, it would be very desirable for any end-point host to be able to host a large downstream network, whose users can each host large downstream networks, and so on.

* Route aggregation should take place automatically and seamlessly wherever possible. This might take place in conjunction with address extension; for example, if you are an ISP allocated prefix AAAA, then you could allocate prefixes AAAA0, AAAA1, AAAA2 etc. to your clients. Each client in turn could decide to become an ISP, and offer longer prefixes to its own clients.

* It's highly desirable that packets cannot have spoofed source addresses (by inherent design, not by the good-will of people rolling out optional anti-spoofing filters). Also, it should not be possible to 'steal' someone else's IP address and use it yourself, e.g. by pretending to be them roaming from one network to another.

* It's also desirable that if you receive a packet from a source you don't like, you should be able to block all further packets from that sender *at source*, so they don't waste bandwidth on your incoming link.
It should be possible to share lists of "owned" machines, as spam blacklists do, and block traffic from them at the network level. This solution could end up allowing end-users to set their own routing policies across the Internet, or for such routing policies to be determined dynamically.

* In my opinion, an ideal solution would have either no IP number registries, or many registries; either way, you should be able to get as much address space as you need or want, without significant cost or bureaucracy and without negative impact on other Internet users.

Such a solution would not be much like IP as we know it, and would therefore be a risky development, much as the development of the Internet itself was a risky experiment. I can see how IPv6 appears to be a "safe bet", given that it uses exactly the same design as IPv4 bar some tweaks to address size and packet formats and options. But if it fails to address the problems of IPv4, then it could turn out to be an expensive waste, or just a stopgap.

FOOTNOTE
========

After writing this article, mainly to organise my own thoughts on the matter, I was pointed to the following thoughtful document by Geoff Huston, who is infinitely more qualified than me to make such comments:

http://ispcolumn.isoc.org/2003-01/Waiting.html

At the bottom it also links to a response from the IPv6 forum, which makes for amusing reading. It's highly defensive, and they are clearly very touchy on the matter.

Geoff's conclusion is that IPv6 ultimately *will* roll out, based on vast volumes of mobile devices. He may be right. But unless every device which can potentially act as a router is given a /48 assignment, we can look forward to IPv6 NAT remaining with us indefinitely :-)

UPDATE 2007-01-16
=================

(1) I have also heard an argument that because IPv6 eliminates NAT, it also eliminates the need for application layer gateways (ALGs) in firewalls, to handle protocol problems which are introduced by NAT.
These are typically for protocols which have separate control and data streams (e.g. FTP has separate control and data channels; SIP/SDP + RTP are separate streams for VOIP). In a NAT world, firewalls must parse and modify the messages in the control stream, and then set up suitable NAT mappings for the data streams.

However, a little thought makes it clear that a firewall must perform almost identical actions even in a NAT-free IPv6 world. This is because a firewall cannot have a "permit all inbound" rule; based on what was negotiated on the control stream, it must install temporary holes for the data stream which permit only traffic for the correct source host/port and destination host/port pairs. This is essentially the same as installing temporary NAT mappings.

The firewall implementation is made slightly easier without NAT, because it no longer needs to *modify* the contents of the control stream, which involves tweaking the TCP checksums and sequence numbers. However, it still has to fully *parse* them, understand and validate the data stream activity, and open the appropriate temporary holes. So ALGs are still needed in an IPv6 world, and they are almost as complex as ALGs in an IPv4+NAT world.

(2) A claimed advantage of IPv6 is that it has a fixed header size, and therefore hardware implementations of IPv6 routers should scale better to terabit links. This should probably also be a goal in the "towards a solution" section.

(3) I wrote before: "The one place where IPv6 scores here is that if every organisation gets its own unique /48 prefix when it connects to the Internet, the problem of RFC1918 addresses clashing should be eliminated." However, this doesn't help you if you change ISP. The address block you were using must be returned to that ISP, and the ISP may allocate it to another customer. If you continue using it, even just for internal purposes, then you risk being unable to communicate with whoever now has your old address allocation.
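To make point (1) concrete, here is the sort of parsing an ALG must still do even without NAT: reading a classic FTP PORT command (as defined in RFC 959) from the control stream and deriving the single host/port pinhole the firewall should open. The address used is a documentation example; this is a sketch of the parsing step only, not of a firewall.

```python
# An FTP client announces its data endpoint on the control channel as
# "PORT h1,h2,h3,h4,p1,p2", where the port number is p1*256 + p2.
# A firewall ALG must parse this to open exactly one temporary hole.
def port_pinhole(line):
    """'PORT h1,h2,h3,h4,p1,p2' -> (ip, port) for the data connection."""
    assert line.upper().startswith("PORT ")
    h1, h2, h3, h4, p1, p2 = (int(x) for x in line[5:].split(","))
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

ip, port = port_pinhole("PORT 192,0,2,10,7,138")
print(ip, port)  # prints: 192.0.2.10 1930
```

Without NAT the ALG is spared rewriting this line (and fixing up TCP checksums and sequence numbers), but as argued above, the parsing and the temporary pinhole are needed either way.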