A theorem of Thorup and Zwick (Proposition 5.1 in their 2001 paper Approximate Distance Oracles) states that a routing function on a network with n nodes and m edges uses, on average, at least min(m, n^2) bits of storage if the “route stretch” (the ratio between the actual path length and the optimal path length) is less than 3 (i.e., if two nodes are two hops apart, the route actually taken between them must be less than six hops). On the Internet topology, we can expect the n^2 term to dominate, so spreading those n^2 bits out among n nodes yields an average of n bits per node – i.e., each router’s routing table has to hold one bit for every device on the network.
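Just to make that arithmetic concrete, here’s the back-of-the-envelope version (the billion-device figure is an assumption for illustration, not a measurement):

```python
# Back-of-the-envelope for the Thorup-Zwick bound: ~min(m, n^2) bits
# total, i.e. ~n bits per router once the n^2 term dominates.
def table_bytes_per_router(n_devices: int) -> float:
    """Average routing-table size per router, in bytes, for stretch < 3."""
    total_bits = n_devices ** 2       # the dominating n^2 term
    per_router_bits = total_bits / n_devices
    return per_router_bits / 8

# A (hypothetical) Internet of a billion devices:
print(table_bytes_per_router(10**9))  # 125000000.0 -> ~125 MB per router
```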

Not a very encouraging result for those of us designing routing protocols.

Yet there is hope. The result is only an average. We can do better than the average if we allow our routing function to be skewed towards certain network topologies. And it occurs to me that the Internet doesn’t change fast enough that we can’t skew our routing function towards the current network topology.

How can we do this? With dynamic addressing: skewing our address summarization scheme to reflect the current network topology.

Of course, we’re already doing this, to an extent. It’s what every IP network engineer does when assigning summarization blocks: the blocks correspond to groupings of devices within the network topology.
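The manual practice looks something like this sketch – carve a site block into per-group subnets so that one summary route covers them all (the group names and block sizes are invented for illustration):

```python
import ipaddress

# Carve a site's /24 into per-group /26s; upstream routers then need
# only the single summary route 192.0.2.0/24 for the whole site.
site = ipaddress.ip_network("192.0.2.0/24")
groups = ["engineering", "sales", "lab", "guests"]
plan = dict(zip(groups, site.subnets(new_prefix=26)))
for name, subnet in plan.items():
    print(f"{name:12} {subnet}")
```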

But I’d like to take this quite a few steps further. I’d like the summarization blocks themselves to be allocated automatically. I’d like to be able to handle the case where a WiFi link connects devices from two different ISPs, or where a company wants to become multi-homed with two ISPs. In both cases, we’re connecting devices from radically different summarization blocks. How can we handle this?

By dynamically readdressing the network. When new links appear, exception routes are placed into routing tables. Over time, the network readdresses itself so that those exceptions can now be summarized together.

Let’s take a simple example – a single autonomous system uniformly running OSPF. When a new link appears, routers recompute their routing tables and then auto-summarize – automatically group addresses into routing-table blocks. Then another algorithm runs, computing, for each router, the complete set of destinations which should be routed to each next hop. The collection of all of these sets forms a topology on the destinations. The objective is now to (re)assign addresses so that as many of those sets as possible can be neatly summarized. Then we reassign addresses to the network devices and gradually morph our addressing structure toward this new topology.
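The auto-summarize step, at least, already exists off the shelf. Here’s a minimal sketch using Python’s `ipaddress` module – the per-next-hop dict is an assumed data model, not any real router’s FIB format – showing both a set that collapses cleanly and one that stays as an exception:

```python
import ipaddress

# Assumed data model: a dict mapping next hop -> the destination
# prefixes routed to it. Collapse each set into minimal summary blocks.
def summarize(fib: dict) -> dict:
    return {nh: list(ipaddress.collapse_addresses(
                map(ipaddress.ip_network, dests)))
            for nh, dests in fib.items()}

fib = {
    "next-hop-A": ["10.0.0.0/24", "10.0.1.0/24"],    # summarizable pair
    "next-hop-B": ["10.0.2.0/24", "172.16.9.0/24"],  # the exception route
}
for nh, blocks in summarize(fib).items():
    print(nh, [str(b) for b in blocks])
# next-hop-A collapses to the single block 10.0.0.0/23; next-hop-B keeps
# two entries - a candidate for renumbering 172.16.9.0/24 next to 10.0.2.0/24.
```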

Obviously, there are performance issues here. These algorithms could be quite time-consuming if run on every route flap. Finding a suitable address assignment might be an NP-complete problem in itself. How do you handle a link between two autonomous systems? And we probably want to consider possible future link failures, too, when assigning those addresses.

But let’s leave all that stuff out for now. It’s not too hard to see what changes we need to start making now to head in this direction, and it’s starting to sound like a routing loop on this blog – we need to shed our dependence on static IP addresses.

We need dynamic DNS. Not like we’ve got it today – taken on as an afterthought, client-only, and so heavily packet filtered that it’s almost useless. Dynamic DNS, coupled with DHCP, needs to be our default mode of operation. A machine boots, be it client, server, or even router. It DHCPs to get its IP addresses and then uses a cryptographically secured transaction to install them on its DNS name. It has to be prepared for those addresses to time out and be replaced with different ones, which will be migrated into its DNS information. Protocols like TCP have to be prepared for mobility, in the sense that even a server (yikes!) may change IP addresses mid-session.
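That boot sequence can be sketched as a toy model – the classes and the shared-key “signature” below are stand-ins for real DHCP and TSIG-style DNS update machinery, not an implementation of either:

```python
# Toy model of: DHCP lease -> secure DNS update -> lease replacement
# -> DNS migration. Everything here is a stand-in for illustration.
class ToyDNS:
    def __init__(self, key: str):
        self.key = key
        self.records = {}
    def secure_update(self, name: str, addr: str, sig: str) -> bool:
        if sig != self.key:           # stand-in for a TSIG-style check
            return False
        self.records[name] = addr     # old address is replaced, not kept
        return True

class ToyHost:
    def __init__(self, name: str, key: str, dns: ToyDNS):
        self.name, self.key, self.dns = name, key, dns
        self.addr = None
    def on_lease(self, addr: str):
        """Called at boot, and again whenever the old lease times out."""
        self.addr = addr
        assert self.dns.secure_update(self.name, addr, self.key)

dns = ToyDNS(key="shared-secret")
host = ToyHost("server.example.net", "shared-secret", dns)
host.on_lease("192.0.2.10")      # boot: DHCP assigns an address
host.on_lease("198.51.100.7")    # lease expired, new block: DNS follows
print(dns.records["server.example.net"])  # 198.51.100.7
```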

And yes, routers too have to use DHCP. In fact, routers have to become not only DHCP servers, but they need to participate in a next generation DHCP that assigns entire blocks of addresses. A router requests a block of addresses for its subnet, then hands those addresses out to its clients. And it has to be prepared for that entire block to time out and be replaced by another.
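In spirit this resembles DHCPv6 prefix delegation; here’s a minimal sketch of the idea, with an invented block server and invented block sizes:

```python
import ipaddress

# Sketch of block-granularity DHCP: an upstream server leases whole
# /28 blocks, a router subleases addresses from its current block and
# must survive the entire block being swapped out from under it.
class BlockServer:
    def __init__(self, pool: str):
        self._free = ipaddress.ip_network(pool).subnets(new_prefix=28)
    def lease_block(self):
        return next(self._free)       # hand out the next free /28

class Router:
    def __init__(self, server: BlockServer):
        self.server = server
        self.renumber()
    def renumber(self):
        """Take a fresh block; all future client leases come from it."""
        self.block = self.server.lease_block()
        self._hosts = self.block.hosts()
    def lease_address(self):
        return next(self._hosts)

server = BlockServer("10.0.0.0/24")
router = Router(server)
a = router.lease_address()   # first host address of the first /28
router.renumber()            # old block timed out entirely
b = router.lease_address()   # first host address of the next /28
print(a, b)                  # 10.0.0.1 10.0.0.17
```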

While all that is being implemented (or not), we can start experimenting with protocols and algorithms that actually implement dynamic address allocation (or not) – and defeat Thorup and Zwick’s n^2 by constantly shifting the network’s addressing to adjust.