Wireless Internet

For a long time, conventional wisdom held that the telephone system was a natural monopoly, or at least a natural oligopoly, because of the need for a large physical infrastructure, namely, the telephone wires. In recent years, though, the widespread deployment of cellular telephones has shown that this is not necessarily true. Clearly, cell phone networks are capable of handling significant traffic loads and delivering near-landline quality of service. The emergence of all-digital cellular telephones, such as PCS, shows that data can be transferred effectively over wireless links, and leads me to wonder: could we build a totally wireless Internet?

I’m not the only person to ask this question. The Ricochet network has been in operation for several years now, the Palm VII handheld offers wireless Internet access, and companies such as Intel are promoting the Mobile Data Initiative. Yet these efforts are largely technical in nature, while I perceive wireless as changing the fundamental rules of the game. Without dependence on a landline infrastructure, is telecommunications really a natural oligopoly anymore? Could we build a free wireless digital communications system?


First, what do I mean by the term “free”? Primarily two things – first, the absence of recurring charges, such as monthly or per-minute fees, and second, an open, non-proprietary infrastructure free from patent or regulatory barriers to entry. The only fee I’m prepared to concede is the initial purchase price of the equipment itself. The telephone will ideally become like a computer or a microwave oven – once you purchase it, you can continue using it free of charge.

The capitalist model would be for the communication system to be controlled by companies, driven by competition to improve service, but with access limited by economic barriers, i.e., the phone gets switched off if you don’t pay your bill. The socialist model would ensure access by having the government control the phone system, as well as everything else. A better solution than either would be to eliminate the large institutions entirely, by eliminating the centralized infrastructure. Decentralization, not privatization, should be the buzzword of freedom.

A decentralized communications infrastructure would have to be based primarily, if not totally, on wireless radio technology. Any non-wireless service, such as the existing telephone system, would require landlines connecting to a central office, implying rights-of-way, centralized ownership of the lines, and consequent dependence on large institutions, either governments or corporations. Only a wireless scheme would allow devices to communicate directly, without any centralized infrastructure.

Current wireless technology (i.e., cell phones) still relies on centralized infrastructure. Sadly, it’s designed to rely on it. A cell phone communicates exclusively with a radio tower, which then relays the call to its destination, typically over landlines. As the cell phone moves, it switches from one tower to another, but never communicates directly with another cell phone. I can’t make a phone call from one car to the next without going through a radio tower.

What’s needed is a new kind of wireless infrastructure – one where the telephones and computers are designed to communicate directly with each other, without relying on a phone company’s switches. Clearly, if I have such a wireless telephone, and my next-door neighbor has such a telephone, the two phones can connect directly, and we can talk without any dependence on third parties, and consequently without any recurring charges.

Yet, the big question remains – can I call from Maryland to California with such a telephone? Hopefully, the answer is yes, and the key lies in the routing technology of the Internet. A typical Internet connection will be relayed through a dozen or more routers. No single connection exists between the source and the destination, but by patching together dozens of connections, a path can be traced through the network for the data to flow through. Network engineers have spent literally decades developing the software technology to find these paths quickly. Theoretically, there is no single point of failure in the system, since the routers can change the data paths on-the-fly if some part of the network fails. The keys to making it work are the adherence to open standards, such as TCP/IP, and the availability of multiple redundant paths through the network.

A wireless infrastructure can be built on similar technology. If I can call my neighbor’s telephone directly, and my neighbor’s telephone can reach the grocery store’s telephone directly, then I can call the grocery store by relaying the data through my neighbor’s telephone. If adequate bandwidth is designed into the system, my neighbor’s telephone can relay the data without any impairment to her service. She can be talking to her hairdresser without even knowing that her phone is relaying my conversation with the grocery clerk.
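
To make this concrete, here’s a minimal sketch, in Python, of how a route could be patched together out of direct radio links. The phone names and the link table are made up; the search itself is the same breadth-first idea that underlies Internet routing.

```python
from collections import deque

# Hypothetical table of direct radio links: each phone can hear
# only its immediate neighbors, not the final destination.
links = {
    "my_phone":       ["neighbor_phone"],
    "neighbor_phone": ["my_phone", "grocery_store"],
    "grocery_store":  ["neighbor_phone"],
}

def find_path(src, dst):
    """Breadth-first search: patch direct links together into a route."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for hop in links.get(path[-1], []):
            if hop not in seen:
                seen.add(hop)
                queue.append(path + [hop])
    return None  # no route: the mesh is too sparse

print(find_path("my_phone", "grocery_store"))
# ['my_phone', 'neighbor_phone', 'grocery_store'] – the neighbor's
# phone relays the call without any central switch being involved.
```

If the neighbor’s phone drops out, the same search simply runs again over whatever links remain – the on-the-fly rerouting described above.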

What stands in the way of building such a free, national digital communications infrastructure?

First, standards. Just as English is the standard used by the author and readers of this document, and TCP/IP is the standard used by the Internet devices that relay it, standards are required for the wireless devices to communicate. IEEE recently standardized a wireless data LAN (802.11) capable of handling 1 to 2 Mbps. To put this into perspective, an uncompressed voice conversation requires 64 Kbps, so a 1 Mbps circuit could handle 15 such conversations. Not only can I talk to the grocery store while my neighbor talks to the hairdresser, but a dozen other people can use the same circuit without any service impairment. Newer compression techniques can improve this performance by a factor of ten.
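
The arithmetic is easy to check; a quick sketch, assuming the 64 Kbps uncompressed-voice figure above and a hypothetical 10:1 voice compressor:

```python
link_kbps = 1000                 # low end of 802.11's 1-2 Mbps
voice_kbps = 64                  # uncompressed telephone-quality voice

print(link_kbps // voice_kbps)   # 15 simultaneous conversations

compressed_kbps = voice_kbps / 10        # assumed 10:1 compression
print(int(link_kbps // compressed_kbps)) # ~156 conversations
```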

IEEE 802.11 is a good start, but the power limitations imposed by FCC regulations may impede its use for any but short-range applications. However, Metricom’s Ricochet network demonstrates that this might not be a showstopper. Working in conjunction with power companies, Metricom pioneered the novel approach of putting low-power radio repeaters on existing utility poles. The repeaters communicate directly with each other, eliminating the need for a landline data connection; only power is required, which is readily available on the pole. A similar approach could be used to build a network that would provide 802.11 coverage to an entire metropolitan area.

Also, the existing 802.11 devices aren’t very sophisticated in their design. They’re designed to be cheap, not effective. Their single most glaring problem is their antenna design. Existing 802.11 transceivers use mainly low-gain, omnidirectional antennas, although Raytheon recently announced the availability of an 802.11 PCMCIA card with a jack for connecting an external, hopefully better, antenna. Improved antennas will probably take one of two forms. Adaptive arrays are preferred by the military, and justly so, but are complex and expensive. Directional arrays, typified by TV aerials, are simpler and therefore cheaper, but must be physically pointed at their destination. One possible scenario would be for routers to use the more expensive adaptive arrays, and for end systems to use mechanically steered antennas. In my opinion, the development of improved 802.11 devices is the single most important advance needed today.
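
To see why antenna gain matters so much, consider a free-space link budget based on the standard Friis path-loss formula. The transmit power and receiver sensitivity below are assumptions chosen for illustration, not measured 802.11 figures, and free-space range is wildly optimistic indoors:

```python
import math

def max_range_km(tx_dbm, rx_sens_dbm, tx_gain_dbi, rx_gain_dbi, freq_mhz=2450):
    """Solve the free-space path loss formula for distance:
    FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44"""
    budget_db = tx_dbm + tx_gain_dbi + rx_gain_dbi - rx_sens_dbm
    fspl_at_1km = 20 * math.log10(freq_mhz) + 32.44
    return 10 ** ((budget_db - fspl_at_1km) / 20)

# Assumed: 20 dBm transmitter, -90 dBm receiver sensitivity, 2.4 GHz band.
print(round(max_range_km(20, -90, 0, 0), 1))    # omnidirectional: ~3.1 km
print(round(max_range_km(20, -90, 12, 12), 1))  # two 12 dBi antennas: ~48.9 km
```

The 24 dB of gain contributed by a pair of modest directional arrays multiplies free-space range by a factor of sixteen, which is why antenna design, not transmit power, is the lever worth pulling under FCC power limits.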

Second, an initial infrastructure is required. An 802.11 telephone would be a popular item if everyone else had one, but initially few people would possess such devices, making it difficult, if not impossible, to route a connection through such a sparse network. Philanthropies could be formed to build infrastructures. A Ricochet-type network could be deployed in cooperation with power companies, who might be persuaded to donate the relatively small amounts of electricity the routers would consume. After an initial investment in the (hopefully) rugged and scalable pole-top devices, the entire network could be managed from a central location. At this point, the network would provide 802.11 service to an entire metropolitan area, jump-starting the service. As more and more people bought these devices, each capable of relaying traffic on its own, the dependence on the initial infrastructure would diminish, hopefully to the point where the pole-top devices wouldn’t need replacement when they started to fail.
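
A back-of-the-envelope simulation illustrates how sharply routability depends on device density. The city size, radio range, and device counts here are invented purely for illustration:

```python
import math
import random

def connected_fraction(n_phones, radio_km, city_km=10.0, trials=50):
    """Scatter phones in a square city; report how often the mesh forms
    one connected network (every phone reachable from every other)."""
    wins = 0
    for _ in range(trials):
        pts = [(random.uniform(0, city_km), random.uniform(0, city_km))
               for _ in range(n_phones)]
        seen, stack = {0}, [0]          # flood-fill from phone 0
        while stack:
            i = stack.pop()
            for j in range(n_phones):
                if j not in seen and math.dist(pts[i], pts[j]) <= radio_km:
                    seen.add(j)
                    stack.append(j)
        wins += (len(seen) == n_phones)
    return wins / trials

for n in (25, 100, 400):          # expect roughly: rarely connected,
    print(n, connected_fraction(n, radio_km=1.0))  # sometimes, almost always
```

Seeding the city with pole-top routers amounts to guaranteeing a connected backbone until consumer density crosses that threshold on its own.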

Furthermore, users would want to call telephones on the conventional phone network, requiring some sort of gateway. A solution to this chicken-and-egg problem would be to provide mechanisms for some fee-based services. Thus, a service provider could construct a network that would, for a monthly fee, interconnect its users and provide gateway service to the existing phone network. It’s possible that the only fee the provider would need to charge would be for the gateway service – initially, almost all connections would go through the gateway, since few people would have the new phones and most calls would be relayed onto the existing phone system. As the wireless network became more and more widely deployed, more and more destinations would go wireless, and the reliance on gateway systems would diminish.
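
The provider’s routing decision is simple to sketch. Everything here – the names, the fee – is hypothetical; the point is that the gateway fee applies only when no free wireless path exists:

```python
def route_call(dst, find_mesh_route, gateway):
    """Prefer a free peer-relayed path; fall back to the fee-based
    gateway into the existing phone network only when necessary."""
    path = find_mesh_route(dst)
    if path is not None:
        return ("mesh", path, 0.00)           # free, relayed by peers
    return ("gateway", [gateway, dst], 0.05)  # assumed per-minute fee

# Early on, nearly every call takes the gateway branch; as more
# destinations go wireless, the free branch dominates.
print(route_call("grocery_store", lambda d: ["neighbor_phone", d], "pstn_gw"))
print(route_call("555-1212", lambda d: None, "pstn_gw"))
```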

In short, wireless is in its infancy. This exciting new technology offers great possibilities not just to expand existing phone and data networks, but to break down the old service models and replace them with newer, more decentralized designs. The oft-touted idea of the free phone call might even become a reality.

One Reply to “Wireless Internet”

  1. Then I saw the TODO list. I was recently interested in exploiting the capabilities of OSPF and was about to start experimenting with it, but I thought most of the work must have already been done – until I saw the ‘OSPF modifications to handle 802.11 networks’ and ‘OSPF handling default routes via PPP’ titles. Aren’t those capabilities already implemented in the hundreds of commercial implementations of OSPF? What is so special about them? I think you must have some reason to list them so specifically.

    Well, starting with the 802.11 (wireless) networks. OSPF is currently based on the assumption that all the hosts on a subnet are completely reachable from all the other hosts on that subnet. That’s fine for Ethernet, but for wireless you can get into a situation where one host is only reachable by sending traffic through another host. How to handle this? One possibility (the obvious one, to me) is to use OSPF to do what it’s designed to do – detect network topology and route through it. Exactly how to do it is a bit up in the air. Probably you need to have OSPF running on all the hosts (not just the routers) on your wireless LAN, and they need to ignore the subnet structure and act as if there were point-to-point links between the hosts. Probably the whole subnet should act as a sub-area; i.e., point-to-point links within the subnet, but this gets summarized out to the rest of the network, so that everyone else sees a single subnet.
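
    A minimal sketch of the point-to-point idea, with hypothetical hello results (this is not how any shipping OSPF implementation behaves):

```python
# Hypothetical hello results on one wireless "subnet": A hears B,
# B hears both, but A and C can't hear each other at all.
hears = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}

# Advertise a point-to-point link only where both sides hear each
# other, ignoring the usual any-to-any subnet assumption.
links = {(x, y) for x in hears for y in hears[x] if x in hears[y]}
print(sorted(links))  # [('A','B'), ('B','A'), ('B','C'), ('C','B')]

# Ordinary link-state routing over these links then finds A -> B -> C,
# even though A and C sit on the "same" subnet. To the rest of the
# network, the whole graph would be summarized as a single subnet.
```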

    As far as the default routes are concerned, that’s mainly an operating system issue. Linux routing support, in particular, is a bit weak. For example, the only O/S routing primitives are “add route” and “delete route” (and a few others), so if you kill your OSPF process, all its routes stay in the routing table – you have to do a graceful shutdown to have the process remove its routes. Generally, when you start the routing process, it removes all the routes from the routing table (!) on the assumption that they are “stale”. That’s fine if the routing process is the only source of routing information, but in the case of PPP you might get a default route that way, and it shouldn’t be deleted as a stale route. The latest thinking is that you have a Forwarding Information Base (FIB) used to forward packets as well as multiple Routing Information Bases (RIBs) constructed from the routing protocols – an OSPF RIB, a PPP RIB, a static RIB configured by the administrator. Then the FIB is constructed from the RIBs. This model explicitly allows for multiple routing sources feeding into your kernel routing table, but it isn’t well supported on UNIX.
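
    A toy sketch of the RIB/FIB model; the preference numbers are invented (real routers call this idea “administrative distance”):

```python
# One RIB per routing source, mapping destination prefix -> interface.
ribs = {
    "static": {"10.0.0.0/8": "eth0"},
    "ospf":   {"10.0.0.0/8": "wlan0", "192.168.1.0/24": "wlan0"},
    "ppp":    {"0.0.0.0/0": "ppp0"},   # default route learned from PPP
}
preference = {"static": 1, "ospf": 110, "ppp": 200}  # lower wins (assumed)

def build_fib(ribs, preference):
    """Merge the RIBs into one FIB, keeping the most-preferred source
    per prefix. Removing one source only removes *its* routes."""
    fib = {}
    for source in sorted(ribs, key=preference.get):
        for prefix, nexthop in ribs[source].items():
            fib.setdefault(prefix, (nexthop, source))
    return fib

print(build_fib(ribs, preference))
del ribs["ospf"]                    # the OSPF process dies...
print(build_fib(ribs, preference))  # ...but the PPP default survives
```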

    UNIX in general needs a better management interface. I’ve been thinking about SNMP as a generic UNIX management interface. Starting with the kernel, you could make the routing table SNMP manageable, then your OSPF routing process could just use SNMP to install a RIB and tell the kernel to import it into the FIB. Ultimately, it’d be nice if all the UNIX system services were SNMP manageable; then you could save the state of your UNIX machine in a single file, instead of having all these different config files spread all over /etc. You’d also be able to take the running state of the UNIX system, snapshot it, and know that it would return to that state when you restart – like “copy run start”, if you’re familiar with Ciscos.
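
    A toy illustration of the snapshot idea; the subsystems and their state are made up, and a real implementation would speak SNMP to MIBs rather than dump a dict:

```python
import json

# Pretend each manageable subsystem can export its running state.
running_state = {
    "routing": {"fib": {"0.0.0.0/0": "ppp0"}},
    "inetd":   {"services": ["telnet", "ftp"]},
    "syslog":  {"level": "info"},
}

# "copy run start": one file captures the whole machine's configuration,
# instead of many hand-edited files scattered across /etc.
with open("startup-config.json", "w") as f:
    json.dump(running_state, f, indent=2)

with open("startup-config.json") as f:
    assert json.load(f) == running_state  # restart returns to this state
```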

    The most impressive TODO list item, though, was ‘Wireless national data infrastructure’. I have been thinking along these lines for several months.

    The TODO list is out of date. I’ve finished the essay; it’s in the “papers and essays” section; I don’t know if you read it.

    For example, today’s routers are dedicated computers where everything happens in the main processor, its memory, and even the hard disk. If the same thing could happen in a small routing chip, then it could be optimised at the signal level, to the extent that receiving a bit and retransmitting it to a neighbor would be separated by only a few cycles. If something as complex as a DSP processor could be designed, why not a router chip with radio communication capability?

    That’s a pretty good idea. We basically do that with Ethernet and ATM using switches, and DSP technology is pretty promising, so there’s no reason to think it couldn’t happen.

    I’ve got a couple of 802.11 wireless cards. They’re nice, but clearly limited, partly by their primitive design, partly by the restrictions put on them by the FCC. In an environment with a lot of E-M interference (i.e., a room full of computers), their range drops to about 30-40 meters. They need to do better than that to be really useful.

    I figure hardware, not software, is driving this right now. Until we have wireless hardware that can operate reliably up to a few kilometers (cell phones do, so there’s no reason why computers can’t), the software won’t help. I’m thinking that an important next step is to write some freely available E-M simulation software to help design microwave circuitry. You can’t design in the GHz range without it, because the normal low-frequency electronic assumptions no longer apply, and all the E-M design software I know of is proprietary and expensive. Having a good E-M design program would open the door to ordinary people designing the wireless hardware, so maybe then we’d get some better devices.

    Looking over my email, it’s got some pretty good ideas relating to wireless, so I’m cc’ing it to my wireless mailing list/web page…
