Archive for the ‘Computer Networking’ Category
Napster.com, a website facilitating the online exchange of digital music, has been heavily publicized by the mass media. The legal wrangling over copyright issues has overshadowed other, equally legitimate questions. Is Napster for real, or is it just hype? Are the issues it presents purely legal, or are there technical lessons to be learned, too? What does Napster reveal about the future of the Internet?
First, we need to recognize that Napster represents a real technological advance. It is one of the newest and most prominent examples of a directory service. Directory services are based on the realization that centralized data stores tend to generate performance bottlenecks. All the data being served to the clients has to come from a centralized server or a handful of centralized servers. Throwing bandwidth at the problem is sometimes realistic, but a better solution is to design more efficient networks. Distributing data sources across the network is a major, emerging technique for achieving greater efficiency.
For example, one of the reasons we currently lack decent video-on-demand services is the bandwidth requirement of video. It’s simply not feasible to construct a centralized server to feed two-hour movies to a million people. The bandwidth requirements are too great; the centralized server becomes too much of a bottleneck. A Napster-esque solution would be to have thousands of video servers, each capable of serving perhaps a dozen video streams, spread all over the network. Given the current bandwidth demands of video, this is still unrealistic, but similar schemes are immediately plausible for books, software, and websites in general.
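To put rough numbers behind that claim, here’s a back-of-the-envelope sketch; the 4 Mbps per-stream figure is simply an assumed rate for reasonably compressed video, not a measurement of any particular service:

```python
# Back-of-the-envelope estimate of the aggregate bandwidth a single
# centralized video-on-demand site would need.  The per-stream rate
# is an assumed figure for compressed, broadcast-quality video.

STREAM_RATE_MBPS = 4            # assumed bitrate of one video stream
CONCURRENT_VIEWERS = 1_000_000  # a million simultaneous viewers

aggregate_mbps = STREAM_RATE_MBPS * CONCURRENT_VIEWERS
print(f"Aggregate demand: {aggregate_mbps / 1_000_000:.1f} Tbps")
# Roughly 4 Tbps into a single site -- far beyond any one server or
# access link, which is why distributing the servers matters.
```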
In fact, it’s reasonable to suppose that at least 90% of the present Internet’s traffic is unnecessary. The net is young and rapidly evolving. The protocols currently in use are inefficient, some more so than others. As the network continues to mature, it will become more efficient, and the bandwidth requirements of particular applications will decrease. The present boom in bandwidth demand is driven by new users and new applications. At some point, most people will be “connected”, and the uses of the network will stabilize. From that point onward, improvements in network design will begin to drive bandwidth requirements downward. The network will be most inefficient while it is young, so we can expect bandwidth requirements to peak at some point, I’d estimate within the next two decades, and then begin heading down.
Directory services such as Napster will be instrumental in reducing demand for network bandwidth. Other keys to using bandwidth more effectively are compression, caching, and multicast, all of which are in their infancy. Many issues remain to be addressed, server selection among them. Napster currently presents the user with a list of servers for each song, and the user must manually pick the one to download from. Developing automated techniques for server selection will be an important step toward making this technology more seamless, and therefore more attractive for other applications.
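One obvious automated strategy would be to probe each candidate server and pick whichever answers fastest. The sketch below does exactly that; the peer host names and port are placeholders, and a real selector would also weigh advertised connection speed and current load:

```python
# Pick a download server by probing TCP connect time to each candidate.
# A crude stand-in for real server selection, which would also consider
# load, advertised link speed, and past transfer history.
import socket
import time

def connect_time(host: str, port: int, timeout: float = 2.0) -> float:
    """Return seconds taken to open a TCP connection, or infinity on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def pick_server(candidates: list[str], port: int = 8888) -> str:
    """Choose the candidate with the shortest connect time."""
    return min(candidates, key=lambda host: connect_time(host, port))

if __name__ == "__main__":
    # Hypothetical peers all offering the same song.
    peers = ["peer1.example.net", "peer2.example.net", "peer3.example.net"]
    print("Fastest responder:", pick_server(peers))
```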
Security deserves special mention, since distributing data across the net would seem to seriously compromise it, but this is probably not so. Encrypting the data allows it to be distributed even to insecure servers, which could serve the data but couldn’t read it. The centralized directory would then provide a key that could be used to decrypt and read the data, and controlling access to the key would control access to the data. Block cipher keys are typically only a few dozen bytes long, so access to a 100 KB file could be granted by a directory server in less than 1 KB – a 100-to-1 savings in centralized bandwidth requirements. The authenticity of the data could be verified by digital signatures checked against X.509 certificates – placed in the directory, of course.
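Here’s a minimal sketch of that scheme using the symmetric encryption in the third-party Python cryptography package; the in-memory “song” and the notion of the directory simply handing the key back to authorized clients are illustrative assumptions, not a description of how Napster actually works:

```python
# Encrypt-before-distributing: untrusted mirrors hold only ciphertext,
# while the directory controls access by handing out the small key.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

song_bytes = b"...raw audio data..."   # stand-in for the real file contents

# Publisher: encrypt once, push the ciphertext to any number of untrusted
# servers.  Only the 44-byte key stays with the directory.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(song_bytes)

# Client: fetch the ciphertext from whichever mirror is fastest, then ask
# the directory for the key (this is where access control happens) and
# decrypt locally.
recovered = Fernet(key).decrypt(ciphertext)
assert recovered == song_bytes
```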
While Napster represents a real advance over older, more centralized techniques, this doesn’t mean that the current protocol can’t be improved. Let me outline how I’d redesign Napster if I were given the task:
- Use LDAP. The Lightweight Directory Access Protocol (LDAP) has become an accepted standard for directory services. Furthermore, a “pure” directory service, such as Napster, doesn’t require any special handling on the part of the directory server. All the server has to do is register directory entries, then feed them back out again in response to search requests. A standard LDAP server, such as OpenLDAP, could be used unmodified.
- Define and publish a standard schema. LDAP, and directory access protocols in general, use “schemas” to define the format of directory entries. In Napster’s case, a standard schema would probably include a “Song” class, defining artist, title, and year, and perhaps an “Album” class listing all the tracks on a particular album. The “Song” class could then be extended (subclassed) into a “NetSong” class that would also include URLs where the song can be accessed. Using a standard, published schema would clearly define the directory structure and make it easier to reuse the directory for new applications; a sketch of what a lookup against such a schema might look like follows this list.
- Use HTTP or FTP. Just as there’s no need to create a custom directory service, there’s no need to invent new file transfer methods, either. Specifying a URL in the directory entry, using one of the standard schemes, “http:” or “ftp:”, should suffice. Of course, most “clients” aren’t set up to be “servers”. In the present computing environment, Napster would be quite hard to configure if it relied on an external web or FTP server, and much more complex if it included an entire web server within it. The “peer-to-peer” paradigm ultimately implies that a machine can be simultaneously both a client and a server, and must be configured to act as both. This obviously contrasts with Microsoft’s policy of separate “client” and “server” operating system packages (the “server” usually being much more expensive), but free software hasn’t solved this problem completely, either. How exactly does an arbitrary software program go about registering itself with the local web server in order to share files?
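To make this concrete, here’s a sketch of what a client lookup against such a directory might look like. The directory host, base DN, and the netSong attribute names are all invented for illustration, and the code assumes the third-party python-ldap package:

```python
# Hypothetical client: ask a standard LDAP server where a song lives,
# then fetch it over plain HTTP.  Schema names (netSong, songTitle,
# songUrl) and the directory host are invented for this sketch.
# Requires the third-party 'python-ldap' package.
import ldap
import urllib.request

DIRECTORY = "ldap://directory.example.net"   # assumed directory server
BASE_DN = "ou=songs,dc=example,dc=net"       # assumed search base

conn = ldap.initialize(DIRECTORY)
conn.simple_bind_s()   # anonymous bind is enough for a public directory

# Search for every registered copy of a given song.
results = conn.search_s(
    BASE_DN,
    ldap.SCOPE_SUBTREE,
    "(&(objectClass=netSong)(songTitle=Example Song))",
    ["songArtist", "songTitle", "songUrl"],
)

# Each entry lists one or more URLs; a smarter client would probe them
# and pick the fastest, as discussed earlier.
for dn, attrs in results:
    url = attrs["songUrl"][0].decode()
    print("Found copy at", url)
    with urllib.request.urlopen(url) as response:
        data = response.read()
    break
```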
Napster isn’t the first directory-based system to be deployed on the Internet, but it is one of the newest and most exciting. If government and economic leaders can be persuaded to surrender a measure of control, its decentralized nature may pave the way to a more distributed and more efficient network.
For a long time, conventional wisdom held that the telephone system was a natural monopoly, or at least a natural oligopoly, because of the need for a large physical infrastructure, namely, the telephone wires. In recent years, though, the widespread deployment of cellular telephones has shown that this is not necessarily true. Clearly, cell phone networks are capable of handling significant traffic loads and delivering near-landline quality of service. The emergence of all-digital cellular telephones, such as PCS, shows that data can be transferred effectively over wireless links, and leads me to wonder: could we build a totally wireless Internet?
I’m not the only person to ask this question. The Ricochet network has been in operation for several years now, the Palm VII handheld offers wireless Internet access, and companies such as Intel are promoting the Mobile Data Initiative. Yet these efforts are largely technical in nature, while I perceive wireless as changing the fundamental rules of the game. Without dependence on a landline infrastructure, is telecommunications really a natural oligopoly anymore? Could we build a free wireless digital communications system?
First, what do I mean by the term “free”? Primarily two things – first, the absence of recurring charges, such as monthly or per-minute fees, and second, an open, non-proprietary infrastructure free from patent or regulatory barriers to entry. The only fee I’m prepared to concede is the initial purchase price of the equipment itself. The telephone will ideally become like a computer or a microwave oven – once you purchase it, you can continue using it free of charge.
The capitalist model would be for the communication system to be controlled by companies, driven by competition to improve service, but with access limited by economic barriers, i.e., the phone gets switched off if you don’t pay your bill. The socialist model would ensure access by having the government control the phone system, as well as everything else. A better solution than either would be to eliminate the large institutions entirely, by eliminating the centralized infrastructure. Decentralization, not privatization, should be the buzzword of freedom.
A decentralized communications infrastructure would have to be based primarily, if not totally, on wireless radio technology. Any non-wireless service, such as the existing telephone system, would require landlines connecting to a central office, implying rights-of-way, centralized ownership of the lines, and consequent dependence on large institutions, either governments or corporations. Only a wireless scheme would allow devices to communicate directly, without any centralized infrastructure.
Current wireless technology (i.e., cell phones) still relies on centralized infrastructure; sadly, it’s designed to rely on it. A cell phone communicates exclusively with a radio tower, which then relays the call to its destination, typically over landlines. As the cell phone moves, it switches from one tower to another, but it never communicates directly with another cell phone. I can’t make a phone call from one car to the next without going through a radio tower.
What’s needed is a new kind of wireless infrastructure – one where the telephones and computers are designed to communicate directly with each other, without relying on a phone company’s switches. Clearly, if I have such a wireless telephone, and my next-door neighbor has such a telephone, the two phones can connect directly and we can talk without any dependence on third parties, and consequently without any recurring charges.
Yet, the big question remains – can I call from Maryland to California with such a telephone? Hopefully, the answer is yes, and the key lies in the routing technology of the Internet. A typical Internet connection will be relayed through a dozen or more routers. No single connection exists between the source and the destination, but by patching together dozens of connections, a path can be traced through the network for the data to flow through. Network engineers have spent literally decades developing the software technology to find these paths quickly. Theoretically, there is no single point of failure in the system, since the routers can change the data paths on-the-fly if some part of the network fails. The keys to making it work are the adherence to open standards, such as TCP/IP, and the availability of multiple redundant paths through the network.
A wireless infrastructure can be built on similar technology. If I can call my neighbor’s telephone directly, and my neighbor’s telephone can reach the grocery store’s telephone directly, then I can call the grocery store by relaying the data through my neighbor’s telephone. If adequate bandwidth is designed into the system, my neighbor’s telephone can relay the data without any impairment to her service. She can be talking to her hairdresser without even knowing that her phone is relaying my conversation with the grocery clerk.
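At bottom, the relaying idea is shortest-path routing over whatever radio links happen to exist at the moment. The sketch below finds a relay path with a breadth-first search over a made-up neighborhood of phones; the names and link map are purely illustrative, and a real ad-hoc routing protocol would also have to cope with links appearing and disappearing as people move around:

```python
# Find a multi-hop relay path through a mesh of wireless phones.
# The topology below is invented: an edge means two phones are within
# direct radio range of each other.
from collections import deque

LINKS = {
    "me":          ["neighbor"],
    "neighbor":    ["me", "grocery", "hairdresser"],
    "grocery":     ["neighbor"],
    "hairdresser": ["neighbor"],
}

def find_path(source: str, dest: str) -> list[str] | None:
    """Breadth-first search for the shortest relay path, or None if unreachable."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path
        for peer in LINKS.get(node, []):
            if peer not in visited:
                visited.add(peer)
                queue.append(path + [peer])
    return None

print(find_path("me", "grocery"))   # ['me', 'neighbor', 'grocery']
```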
What stands in the way of building such a free, national digital communications infrastructure?
First, standards. Just as English is the standard used by the author and readers of this document, and TCP/IP is the standard used by the Internet devices that relay it, standards are required for the wireless devices to communicate. The IEEE recently standardized a wireless data LAN (802.11) capable of handling 1 to 2 Mbps. To put this into perspective, an uncompressed voice conversation requires 64 Kbps, so a 1 Mbps circuit could handle 15 such conversations. Not only can I talk to the grocery store while my neighbor talks to the hairdresser, but a dozen other people can use the same circuit without any service impairment. Newer compression techniques can improve this performance by a factor of ten.
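The channel arithmetic behind those numbers, as a quick sketch; the tenfold compression figure is my own rough estimate rather than the spec of any particular codec:

```python
# How many simultaneous voice calls fit in an 802.11 channel?
LINK_KBPS = 1000          # low end of original 802.11 (1 Mbps)
VOICE_KBPS = 64           # uncompressed telephone-quality voice (G.711)
COMPRESSION_FACTOR = 10   # rough estimate for modern voice codecs

uncompressed_calls = LINK_KBPS // VOICE_KBPS
compressed_calls = LINK_KBPS // (VOICE_KBPS // COMPRESSION_FACTOR)
print(uncompressed_calls)   # 15 calls with no compression
print(compressed_calls)     # about 166 calls at roughly 6 Kbps per call
```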
IEEE 802.11 is a good start, but the power limitations imposed by FCC regulations may impede its use for any but short-range applications. However, Metricom’s Ricochet network demonstrates that this might not be a showstopper. Working in conjunction with power companies, Metricom pioneered the novel approach of putting low-power radio repeaters on existing utility poles. The repeaters communicated directly with each other, eliminating the need for a landline data connection; only power was required, which was readily available on the pole. A similar approach could be used to build a network that would provide 802.11 coverage to an entire metropolitan area.
Also, the existing 802.11 devices aren’t very sophisticated in their design. They’re designed to be cheap, not effective. Their single most glaring problem is their antenna design. Existing 802.11 transceivers mainly use low-gain, omnidirectional antennas, although Raytheon recently announced the availability of an 802.11 PCMCIA card with a jack for connecting an external, and hopefully better, antenna. Improved antennas will probably take one of two forms. Adaptive arrays are preferred by the military, and justly so, but are complex and expensive. Directional antennas, typified by TV aerials, are simpler and therefore cheaper, but must be physically pointed at their destination. One possible scenario would be for routers to use the more expensive adaptive arrays, and for end systems to use mechanically steered directional antennas. In my opinion, the development of improved 802.11 devices is the single most important advance needed today.
Second, an initial infrastructure is required. An 802.11 telephone would be a popular item if everyone else had one, but initially few people would possess such devices, making it difficult if not impossible to route a connection through such a sparse mesh. Philanthropies could be formed to build an initial infrastructure. A Ricochet-type network could be deployed in cooperation with power companies, who might be persuaded to donate the relatively small amounts of electricity the routers would consume. After an initial investment in the (hopefully) rugged and scalable pole-top devices, the entire network could be managed from a central location. At that point, the network would provide 802.11 service to an entire metropolitan area, jump-starting the service. As more and more people bought these devices, each capable of relaying traffic on its own, the dependence on the initial infrastructure would diminish, hopefully to the point where the pole-top devices wouldn’t need replacement when they started to fail.
Furthermore, users would want to call telephones on the conventional phone network, requiring some sort of gateway. A solution to this chicken-and-egg problem would be to provide mechanisms for some fee-based services. Thus, a service provider could construct a network that would, for a monthly fee, interconnect its users and provide gateway service to the existing phone network. It’s possible that the only fee the provider would need to charge would be for the gateway service – initially, almost all connections would go through the gateway, since few people would have the new phones and most calls would be relayed onto the existing phone system. As the wireless network became more and more widely deployed, more and more destinations would go wireless, and the reliance on gateway systems would diminish.
In short, wireless is in its infancy. This exciting new technology offers great possibilities not just to expand existing phone and data networks, but to break down the old service models and replace them with newer, more decentralized designs. The oft-touted idea of the free phone call might even become a reality.