A Theory for Analyzing Addressing Structures

Years ago, I described the “fundamental principle of routing” that “logical addresses correspond to physical locations”. This implies some kind of relationship between addressing structure and network topology. Using concepts from (mathematical) topology, we can make this relationship more precise, and obtain a theory for analyzing addressing structures. I use this theory particularly to note the inadequacy of CIDR and to establish a framework for analyzing possible extensions to or replacements for CIDR.


The Location Layer

Over the past quarter century, stack-based layered architectures have become ubiquitous in networking, most notably the seven-layer OSI model. OSI separates a Network Layer (responsible primarily for routing) from the Transport Layer and other layers above it. In recent years the Internet’s difficulty in handling mobile devices has suggested flaws in this original design. I propose that a new layer, the “Location Layer”, is needed between the Transport and Network Layers, whose function is to locate network resources, including mobile devices.


Standardized caching of dynamic web content

Internet Engineering Task Force
Expires March 2003

             Standardized caching of dynamic web content

			   by Brent Baccala
			     August, 2002

This document is an Internet-Draft and is subject to all provisions of
Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."


   Summarizes the present state of web caching technology.  Points out
   the need for caching dynamic web sites, and the inadequacy of
   present caching technology for anything but static sites.  Proposes
   the adoption of Java servlets, cryptographically signed Web
   Application Archives (WARs), and LDAP as standards for dynamic web
   caching, using an expanded interpretation of existing DNS standards
   to locate and authenticate cached information.

The World Wide Web (WWW), probably the most successful networking
technology of the 1990s, provides a global graphical user interface
(GUI) that presently dominates the Internet.  The current design of
the web has an overwhelming advantage over older connection-oriented
protocols such as TELNET or X Windows.  The web is data-oriented, not
connection-oriented, or is at least more so than conventional
protocols.  A web page is completely defined by a block of HTML, which
is downloaded in a single operation.  Highlighting of links, typing
into fill-in forms, scrolling - all are handled locally by the client.
Rather than requiring a connection to remain open to communicate mouse
and keyboard events back to the server, the entire behavior of the
page is described in the HTML.

The advent of web caches changes this paradigm subtly, but
significantly.  In a cached environment, the primitive operation in
displaying a web page is no longer an end-to-end connection to the web
server, but the delivery of a named block of data, specifically the
HTML source code of a web page, identified by its URL.  The presence
of a particular DNS name in the URL does not imply that a connection
will be made to that host to complete the request.  If a local cache
has a copy of the URL, typically because it was requested and
retrieved earlier, it will simply be delivered, without any wide area
operations.  Only if the required data is missing from the local
caches will wide area network connections be opened to retrieve the
data.  Generally, caches store content based on the URLs, and
sometimes use inter-cache protocols such as ICP to communicate to
other caches which URLs they possess.  A variant on this scheme is the
web replica, in which an entire web site, or some logical subsection
of one, is duplicated elsewhere.
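The cache-on-URL behavior described above can be sketched in a few lines of Python. `UrlCache` and `fetch_origin` are hypothetical names, and a real cache would also honor expiry and cache-control metadata; this is only a minimal illustration of hit-versus-miss behavior:

```python
# Minimal sketch of a URL-keyed cache.  A hit is served locally with
# no wide-area operation; only a miss contacts the origin server.

class UrlCache:
    def __init__(self, fetch_origin):
        self.store = {}                 # URL -> cached body
        self.fetch_origin = fetch_origin

    def get(self, url):
        if url in self.store:           # hit: serve the local copy
            return self.store[url], "hit"
        body = self.fetch_origin(url)   # miss: wide-area retrieval
        self.store[url] = body
        return body, "miss"

cache = UrlCache(lambda url: "<html>page for %s</html>" % url)
body, status = cache.get("http://example.org/index.html")    # miss
body2, status2 = cache.get("http://example.org/index.html")  # hit
print(status, status2)   # -> miss hit
```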

Experience with web caches demonstrates that they provide several
benefits.  First, the bandwidth requirements of a heavily cached,
data-oriented network are much lower than those of an uncached,
connection-oriented network.  A cached copy of a web page, stored
anywhere on the network, works as well as the original.  As the
network becomes more heavily cached, fewer and more localized
connections are required to carry out various operations, reducing
overall network load.  Furthermore, cached or replicated web sites are
more fault-tolerant, since their data can still be accessed even if
the origin server fails or the network becomes partitioned.  A general
consensus seems to exist that caching improves network performance;
more widespread adoption of web caching has been limited by technical
challenges.

One of the greatest of these challenges is caching dynamic content,
that is, pages generated by software as they are requested, such as
response pages to search requests.  Presently, web caching protocols
provide means for including meta information, in either HTTP or HTML,
that inhibits caching on dynamic pages, and thus forces a connection
back to the origin server.  While this works, it negates the
advantages of caching.  To maintain the flexibility of dynamic content
in a cached network, we need to eliminate the end-to-end connection
requirement, and this seems to imply caching the programs that generate
the dynamic web pages.  While cryptographic techniques for verifying
the integrity of data have been developed and are increasingly widely
deployed, no techniques are known for verifying the integrity of
program execution on an untrusted host, such as a web cache.  Barring a
technological breakthrough, it seems impossible for a cache to
reliably run the programs required to generate dynamic content.  The
only remaining solution is to cache the programs themselves (in the
form of data), and let the clients run the programs and generate the
dynamic content themselves.  Thus, what's needed is a standard for
transporting and storing programs in the form of data.

A closely related problem arises when replicating a web site.  A
significant hurdle for building web replicas is the lack of a standard
to deliver the executable components that underlie dynamic content.
While scripting languages such as Perl and Python are readily
available, installing a web replica almost invariably requires
tweaking configuration files and downloading various additional
packages needed by the scripts.  Without a standard for dynamic
content, there is simply no way to automatically replicate a web site,
unless its content is completely static.  Also, running a Perl script
typically provides little in the way of security.  Either the script
must be carefully reviewed by the installer, or the author must simply
be trusted.

Java "servlets" are a step in the right direction, since they provide
a CGI-type capability that enables a web cache to present dynamic
content without a connection to the origin server.  Since they are
Java-based, they provide solutions to the security issues that
surround something like Perl.  Java's security model provides the
tools to limit servlet access to the host system.  This allows a
cached servlet to reference a collection of Java classes it needs for
proper operation, and have them loaded automatically without the need
of manual intervention.

Part of the Java servlet specification is WAR (Web Application
Archive), an extension to JAR that provides Java servlets, HTML and
JSP pages, and XML meta data all packaged up into a single archive
file to provide a "web application".  In the current implementation,
the server administrator "installs" the WAR at a particular URL by
loading it onto a Java servlet-enabled web server.  If the WAR format
were altered slightly to include, perhaps in the XML meta data, a
"master" URL, and the servlet-enabled web server were to function more
as a proxy, handling requests locally if it possessed a valid WAR,
passing them along otherwise, this would be a big step in the right
direction.  Ultimately, though, to get away from having to trust a
proxy to execute WAR content, the client has to execute the content
itself.  Servers and caches should eventually do nothing but hand out
data, and the responsibility for executing it should fall exclusively to
the client, not the cache.  For the time being, using a local, trusted
cache will enable experimentation with these ideas without changing
client implementations.

Using WARs for application caching, instead of the manual installation
of applications that they were originally designed for, presents some
challenges.  In addition to adding XML entries to the WAR to specify
the base URL, additional entries may be needed to specify a time
interval for which the WAR is valid, as well as whether an outdated
WAR can continue to be used if a more recent one can't be retrieved.
Furthermore, Java servlets typically run with a fairly trusted
security model.  A more restricted security environment should be used
for cached WARs downloaded from foreign web sites.

Also, provisions should be made for incremental updating of the WAR,
since only a portion of a large archive may change in an update.
Although protocols such as rsync have been developed to incrementally
update files, they have limitations.  Rsync depends on changes being
localized within the file.  Files with small changes spread widely
across them, such as search engine indices, don't update well using
rsync, suggesting that something more flexible would be preferred.
Since the WAR is already Java-based, perhaps specifying Java classes,
or pointers to Java classes, in the WAR for performing incremental WAR
updates would provide a powerful mechanism for tailoring the update
mechanism to the type of files contained in the archive.  Perhaps many
of these functions, like deciding the validity of a WAR, should be
specified via Java classes, for maximum flexibility.
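The block-digest limitation described above can be illustrated with a simplified sketch of the rsync idea: fixed-size blocks, one MD5 digest per block, and only changed blocks retransmitted. (Real rsync uses rolling checksums so that insertions don't shift every later block; block and function names here are illustrative.)

```python
import hashlib

BLOCK = 4   # tiny block size for illustration; rsync uses much larger blocks

def block_digests(data):
    """One MD5 digest per fixed-size block of the file."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old, new):
    """Indices of blocks in `new` whose digests differ from `old`."""
    od, nd = block_digests(old), block_digests(new)
    return [i for i, d in enumerate(nd) if i >= len(od) or od[i] != d]

old = b"aaaabbbbccccdddd"
new = b"aaaaBBBBccccdddd"        # one localized change, in block 1
print(changed_blocks(old, new))  # -> [1]
```

A one-byte insertion at the front of the file, by contrast, shifts every subsequent block and marks nearly all of them as changed, which is why files whose small changes are scattered or shifted update poorly, and why a pluggable, content-aware update mechanism is attractive.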

Security and authentication are major concerns, especially in a cached
environment.  Protocols exist to provide authentication services, but
each has outstanding issues.  Some are not widely deployed - DNS key
services, for example.  The most widely deployed solution - X.509
certificates - has been priced and managed into a realm where only
e-business sites can realistically justify
their costs.  Web security can't be just for those who can and will
shell out hundreds of dollars for certificates that keep expiring.  In
a heavily cached environment, it's easier than ever to spoof
somebody's URLs, and X.509-based authentication needs to be in place
for 99% of the net's web sites, not 1% of them.  Standards exist
for storing public keys in DNS (KEY and CERT resource records),
which can be used to validate signed JAR/WAR files.
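The validation step can be sketched as follows. Note the loud caveat: an HMAC is a shared-secret stand-in used here only because Python's standard library lacks public-key signatures; an actual deployment would verify the JAR/WAR's public-key signature against key material published in a DNS KEY record.

```python
import hashlib
import hmac

# NOTE: the HMAC below is a shared-secret STAND-IN for the public-key
# signature a real deployment would use; `key` plays the role of
# material published in (and retrieved from) a DNS KEY record.

def sign_war(war_bytes, key):
    return hmac.new(key, war_bytes, hashlib.sha256).hexdigest()

def validate_war(war_bytes, signature, key):
    # constant-time comparison; reject the WAR on any mismatch
    return hmac.compare_digest(sign_war(war_bytes, key), signature)

key = b"key-material-from-dns"   # hypothetical key material
war = b"...archive bytes..."
sig = sign_war(war, key)

ok = validate_war(war, sig, key)               # authentic copy
tampered = validate_war(war + b"x", sig, key)  # modified copy
print(ok, tampered)   # -> True False
```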

For more rapid response time, the Range: header could be used to
retrieve first the WAR file's table of contents, then the compressed
data of the particular URL, resulting in a retrieval time comparable
to straight HTTP, ignoring the search time required to find the cache
item to begin with and the compilation/startup time of any dynamic
code (both of which may be significant).  Of course, in addition to
such a "partial retrieval", a cache could do a "full retrieval",
obtaining the entire packaged WAR and begin sharing it with other
caches.  The decision of how to choose between partial and full
retrieval is left "for further study", in other words, the user has to
make those decisions manually until we figure it out better.  Napster
has demonstrated that letting the users make caching decisions
manually is workable, so long as the cache items are reasonably sized
(not too large or too small) and well labeled.
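Because a ZIP/WAR's table of contents (the central directory) sits at the end of the archive, the partial-retrieval idea can be sketched by simulating Range requests against an in-memory archive. `range_get` is a stand-in for an HTTP `Range: bytes=...` request; a real client would issue two such requests against a remote cache:

```python
import io
import struct
import zipfile

# Build a small WAR-like archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("WEB-INF/web.xml", "<web-app/>")
    z.writestr("index.html", "<html>hello</html>")
archive = buf.getvalue()

def range_get(data, start, end):
    """Stand-in for an HTTP 'Range: bytes=start-end' request."""
    return data[start:end + 1]

# Request 1: fetch only the tail and locate the End Of Central
# Directory record (signature PK\x05\x06), which gives the size and
# offset of the table of contents.  (This archive is smaller than the
# tail window, so offsets within `tail` are file-absolute.)
tail = range_get(archive, max(0, len(archive) - 1024), len(archive) - 1)
eocd = tail.rfind(b"PK\x05\x06")
cd_size, cd_offset = struct.unpack("<II", tail[eocd + 12:eocd + 20])

# Request 2: fetch just the central directory - the table of contents.
toc = range_get(archive, cd_offset, cd_offset + cd_size - 1)
print(b"index.html" in toc)   # -> True
```

A third range request for the one member's compressed data would then complete the "partial retrieval" with a transfer cost comparable to straight HTTP.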

A major choice remains, that of the search protocol to find the cached
WARs.  Mainstream caching research tends to largely ignore the most
successful example of a cached network service - Napster and its
various spinoffs, most notably Gnutella, which seem to go by the
buzzword peer-to-peer file sharing, or P2P.  For example, RFC 3040,
"Internet Web Replication and Caching Taxonomy", a January 2001
document discussing "protocols, both open and proprietary, employed in
web replication and caching today," never mentions the word "Napster".
Since peer-to-peer was designed to share music and not HTML documents,
the oversight can be forgiven, but this point needs to be made and
made strongly - Napster, Gnutella, and friends _are_ caching services,
and by far the most successful ones built to date.  Peer-to-peer
seems to be the way to go.

The legal problems of Napster and the technical community's highly
critical reception of Gnutella argue against either of these
protocols.  At present, LDAP seems the best choice, due to its
maturity as a protocol, the widespread availability of both client and
server implementations, and its straightforward application to the
problem at hand.  The only serious issue surrounding LDAP is the lack
of a standardized means for server location in a P2P environment, the
critical issue swirling around Gnutella.

I suggest dealing with both the security issues and the P2P server
location issue through a simple solution: assume the correct operation
of DNS even in the face of server failure.  This allows site
administrators to use resource records to specify both a set of LDAP
servers to search for WARs, as well as cryptographic keys to verify
the contents of those WARs once they are retrieved.  Although this
makes proper operation of a cached web site dependent on proper DNS
operation, this should presently be a minor tradeoff, since proper web
site operation is already based on DNS, and DNS has proven to be one
of the most reliable of the Internet technologies.

Thus, to enable dynamic web caching, as outlined in this document, a
web server administrator should add two kinds of additional resource
records to the web server's DNS records.  First, a set of SRV records
should specify a set of LDAP servers, any of which can be searched for
the web site's WARs.  These LDAP servers should form a replicated set,
so that a response from any one of them should be considered a
complete answer by a client.  These servers may also allow arbitrary,
unauthenticated web caches to add entries to the LDAP directory when
they elect to cache one or more of a site's WARs.  Since clients are
expected to cryptographically verify a WAR upon retrieving it,
allowing unauthenticated additions to an LDAP directory should not
allow site spoofing, but a large number of bogus WAR entries could
form the basis for a denial of service attack.  A benefit of this
proposal is that site administrators can select sets of LDAP servers
based on their own policies.  At least one set of publicly updatable,
replicated, highly available LDAP servers should exist for the use of
small web sites without the capability to set up large replicated
LDAP servers of their own.

The DNS SRV records in question can simply be the "_ldap._tcp" records
mentioned as an example in RFC 2782.  Thus, to specify LDAP servers for
registering or searching WARs for "www.freesoft.org" URLs, DNS SRV
records should be added for "_ldap._tcp.www.freesoft.org".  In the
case of the publicly available LDAP servers mentioned above, and other
LDAP servers used by multiple web sites, careful consideration should
be given to making the "_ldap._tcp" record a CNAME pointing to a set
of SRV records, allowing the LDAP server administrators to modify the
list of LDAP servers without requiring changes to every web site using
the service.  Furthermore, the use of "_ldap" for this new service may
conflict with existing LDAP applications.  Another name, perhaps
"_webldap" might be a better choice.  Another possibility would be to
use both names, specifying that "_webldap" would take precedence over
"_ldap" for this application, and the "_ldap" records would be used
only if "_webldap" records did not exist.  This would allow the
Internet community to use "_webldap" if needed, expecting that this
name would simply fall into disuse if only "_ldap" is really needed.
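The proposed precedence rule - try "_webldap._tcp" first, fall back to "_ldap._tcp" only when no such records exist - amounts to a short lookup loop. The dict below is a stand-in for a real DNS resolver; entries are (priority, weight, port, target) tuples as in RFC 2782:

```python
# Lookup order for the proposed service name: prefer "_webldap._tcp"
# SRV records, fall back to "_ldap._tcp" only when none exist.

def find_ldap_servers(domain, resolver):
    for service in ("_webldap._tcp.", "_ldap._tcp."):
        records = resolver.get(service + domain)
        if records:
            return records
    return []

resolver = {
    "_ldap._tcp.www.freesoft.org": [(0, 0, 389, "ldap1.freesoft.org.")],
}
print(find_ldap_servers("www.freesoft.org", resolver))
# falls back to the "_ldap" records, since no "_webldap" entry exists
```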

Also, the web administrator will need to add at least one KEY record
specifying a public key that must be used by clients to validate the
integrity of a retrieved WAR.  Due to the ease with which a bogus WAR
could be registered with a public LDAP service, this is regarded as a
critical step.  The administrator must provide the KEY record and the
client must validate it before trusting the WAR.  Unsigned WARs are
invalid and so are DNS entries without KEY records - both the SRV and
KEY records must be present.  Perhaps a CERT record would be a better
choice than KEY; we also need to consider how multiple KEY or CERT
records should be handled by a client.

So, for example, consider the "www.freesoft.org" web site, originally
specified in DNS like this:

   $ORIGIN freesoft.org.

   www		IN  CNAME          sparky.freesoft.org.
   sparky	IN  A	 

To add WAR-based caching of dynamic web content for this site, records
similar to these should be added:

   www			IN  KEY	           --- public key goes here ---
   _ldap._tcp.www	IN  CNAME          ldapsrv
   ldapsrv		IN  SRV  0 0 389   ldap1.freesoft.org.
   ldapsrv		IN  SRV  0 0 389   ldap2.freesoft.org.
   ldapsrv		IN  SRV  0 0 389   ldap3.freesoft.org.

Retaining the original CNAME record would require moving the KEY
record to "sparky", since CNAME records can't co-exist with other
records.  An alternative to retaining the original server
configuration would be to replace the "www" entries with A records
pointing to a set of web caches.  Thus, any legacy client trying to
retrieve a page from this web site would be automatically directed to
a web cache.  It'd be convenient to specify a CNAME for "www" pointing
to a set of A records for the web caches, but of course this would
preclude a unique KEY record for the server.  Perhaps the KEY record
should appear on a unique name, such as "_key.www", specifically to
permit this feature.  The interaction of CNAME with the other resource
records requires more consideration.

RFC 2535 specifies the structure of KEY records, and recommends the
assignment of new Protocol Octet values for new applications.  If
this proposal is adopted, IANA should assign a new Protocol Octet
value for validation of dynamic web archives.

A typical client request would follow these steps:

1. Client is configured to use a local web cache, or, attempts a
   standard retrieval and gets A records for web caches

2. Client sends request to web cache

3. Web cache does DNS lookup and gets KEY and SRV records

4. Web cache does LDAP search for the URL and gets a list of WAR
   directory entries, placed there by other (remote) web caches

5. Web cache picks an entry, contacts the remote cache using HTTP
   and either retrieves the entire WAR or just the parts it needs
   to serve the requested URL

6. Web cache validates that WAR was signed using the private key
   corresponding to the public key retrieved in the DNS KEY record,
   and recurses to step 5 (using a different remote cache) if not

7. If the cache elected to retrieve the entire WAR, it (subject to
   considerations like being behind a firewall) registers itself with
   one of the site's LDAP servers as possessing the WAR and being
   willing to serve it to other sites

7a. LDAP servers replicate this information among themselves

8. Web cache runs the Java in the WAR to generate the dynamic web page
   and returns the result to the client
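Steps 3 through 8 above can be sketched with stand-in components; all names here are hypothetical, and DNS, LDAP, the remote caches, and the Java execution step are mocked as plain callables. Step 7 (registering a fully retrieved WAR back into the site's LDAP directory) is omitted for brevity:

```python
# Steps 3-8 of the client request, with every external dependency
# passed in as a stand-in callable.

def serve(url, dns_lookup, ldap_search, fetch_war, validate, run_war):
    key, ldap_servers = dns_lookup(url)           # step 3: KEY + SRV records
    for entry in ldap_search(ldap_servers, url):  # step 4: WAR directory entries
        war = fetch_war(entry)                    # step 5: retrieve the WAR
        if validate(war, key):                    # step 6: check the signature;
            return run_war(war, url)              # step 8: else try next cache
    raise LookupError("no valid WAR found for " + url)

page = serve(
    "http://www.freesoft.org/search?q=x",
    dns_lookup=lambda url: ("public-key", ["ldap1.freesoft.org"]),
    ldap_search=lambda servers, url: ["cacheA", "cacheB"],
    fetch_war=lambda entry: {"cacheA": b"bogus", "cacheB": b"signed"}[entry],
    validate=lambda war, key: war == b"signed",   # cacheA's copy is rejected
    run_war=lambda war, url: "<html>dynamic result</html>",
)
print(page)   # -> <html>dynamic result</html>
```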

Several other options present themselves.  Perhaps the LDAP directory
should include entries for web caches willing to run the Java and
serve the dynamic pages themselves, though this would present a
security risk since those caches might be untrusted by either client
or server.  Perhaps provision could be made for the server to issue
X.509 certificates certifying certain web caches as trusted.  Perhaps
the user should be prompted before embarking on the potentially time
consuming process of retrieving and locally processing a WAR.
Finally, the functionality of a "locally trusted cache" should
ultimately be rolled into the client itself, which should retrieve and
verify the integrity of the WAR before running the Java itself.

In summary, I recommend the following steps:

1. Recognize the importance of data-oriented design, as opposed to
   connection-oriented design.  Break the dependence on special server
   configurations and realize that the client has to do almost all the
   work in a scalable, cached, redundant web architecture.

2. Select standards for the network delivery of executable web
   content, and for packaging the contents of a web server into a
   single compressed archive.  Java/WAR seems the most likely current
   candidate.

3. Develop an LDAP schema for registering WARs, and for searching
   the registrations to find the WARs matching a particular URL.

4. Extend the WAR specification to include root URL, Java classes for
   determining lifespan of WAR, performing incremental updates, and
   other identified needs.  Specify the security environment in which
   these "foreign" WARs will operate.

5. Extend Squid to support the algorithm outlined above.  Alternately,
   extend Apache Tomcat to function as a web cache, with similar
   capabilities.

The caching scheme outlined above is far from perfect.  In my essay
"Data-oriented networking" I discuss more long-term prospects.
However, the current proposal has several key advantages: it can be
deployed using existing technology; it requires no client-side
changes or client-visible protocol updates; it allows web sites to
easily opt in so long as one public set of LDAP servers and/or trusted
caches is available; and it solves a pressing problem.  Ultimately,
the Internet is a work in progress, and its more technically savvy
users are probably ready for a serious attempt at a working caching
scheme for dynamic content.


Data-oriented networking

   "Data-oriented networking", Brent Baccala, Internet Draft

Domain Name System (DNS)

   Dozens of RFCs specify various aspects of DNS operation.  Only
   those most pertinent to basic DNS operation, SRV records, and
   KEY/CERT records are listed here.

   RFC 1034 - Domain Names - Concepts and Facilities

   RFC 1035 - Domain Names - Implementation and Specification

   RFC 1912 - Common DNS Operational and Configuration Errors

   RFC 2535 - Domain Name System Security Extensions

   RFC 2536 - DSA KEYs and SIGs in the Domain Name System

   RFC 2538 - Storing Certificates in the Domain Name System

   RFC 2782 - A DNS RR for specifying the location of services (DNS SRV)

   RFC 3110 - RSA/SHA-1 SIGs and RSA KEYs in the Domain Name System

   Paul Vixie's Internet Software Consortium produces BIND, the most
   widely used (and freely available) DNS server

Lightweight Directory Access Protocol (LDAP)

   RFC 2251 - LDAP v3 (protocol spec)

   RFC 2252 - LDAP v3 Attribute Syntax Definitions (schema spec)

   OpenLDAP is an actively developed (as of mid-2002) open source LDAP
   server, and C-based client library.  Various client implementations
   exist for other languages, such as Perl


   rsync is a program and protocol developed to incrementally update
   files that have only been slightly modified, by first transferring
   a set of MD5 digests that identify which parts of the file have
   been modified and only transferring those parts


   Java Virtual Machine (JVM) specification
      somewhere on http://java.sun.com/

   Bill Venners' excellent Under the Hood series for JavaWorld
   is a better starting point than the spec for understanding JVM.
   He also has written a book - Inside the Java Virtual Machine
   (McGraw-Hill; ISBN 0-07-913248-0)

   Java 2 language reference
      somewhere on http://java.sun.com/

   Java languages page (other languages that compile to Java VM)

   Criticism of Java

Java Servlets/WARs

   "Tomcat is the servlet container that is used in the official
    Reference Implementation for the Java Servlet and JavaServer Pages
    technologies."

   Java Servlets - server-side Java API (CGI-inspired; heavily
   HTTP-based) The Java servlet specification includes a chapter
   specifying the WAR (Web Application Archive) file format, an
   extension of ZIP/JAR


   RFC 3040 - Internet Web Replication and Caching Taxonomy
      broad overview of caching technology

   RFC 2186 - Internet Cache Protocol (ICP), version 2

   RFC 2187 - Application of ICP

   Squid software

   NLANR web caching project

   Various collections of resources for web caching
      http://www.web-caching.com/
      http://www.caching.com/

   IETF Web Intermediaries working group (webi)
      http://www.ietf.org/html.charters/OLD/web-charter.html

   IETF Web Replication and Caching working group (wrec)
      http://www.wrec.org/

   RFC 3143 - Known HTTP Proxy/Caching Problems

   Cache Array Routing Protocol (CARP) - used by Squid
      http://www.microsoft.com/Proxy/Guide/carpspec.asp
      http://www.microsoft.com/proxy/documents/CarpWP.exe

   RFC 2756 - Hypertext Caching Protocol (HTCP) - used by Squid

Napster and its variants

   Napster, the original peer-to-peer file sharing service, has been
   fraught with legal difficulties, having recently entered bankruptcy
      http://www.napster.com/

   Napster's protocol lives on, even if the service is dead.  It's
   basically a centralized directory with distributed data
      http://opennap.sourceforge.net/
      http://opennap.sourceforge.net/napster.txt

   Gnutella has emerged as the leading post-Napster protocol,
   employing both a distributed directory and distributed data
      http://www.gnutella.com/
      http://www.gnutelladev.com/
      http://www.darkridge.com/~jpr5/doc/gnutella.html

   Several popular clients use the Gnutella network and protocol
      http://www.morpheus-os.com/
      http://www.limewire.org/
      http://www.winmx.com/

   Other proprietary peer-to-peer systems
      http://www.kazaa.com/

   Other free peer-to-peer systems
      http://www.freenetproject.org/

Data-oriented Networking

Internet Engineering Task Force
Expires March 2003

		       Data-oriented networking

			   by Brent Baccala
			     August, 2002

This document is an Internet-Draft and is subject to all provisions of
Section 10 of RFC2026.



   Differentiates between connection-oriented and data-oriented
   networking, identifies the advantages of data-oriented networks,
   argues that Internet web architecture is becoming more
   data-oriented, and suggests ways of encouraging and accelerating
   this trend.

Contemporary Internet architecture is heavily connection-oriented.  IP
underlies almost all Internet operations, and its fundamental
operation is to deliver a data packet to an endpoint.  TCP uses IP to
sequence streams of data packets to those endpoints; higher-level
services, such as HTTP, are built using TCP.  All of these operations
are based upon the underlying IP addresses, which identify specific
machines and devices.  Even UDP operations are connection-oriented in
the sense that UDP addresses identify a specific machine on the
Internet with which a connection (even just a single packet) must be
established.  Note that I use the term connection-oriented in a
somewhat different sense than the traditional distinction between
connection-oriented and connectionless protocols.

More recently, Uniform Resource Locators (URLs) have emerged as the
dominant means for users to identify web resources.  The distinction
is not merely one of introducing a new protocol with new terminology,
either.  URLs are used to name blocks of data, not network devices.
Especially with the advent of caching, it's now clear that a web
browser may not have to make any network connections at all in order
to retrieve and display a web page.  "Retrieving" a URL differs
significantly from opening an HTTP session, since an HTTP session
implies a network connection to a named device, while accessing a URL
implies only that its associated data (stored, perhaps, on a local
disk) is made available.  HTTP, SMTP, ssh, and other TCP-based
protocols are inherently connection-oriented, while the URL is
inherently data-oriented.

The Internet is moving away from a connection-oriented model and
becoming more data-oriented.  Since the original Internet design was
modeled, at least loosely, after a telephone system, all of its
original protocols were connection-oriented.  Increasingly, we're
becoming aware that often a user is not interested in connecting to
such-and-such a computer, but rather in retrieving a specific piece of
data.  Since such operations are so common, Internet architects need
to recognize the distinction between connection-oriented and
data-oriented operations and design networks to support both well.
Data-oriented models will not replace connection-oriented models;
sometimes, you'll still want to make the telephone call.  Rather, the
pros and cons of each need to be understood, so both can be
incorporated into the Internet of the 21st century.

To understand the emergence of data-oriented networking, it is useful
to consider the historical development of the Internet.  Initially,
the driving application for what became the Internet was email,
allowing virtually instantaneous communications over large distances;
FTP and TELNET were second and third.  FTP provided file transfer and
a rudimentary publication system; TELNET extended the 1970s command
line interface over the network, letting people "log in" over the net,
thus allowing remote use and management of computer systems.

Even in these early years of the Internet, the network was becoming
more data-oriented than a cursory examination of its protocols would
suggest.  FTP archive sites, such as uunet and wuarchive, began
assembling collections of useful data, including programs, protocol
documents, and mailing list archives in central locations.  Other
sites began mirroring the archives, so that retrieving a particular
program, for example, did not require a connection to a centralized
server for the program, but only a connection to a convenient mirror
site.  The practice continues to this day.  Of course, accessing the
mirror sites required using the connection-oriented protocols, and the
process of finding a mirror site or archive that contained the
particular program you wanted remained largely a manual process.  It
still does.

A significant change occurred during the 1980s - the appearance of
graphical user interfaces (GUIs) in personal computers by the end of
the decade.  In the early to mid 90s, the world wide web extended the
GUI over the network, much as TELNET had extended the command line
interface over the net.  More than anything else, the web represents a
global GUI, a means of providing the commonly accepted point-and-click
interface to users around the world.

It is difficult to overstate the impact of the web.  The GUI was a
critical technology that made computers more accessible to the average
person.  No longer did you need to type cryptic instructions at a
command prompt.  To open a file, represented by a colorful icon, just
move a pointer to it and click.  Yet until the web, you still needed
to use the old command-line interface to use the network.  Your
desktop PC might use a GUI, but connecting to another computer
generally meant a venture into TELNET or FTP.  The web extended the
GUI metaphor over the network.  Instead of learning FTP commands to
retrieve a file, you could just browse to a web site and click on an
icon.

Other technologies could have provided a network GUI, but not as well
as HTML and HTTP.  X Windows certainly was designed specifically with
network GUI applications in mind, but provided so little security that
using it to "browse" potentially untrusted sites was never realistic.
AT&T's Virtual Network Computing (VNC) is similar to X Windows, and is
designed so that its effects can be confined to a single window.  With
some extensions, it could be used as the basis for a network GUI.
However, both X Windows and VNC share a single common major flaw -
they are connection-oriented protocols that presuppose a real-time
link between client and server.  The user types on the keyboard or
clicks on a link, then the client transmits this input to the server,
which processes the input and sends new information to the client,
which redraws the screen.  X Windows has never been widely used over
the global Internet, because the bandwidth and delay requirements for
interactive operation are more stringent than the network can
typically provide.  VNC is very useful for using GUI systems remotely,
but still doesn't provide the performance of local software.

The present HTML/HTTP-based design of the web does have one
overwhelming advantage over X Windows / VNC, however.  The web is
data-oriented, not connection-oriented, or is at least more so than
conventional protocols.  A web page is completely defined by a block
of HTML, which is downloaded in a single operation.  Highlighting of
links, typing into fill-in forms, scrolling - all are handled locally
by the client.  Rather than requiring a connection to remain open to
communicate mouse and keyboard events back to the server, the entire
behavior of the page is described in the HTML.

The advent of web caches changes this paradigm subtly, but
significantly.  In a cache environment, the primitive operation in
displaying a web page is no longer an end-to-end connection to the web
server, but the delivery of a named block of data, specifically the
HTML source code of a web page, identified by its URL.  The presence
of a particular DNS name in the URL does not imply that a connection
will be made to that host to complete the request.  If a local cache
has a copy of the URL, it will simply be delivered, without any wide
area operations.  Only if the required data is missing from the local
caches will network connections be opened to retrieve the data.
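
The cache-first primitive can be sketched in a few lines of Java.
This is a hypothetical illustration, not any real cache's API: the
basic operation is "deliver the named block of data for this URL",
and only a miss triggers any wide-area operation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: the primitive operation is delivery of a named
// block of data, not a connection to the host named in the URL.
class DataCache {
    private final Map<String, String> store = new HashMap<>();
    private final Function<String, String> originFetch; // wide-area fetch, used only on a miss

    DataCache(Function<String, String> originFetch) {
        this.originFetch = originFetch;
    }

    // Return the named block; only a cache miss opens a network connection.
    String get(String url) {
        return store.computeIfAbsent(url, originFetch);
    }
}
```

Everything the client does with the block afterwards happens locally,
with no connection held open.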

Experience with web caches demonstrates that data-oriented networks
provide several benefits.  First, the bandwidth requirements of a
heavily cached, data-oriented network are much lower than those of a
connection-oriented network.  Connection-oriented protocols such as X
windows, VNC, and TELNET presuppose a real-time connection between
client and server, and in fact could not operate without such a
connection, since the protocols do not specify how various user
events, such as keyclicks, should be handled.  All the protocols do is
to relay the events across the network, where the server decides how
to handle them, then sends new information back to the client in
response.  A data-oriented network, which specifies the entire
behavior of the web page in a block of HTML, does not require a
real-time connection to the server.  Once the data describing a web
page has been retrieved, the connection can be severed and the user can
browse through the page, scrolling, filling out forms, watching
animations, all without any active network connection.  Only when the
user moves to another web page is a connection required to retrieve
the data describing the new page.  Furthermore, since the data
describing the pages is completely self-contained, no connection to
the original server is required at all if a copy of the web page can
be found.  A copy, stored anywhere on the network, works as well as
the original.  As the network becomes more data-oriented, fewer and
briefer connections are required to carry out various operations,
reducing overall network load.

A data-oriented network is also more resilient to failures and
partitions than a connection-oriented network.  Consider the
possibility of a major network failure, such as the hypothetical
nuclear strike that originally motivated the Defense Department to
build the packet-based network that evolved into the Internet.  Modern
routing protocols would probably do a fairly good job of rerouting
connections around failed switching nodes, likely within a matter of
minutes, but what if the destination server itself were destroyed?
The connection would be lost, and no clear fallback path presents
itself.  The obvious solution is to have backup copies of the server's
data stored in other locations, but creating and then finding these
backups is currently done by hand.  Existing routing protocols can
reroute connections, but are woefully inadequate for rerouting data.

A more mundane, but far more common scenario is the partitioned
network.  Simply operating in a remote area may dictate long periods
of operation without network connectivity.  In such an environment,
it'd be convenient to drop any information that might be needed on a
set of CD-ROMs.  That works fine until the first search page comes up
that connects to a specialized program on the web server, or a CGI
script that presents dynamic content, or an image map.  Solutions have
been developed to put web sites on CD-ROMs - none of them standard,
most of them incomplete.  A more data-oriented design that didn't
depend on connections to a server would be far better suited to such
an environment.

HTML, the workhorse protocol of the web, was never designed with use
as a network GUI in mind, even though this is the role it has evolved
into.  It's the HyperText Markup Language (HTML), and hypertext is not
a GUI.  Hypertext is text that includes hyperlinks.  Perhaps we can
expand the definition somewhat into a "hyperdocument" that can include
colors, diagrams, pictures, and even animation.  A GUI is much more
than a hyperdocument, however.  A GUI is a complete user interface
that provides the human front end to a program.  Not only can it
include dialog boxes, pull down menus and complex mouse interactions,
but more than anything else it provides the interface to a program,
which could perform any arbitrary task, and is thus not just a
document.  The program could be a document browser, a document editor,
a remote login client, a language translator, a simulation package,
anything.  What was pioneered by Xerox PARC, deployed in the Apple
Lisa, marketed in the Macintosh and brought with such stunning success
to the
masses by Microsoft was not hypertext, but the GUI.  The GUI is what
we are trying to extend across the network, not hypertext, and thus
HTML just isn't very well suited for the task.

Since it wasn't designed to provide a network GUI, HTML doesn't
provide the right primitives for the task it has been asked to
perform, and thus we've seen a long series of alterations and
enhancements.  First there was HTML 2, then HTML 3, then HTML 4, now
HTML with Cascading Style Sheets, soon XHTML, plus Java applets,
Javascript, CGI scripts, servlets, etc, etc...  The fact that HTML has
had to change so much, and that the changes require network-wide
software updates, is a warning sign that the protocol is poorly
designed.  The problem is that HTML has been conscripted as a network
GUI, though, to this day, it has never been clearly designed with this
goal in mind.  Part of what is needed is a replacement for HTML
specifically designed to act as a network GUI.

In addition, one of the great challenges to a data-oriented model is
dynamic pages.  Presently, web caching protocols provide means for
including meta information, in either HTTP or HTML, that inhibits
caching on dynamic pages, and thus forces a connection back to the
origin server.  While this works, it breaks the data-oriented metaphor
we'd like to transition towards.  To maintain the flexibility of
dynamic content in a data-oriented network, we need to lose the
end-to-end connection requirement, and this seems to imply caching the
programs that generate the dynamic web pages.  While cryptographic
techniques for verifying the integrity of data have been developed and
are increasingly widely deployed, no techniques are known for
verifying the integrity of program execution on an untrusted host,
such as a web cache.  Barring a technological breakthrough, it seems
impossible for a cache to reliably run the programs required to
generate dynamic content.  The only remaining solution is to cache the
programs themselves (in the form of data), and let the clients run the
programs and generate the dynamic content themselves.  Thus, another
part of what's needed is a standard for transporting and storing
programs in the form of data.
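
A hedged sketch of the asymmetry described above, using only the
JDK's standard MessageDigest API (the class itself is invented for
illustration): a client can verify that data served by an untrusted
cache is authentic by checking it against a digest obtained from a
trusted source, while no comparable check exists for program
execution on that cache.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative class: cryptographic verification of cached *data*,
// the technique that has no known analogue for program execution.
class Integrity {
    static String sha256Hex(byte[] data) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e);  // SHA-256 is always present in the JDK
        }
    }

    // Accept the cached copy only if it matches the trusted digest.
    static boolean verify(byte[] cachedCopy, String trustedDigest) {
        return sha256Hex(cachedCopy).equals(trustedDigest);
    }
}
```

This is why the only remaining option is to treat the program itself
as a verifiable block of data and run it on the client.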

An important change in moving to a more data-oriented network would be
to replace HTML with a standard specifically designed to provide a
data-oriented network GUI.  The features of this new protocol:

  1) It must be data-oriented, not connection-oriented.  Thus, the
     protocol must define a data format that can describe GUI behavior
     on a remote system.  HTML already basically does this.

  2) It must be programmatic.  The whole point is to eliminate the server
     and replace it with a standard, network-deliverable specification
     of the GUI behavior.  The exact behaviors of GUI interfaces
     vary dramatically, and simply providing an escape mechanism
     to connect back to the server violates the data-oriented
     design goal.  Thus, the protocol must implement powerful enough
     primitives to describe arbitrary GUI behaviors without escape
     mechanisms, i.e. it must support arbitrary programming constructs.

  3) It must be secure.  Since the program may be untrusted to
     the client, it must be limited from performing arbitrary
     operations on the client system.

  4) It must provide GUI primitives, and cleanly interact with other
     GUI applications, such as window managers, and provide features
     such as drag-and-drop functionality between windows.

  5) It must provide backwards compatibility with the existing web.
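
As a sketch of what requirements (1) and (2) imply, consider a page
that is not a document but a program, delivered as data, implementing
some agreed interface.  The interface and class names below are
invented for illustration; no such standard exists today.

```java
// Minimal stand-in for real GUI primitives, so the sketch is
// self-contained.
interface Display {
    void show(String text);
}

// Hypothetical "network GUI" contract: the page's entire behavior is
// described by a program conforming to this interface.
interface NetworkGui {
    void render(Display d);        // draw the page's interface
    void onEvent(String event);    // handle user input locally, no server round-trip
}

// A toy page: its whole behavior ships as this class, so user events
// are processed on the client without any open connection.
class CounterPage implements NetworkGui {
    int clicks = 0;
    public void render(Display d) { d.show("clicks: " + clicks); }
    public void onEvent(String event) { if (event.equals("click")) clicks++; }
}
```

All of the page's event handling runs on the client, so no connection
needs to remain open while the user interacts with it.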

The programmatic requirement fairly well dictates some kind of
virtual machine architecture, and the obvious candidate is therefore
Java, but Java may or may not be the best choice.  Netscape began
work on a 100% Java web browser, but abandoned this effort in 1998.
Commenting on the demise of "Javagator", Marc Andreessen quipped - "it's
slower, it will crash more and have fewer features... it will simplify
your life".

This misses the point.  We're not trying to build a Java-based HTML
web browser that would simply achieve cross-platform operability.  The
goal is to build a web browser that, as its primary metaphor, presents
arbitrary Java-based GUIs to the user.  HTML could be displayed using
a Java HTML browser.  The difference is that the web site designer
controls the GUI by providing the Java engine for the client to use
for displaying any particular page.  Switching to a different web site
(or web page) might load a different GUI for interacting with that
site's HTML, or XML, or whatever.  Unlike Andreessen's "Javagator",
the choice of GUI is under the control of the web server, not tied
into a Java/HTML web browser.

For example, consider if a web site wants to allow users to edit its
HTML pages in a controlled way.  Currently, you have a few choices,
none completely satisfactory.  First, you could put your HTML in an
HTML textbox, and allow the user to edit it directly, clicking a
submit button to commit it and see what the page will actually look
like.  Alternately, you could allow the HTML to be edited with
Netscape Composer or some third party HTML editor on the client,
accepting the HTML back from the client in a POST operation.  This
provides the server very little control over exactly what the user can
and can't do to the page.  Since parts of the page might be
automatically generated, this isn't satisfactory, nor do we really
know much about this unspecified "third party editor".  On the other
hand, with a Java browser, the web site could simply provide a
modified HTML engine that would allow the user to edit the page, in a
manner completely specified by the web designer, prohibiting
modifications to parts of the page automatically generated, and
allowing special cases, such as spreadsheet tables within the page, to
be handled specially.

Another advantage to this proposal is that it provides a solution to a
problem plaguing XML - how do you actually display to the user the
information you've encoded in XML?  This is left glaringly unaddressed
by the XML standards, the solution seeming to be that you either use a
custom application capable of manipulating the particular XML data
structures, or present the data in two different formats - XHTML for
humans and XML for machines.  A Java-based web browser addresses this
problem.  You ship only one format - XML - along with a Java
application that parses and presents it to the user.
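
This idea can be sketched with the XML parser shipped in the JDK.
The <page> document structure and the presenter class are invented
for illustration: the site publishes one format, raw XML, and the
accompanying Java code decides how to present it to a human.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Hypothetical presenter: the site ships raw XML plus this class,
// which renders the machine-readable data in human-readable form.
class XmlPresenter {
    static String present(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
            // Pull out the fields this (invented) schema defines and
            // lay them out for a human reader.
            String title = doc.getElementsByTagName("title").item(0).getTextContent();
            String body  = doc.getElementsByTagName("body").item(0).getTextContent();
            return title.toUpperCase() + "\n" + body;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```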

On the other hand, let's keep Andreessen's criticism in mind.  Java may
not be suitable for such a protocol, for either technical or political
reasons.  The speed issues seem to be largely addressed by the current
generation of Just-In-Time (JIT) Java runtimes, but whatever the
standard is, it should be an RFC-published, IETF standard-track
protocol, and if the intellectual property issues around Java preclude
this, then something else needs to replace it.  Alternatives include
Parrot, the as-yet-unfinished Perl 6 runtime, and Microsoft's .NET
architecture, based around a virtual machine architecture recently
adopted as ECMA standard ECMA-335.

PDF also deserves consideration.  Though it lacks the generality to
provide a network GUI, its presentation handling is vastly superior to
HTML's, giving the document author complete control over page layout,
and allowing the user to zoom the document to any size for easy
viewing.  It is also easier to render than HTML, since its page layout
is more straightforward for the browser to understand.

A definite metaphor shift is required.  Rather than viewing HTML as
the primary standard defining the web, the primary standard must
become Java or something like it that provides full programmability.
Browsing a web page becomes downloading and running the code that
defines that page's behavior, rather than downloading and displaying
HTML that might contain an embedded applet.

Backwards compatibility can be provided along the lines of HotJava,
Sun's proprietary Java-based web browser, which implements HTML in
Java.  To display an HTML page, Java classes are loaded which parse
the HTML and display it within a Java application.  The browser
provides little more than a Java runtime that can download arbitrary
Java and run it in a controlled environment.  Initially, 99% of the
pages would be HTML, viewed using a standard (and cached) HTML engine
coded in Java.
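
This compatibility path can be sketched as follows; the interface and
registry below are illustrative assumptions, not HotJava's actual
design.  The browser amounts to a runtime that maps a content type to
an engine (itself deliverable and cacheable as data) and asks the
engine to display the page.

```java
import java.util.Map;

// Illustrative engine contract: in this sketch an engine just returns
// rendered output for a page source.
interface Engine {
    String display(String source);
}

class EngineRegistry {
    private final Map<String, Engine> engines;
    EngineRegistry(Map<String, Engine> engines) { this.engines = engines; }

    // Initially, nearly every page would resolve to the standard HTML
    // engine, coded in Java and cached like any other block of data.
    String browse(String contentType, String source) {
        return engines.get(contentType).display(source);
    }
}
```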

Notwithstanding the creeping featurism present in Java, adopting this
approach would avoid the creeping featurism so grossly apparent in web
browsers.  Even the casual observer will note that mail, news, and
teleconferencing are simply bloat that results in multi-megabyte
"kitchen sink" browsers.  Will the next release of Netscape, one might
ask, contain an Emacs mail editor with its embedded LISP dialect?  And
if not, why not?  Only because the majority of users wouldn't use
Emacs to edit their mail?  Why should we all be forced to use one type
of email browser?  Why should we have Netscape's email browser
packaged into our web browser if we don't use it?  Like the constant
versioning of HTML, the sheer size of modern browsers is a warning
sign that the web architecture is fundamentally flawed.  A careful
attempt to standardize "network Java" would hopefully result in
smaller, more powerful browsers that don't have to be upgraded every
time W3C revs HTML; you simply update the Java GUI on those particular
sites that are taking advantage of the newer features.

Another tremendous advantage is the increased flexibility provided to
web designers.  HTML took a big step in this direction with Cascading
Style Sheets, but CSS doesn't provide the power of a full GUI.  For
example, if a web page designer wanted to, he could publish an HTML
page with a custom Java handler that allowed certain parts of the HTML
text to be selectively edited by the user.  This simply can't be done
using CSS.

Network-deliverable, data-oriented GUIs aren't a panacea, of course.
For starters, one of the advantages of the present model is that all
web pages have more or less the same behavior (since they are all
viewed with the same GUI).  The "Back" and "Forward" buttons are
always in the same place, the history function always works the same
way, you click on a link and basically the same thing happens as
happens on any other page.  Providing the web designer with the
ability to load a custom GUI changes all that.  Standards need to be
developed for finding and respecting user preferences concerning the
appearance of toolbars, the sizing of fonts, the operation of links.
The maturing Java standards have already come a long way towards
addressing issues such as drag-and-drop that would have to be
effectively implemented in any network GUI.

Hurdles need to be crossed before we can reach a point where web
designers can depend on Java-specific features.  One possibility would
be to migrate by presenting newer web pages to older browsers using a
Java applet embedded in the web page.  Performance might suffer, but
clever design would hopefully make it tolerable.  For starters,
consider that the web page data presented to the applet need not be
the source HTML, but could be a processed version with page layout
already done.  Newer, Java-only browsers should be leaner and faster.

In summary, I recommend the following steps:

1. Recognize the importance of data-oriented design, as opposed to
   connection-oriented design.  Break the dependence on special server
   configurations and realize that the client has to do almost all the
   work in a scalable, cached, redundant web architecture.

2. Migrate the web towards being based on a model of a network GUI,
   rather than a massively enhanced hypertext system.

3. Select a standard for the network delivery of executable content,
   Java being the most likely candidate.

4. Develop a Java-based HTML browser along the lines of HotJava, but
   completely open, allowing existing HTML-based websites to be
   browsed via Java.  Provide an applet version that allows web
   designers to specify a custom Java applet to browse their HTML
   sites using conventional web browsers.

5. Develop a lean, fully Java-based web browser, with Multivalent
   being the most obvious candidate.

6. Recognize the transient nature of HTML/HTTP and specify their
   operation in terms of a generic API, based on the network
   executable content standard (probably Java), for finding and
   delivering the GUI presented by a specified URL.

With all the inertia built up behind the present web design, one needs
to question the wisdom of abandoning HTML and completely re-vamping
the structure of the web, even if a migration path is in place.  The
promise of data-oriented networking is a leaner, more reliable, more
efficient network.  If this analysis is correct, then the cost of
migrating away from a flawed design will ultimately be less than the
cost of constantly shoehorning it into roles it was never designed
for.  However, in his essay,
"The Rise of 'Worse is Better'", Richard Gabriel suggests "that it is
often undesirable to go for the right thing first.  It is better to
get half of the right thing available so that it spreads like a
virus."  Bearing this in mind, I've proposed an alternate set of
recommendations, aimed at something more immediately practical, in a
companion essay, "Standardized caching of dynamic web content".  At
the same time, I think it's time to take another look at the
Java-based web browser, and to seriously ask if Java isn't a better
choice than HTML for a network standard GUI.


Hypertext Markup Language (HTML) / Extensible Markup Language (XML)

   HTML has a long and sordid history.  HTML version 2, specified in
   RFC 1866, was one of the earliest (1995) documented HTML versions.
   Later revisions added tables (RFC 1942), applets (HTML 3.2),
   JavaScript and Cascading Style Sheets (HTML 4.01).


Universal Resource Locators

   RFC 1630 - Universal Resource Identifiers in WWW

   RFC 1737 - Functional Requirements for Uniform Resource Names, a
      short document notable for its high-level overview of URNs

   RFC 1738 - Uniform Resource Locators, a technical document of more
      importance to programmers than architects


   International standard X.509 (not available on-line)



DNS Security

   RFC 2535 - Domain Name System Security Extensions

   RFC 2536 - DSA KEYs and SIGs in the Domain Name System

   RFC 2538 - Storing Certificates in the Domain Name System

   RFC 3110 - RSA/SHA-1 SIGs and RSA KEYs in the Domain Name System


   Postscript specification

   PDF Reference: Adobe portable document format version 1.4
   (ISBN 0-201-75839-3)

   Ghostscript - freely available Postscript interpreter that also
      reads and writes PDF and thus can be used to convert PS to PDF

   Multivalent (see below) includes a Java PDF viewer

   html2ps - a largely illegible Perl script written by Jan Karrman
      to convert HTML to Postscript.  Yes, it can be done.

X Windows

   Developed by an MIT-led consortium, X Windows is one of the most
   successful network GUIs

VNC (Virtual Network Computing)

   Similar in concept to X Windows, but radically different in
   design - an absurdly simple protocol combined with various
   compression techniques to achieve decent WAN performance


   Java Virtual Machine (JVM) specification

   Bill Venners's excellent Under the Hood series for JavaWorld
   is a better starting point than the spec for understanding JVM.
   He also has written a book - Inside the Java Virtual Machine
   (McGraw-Hill; ISBN 0-07-913248-0)

   Java 2 language reference

   Java languages page

   Criticism of Java

Other virtual machines

   Perl 5 runtime

   Parrot - Perl 6 runtime

   Microsoft's .NET architecture includes the Common Language
   Infrastructure, based around a virtual machine, now adopted as
   ECMA standard ECMA-335

Various Java-based web browsers

   HotJava, Sun's Java browser, but with binary-only licensing

   Multivalent, an open-source web browser written totally in Java,
   with an extension API to add "behaviors" similar to applets

   NetBeans, an attempt to develop a "fully functional Java browser"

   Jazilla, a now defunct attempt to carry the "Javagator" project
   forward under an open source banner

Java Servlets/WARs

   "Tomcat is the servlet container that is used in the official
    Reference Implementation for the Java Servlet and JavaServer Pages
    technologies."

   Java Servlets - server-side Java API (CGI-inspired; heavily
   HTTP-based) The Java servlet specification includes a chapter
   specifying the WAR (Web Application Archive) file format, an
   extension of ZIP/JAR


   RFC 3040 - Internet Web Replication and Caching Taxonomy
      broad overview of caching technology

   RFC 2186 - Internet Cache Protocol (ICP), version 2

   RFC 2187 - Application of ICP

   Squid software

   NLANR web caching project

   Various collections of resources for web caching

   IETF Web Intermediaries working group (webi)

   IETF Web Replication and Caching working group (wrec)

   RFC 3143 - Known HTTP Proxy/Caching problems

   Cache Array Routing Protocol (CARP) - used by Squid

   RFC 2756 - Hypertext Caching Protocol (HTCP) - used by Squid

Napster and its variants

   Napster, the original peer-to-peer file sharing service, has been
   fraught with legal difficulties, having recently entered bankruptcy

   Napster's protocol lives on, even if the service is dead.  It's
   basically a centralized directory with distributed data

   Gnutella has emerged as the leading post-Napster protocol,
   employing both a distributed directory and distributed data

   Several popular clients use the Gnutella network and protocol

   Other proprietary peer-to-peer systems

   Other free peer-to-peer systems

Richard Gabriel, "The Rise of 'Worse is Better'"

Brent Baccala, "Standardized caching of dynamic web content"

The Spirit at Thirty

My earlier spiritual journey I documented in Bicycling across America. At the end of that account, I related how I had experienced a sort of revelation in Arizona, which could basically be summed up “Your problem is that you think you can do everything yourself.” I gave away my bicycle along with my money and almost all my worldly possessions, and started walking along the back roads of Arizona. After three days of this, having driven myself to walk forty miles with almost no sleep, I gave up. I walked back into Wickenburg, Arizona, contacted a good friend of mine, and got $200 wired to me for bus fare to California.

In the eight years since, I have often wondered about that experience. Did I set a pattern for the rest of my life by giving up? Did I commit, then and there, some fatal error from which I can never recover? If I had kept walking, would I have experienced some life-changing revelation like those of the prophets? Did I abandon God?

In my nights of despair, I plead with the Lord to forgive me this and my other sins of omission. I beg him not to give up on me. I implore him to make me an instrument of his will, to grant me the wisdom to know that will, and to bolster me with the courage I so often seem to lack. In depression, I muse that my life is already a failure, that I’ve already missed my fate, that everything from here on out is just a shell of a life, for “the man who liveth not his dream is living death.”

Then I pick myself up and carry on. I view my experience in Arizona as just one stumble among many, many that I’ve committed. I reflect on Christ’s promise that “he who believes in me shall not perish, but have eternal life,” and trust that God will find in his heart the mercy to do his will in my life, imperfect as it may be. I haven’t given up. Though the light was dim, and at times appeared to have vanished completely, I’m still moving forward.


Three years after the bicycle trip, in 1996, I returned to one of the places I had passed through on my bike – the Shiloh community in Sulphur Springs, Arkansas. A non-denominational Christian community nestled in the Ozark mountains, Shiloh numbers about a dozen long-time members and various transients. The community provides the no-stop-light town of Sulphur Springs with its only industry – a commercial bakery in the basement of the community’s main building, a one-time military academy on the crest of a grassy knoll. No doubt about it – Shiloh bakes the best bread I’ve ever tasted.

The community’s led by Pastor James, the aging inheritor from Shiloh’s founder. A quiet man, James reads heavily in mystical Christianity, and always conducted a meditation session at the outset of the community’s morning meeting. Prayer, singing, and some kind of spiritual reading (usually of a mystical nature, never the Gospels) were always mainstays of the hour-long meetings. Never, during the two months I was there, did I witness James take or administer communion.

Probably the most dominant personality was the pastor’s wife. In her late fifties and blessed with good health, Anna Lee managed the bakery, often donning a white hair net and helping work the assembly line. She was also one of the chief proponents of the community’s philosophy, which she usually summarized in the “Four Rules”: no smoking, no alcohol, no drugs without a doctor’s prescription, no sex outside a heterosexual marriage.

I made several friends at Shiloh – Paul Clough and John Knoderer, the local computer programmers, and, I think, Anna Lee herself. Most significant were two local teenagers I got to know – Jeremiah, a seventeen-year-old whose family rented a house from the community, and Robert, a thirteen-year-old who was good friends with Paul’s son, Micah.

Jeremiah’s interests included fast driving, loud rock music, and smoking marijuana. We hit it off right away. I tried to be a bit of a calming influence – teaching him how to start a stick-shift on a hill instead of just grinding the gears; driving slowly through town and saving Speed Racer for the highway. I remember him using my computer to research Marilyn Manson on the Internet and asking if I thought demonic influences are real. I replied in the affirmative, and Jeremiah later told me that he had destroyed all his Marilyn Manson CDs.

Robert, on the other hand, was a quieter boy who played Dungeons and Dragons with his friends and came up to visit me and browse the Internet almost every day, enjoying the interactive role-playing games, the net’s Multi-User Dungeons (MUDs). Robert would also practice on the piano while Jeremiah and I would fool around on the guitar. I adored Robert; found him quite attractive, really. Yet I was afraid of a sexual relationship developing, not because I was worried about the police or what people would think, but because I myself am very reluctant to explore gay sexuality, especially with a thirteen-year-old. The bottom line was that, to my lasting regret, I never told him how I really felt about him. Putting sex aside, the truth is that I loved him. Yet I never put my arm around him, never said the words, “I love you”. Teenagers need to know the difference between love and sex, I think, otherwise it’s easy to get them confused. Coming from a broken home, I think Robert needed love, and I desperately wanted to give it to him, but never could quite manage.

Finally, somebody smelt the marijuana smoke from Jeremiah’s and my near-daily smoke-outs, and all hell broke loose. After being confronted with this charge at the community meeting, a vote was taken that I was to leave in a week and not have any contact with the children in the meantime. I began preparing to leave, but the part about the children I ignored. Jeremiah’s father came to the next community meeting to voice his support for me, but Pastor James refused to let either of us speak and ended the meeting. The next day, one of the older ladies came into my office while Robert was there, told him to leave, and in about the nastiest voice you could imagine, told me “we’re not going to let you hurt these children”. I left, but not before literally wiping the dirt from my shoes, as the disciples were told by Christ:

And if anyone will not welcome you or listen to your message, shake off the dust from your feet as you leave that house or that town. I tell you the truth, it will be more bearable for Sodom and Gomorrah in the day of judgment than for that town.

Matthew 10:14-15

Friends, let me exhort you never to lay down any curse, even if you proclaim the Gospel and be rejected in everything. We are taught to love our enemies, not to curse them. That curse I laid has brought so much grief into my life that at times I cannot fathom how I could possibly have cursed the town where two of my best friends lived. Its most obvious effect was on me! Even though I wanted badly to maintain contact with my friends, I took the curse very seriously and broke off all contact with Sulphur Springs. After two years, my nagging concern for my friends began to win out over the curse. I wrote Robert a letter for his sixteenth birthday; it was returned undelivered, as he had moved. The next year I actually returned to the town, and it took another year to track down my friends. Jeremiah had married, had a kid on the way, but was in most ways the same person; we now stay in touch. On the other hand, Robert had changed completely, becoming very materialistic and selfish, and wanting nothing to do with me. Can I blame him? During the years when he needed me most, I was nowhere to be found. He was the closest thing I ever had to a little brother. I fear I’ve lost him forever.

The Drug War

Early in 1997, having returned from Arkansas, I lived with a college friend of mine who was waiting tables at a Glen Burnie restaurant. He was also a small-time drug dealer, keeping marijuana and cocaine in the house in addition to the usual alcohol. At any rate, the police found out and the house was raided. Five days later, we were evicted. What followed was the most profound faith struggle of my life.

In the midst of this crisis, I sought re-baptism. Through my prayers and contemplations, I recognized that Jesus had been baptized, not as an infant, but as a grown man, at the outset of his ministry. I decided to pursue the same course, though not for the redemption of sin (perhaps a serious error), but in search of an answer from God to this political campaign I was contemplating. Just as Christ received a sign at his baptism, before pursuing his ministry, so I sought a similar sign at mine. While this may seem incredibly arrogant (it seems so to me, in retrospect), I can honestly tell you that I entered into the venture with the profound conviction that if God wanted me to pursue this campaign, he would give me a sign at my baptism.

I fasted for a week, then traveled to Ohio, where I had met a minister during my bicycle trip who baptized by immersion. After attending his service, I asked him for baptism. Since he was busy that afternoon, he said that unless I could wait a few days, it would have to occur immediately. And that’s exactly what happened. He announced the baptism to those of his congregation still mingling after the service, we drove to a nearby lake, and with perhaps fifty witnesses, he baptized me in the name of the Father, Son, and Holy Spirit. I received no sign.

I drove home to Maryland telling myself that I didn’t have to do it, that I had no calling from God, that there was no obligation for me to pursue this campaign that so deeply troubled me. While I had many more doubts and agonies over it, I believe my baptism in Ohio was probably the turning point in my decision to scrap the campaign. In a moment of paranoia (What if the police busted me again?) I burned the notebook I had prepared in planning the campaign, and mostly got on with my life.

I would return to the drug war again. In early 2000, I had what you might call a relapse. I had rejected civil disobedience, but still considered the possibility of a speaking and protest campaign. I published It’s the Drug War, Stupid. Looking back on that document, I have to tell you that what disturbs me most about it is not the anger it relates, because that was real, but how political it is; how totally couched it is in political rights and strategies; how God has been completely edited out.

Some of what I proposed in that essay came about, though I had no part of it. The “shadow conventions” of 2000 highlighted the drug war as one of their issues, and were labeled as “ultra-left” by a society that split its vote between Al Gore and George W. Bush. I’m increasingly coming to a disturbing conclusion – that the majority of the people of this country want a war in their own land, against their own people, and are absolutely committed to a policy of “zero tolerance”.


I’ll probably end up as a monk, if not in name then at least in fact. My earliest direct exposure to monasticism came on the bike trip, when I visited a Benedictine monastery in Oklahoma. St. Benedict, the founder of this order, spent three years living in a cave, his only nourishment being bread lowered to him on a rope by a friend. Later, he founded the monastery at Monte Cassino and the Benedictine order. He lived about 1500 years ago.

More recently, I’ve read a biography of St. Francis of Assisi, the founder of the Franciscan order. St. Francis’ response to the Christian gospel was similar to St. Benedict’s, but also much different. Both men took their religion very seriously, and neither were content to just sing about heaven on Sunday mornings. Yet while Benedict cloistered himself in a monastery, Francis took to the road. After giving away all his worldly possessions, he began traveling around Italy by foot, preaching the gospel and begging for his food. Any money he received, he gave away immediately.

I don’t completely subscribe to Francis’ philosophy; you won’t catch me sprinkling ashes on my food because I think it tastes too good! Yet we are in agreement on many of the most significant points. I consider it a religious obligation to give to beggars, and recently have found myself on various occasions without a penny to my name. Yet I have no intention of getting a job just to produce money; I have plenty of important work to do, and frankly, pride. I despise the capitalists and will not support their nightmare system by working for them simply because I’m forced to if I want to eat. Like Francis, if I lack benefactors, I will simply go hungry. Yet God knows what we need, and will provide it – I’m not starving away, thanks to those who give to me, and particularly Bruce Caslow, my most significant supporter over the last few years. It’s Bruce who paid for an apartment in Washington when I couldn’t afford the rent; Bruce who was always tossing twenty bucks my way when I didn’t have anything to eat; Bruce who was always there to review an essay or discuss my spiritual trials.

I think we need both the Benedictine and the Franciscan ideals in our lives; we find both motifs in the life of Christ. Jesus spends forty days in the wilderness, withdraws onto a mountaintop to pray, spends all night in prayer. We need to withdraw into seclusion, perhaps best the seclusion of nature, to experience God in solitude. Christ also travels from town to town, stays with friends in Jerusalem, sends forth his disciples and tells them to take no money, or packs, or extra clothing. We also need to come down from the mountaintops and express our love of God through our fellow man. Honestly, the great saints seem to have known this. Francis at times withdrew into seclusion, and Benedict finally left his cave. Ultimately, we don’t need a ten-acre monastery or public vows to live as brothers in Christ. The monastery was wherever Jesus went, and the most important vows are the ones we make to God.

New Age Christianity

I’ve been exposed in the last year to New Age Christianity, most particularly through Neale Donald Walsch’s Conversations With God books. For those unfamiliar with this, Walsch claims that his books are essentially channeled from God. He would take a pad of paper, write a question, and wait for an answer to come into his head. Sometimes no answer would come, and he would put the pen down until the next day. When an answer would come, he would write it down and then ask another question. He wrote three books this way.

The basic tenet of these books is that We Are All One. When the Bible states that God made us in his image, it means this spiritually, not physically. We are, each of us, a little piece of God, which God created in order to experience the universe as individuals. Those who become completely self-aware, such as Jesus, realize their own oneness with God and, through faith, find power even over death.

This theology is radically different from traditional Christianity. It claims, among other things, that there is no Devil (we invented him ourselves); that we reincarnate again and again; that Jesus was not the only one to rise from the dead, and that we, like him, can conquer death, through faith; that spiritual masters generally don’t marry, not because they don’t have sex, but because they can’t make an exclusive commitment to one person.

I can’t quite figure what to make of this. If true, it means that we can pass through death, and if our faith is strong enough, be resurrected. If false, then it represents a temptation of the Devil and a path only to our own self-destruction. Russ Wise notes that the New Age offers man the same deal the serpent offered Eve in the garden: if you eat of this fruit, you will become like God. The fundamental question it poses is simple – is Christ a guide and teacher, to be followed and emulated, or is he the unique Son of God, whose miracles can not be duplicated?

Edgar Cayce

At a seminary, it’d be interesting to conduct a class on Modern Prophets. What do we make of people like Nostradamus? Edgar Cayce? Joseph Smith? A Course in Miracles? Conversations with God? We can’t just ignore them – the claims they make are too serious. Yet we’ve been taught there will be false prophets, so we can’t just accept them at face value, either. They require careful consideration.

Cayce lived in the early twentieth century, and would enter a sleeping, hypnotic trance in which he’d respond to questions with answers from a “Source” that appeared to have extra-worldly knowledge. The Source revealed that reincarnation occurs, that among Cayce’s previous lives was that of a priest in ancient Egypt, that the construction of the Great Pyramid of Giza was actually a prophecy in stone that records the exact moments of Christ’s birth and death, as well as the imminent entrance of humanity into a new age symbolized by the King’s Chamber, etc., etc.

In addition to the New Age ideas here, like reincarnation, I find Cayce disturbing because of some of the prophecies he made that I tie into my own life. He prophesied his own return “in the capacity of a LIBERATOR of the world in its relationships with individuals; for he must enter again in the age that is to come, or in 1998”. At the time of my contemplated drug war campaign in 1997 I knew none of this, but in retrospect I ask myself if that wasn’t the “appearance” that was to have occurred a year later, in the election year of 1998. And just when is “the age that is to come”? Is it a subtle shift, like the turn of the millennium, or the change from the Age of Pisces to the Age of Aquarius? Or is it a dramatic change, to be characterized by political upheaval, environmental disaster, or global unrest?

In retrospect, I wish I’d never read any of this! I’d rather just not know, and stumble along, making the decisions as best I can without having all this extra stuff nagging at me in my head. Others have similar doubts about Cayce; some of his prophecies just flat out never occurred.


Early in 2001, I had a dream in which I saw a newspaper tabloid on a supermarket checkout stand. Its headline gave three prophecies for the coming year – disaster for the United States, war in the Middle East, and the appearance of a great saint. Of course, my ego thrusts me into the latter role. Am I a great saint? If so, how do I “appear”?

New Age Christianity and the Cayce prophecies raise even more disturbing questions. Could I duplicate the feats of Christ? Be the reincarnation of St. Peter? Become a Messiah? This is the fundamental question raised by these teachings – was Jesus the unique Son of God, or are we all sons of God, who can seek to obtain the same level of faith and power?

Is this insanity? Not exactly. I don’t actually believe that I am Jesus, or God, or a Messiah. Yet the reading I’ve done raises these disturbing questions. It’s more an intellectual insanity, generated by competing theologies, than a physiological one with some chemical imbalance at its source.

The Spirit at Thirty One

In another dream, I was running through a cave-like maze of passages, fleeing in terror from some attacker. I soon realized, though, that my attackers weren’t really attacking me at all – they were mocking me and my books. Mocking my attempt to learn Spanish by reading it. I emerged from the cave and decided to return to the place I was fleeing from. Perhaps I thought I had killed someone; in fact, it was only a flesh wound. There was really nothing to run from at all; then I awoke.

So what am I running from? From my failure at Wickenburg? From human society? From the Drug War? From God? And what do I make of all these ideas and theologies I’ve been exposed to? Ultimately, I can’t answer these questions, and I doubt that anyone can. Only God holds the answers. So, through prayer, I’ve asked God to reveal these answers to me, and I trust that this way, I’ll get the answers from the only source that holds them.

As I finish this essay, I’ve just turned thirty-two, so perhaps the title is becoming something of a misnomer! In the last year, I’ve given up on spending all my time in front of a computer screen, thinking I’m going to save the world through a website. I’ve hitchhiked across the United States, down into Mexico, and back. I’ve become a lot more comfortable having no money, am willing to go hungry if need be, and don’t feel tied down to a nice apartment and a pile of possessions, though I regret that I can’t fit my piano into my backpack. I’m on my way now to spend at least a few weeks in the Appalachian Mountains, fasting and praying. Certainly Jesus did this at critical times in his life, and many were the saints and prophets, from Abraham to Francis, who found God in seclusion, in the wilderness. Hopefully, I’ll find these answers, too. In any event, I haven’t given up. The spirit at thirty-two is still searching…

The peace of Christ and the love of God be with you all.

Capitalism and Christianity

Is capitalism an un-Christian philosophy?

The answer to this question depends heavily on how you define your terms. “Capitalism” and “Christianity” are both complex words that mean different things to different people. Debating over the meaning of these words is largely pointless; it’s like arguing over whether a glass is half empty or half full. I’ll present my definitions up front, to make my meaning clear. If these words mean different things to you than they mean to me, then your answers may vary.

By Christianity, I refer to the religious and philosophical system taught by Jesus of Nazareth, and recorded primarily in the Bible’s four Gospels. I do not selectively endorse any one denomination or division of Christianity, nor do I reject any. The Bible is confusing, and there is room for honest disagreement among Christians. In my opinion, the key to Christianity is to believe in one man, Jesus Christ. To believe that he’s the son of God, that he came to this world and gave his life that we might be saved. To believe that one of the greatest gifts he left behind are his teachings, recorded for all time in the Gospels. To believe that his system, his philosophy, and not any other one devised by man, is the way to live your life. The parts we understand, we must strive to live in our daily lives, no matter how difficult or seemingly unreasonable. If any part of Jesus’ teachings were trivial or unimportant, he wouldn’t have bothered with them. If the ways of the world take precedence to you over the Gospel teachings, or if you simply don’t care what the Bible says, then read no further, as this essay will have little to say to you.

Capitalism, likewise, has several different connotations. In the course of writing and discussing this essay, I’ve identified three major interpretations of the term. Let me define them as follows:

  • capitalism¹ – a laissez-faire economic system, characterized by the separation of economy and state, “anti-socialism”, free markets, free trade, relatively light taxation, and a minimum of government interference in commerce

  • capitalism² – an industrial model of production, well illustrated by Henry Ford’s assembly line, characterized by heavy specialization of both capital and labor, economies of scale, with the cost of goods reflecting the distributed costs of production

  • capitalism³ – a pseudo-religion of greed, characterized by pursuit of self-interest, often associated with the claim that each individual, by advancing his own self-interest, ultimately advances the good of society

For the remainder of this essay, I’ll use the superscripts to indicate which meaning of capitalism I’m discussing.

I have no real objection to capitalism¹ or capitalism², and in fact reject socialism completely, but this isn’t the meaning of capitalism I wish to discuss. Likewise, to some people capitalism means a commitment to hard work and self reliance. I don’t really object to this, either, having no problem with either working hard or taking pride in your work, though I do feel that “self reliance” can be easily twisted into an insistence that others rely on themselves.

I take serious exception to capitalism³. One of the most important functions of religion is to provide us with a value structure through which to judge right and wrong. Capitalism³ is a philosophy of life that can only be described as pseudo-religion of greed. It usurps the role of religion to provide a distorted morality. “Give to all who beg from you,” Christianity teaches us. “What’s mine is mine,” the capitalist³ answers. “Love your neighbor as yourself,” is the Bible’s Golden Rule. “Take care of number one,” is the capitalist³ response. “Sell all your worldly possessions, give the money to the poor, and follow me,” Jesus told one of his questioners. The capitalist³ just laughs.

Let’s not be distracted by the capitalist³ talk of “freedom”, either. Someone who takes a gun and robs a convenience store has freedom. He’s just chosen to use it to evil ends. Freedom implies the ability to choose between good and evil, but doesn’t provide us with a value system to judge between them. This is the function of religion.

So often, when a capitalist³ talks about freedom, it’s really a clever attempt to intertwine capitalism¹ and capitalism³. Anyone opposed to capitalism³ is twisted into an opponent of capitalism¹, and the distinction between the two is glossed over or ignored completely. Anyone who opposes “capitalism” is depicted as a monstrous socialist opposed to freedom and liberty. In fact, just because we support capitalism¹, a society largely free from government control over the economy, doesn’t imply support of capitalism³, a dog-eat-dog world where men live like wolves and prey on each other as best they can. Freedom does not imply that everyone lives for himself… unless that’s what we choose it to mean.

These are my main objections to capitalism³:

  1. The values we promote.

    Don’t underestimate the impact society’s values have on people, particularly the youth. We need to teach and practice Christian values, to lead others clearly. Making money shouldn’t be our primary goal, and we shouldn’t allow money to interfere with our commitment to Christianity. Christianity’s two greatest commandments are to “love God with all your heart and all your mind” and to “love your neighbor as yourself”. Nothing’s wrong with working hard, as long as we’ve got the right goals. Our first goal in life must be to seek God’s will for us and put it into effect in our lives. Our second goal must be to love and serve others.

    If we have a product or service for which people are willing to pay, we can make money, but be sure not to turn away those who can’t pay. Remember the Christian commandment, “give to those who beg of you”; let’s be sure to honor it! Having money isn’t the problem; the problem is what people will do to get money and then to keep it. The Gospels make it clear that generosity is one of the great virtues of our religion.

    So many times, when someone comes up with some nifty new idea, they immediately start figuring if they can get a patent on it, slap some restrictive license on it, or just keep the details secret. Instead of immediately asking “how can we make money on this?”, we should instead start by asking “how can we best serve God and man with this?” Make the commitment to God and others first; let the money come later.

  2. The kind of society we build.

    Let’s face it – not all the people who try to start a company and make a ton of money actually succeed. Yet enough do succeed to make a difference in our lives – Microsoft, WorldCom, Exxon, GM, RCA. Imagine if as many people who tried to make a fortune instead set out to make the world a better place. Not all would succeed. Yet enough would succeed to make a difference, because it’s the attempt that counts. Little by little, we’d find ourselves living in a world of love and hope. Instead, little by little, we find ourselves living in a world of greed and despair.

  3. The legacy we leave.

    What do we want our children to say about us? Do we want them to answer with pride that their parents sacrificed to make the world a better place? Or are we content to let them shrug and say, “Yeah, they made a lot of money“? How do we want our age remembered by history? Are we willing to risk being judged along with the conquistadors and robber barons? Or will we sacrifice now, so that we may be judged along with the prophets and saints? Let’s decide that the future will look back on us and say, “these people did everything in their power for the good of others”.

  4. The treatment of dissidents.

    By “dissident” I mean anyone who won’t adopt the capitalist³ philosophy. My personal experiences in a capitalist³ society are far from pleasant. In my youth, I became quite adept with computers, and ended up working for some major computer companies in the early 1990s. Yet I couldn’t stomach the secrecy with which the technology was developed, and I decided that any software I wrote was going to be freely available to anyone who wanted it. That decision cost me my livelihood and turned me into an outcast on the fringe of society. And for what? Because I wanted to write software and publish it for free on the Internet. We need to build a world where people won’t be ostracized just because they won’t go along with “the system”.

A man cannot serve two masters. If he attempts to do so, the demands of his masters may for a while coincide, but ultimately will diverge. The two masters will demand two different courses of action, and then he must choose. Christianity and capitalism³ are two different masters promoting two different value structures.

Christianity teaches us to “give to all those who beg from us”. So long as we keep this firmly in mind, fine. Yet the capitalist³ philosophy is often one of selfishness. “I take care of myself; nobody else will take care of me.”

Of course, the capitalist³ would no doubt raise a flurry of objections:

  1. Capitalism³ works…

    …in the real world,” I can almost hear you adding. Well, Christianity never claimed to work in the real world. In fact, Jesus taught that Christianity would be rejected by the world, and that his disciples would be persecuted and killed.

    Consider also that capitalism³ is not the world’s only “success story”. Fascism worked. By the end of 1940, fascism had conquered all of Europe. Germany was fascist; Italy was fascist; Spain was fascist; Poland had fallen in a couple of days; France a matter of weeks. Fascism ruled the entire continent. Fascism was a “success”. Hitler felt so confident he invaded Russia.

    Communism worked. By the middle of the twentieth century, between Russia and China and their various satellites, communism ruled half this planet. Communism turned a backward, rural nation into an industrial superpower, put the first man into space, and cast its intellectual appeal to many of the world’s left-wing thinkers. Cuba looked to communism. Angola looked to communism. Communism was a “success”. Khrushchev pounded his shoe on the table and declared, “We will bury you!”

    Other notable “successes” include Negro slavery; the conquest of native Americans by both the Spanish and the Anglo Saxons; the establishment of global empires by Britain, France, and Holland; and the military dominance of the Mediterranean by Rome for nearly a millennium.

    Clearly, judging “success” is a difficult matter, made easier by the passage of time and quite difficult without the hindsight of history. Yet even if communism or fascism had genuinely succeeded over the long term, neither is a society I’d want to live in! Success shouldn’t be measured just by the expediency of the moment, but by moral and ethical considerations. To blandly declare “Capitalism³ works,” and to use this as a trump card to cancel all other considerations, is also to accept these other societies, because each, at some time and in some way, “worked”.

  2. You have to survive.

    Total relativism. People had to survive in Soviet Russia; the way to do it was to become a communist. People had to survive in Nazi Germany; the way to do it was to become a fascist. This argument can be used to justify anything.

    Jesus’ answer to this question was not to worry about survival; let God take care of your survival. My answer is slightly different. We do have to survive, and the way to survive is to take care of each other and to build a society where people can take care of themselves, and walking into Safeway with a $20 bill doesn’t count. If you’re dependent on another man for your food, freedom quickly becomes an empty euphemism. Government welfare programs simply replace one form of dependence with another.

    The capitalists³ don’t want freedom, except for themselves. You don’t make a lot of money by setting people free. In fact, quite to the contrary, the way to make a big pile of money is to make people dependent. Bernard Ebbers didn’t build WorldCom by making long distance communications free. The way to build a WorldCom is to put a switch on every telephone line in this country, then send people a bill every month and turn off their service if they don’t pay.

    Under capitalism³, everyone “has to survive” because everyone is dependent on the capitalists³ for food, housing, clothing, transportation, and pretty much everything else in life. The Christian solution is to love our neighbors, and one of the best ways to do this is to make our neighbors self-sufficient.

      “Give a man a fish, feed him for a day;
      Teach him to fish, feed him for life.”

  3. We don’t have raw, naked capitalism³; it’s regulated by the government.

    A good point, but not one we’d want to carry to its natural conclusion.

    Why do we have an Environmental Protection Agency? Basically, because a bunch of people decided that it was in their business interests to build factories that dumped all their waste into the nearest river. It’d be nice if the people building factories would design them to be clean, but then those factories would be more expensive, they wouldn’t be able to compete, and the clean factories would all go out of business. Eventually, people got sick of not being able to swim in their rivers, clamored to their government for a Clean Air Act and a Clean Water Act, and now every factory in this country is regulated by the federal government.

    Why do we have anti-trust laws? Basically, because people like John D. Rockefeller realized that their oil companies could make a lot more money if they also owned the railroad companies and charged competing oil companies ten times as much to use the same rail lines. All the competing oil companies would have far higher operating costs and eventually go bankrupt. It was a smart business decision. Eventually, people got sick of having their oil prices dictated by a monopoly, the government passed the Sherman Anti-Trust Act, and now every major business deal in this country requires government approval.

    Why is Microsoft now embroiled in an anti-trust lawsuit with the U.S. Justice Department? Because Bill Gates is acting in the heritage of Dow Chemical and Standard Oil. He’s putting his own profit interests ahead of the better interests of society. So Microsoft keeps all their source code secret, engages in restrictive licensing practices, violates networking standards, and deliberately breaks the backwards-compatibility of their software. These are good business decisions, and the trend is clear. Eventually, the entire high-tech software industry will be regulated by the federal government.

    The capitalists³ love to gripe about socialism, but capitalism³ itself is one road to socialism. The capitalists³, by a constant pattern of abuse, will create a society in which all aspects of everyone’s lives are eventually regulated by the government.

    We don’t want raw, naked capitalism³, nor do we want massive government regulation of our lives. The only alternative is for people to take responsibility for their own actions and do what is in everyone’s best interest. Otherwise, the only way we’ll have a decent society is for the government to force it on us.

  4. Capitalism³ gave us everything we’ve got today.

    Maybe, but I won’t argue the point. I don’t think we have to give up our modern technology to live as Christians. Even if we did, given the choice between a modern, advanced, rational, scientific world, and living a simple, primitive life according to the teachings of Christ, which would you choose?

  5. You can’t run a business like that.

    Then don’t run a business! Run a charity, or a philanthropy, or a non-profit organization. If the word “business” gets in your way, discard it, because almost anything can be done in a Christian way. Jesus doesn’t tell us what kind of house to build; he just gives us a foundation to build upon.

    If you’re running a restaurant, turn it into a soup kitchen. This doesn’t mean you have to run off your regular clientele, move to the inner city, and spray paint graffiti over your logo. Just make sure that when somebody comes in without money and asks for a meal, feed them! It doesn’t have to be the broiled lobster tail. Don’t hide or disguise this policy; make it clear to your workers and customers. If you have trouble paying your bills, let your suppliers know about your Christian practices, and if necessary find new suppliers who will reciprocate in kind. Go directly to the farmers if need be, and move your operation to a friendly church’s banquet hall if you can’t pay your rent. If some people leave and don’t come back, so be it. You can’t please everyone, but make sure one of the people you please is God.

  6. That sounds very noble, but I’m sick of working every day and want to be my own boss.

    This is the great lure of capitalism³. “Sign up for our system,” they say, “then you can work for yourself.” Well, I signed up seven years ago. I ran my own computer consulting practice, then I found two partners and started a regular company that eventually grew to have about a dozen employees. To make a long story short, there’s no better way to uncover the myths of capitalism³ than to run your own business. You don’t work for yourself. You work for the marketplace. You don’t make your own decisions. You do what sells. Unless you’re a sole proprietor, you’ll have salaries to pay, a significant tax burden, probably rent and insurance as well. If you don’t make money, you’ll lose your employees, be evicted from your space, go out of business and still have the government chasing after you for back taxes. If you can manage as a consultant or sole proprietor, you’re a lot better off, but don’t risk asking yourself if this is the best you can do for others. The answer may cost you your livelihood.

    Independence in capitalism³ is largely a myth. If you’re not aggressive and somewhat ruthless, you’ll always be a small player, still largely dependent upon the marketplace. The only way to become a big player is to go along with the program. It’s like going into a restaurant and being told that you can order anything off the menu, so long as it’s fish. If you love fish, that’s great, but what if you wanted chicken? You probably won’t come back to that restaurant, no matter how good the food, but the capitalists³ want every restaurant in town to serve only fish.

  7. This isn’t Christianity.

    One of the great advantages of Christianity is the Bible. We don’t have to take anybody’s word for Christianity; we have Jesus’ teachings, written down and preserved for us over 2000 years. To know Christianity, read the Bible, particularly the four Gospels, praying for wisdom and understanding. Don’t take my word for it, or anyone else’s. Remember that not everyone who claims to be a Christian will be saved. By the same token, don’t let the ways of the world and the opinion of others distort your interpretation.

  8. Christianity is based on faith, not works.

    This isn’t what Jesus said, and it isn’t what James said either. Faith is the basis of Christianity, but we’re clearly charged by the Gospels to put our faith into action.

  9. This just doesn’t make sense.

    Jesus never attempted to justify his philosophy by invoking reason or logic. These are the tools used by human philosophers to justify their systems of thought. Logic worked very well for science; it laid the foundation for all the technology we use daily. Scientists had developed logical systems to explain physics, chemistry and biology; perhaps, the thinking went, philosophers could develop similar systems to explain and govern human society. Thus, in the last few centuries, we’ve seen fascism, based on the logical, rational, scientific ideas of Charles Darwin; communism, based on the logical, rational, scientific ideas of Karl Marx; and capitalism³, based on the logical, rational, scientific ideas of Adam Smith. On the other hand, Christianity isn’t based on reason or logic; it’s based on faith.

  10. This is what “the people” want.

    A tricky argument that attempts to intertwine democracy and capitalism³. Democracy cannot be used as a trump to justify any course of action. Suffice it to say that capitalism³ must be judged on its own merits, not based on how many people support it.

Let’s not put our faith in capitalism’s false, worldly pseudo-religion of greed. Christianity is a real religion, with a real God, a real savior, real prophets, and real salvation. “For God so loved the world that he gave his only Son” to us. Will we accept him, and live his teachings in our lives, or turn him away?

Christianity and democracy in Les Misérables

Victor Hugo’s epic novel Les Misérables, set in post-Napoleonic France, explores a broad range of political, philosophical and religious issues. Two of the novel’s major philosophical themes are Christianity, personified by Valjean, and democracy, personified by Marius. In my opinion, Les Misérables represents Hugo’s attempt to reconcile the two; he fails.

The entire first volume is devoted to the development of Valjean’s character, and Christianity is the driving theme. First we meet the Bishop of Digne, known to the people of his town as a “just man”, and Hugo reinforces this. The Bishop gives up his episcopal palace because it’s needed by the hospital. The largest item in his budget is “for the poor”. A sudden windfall goes to the soup kitchen and orphans. He spends all day with a condemned murderer before his death.

Yet all this is preparatory to the entrance of Valjean, an unredeemed convict and outcast taken in by the priest after being turned away from every inn. “This is not my house,” he tells the stunned man, “it is the house of Jesus Christ.” Valjean betrays the Bishop’s trust, steals his valuable silverware and sneaks out the back door. Captured by the police, he is returned to the priest, who not only covers up for him by claiming that he gave him the silverware, but insists that he take the silver candlesticks as well, exemplifying Jesus’ commandment that if a man should take your coat, give him your cloak as well. The Bishop then imposes a benediction on Valjean:

“Jean Valjean, my brother, you no longer belong to evil, but to good. It is your soul that I buy from you; I withdraw it from black thoughts and the spirit of perdition, and I give it to God.”

The Bishop’s generosity triggers a profound spiritual crisis in Valjean, and he converts to Christianity. His is not an outward conversion of baptism or communion, but a deeper, inward conversion. Though he never sees the priest again, his life changes dramatically. Under an assumed name, he establishes himself in a small town, makes a clever invention which pulls the local industry out of recession, and in a few years is able to erect his own factory. With its proceeds, he improves the local hospital, builds two new schoolhouses, and funds a dispensary for the poor. In time, the King prevails upon him to become mayor. The bishop dies; Valjean wears black to mourn for him, symbolically taking the torch of Christianity, which he is to carry for the rest of the novel.

His willingness to restrain the police earns him the enmity of the town constable, Javert, destined to become his lifelong nemesis. Valjean once declared that there are no bad plants or bad men, only bad cultivators. Javert states that “these men are irremediably lost”. Javert, who knew Valjean in prison, suspects his true identity, and becomes more and more withdrawn from the mayor. The last straw comes when Valjean sides with a prostitute Javert is determined to imprison, invokes his powers as mayor, and frees her. He learns that she turned to prostitution to support her daughter. The woman dies, and Valjean promises on her deathbed to support her daughter, Cosette. Yet Valjean’s cover is soon blown, by his own refusal to let an innocent man, mistaken for him, go to the galleys for life, and he flees with Cosette. As the child grows into a young woman, Valjean lives in Paris, frequently changing identities to avoid the determined Javert.

Enter the last of the novel’s major characters – Marius, destined to fall in love with and marry Cosette. Like Valjean, he inherits a symbolic torch – the torch of revolution. His father was made a Baron by Napoleon on the battlefield of Waterloo, and passed the title to his son on his death. Marius has a hundred cards printed bearing the name “Le Baron Marius Pontmercy”, and fancies the restoration of the Napoleonic empire.

“Le Baron” is soon disowned by his maternal grandfather, and cast off into a life of poverty. He meets the Friends of the ABC, a revolutionary society of students, philosophers and poets led by the fiery Enjolras. While the Bishop of Digne converted Valjean through simple acts of generosity and mercy, Marius’ new friends resort to wit, philosophy, and a barrage of words to convince him to abandon Napoleon’s empire and adopt a new cause – republic and democracy – but the means remain the same: the sword, the cannon, the barricade.

Hugo goes to great lengths to present Marius to us in the most sympathetic light. He gives his poor neighbor twenty-five francs for rent when he himself has only thirty. Yet he also appoints himself judge over his neighbors, declares “these wretches must be stamped upon,” when he realizes that a robbery is about to take place next door, and sends for the police. He regrets this judgment when he realizes that the ruffian about to commit the crime saved his father’s life at Waterloo, and begins fumbling for another way out, but ultimately it is Javert who bursts into the room and disrupts the crime. Hugo conveniently arranges for this act to save Valjean, but did the Bishop of Digne “stamp upon” Valjean after his crime? Wouldn’t it have been a simple rationalization for the Bishop to think he had to stop others from being victimized by Valjean? Let’s not forget Valjean’s original crime of stealing bread to feed his younger brothers, which led him to the galleys and a life of crime. Marius has good intentions, but never adopts the Bishop’s willingness to turn away from a wrong, determined instead to fight his oppressors.

As revolutionary fervor again sweeps France, Enjolras rallies his secret society. Thirty years had passed since “Liberty, Fraternity, Equality” became the swish of the guillotine, the roar of cannon, the tramp of legions, and still they wanted more. The revolution comes, and Enjolras springs his plan into action, turning their favorite wine-shop into a fortress and erecting a barricade across the street. Valjean passes through the army lines wearing his National Guard uniform, enters the barricade, then gives up his uniform so a man with a family to support can slip away and be saved.

The government is determined to crush the uprising, and soldiers surround the barricade and storm it. Marius becomes the hero of the rebellion after winning the battle by threatening to blow up the barricade, himself, the soldiers, and all his friends. Now, if he could have blown up only the soldiers, would he have hesitated for a second? Moments earlier, Valjean was faking the death of his arch-nemesis Javert to free him on a side street. Would Marius fake the death of the soldiers? If the soldiers had not retreated, would Marius have carried out his threat of martyrdom? Probably. “Victory or death!” has been the rallying cry of radical patriots since day one.

Valjean, though present at the barricade, fires not a single shot. After freeing Javert, he turns his attention to Marius, who is determined to win his revolution or die fighting the soldiers. The government attacks again in force; the barricade falls. Marius, badly wounded with a broken collarbone and multiple head injuries, faints into the arms of Valjean, who lifts a sewer grate and drops in carrying the half-dead revolutionary. As he escapes through the sewer with the unconscious Marius, the wine-shop is taken, Enjolras is executed by firing squad, the ABC Society goes down fighting, and the Revolution of 1832 falls to pieces.

Marius recovers from his wounds to find much of his world collapsed around him. All his close friends are dead; what is left is his love for Cosette. They marry. Valjean confesses to Marius that he is a fugitive convict. Marius, not yet knowing that it was Valjean who saved him at the barricade, believing that Valjean shot Javert, and wondering if his six hundred thousand franc inheritance was stolen, gives Valjean the cold shoulder and gradually pushes him out of Cosette’s life. Valjean, believing that the girl has a husband and no longer needs a father, acquiesces.

Marius ultimately learns the truth – that the inheritance is legitimate, that it was Valjean who saved him at the barricade, that Javert’s murder was faked – and regrets having estranged Valjean. With Cosette in hand, he rushes to redeem himself with Valjean, only to find him on his deathbed. So Valjean dies, in the presence of his adopted daughter and son-in-law. Perhaps this is meant as another symbolic torch passing, but who will carry it on, and in what form? Will Marius “love his enemies”? Will his wife resist corruption by his hot-headed rebellion? Has the Bishop of Digne’s torch passed or finally died?

The failed 1832 uprising featured in Les Misérables was but one in a series of violent clashes spawned by the French Revolution. After declaring a constitutional monarchy in 1789, the French assembly within five years had executed its constitutional monarch. The constitution made no provision for trying the king; the national assembly simply tried him anyway. The masses packed into the Place de la Revolution and cheered as Louis XVI’s severed head was hoisted aloft. Of all the Mariuses leading the government, not one Valjean stepped forward to spirit the king away through the sewers under the city. After the king and the aristocrats went to the guillotine, next came the leaders of the revolution themselves. Robespierre, Saint-Just, Couthon – each got the six-inch haircut.

At last came Napoleon, whom Marius once exalted as a “sun rising”. After leading the French to devastate Europe, he was finally defeated and exiled. Yet any doubt that his was a popular dictatorship was put to rest following his escape from Elba, during the “100 Days”, when he landed on the French coast with 1200 men. Every town told to oppose him threw open its gates; every army unit sent to stop him cried “Vive l’Empereur!” Even after defeat in Russia, the loss of an entire army, and a country in ruin, thousands still turned to the conquering general.

To this day, the masses insist on immolating themselves on the barricades in pursuit of truth, justice, and the French legions storming across Europe. How many millions of Germans cheered for Hitler as he proclaimed the Anschluss? How many millions “believed” in communism when Lenin proclaimed a worker’s state in Russia? How many millions today think that greed is the driving force behind all human progress? How many will surrender their prized silverware to a convict and a thief? Yes, Christianity can save democracy, but only by dragging it unconscious through the sewers of Paris.

Democracy is a system of government where the majority of people choose not only their own leaders, but everyone else’s. Democracy has little to do with right; nobody has the right to choose someone else’s leader. Democracy is primarily about responsibility; the majority has the responsibility to choose everyone’s leaders. If they choose wisely, it will succeed; if they choose poorly, it will fail.

On Napster

napster.com, a website facilitating the on-line exchange of digital music, has been highly publicized by the mass media. The legal wrangling over copyright issues has overshadowed other, equally legitimate questions. Is Napster for real, or is it just hype? Are the issues it presents purely legal, or are there technical lessons to be learned, too? What does Napster reveal about the future of the Internet?

First, we need to recognize that Napster represents a real technological advance. It is one of the newest and most prominent examples of a directory service. Directory services are based on the realization that centralized data stores tend to generate performance bottlenecks. All the data being served to the clients has to come from a centralized server or a handful of centralized servers. Throwing bandwidth at the problem is sometimes realistic, but a better solution is to design more efficient networks. Distributing data sources across the network is a major, emerging technique for achieving greater efficiency.

For example, one of the reasons we currently lack decent video-on-demand services is the bandwidth requirement of video. It’s simply not feasible to construct a centralized server to feed two hour movies to a million people. The bandwidth requirements are too great; the centralized server becomes too much of a bottleneck. A Napster-esque solution would be to have thousands of video servers, each capable of serving perhaps a dozen video streams, spread all over the network. Due to the current bandwidth demands of video, this is still unrealistic, but similar schemes are immediately plausible for books, software, and websites in general.

In fact, it’s reasonable to suppose that at least 90% of the present Internet’s traffic is unnecessary. The net is young and rapidly evolving. The protocols currently in use are inefficient, some more so than others. As the network continues to mature, it will become more efficient, and the bandwidth requirements of particular applications will decrease. The present boom in bandwidth demand is driven by new users and new applications. At some point, most people will be “connected”, and the uses of the network will stabilize. From that point onward, improvements in network design will begin to drive bandwidth requirements downward. The network will be most inefficient while it is young, so we can expect bandwidth requirements to peak at some point, I’d estimate within the next two decades, and then begin heading down.

Directory services such as Napster will be instrumental in reducing demand for network bandwidth. Other keys to more effectively using bandwidth are compression, caching, and multicast, all of which are in their infancy. Many issues remain to be addressed, for example, server selection. Napster currently presents the user with a list of servers for each song, one of which is manually selected to download the song from. Developing automated techniques for server selection will be an important step forward in making this technology more seamless, and therefore more attractive for other applications.
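One automated approach to the server-selection problem mentioned above is a sketch like the following: probe each candidate server a few times and pick the one with the lowest median round-trip time. The `measure_rtt` function here is a simulated stand-in of my own devising; a real client would time a small request to each server.

```python
import random

def measure_rtt(server):
    """Stand-in probe: a real client would time a small request to the server."""
    return random.uniform(10.0, 200.0)  # simulated milliseconds

def pick_server(servers, probes=3):
    """Probe each candidate a few times; pick the lowest median RTT."""
    def median_rtt(server):
        samples = sorted(measure_rtt(server) for _ in range(probes))
        return samples[len(samples) // 2]
    return min(servers, key=median_rtt)

# Hypothetical candidate mirrors for one song.
candidates = ["peer-a.example", "peer-b.example", "peer-c.example"]
print(pick_server(candidates))
```

Using the median rather than a single probe guards against one unlucky measurement; more sophisticated clients could also weigh advertised load or download in parallel from several servers.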

Security deserves special mention, since distributing data across the net would seem to seriously compromise security, but this is probably not so. Encrypting the data allows it to be distributed even to insecure servers, which could serve the data, but couldn’t read it. Then, the centralized directory would provide a key that could be used to decrypt and read the data. Controlling access to the key would control access to the data. Typically block cipher keys are only a few dozen bytes long, so access to a 100KB file could be granted by a directory server in less than 1KB – a 100-to-1 savings in centralized bandwidth requirements. The authenticity of the data could be verified by X.509 certificates – placed in the directory, of course.
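The scheme above can be sketched in a few lines: encrypt the file once, mirror the ciphertext on untrusted servers, and let the directory hand out only the small key. This toy uses a keystream built from SHA-256 purely for illustration; a real system would use a vetted block cipher such as AES.

```python
import hashlib
import secrets

def keystream(key, length):
    """Toy keystream from SHA-256 in counter mode (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream inverts itself

key = secrets.token_bytes(32)            # ~32 bytes: all the directory must serve
ciphertext = encrypt(key, b"song data")  # what the untrusted mirrors serve
assert decrypt(key, ciphertext) == b"song data"
```

The point is the asymmetry: the mirrors carry the bulk of the bytes but can’t read them, while the directory’s per-file bandwidth is just the key, a tiny fraction of the file size.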

While Napster represents a real advance over older, more centralized, techniques, this doesn’t mean that the current protocol can’t be improved. Let me outline how I’d redesign Napster, if I were given the task:

  1. Use LDAP. The Lightweight Directory Access Protocol (LDAP) has become an accepted standard for directory service. Furthermore, a “pure” directory service, such as Napster, doesn’t require any special handling on the part of the directory server. All the server has to do is register directory entries, then feed them back out again in response to search requests. A standard LDAP server, such as OpenLDAP, could be used unmodified.
  2. Define and publish a standard schema. LDAP, and directory access protocols in general, use “schemas” to define the format of directory entries. In Napster’s case, a standard schema would probably include a “Song” class, defining artist, title, and year, and perhaps an “Album” class, listing all the tracks on a particular album. The “Song” class could then be extended (subclassed) into a “NetSong” class that would also include URLs where the song can be accessed. Using a standard, published schema would clearly define the directory structure, and make it easier to reuse the directory for new applications.
  3. Use HTTP or FTP. Just as there’s no need to create a custom directory service, there’s no need to invent new file transfer methods, either. Specifying a URL in the directory entry, using one of the standard methods, “http:” or “ftp:”, should suffice. Of course, most “clients” aren’t set up to be “servers”. In the present computing environment, Napster would be quite hard to configure if it relied on an external web or FTP server, and much more complex if it included an entire web server within it. The “peer-to-peer” paradigm ultimately implies that a machine can be simultaneously both a client and a server, and must be configured to act as both. This obviously contrasts with Microsoft’s policy of separate “client” and “server” operating system packages (the “server” usually being much more expensive), but free software hasn’t solved this problem completely, either. How exactly does an arbitrary software program go about registering itself with the local web server in order to share files?
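The schema idea in point 2 can be sketched with ordinary Python classes standing in for LDAP object classes; the class names follow the text, but the field names and sample entries are my own illustrations.

```python
from dataclasses import dataclass, field

@dataclass
class Song:
    """Base object class: describes a song, not where to get it."""
    artist: str
    title: str
    year: int

@dataclass
class NetSong(Song):
    """Subclass extending Song with URLs where the song can be fetched."""
    urls: list = field(default_factory=list)

# A toy directory of registered entries.
directory = [
    NetSong("Artist A", "Track One", 1999, ["http://peer1.example/track1.mp3"]),
    NetSong("Artist B", "Track Two", 2000, ["ftp://peer2.example/track2.mp3"]),
]

# A search request: match on artist, return the download URLs.
results = [entry.urls for entry in directory if entry.artist == "Artist A"]
print(results)  # [['http://peer1.example/track1.mp3']]
```

Because `NetSong` extends `Song` rather than replacing it, a client that only understands the base class can still search by artist, title, and year, which is exactly the reuse a published schema is meant to enable.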

Napster isn’t the first directory based system to be deployed on the Internet, but it is one of the newest and most exciting. If the government and economic leaders can be persuaded to surrender a measure of control, its decentralized nature may pave the way to a more distributed and more efficient network.

Wireless Internet

For a long time, conventional wisdom held that the telephone system was a natural monopoly, or at least a natural oligopoly, because of the need for a large physical infrastructure, namely, the telephone wires. In recent years, though, the widespread deployment of cellular telephones illustrates that this is not necessarily true. Clearly, cell phone networks are capable of handling significant traffic loads and delivering near-landline quality of service. The emergence of all-digital cellular telephones, such as PCS, shows that data can be effectively transferred using wireless, and leads me to wonder: could we build a totally wireless Internet?

I’m not the only person to ask this question. The Ricochet Network has been in operation for several years now, the PalmVII handheld offers wireless Internet access, and companies such as Intel are promoting the Mobile Data Initiative. Yet these efforts are largely technical in nature, while I perceive wireless as changing the fundamental rules of the game. Without dependence on a landline infrastructure, is telecommunications really a natural oligopoly anymore? Could we build a free wireless digital communications system?

First, what do I mean by the term “free”? Primarily two things – first, the absence of recurring charges, such as monthly or per-minute fees, and second, an open, non-proprietary infrastructure free from patent or regulatory barriers to entry. The only fee I’m prepared to concede is the initial purchase price of the equipment itself. The telephone will ideally become like a computer or a microwave oven – once you purchase it, you can continue using it free of charge.

The capitalist model would be for the communication system to be controlled by companies, driven by competition to improve service, but with access limited by economic barriers, i.e., the phone gets switched off if you don’t pay your bill. The socialist model would ensure access by having the government control the phone system, as well as everything else. A better solution than either would be to eliminate the large institutions entirely, by eliminating the centralized infrastructure. Decentralization, not privatization, should be the buzzword of freedom.

A decentralized communications infrastructure would have to be based primarily, if not totally, on wireless radio technology. Any non-wireless service, such as the existing telephone system, would require landlines connecting to a central office, implying right-of-ways, centralized ownership of the lines, and consequent dependence on large institutions, either governments or corporations. Only a wireless scheme would allow devices to communicate directly, without any centralized infrastructure.

Current wireless technology (i.e., cell phones) still relies on centralized infrastructure. Sadly, it’s designed to rely on it. A cell phone communicates exclusively with a radio tower, which then relays the call to its destination, typically over landlines. As the cell phone moves, it switches from one tower to another, but never communicates directly with another cell phone. I can’t make a phone call from one car to the next without going through a radio tower.

What’s needed is a new kind of wireless infrastructure – one where the telephones and computers are designed to communicate directly with each other, without relying on a phone company’s switches. Clearly, if I have such a wireless telephone, and my next door neighbor has one too, the two phones can connect directly and we can talk without any dependence on third parties, and consequently without any recurring charges.

Yet, the big question remains – can I call from Maryland to California with such a telephone? Hopefully, the answer is yes, and the key lies in the routing technology of the Internet. A typical Internet connection will be relayed through a dozen or more routers. No single connection exists between the source and the destination, but by patching together dozens of connections, a path can be traced through the network for the data to flow through. Network engineers have spent literally decades developing the software technology to find these paths quickly. Theoretically, there is no single point of failure in the system, since the routers can change the data paths on-the-fly if some part of the network fails. The keys to making it work are the adherence to open standards, such as TCP/IP, and the availability of multiple redundant paths through the network.

A wireless infrastructure can be built on similar technology. If I can call my neighbor’s telephone directly, and my neighbor’s telephone can reach the grocery store’s telephone directly, then I can call the grocery store by relaying the data through my neighbor’s telephone. If adequate bandwidth is designed into the system, my neighbor’s telephone can relay the data without any impairment to her service. She can be talking to her hairdresser without even knowing that her phone is relaying my conversation with the grocery clerk.
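The relaying idea above amounts to path-finding over a graph of direct radio links. A minimal sketch, with an invented three-phone reachability map, uses breadth-first search to find a chain of relays (real routing protocols are more elaborate, maintaining these paths continuously as nodes move):

```python
from collections import deque

# Illustrative radio-range graph: who each phone can reach directly.
in_range = {
    "me":       ["neighbor"],
    "neighbor": ["me", "grocery"],
    "grocery":  ["neighbor"],
}

def find_relay_path(source, dest):
    """Breadth-first search over direct radio links; returns the hop list."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == dest:
            return path
        for peer in in_range[path[-1]]:
            if peer not in visited:
                visited.add(peer)
                queue.append(path + [peer])
    return None  # no chain of relays reaches the destination

print(find_relay_path("me", "grocery"))  # ['me', 'neighbor', 'grocery']
```

My phone never reaches the grocery store directly; the path routes through the neighbor’s phone, which is exactly the relaying described above.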

What stands in the way of building such a free, national digital communications infrastructure?

First, standards are needed. Just as English is the standard used by the author and readers of this document, and TCP/IP is the standard used by the Internet devices that relay the document, standards are required for the wireless devices to communicate. IEEE recently standardized a wireless data LAN (802.11) capable of handling 1 to 2 Mbps. To put this into perspective, an uncompressed voice conversation requires 64 Kbps. Thus, a 1 Mbps circuit could handle 15 such conversations. Not only can I talk to the grocery store while my neighbor talks to the hairdresser, but a dozen other people can use the same circuit without any service impairment. Newer compression techniques can improve this performance by a factor of ten.
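The capacity figures work out as follows; the link and voice rates come from the text, while the tenfold compression gain is my rough reading of “newer compression techniques”:

```python
# How many voice conversations fit in one 802.11 link?
LINK_KBPS = 1000          # a 1 Mbps 802.11 link
VOICE_KBPS = 64           # one uncompressed voice conversation
COMPRESSION_FACTOR = 10   # assumed gain from newer voice codecs

uncompressed = LINK_KBPS // VOICE_KBPS                            # 15 calls
compressed = int(LINK_KBPS / (VOICE_KBPS / COMPRESSION_FACTOR))   # ~156 calls

print(uncompressed, compressed)  # 15 156
```

Even at the modest 1 Mbps rate, a shared relay link has enough headroom that one phone forwarding a few neighbors’ calls is unlikely to notice.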

IEEE 802.11 is a good start, but the power limitations imposed by FCC regulations may impede its use for any but short-range applications. However, Metricom’s Ricochet network demonstrates that this might not be a show stopper. Working in conjunction with power companies, Metricom pioneered the novel approach of putting low-power radio repeaters on existing utility poles. The repeaters communicated directly with each other, eliminating the need for a landline data connection; only power was required, which was readily available on the pole. A similar approach could be used to build a network that would provide 802.11 coverage to an entire metropolitan area.

Also, the existing 802.11 devices aren’t very sophisticated in their design. They’re designed to be cheap, not effective. Their single most glaring problem is their antenna design. Existing 802.11 transceivers use mainly low-gain, omnidirectional antennas, although Raytheon recently announced the availability of an 802.11 PCMCIA card with a jack for connecting an external, hopefully better, antenna. Improved antennas will probably take one of two forms. Adaptive arrays are preferred by the military, and justly so, but are complex and expensive. Directional arrays, typified by TV aerials, are simpler and therefore cheaper, but must be physically pointed at their destination. One possible scenario would be for routers to use the more expensive adaptive arrays, and for end systems to use mechanically steered antennas. In my opinion, the development of improved 802.11 devices is the single most important advance needed today.

Second, an initial infrastructure is required. An 802.11 telephone would be a popular item if everyone else had one, but initially few people would possess such devices, making it difficult if not impossible to route a connection through such a sparse matrix. Philanthropies could be formed to build infrastructures. A Ricochet-type network could be deployed in cooperation with power companies, who might be persuaded to donate the relatively small amounts of electricity the routers would consume. After an initial investment in the (hopefully) rugged and scalable pole-top devices, the entire network could be managed from a central location. At this point, the network would provide 802.11 service to an entire metropolitan area, jump-starting the service. As more and more people bought these devices, each capable of relaying traffic on its own, the dependence on the initial infrastructure would diminish, hopefully to the point where the pole-top devices wouldn’t need replacement when they started to fail.

Furthermore, users would want to call telephones on the conventional phone network, requiring some sort of gateway. A solution to this chicken-and-egg problem would be to provide mechanisms for some fee-based services. Thus, a service provider could construct a network that would, for a monthly fee, interconnect its users and provide gateway service to the existing phone network. It’s possible that the only fee the provider would need to charge would be for the gateway service – initially, almost all connections would go through the gateway, since few people would have the new phones and most calls would be relayed onto the existing phone system. As the wireless network became more and more widely deployed, more and more destinations would go wireless, and the reliance on gateway systems would diminish.

In short, wireless is in its infancy. This exciting new technology offers great possibilities not just to expand existing phone and data networks, but to break down the old service models and replace them with newer, more decentralized designs. The oft-touted idea of the free phone call might even become a reality.