Standardized caching of dynamic web content

Thursday, August 1st, 2002
Internet Engineering Task Force
INTERNET-DRAFT
Expires March 2003



             Standardized caching of dynamic web content

			   by Brent Baccala
			 baccala@freesoft.org
			     August, 2002

			 Address comments to:
		 dynamic-content-caching@freesoft.org

			Comments archived at:
       http://www.freesoft.org/Essays/dynamic-content-caching/



This document is an Internet-Draft and is subject to all provisions of
Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html



			       ABSTRACT

   Summarizes the present state of web caching technology.  Points out
   the need for caching dynamic web sites, and the inadequacy of
   present caching technology for anything but static sites.  Proposes
   the adoption of Java servlets, cryptographically signed Web
   Application Archives (WARs), and LDAP as standards for dynamic web
   caching, using an expanded interpretation of existing DNS standards
   to locate and authenticate cached information.

The World Wide Web (WWW), probably the most successful networking
technology of the 1990s, provides a global graphical user interface
(GUI) that presently dominates the Internet.  The current design of
the web has an overwhelming advantage over older connection-oriented
protocols such as TELNET or X Windows.  The web is data-oriented, not
connection-oriented, or is at least more so than conventional
protocols.  A web page is completely defined by a block of HTML, which
is downloaded in a single operation.  Highlighting of links, typing
into fill-in forms, scrolling - all are handled locally by the client.
Rather than requiring a connection to remain open to communicate mouse
and keyboard events back to the server, the entire behavior of the
page is described in the HTML.

The advent of web caches changes this paradigm subtly, but
significantly.  In a cached environment, the primitive operation in
displaying a web page is no longer an end-to-end connection to the web
server, but the delivery of a named block of data, specifically the
HTML source code of a web page, identified by its URL.  The presence
of a particular DNS name in the URL does not imply that a connection
will be made to that host to complete the request.  If a local cache
has a copy of the URL, typically because it was requested and
retrieved earlier, it will simply be delivered, without any wide area
operations.  Only if the required data is missing from the local
caches will wide area network connections be opened to retrieve the
data.  Generally, caches store content based on the URLs, and
sometimes use inter-cache protocols such as ICP to communicate to
other caches which URLs they possess.  A variant on this scheme is the
web replica, in which an entire web site, or some logical subsection
of one, is duplicated elsewhere.

Experience with web caches demonstrates that they provide several
benefits.  First, the bandwidth requirements of a heavily cached,
data-oriented network are much lower than those of an uncached,
connection-oriented network.  A cached copy of a web page, stored
anywhere on the network, works as well as the original.  As the
network becomes more heavily cached, fewer and more localized
connections are required to carry out various operations, reducing
overall network load.  Furthermore, cached or replicated web sites are
more fault-tolerant, since their data can still be accessed even if
the origin server fails or the network becomes partitioned.  A general
consensus seems to exist that caching improves network performance;
more widespread adoption of web caching has been limited by technical
challenges.

One of the greatest of these challenges is caching dynamic content,
that is, pages generated by software as they are requested, such as
response pages to search requests.  Presently, web caching protocols
provide means for including meta information, in either HTTP or HTML,
that inhibits caching on dynamic pages, and thus forces a connection
back to the origin server.  While this works, it negates the
advantages of caching.  To maintain the flexibility of dynamic content
in a cached network, we need to lose the end-to-end connection
requirement and this seems to imply caching the programs that generate
the dynamic web pages.  While cryptographic techniques for verifying
the integrity of data have been developed and are increasingly widely
deployed, no techniques are known for verifying the integrity of
program execution on an untrusted host, such as a web cache.  Barring a
technological breakthrough, it seems impossible for a cache to
reliably run the programs required to generate dynamic content.  The
only remaining solution is to cache the programs themselves (in the
form of data), and let the clients run the programs and generate the
dynamic content themselves.  Thus, what's needed is a standard for
transporting and storing programs in the form of data.

A closely related problem arises when replicating a web site.  A
significant hurdle for building web replicas is the lack of a standard
to deliver the executable components that underlie dynamic content.
While scripting languages such as Perl and Python are readily
available, installing a web replica almost invariably requires
tweaking configuration files and downloading various additional
packages needed by the scripts.  Without a standard for dynamic
content, there is simply no way to automatically replicate a web site,
unless its content is completely static.  Also, running a Perl script
typically provides little in the way of security.  Either the script
must be carefully reviewed by the installer, or the author must simply
be trusted.

Java "servlets" are a step in the right direction, since they provide
a CGI-type capability that enables a web cache to present dynamic
content without a connection to the origin server.  Since they are
Java-based, they provide solutions to the security issues that
surround something like Perl.  Java's security model provides the
tools to limit servlet access to the host system.  This allows a
cached servlet to reference a collection of Java classes it needs for
proper operation, and have them loaded automatically without the need
of manual intervention.

Part of the Java servlet specification is WAR (Web Application
Archive), an extension to JAR that provides Java servlets, HTML and
JSP pages, and XML meta data all packaged up into a single archive
file to provide a "web application".  In the current implementation,
the server administrator "installs" the WAR at a particular URL by
loading it onto a Java servlet-enabled web server.  If the WAR format
were altered slightly to include, perhaps in the XML meta data, a
"master" URL, and the servlet-enabled web server were to function more
as a proxy, handling requests locally if it possessed a valid WAR,
passing them along otherwise, this would be a big step in the right
direction.  Ultimately, though, to get away from having to trust a
proxy to execute WAR content, the client has to execute the content
itself.  Servers and caches should eventually do nothing but hand out
data, and the responsibility for executing it should fall exclusively
to
the client, not the cache.  For the time being, using a local, trusted
cache will enable experimentation with these ideas without changing
client implementations.

Using WARs for application caching, instead of the manual installation
of applications that they were originally designed for, presents some
challenges.  Beyond the XML entries specifying the base URL, further
entries may be needed to specify a time
interval for which the WAR is valid, as well as whether an outdated
WAR can continue to be used if a more recent one can't be retrieved.
Furthermore, Java servlets typically run with a fairly trusted
security model.  A more restricted security environment should be used
for cached WARs downloaded from foreign web sites.
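
As a rough illustration, a cache might build such a restricted
environment out of the standard java.security classes.  This is only
a sketch, not the proposed mechanism; the class name and the choice
of granting only a connection back to the origin site are invented
for illustration:

   import java.io.File;
   import java.net.SocketPermission;
   import java.security.CodeSource;
   import java.security.Permissions;
   import java.security.ProtectionDomain;
   import java.security.cert.Certificate;

   // Hypothetical sketch: a foreign WAR gets only the permissions it
   // minimally needs; here, connecting back to its own origin site.
   public class ForeignWarSandbox {
       public static ProtectionDomain restrictedDomain(File war,
               String originHost) throws Exception {
           CodeSource source = new CodeSource(war.toURL(),
                                              (Certificate[]) null);
           Permissions perms = new Permissions();
           // No file, exec, or property permissions are granted.
           perms.add(new SocketPermission(originHost + ":80", "connect"));
           return new ProtectionDomain(source, perms);
       }
   }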

Also, provisions should be made for incremental updating of the WAR,
since only a portion of a large archive may change in an update.
Although protocols such as rsync have been developed to incrementally
update files, they have limitations.  Rsync depends on changes being
localized within the file.  Files with small changes spread widely
across them, such as search engine indices, don't update well using
rsync, suggesting that something more flexible would be preferred.
Since the WAR is already Java-based, perhaps specifying Java classes,
or pointers to Java classes, in the WAR for performing incremental WAR
updates would provide a powerful mechanism for tailoring the update
mechanism to the type of files contained in the archive.  Perhaps many
of these functions, like deciding the validity of a WAR, should be
specified via Java classes, for maximum flexibility.
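
For instance, the meta data might name classes implementing
interfaces along these lines.  Both interfaces are hypothetical,
sketched here only to suggest the shape of the mechanism:

   import java.io.File;
   import java.io.IOException;
   import java.net.URL;

   // A WAR's meta data could name an implementation of this interface;
   // the implementing class would itself ship (signed) inside the WAR.
   public interface WarUpdater {
       // Returns true if the local copy was brought up to date, false
       // if a full re-retrieval from the master URL is needed instead.
       boolean update(File localWar, URL masterUrl) throws IOException;
   }

   // A companion hook for deciding validity, along the same lines.
   interface WarValidator {
       boolean isStillValid(File localWar, long retrievedAtMillis);
   }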

Security and authentication are major concerns, especially in a cached
environment.  Protocols exist to provide authentication services, but
each has outstanding issues.  Some, such as DNS key services, are not
widely deployed.  The most widely deployed solution - X.509
certificates - has been priced and managed into a realm where only
e-business sites can realistically justify the cost.  Web security
can't be just for those who can and will
shell out hundreds of dollars for certificates that keep expiring.  In
a heavily cached environment, it's easier than ever to spoof
somebody's URLs, and X.509-based authentication needs to be in place
for 99% of the net's web sites, not 1% of them.  Standards exist
for storing public keys in DNS (KEY and CERT resource records),
which can be used to validate signed JAR/WAR files.
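
A client or trusted cache might check a retrieved WAR along these
lines, using the signature verification built into java.util.jar.
Fetching the KEY record and decoding it into a java.security.PublicKey
(dnsKey below) is assumed to happen elsewhere:

   import java.io.InputStream;
   import java.security.PublicKey;
   import java.security.cert.Certificate;
   import java.util.Enumeration;
   import java.util.jar.JarEntry;
   import java.util.jar.JarFile;

   public class WarVerifier {
       // The JarFile must be opened with verification enabled (the
       // default).  Returns true only if every content entry is signed
       // by the key published in the site's DNS KEY record.
       public static boolean verify(JarFile war, PublicKey dnsKey)
               throws Exception {
           byte[] buf = new byte[8192];
           for (Enumeration e = war.entries(); e.hasMoreElements(); ) {
               JarEntry entry = (JarEntry) e.nextElement();
               // Directories and the signature files themselves are
               // never signed, so they are skipped.
               if (entry.isDirectory() ||
                       entry.getName().toUpperCase().startsWith("META-INF/"))
                   continue;
               // An entry must be read in full before getCertificates()
               // reflects the result of the signature check.
               InputStream in = war.getInputStream(entry);
               while (in.read(buf) != -1) { /* drain to verify */ }
               in.close();
               Certificate[] certs = entry.getCertificates();
               if (certs == null)
                   return false;       // unsigned entry; reject
               boolean matched = false;
               for (int i = 0; i < certs.length; i++)
                   if (certs[i].getPublicKey().equals(dnsKey))
                       matched = true;
               if (!matched)
                   return false;       // signed, but not by the DNS key
           }
           return true;
       }
   }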

For more rapid response time, the Range: header could be used to
retrieve first the WAR file's table of contents, then the compressed
data of the particular URL, resulting in a retrieval time comparable
to straight HTTP, ignoring the search time required to find the cache
item to begin with and the compilation/startup time of any dynamic
code (both of which may be significant).  Of course, in addition to
such a "partial retrieval", a cache could do a "full retrieval",
obtaining the entire packaged WAR and beginning to share it with other
caches.  The decision of how to choose between partial and full
retrieval is left "for further study", in other words, the user has to
make those decisions manually until we figure it out better.  Napster
has demonstrated that letting the users make caching decisions
manually is workable, so long as the cache items are reasonably sized
(not too large or too small) and well labeled.
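
In Java, such a partial retrieval might look like this; the byte
offsets are assumed to have been learned from the WAR's table of
contents, and the class and method names are arbitrary:

   import java.io.InputStream;
   import java.net.HttpURLConnection;
   import java.net.URL;

   public class PartialFetch {
       // Fetch only bytes first..last of a remote WAR, for example the
       // compressed data of one URL, rather than the whole archive.
       public static byte[] fetchRange(URL war, long first, long last)
               throws Exception {
           HttpURLConnection conn = (HttpURLConnection) war.openConnection();
           conn.setRequestProperty("Range", "bytes=" + first + "-" + last);
           if (conn.getResponseCode() != HttpURLConnection.HTTP_PARTIAL)
               throw new Exception("server ignored the Range: header");
           int len = (int) (last - first + 1);
           byte[] data = new byte[len];
           InputStream in = conn.getInputStream();
           for (int off = 0; off < len; ) {
               int n = in.read(data, off, len - off);
               if (n < 0)
                   break;          // short response; sketch keeps it simple
               off += n;
           }
           in.close();
           return data;
       }
   }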

A major choice remains, that of the search protocol to find the cached
WARs.  Mainstream caching research tends to largely ignore the most
successful example of a cached network service - Napster and its
various spinoffs, most notably Gnutella, which seem to go by the
buzzword peer-to-peer file sharing, or P2P.  For example, RFC 3040,
"Internet Web Replication and Caching Taxonomy", a January 2001
document discussing "protocols, both open and proprietary, employed in
web replication and caching today," never mentions the word "Napster".
Since peer-to-peer was designed to share music and not HTML documents,
the oversight can be forgiven, but this point needs to be made and
made strongly - Napster, Gnutella, and friends _are_ caching services,
and by far the most successful ones built to date.  Peer-to-peer
seems to be the way to go.

The legal problems of Napster and the technical community's highly
critical reception of Gnutella argue against adopting either of these
protocols.  At present, LDAP seems the best choice, due to its
maturity as a protocol, the widespread availability of both client and
server implementations, and its straightforward application to the
problem at hand.  The only serious issue surrounding LDAP is the lack
of a standardized means for server location in a P2P environment, the
critical issue swirling around Gnutella.

I suggest dealing with both the security issues and the P2P server
location issue through a simple solution: assume the correct operation
of DNS even in the face of server failure.  This allows site
administrators to use resource records to specify both a set of LDAP
servers to search for WARs, as well as cryptographic keys to verify
the contents of those WARs once they are retrieved.  Although this
makes proper operation of a cached web site dependent on proper DNS
operation, this should presently be a minor tradeoff, since proper web
site operation is already based on DNS, and DNS has proven to be one
of the most reliable of the Internet technologies.

Thus, to enable dynamic web caching, as outlined in this document, a
web server administrator should add two kinds of additional resource
records to the web server's DNS records.  First, a set of SRV records
should specify a set of LDAP servers, any of which can be searched for
the web site's WARs.  These LDAP servers should form a replicated set,
so that a response from any one of them should be considered a
complete answer by a client.  These servers may also allow arbitrary,
unauthenticated web caches to add entries to the LDAP directory when
they elect to cache one or more of a site's WARs.  Since clients are
expected to cryptographically verify a WAR upon retrieving it,
allowing unauthenticated additions to an LDAP directory should not
allow site spoofing, but a large number of bogus WAR entries could
form the basis for a denial of service attack.  A benefit of this
proposal is that site administrators can select sets of LDAP servers
based on their own policies.  At least one set of publicly updatable,
replicated, highly available LDAP servers should exist for the use of
small web sites without the capability to set up large replicated
installations.
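
Since no LDAP schema for WAR registration has been defined yet, the
following sketch of a cache registering its copy necessarily invents
its details: the "warURL" and "cacheURL" attributes and the flat
directory layout are placeholders, while the JNDI calls themselves
are standard:

   import java.util.Hashtable;
   import javax.naming.Context;
   import javax.naming.directory.BasicAttribute;
   import javax.naming.directory.BasicAttributes;
   import javax.naming.directory.DirContext;
   import javax.naming.directory.InitialDirContext;

   public class WarRegistration {
       // Announce to one of the site's LDAP servers that this cache
       // holds a copy of the WAR and will serve it to others.
       public static void register(String ldapUrl, String siteUrl,
               String cacheUrl) throws Exception {
           Hashtable env = new Hashtable();
           env.put(Context.INITIAL_CONTEXT_FACTORY,
                   "com.sun.jndi.ldap.LdapCtxFactory");
           env.put(Context.PROVIDER_URL, ldapUrl);
           DirContext ctx = new InitialDirContext(env);
           BasicAttributes attrs = new BasicAttributes();
           attrs.put(new BasicAttribute("warURL", siteUrl));    // invented
           attrs.put(new BasicAttribute("cacheURL", cacheUrl)); // invented
           // A real schema would also dictate object classes and a
           // proper distinguished name; this is purely illustrative.
           ctx.createSubcontext("warURL=" + siteUrl, attrs);
           ctx.close();
       }
   }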

The DNS SRV records in question can simply be the "_ldap._tcp" records
mentioned as an example in RFC 2782.  Thus, to specify LDAP servers for
registering or searching WARs for "www.freesoft.org" URLs, DNS SRV
records should be added for "_ldap._tcp.www.freesoft.org".  In the
case of the publicly available LDAP servers mentioned above, and other
LDAP servers used by multiple web sites, careful consideration should
be given to making the "_ldap._tcp" record a CNAME pointing to a set
of SRV records, allowing the LDAP server administrators to modify the
list of LDAP servers without requiring changes to every web site using
the service.  Furthermore, the use of "_ldap" for this new service may
conflict with existing LDAP applications.  Another name, perhaps
"_webldap" might be a better choice.  Another possibility would be to
use both names, specifying that "_webldap" would take precedence over
"_ldap" for this application, and the "_ldap" records would be used
only if "_webldap" records did not exist.  This would allow the
Internet community to use "_webldap" if needed, expecting that this
name would simply fall into disuse if only "_ldap" is really needed.
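
Locating the LDAP servers is straightforward with the JNDI DNS
provider that ships with current JDKs.  A sketch of the fallback
logic just described, with "_webldap" tried before "_ldap":

   import java.util.Hashtable;
   import javax.naming.Context;
   import javax.naming.NamingException;
   import javax.naming.directory.Attribute;
   import javax.naming.directory.DirContext;
   import javax.naming.directory.InitialDirContext;

   public class LdapServerLocator {
       // Returns the SRV record set for a site, or null if none exists.
       // Each attribute value is "priority weight port target".
       public static Attribute findSrv(String site) throws NamingException {
           Hashtable env = new Hashtable();
           env.put(Context.INITIAL_CONTEXT_FACTORY,
                   "com.sun.jndi.dns.DnsContextFactory");
           DirContext dns = new InitialDirContext(env);
           String[] names = { "_webldap._tcp." + site,
                              "_ldap._tcp." + site };
           for (int i = 0; i < names.length; i++) {
               try {
                   Attribute srv = dns.getAttributes(names[i],
                           new String[] { "SRV" }).get("SRV");
                   if (srv != null)
                       return srv;
               } catch (NamingException e) {
                   // name does not exist; fall back to the next one
               }
           }
           return null;
       }
   }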

Also, the web administrator will need to add at least one KEY record
specifying a public key that must be used by clients to validate the
integrity of a retrieved WAR.  Due to the ease with which a bogus WAR
could be registered with a public LDAP service, this is regarded as a
critical step.  The administrator must provide the KEY record and the
client must validate it before trusting the WAR.  Unsigned WARs are
invalid and so are DNS entries without KEY records - both the SRV and
KEY records must be present.  Perhaps a CERT record would be a better
choice than KEY; we also need to consider how multiple KEY or CERT
records should be handled by a client.

So, for example, consider the "www.freesoft.org" web site, originally
specified in DNS like this:

   $ORIGIN freesoft.org.

   www		IN  CNAME          sparky.freesoft.org.
   sparky	IN  A	           4.22.66.35

To add WAR-based caching of dynamic web content for this site, records
similar to these should be added:

   www			IN  KEY	           --- public key goes here ---
   _ldap._tcp.www	IN  CNAME          ldapsrv
   ldapsrv		IN  SRV  0 0 389   ldap1.freesoft.org.
   ldapsrv		IN  SRV  0 0 389   ldap2.freesoft.org.
   ldapsrv		IN  SRV  0 0 389   ldap3.freesoft.org.

Retaining the original CNAME record would require moving the KEY
record to "sparky", since CNAME records can't co-exist with other
records.  An alternative to retaining the original server
configuration would be to replace the "www" entries with A records
pointing to a set of web caches.  Thus, any legacy client trying to
retrieve a page from this web site would be automatically directed to
a web cache.  It'd be convenient to specify a CNAME for "www" pointing
to a set of A records for the web caches, but of course this would
preclude a unique KEY record for the server.  Perhaps the KEY record
should appear on a unique name, such as "_key.www", specifically to
permit this feature.  The interaction of CNAME with the other resource
records requires more consideration.

RFC 2535 specifies the structure of KEY records, and recommends the
assignment of new Protocol Octet values for new applications.  If
this proposal is adopted, IANA should assign a new Protocol Octet
value for validation of dynamic web archives.

A typical client request would follow these steps:

1. Client is configured to use a local web cache, or, attempts a
   standard retrieval and gets A records for web caches

2. Client sends request to web cache

3. Web cache does DNS lookup and gets KEY and SRV records

4. Web cache does LDAP search for the URL and gets a list of WAR
   directory entries, placed there by other (remote) web caches (see
   the search sketch after these steps)

5. Web cache picks an entry, contacts the remote cache using HTTP
   and either retrieves the entire WAR or just the parts it needs
   to serve the requested URL

6. Web cache validates that WAR was signed using the private key
   corresponding to the public key retrieved in the DNS KEY record,
   and recurses to step 5 (using a different remote cache) if not

7. If the cache elected to retrieve the entire WAR, it (subject to
   considerations like being behind a firewall) registers itself with
   one of the site's LDAP servers as possessing the WAR and being
   willing to serve it to other sites

7a. LDAP servers replicate this information among themselves

8. Web cache runs the Java in the WAR to generate the dynamic web page
   and returns the result to the client
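
The LDAP search in step 4 might be performed as sketched below.  As
with registration, the "warURL" attribute and the search filter are
invented placeholders for a schema yet to be defined:

   import java.util.Hashtable;
   import javax.naming.Context;
   import javax.naming.NamingEnumeration;
   import javax.naming.directory.DirContext;
   import javax.naming.directory.InitialDirContext;
   import javax.naming.directory.SearchControls;

   public class WarSearch {
       // Ask one of the site's LDAP servers which caches hold a WAR
       // covering the requested URL.
       public static NamingEnumeration findWars(String ldapUrl,
               String requestUrl) throws Exception {
           Hashtable env = new Hashtable();
           env.put(Context.INITIAL_CONTEXT_FACTORY,
                   "com.sun.jndi.ldap.LdapCtxFactory");
           env.put(Context.PROVIDER_URL, ldapUrl);
           DirContext ctx = new InitialDirContext(env);
           SearchControls controls = new SearchControls();
           controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
           return ctx.search("", "(warURL=" + requestUrl + ")", controls);
       }
   }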

Several other options present themselves.  Perhaps the LDAP directory
should include entries for web caches willing to run the Java and
serve the dynamic pages themselves, though this would present a
security risk since those caches might be untrusted by either client
or server.  Perhaps provision could be made for the server to issue
X.509 certificates certifying certain web caches as trusted.  Perhaps
the user should be prompted before embarking on the potentially time
consuming process of retrieving and locally processing a WAR.
Finally, the functionality of a "locally trusted cache" should
ultimately be rolled into the client itself, which should retrieve and
verify the integrity of the WAR before running the Java itself.

In summary, I recommend the following steps:

1. Recognize the importance of data-oriented design, as opposed to
   connection-oriented design.  Break the dependence on special server
   configurations and realize that the client has to do almost all the
   work in a scalable, cached, redundant web architecture.

2. Select standards for the network delivery of executable web
   content, and for packaging the contents of a web server into a
   single compressed archive.  Java/WAR seems the most likely current
   candidate.

3. Develop an LDAP schema for registering WARs, and for searching
   the registrations to find the WARs matching a particular URL.

4. Extend the WAR specification to include root URL, Java classes for
   determining lifespan of WAR, performing incremental updates, and
   other identified needs.  Specify the security environment in which
   these "foreign" WARs will operate.

5. Extend Squid to support the algorithm outlined above.  Alternately,
   extend Apache Tomcat to function as a web cache, with similar
   features.

The caching scheme outlined above is far from perfect.  In my essay
"Data-oriented networking" I discuss more long-term prospects.
However, the current proposal has several key advantages: it can be
deployed using existing technology; it requires no client-side
changes or client-visible protocol updates; it allows web sites to
easily opt in so long as one public set of LDAP servers and/or trusted
caches are available; and it solves a pressing problem.  Ultimately,
the Internet is a work in progress, and its more technically savvy
users are probably ready for a serious attempt at a working caching
scheme for dynamic content.




REFERENCES

Data-oriented networking

   "Data-oriented networking", Brent Baccala, Internet Draft
      http://www.freesoft.org/Essays/data-networking/

Domain Name System (DNS)

   Dozens of RFCs specify various aspects of DNS operation.  Only
   those most pertinent to basic DNS operation, SRV records, and
   KEY/CERT records are listed here.

   RFC 1034 - Domain Names - Concepts and Facilities

   RFC 1035 - Domain Names - Implementation and Specification

   RFC 1912 - Common DNS Operational and Configuration Errors

   RFC 2535 - Domain Name System Security Extensions

   RFC 2536 - DSA KEYs and SIGs in the Domain Name System

   RFC 2538 - Storing Certificates in the Domain Name System

   RFC 2782 - A DNS RR for specifying the location of services (DNS SRV)

   RFC 3110 - RSA/SHA-1 SIGs and RSA KEYs in the Domain Name System

   Paul Vixie's Internet Software Consortium produces BIND, the most
   widely used (and freely available) DNS server
      http://www.isc.org/

Lightweight Directory Access Protocol (LDAP)

   RFC 2251 - LDAP v3 (protocol spec)

   RFC 2252 - LDAP v3 Attribute Syntax Definitions (schema spec)

   OpenLDAP is an actively developed (as of mid-2002) open source LDAP
   server, and C-based client library.  Various client implementations
   exist for other languages, such as Perl
      http://www.openldap.org/

Rsync

   rsync is a program and protocol developed to incrementally update
   files that have only been slightly modified, by first transferring
a set of block checksums that identify which parts of the file have
   been modified and only transferring those parts
      http://rsync.samba.org/

Java

   Java Virtual Machine (JVM) specification
      somewhere on http://java.sun.com/

   Bill Venners' excellent Under the Hood series for JavaWorld
   is a better starting point than the spec for understanding JVM.
   He also has written a book - Inside the Java Virtual Machine
   (McGraw-Hill; ISBN 0-07-913248-0)
      http://www.javaworld.com/columns/jw-hood-index.shtml

   Java 2 language reference
      somewhere on http://java.sun.com/

   Java languages page (other languages that compile to Java VM)
      http://grunge.cs.tu-berlin.de/~tolk/vmlanguages.html

   Criticism of Java
      http://www.jwz.org/doc/java.html

Java Servlets/WARs

   "Tomcat is the servlet container that is used in the official
    Reference Implementation for the Java Servlet and JavaServer Pages
    technologies."
      http://jakarta.apache.org/tomcat/

   Java Servlets - server-side Java API (CGI-inspired; heavily
   HTTP-based) The Java servlet specification includes a chapter
   specifying the WAR (Web Application Archive) file format, an
   extension of ZIP/JAR
      http://java.sun.com/products/servlet/

Caching

   RFC 3040 - Internet Web Replication and Caching Taxonomy
      broad overview of caching technology

   RFC 2186 - Internet Cache Protocol (ICP), version 2

   RFC 2187 - Application of ICP

   Squid software
      http://www.squid-cache.org/

   NLANR web caching project
      http://www.ircache.net/

   Various collections of resources for web caching
      http://www.web-cache.com/
      http://www.web-caching.com/
      http://www.caching.com/

   IETF Web Intermediaries working group (webi)
      http://www.ietf.org/html.charters/OLD/web-charter.html

   IETF Web Replication and Caching working group (wrec)
      http://www.wrec.org/

   RFC 3143 - Known HTTP Proxy/Caching problems

   Cache Array Routing Protocol (CARP) - used by Squid
      http://www.microsoft.com/Proxy/Guide/carpspec.asp
      http://www.microsoft.com/proxy/documents/CarpWP.exe

   RFC 2756 - Hypertext Caching Protocol (HTCP) - used by Squid

Napster and its variants

   Napster, the original peer-to-peer file sharing service, has been
   fraught with legal difficulties, having recently entered bankruptcy
      http://www.napster.com/

   Napster's protocol lives on, even if the service is dead.  It's
   basically a centralized directory with distributed data
      http://opennap.sourceforge.net/
      http://opennap.sourceforge.net/napster.txt

   Gnutella has emerged as the leading post-Napster protocol,
   employing both a distributed directory and distributed data
      http://www.gnutella.com/
      http://www.gnutelladev.com/
      http://www.darkridge.com/~jpr5/doc/gnutella.html

   Several popular clients use the Gnutella network and protocol
      http://www.morpheus-os.com/
      http://www.limewire.org/
      http://www.winmx.com/

   Other proprietary peer-to-peer systems
      http://www.kazaa.com/

   Other free peer-to-peer systems
      http://www.freenetproject.org/

Data-oriented Networking

Thursday, August 1st, 2002


Internet Engineering Task Force
INTERNET-DRAFT
Expires March 2003




		       Data-oriented networking

			   by Brent Baccala
			 baccala@freesoft.org
			     August, 2002

			 Address comments to:
		     data-networking@freesoft.org

			Comments archived at:
	   http://www.freesoft.org/Essays/data-networking/




This document is an Internet-Draft and is subject to all provisions of
Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html




			       ABSTRACT

   Differentiates between connection-oriented and data-oriented
   networking, identifies the advantages of data-oriented networks,
   argues that Internet web architecture is becoming more
   data-oriented, and suggests ways of encouraging and accelerating
   this trend.



Contemporary Internet architecture is heavily connection-oriented.  IP
underlies almost all Internet operations, and its fundamental
operation is to deliver a data packet to an endpoint.  TCP uses IP to
sequence streams of data packets to those endpoints; higher-level
services, such as HTTP, are built using TCP.  All of these operations
are based upon the underlying IP addresses, which identify specific
machines and devices.  Even UDP operations are connection-oriented in
the sense that UDP addresses identify a specific machine on the
Internet with which a connection (even just a single packet) must be
established.  Note that I use the term connection-oriented in a
somewhat different sense than the traditional distinction between
connection-oriented and connectionless protocols.

More recently, Uniform Resource Locators (URLs) have emerged as the
dominant means for users to identify web resources.  The distinction
is not merely one of introducing a new protocol with new terminology,
either.  URLs are used to name blocks of data, not network devices.
Especially with the advent of caching, it's now clear that a web
browser may not have to make any network connections at all in order
to retrieve and display a web page.  "Retrieving" a URL differs
significantly from opening an HTTP session, since an HTTP session
implies a network connection to a named device, while accessing a URL
implies only that its associated data (stored, perhaps, on a local
disk) is made available.  HTTP, SMTP, ssh, and other TCP-based
protocols are inherently connection-oriented, while the URL is
inherently data-oriented.

The Internet is moving away from a connection-oriented model and
becoming more data-oriented.  Since the original Internet design was
modeled, at least loosely, after a telephone system, all of its
original protocols were connection-oriented.  Increasingly, we're
becoming aware that often a user is not interested in connecting to
such-and-such a computer, but rather in retrieving a specific piece of
data.  Since such operations are so common, Internet architects need
to recognize the distinction between connection-oriented and
data-oriented operations and design networks to support both well.
Data-oriented models will not replace connection-oriented models;
sometimes, you'll still want to make the telephone call.  Rather, the
pros and cons of each need to be understood, so both can be
incorporated into the Internet of the 21st century.

To understand the emergence of data-oriented networking, it is useful
to consider the historical development of the Internet.  Initially,
the driving application for what became the Internet was email,
allowing virtually instantaneous communications over large distances;
FTP and TELNET came second and third.  FTP provided file transfer and
a rudimentary publication system; TELNET extended the 1970s command
line interface over the network, letting people "log in" over the net,
thus allowing remote use and management of computer systems.

Even in these early years of the Internet, the network was becoming
more data-oriented than a cursory examination of its protocols would
suggest.  FTP archive sites, such as uunet and wuarchive, began
assembling collections of useful data, including programs, protocol
documents, and mailing list archives in central locations.  Other
sites began mirroring the archives, so that retrieving a particular
program, for example, did not require a connection to a centralized
server for the program, but only a connection to a convenient mirror
site.  The practice continues to this day.  Of course, accessing the
mirror sites required using the connection-oriented protocols, and the
process of finding a mirror site or archive that contained the
particular program you wanted remained largely a manual process.  It
still does.

A significant change occurred during the 1980s - the appearance of
graphical user interfaces (GUIs) in personal computers by the end of
the decade.  In the early to mid 90s, the world wide web extended the
GUI over the network, much as TELNET had extended the command line
interface over the net.  More than anything else, the web represents a
global GUI, a means of providing the commonly accepted point-and-click
interface to users around the world.

It is impossible to overstate the impact of the web.  The GUI was a
critical technology that made computers more accessible to the average
person.  No longer did you need to type cryptic instructions at a
command prompt.  To open a file, represented by a colorful icon, just
move a pointer to it and click.  Yet until the web, you still needed
to use the old command-line interface to use the network.  Your
desktop PC might use a GUI, but connecting to another computer
generally meant a venture into TELNET or FTP.  The web extended the
GUI metaphor over the network.  Instead of learning FTP commands to
retrieve a file, you could just browse to a web site and click on an
icon.

Other technologies could have provided a network GUI, but not as well
as HTML and HTTP.  X Windows certainly was designed specifically with
network GUI applications in mind, but provided so little security that
using it to "browse" potentially untrusted sites was never realistic.
AT&T's Virtual Network Computing (VNC) is similar to X Windows, and is
designed so that its effects can be confined to a single window.  With
some extensions, it could be used as the basis for a network GUI.
However, both X Windows and VNC share a single common major flaw -
they are connection-oriented protocols that presuppose a real-time
link between client and server.  The user types on the keyboard or
clicks on a link, then the client transmits this input to the server,
which processes the input and sends new information to the client,
which redraws the screen.  X Windows has never been widely used over
the global Internet, because the bandwidth and delay requirements for
interactive operation are more stringent than the network can
typically provide.  VNC is very useful for using GUI systems remotely,
but still doesn't provide the performance of local software.

The present HTML/HTTP-based design of the web does have one
overwhelming advantage over X Windows / VNC, however.  The web is
data-oriented, not connection-oriented, or is at least more so than
conventional protocols.  A web page is completely defined by a block
of HTML, which is downloaded in a single operation.  Highlighting of
links, typing into fill-in forms, scrolling - all are handled locally
by the client.  Rather than requiring a connection to remain open to
communicate mouse and keyboard events back to the server, the entire
behavior of the page is described in the HTML.

The advent of web caches changes this paradigm subtly, but
significantly.  In a cached environment, the primitive operation in
displaying a web page is no longer an end-to-end connection to the web
server, but the delivery of a named block of data, specifically the
HTML source code of a web page, identified by its URL.  The presence
of a particular DNS name in the URL does not imply that a connection
will be made to that host to complete the request.  If a local cache
has a copy of the URL, it will simply be delivered, without any wide
area operations.  Only if the required data is missing from the local
caches will network connections be opened to retrieve the data.

Experience with web caches demonstrates that data-oriented networks
provide several benefits.  First, the bandwidth requirements of a
heavily cached, data-oriented network are much lower than those of a
connection-oriented network.  Connection-oriented protocols such as X
Windows, VNC, and TELNET presuppose a real-time connection between
client and server, and in fact could not operate without such a
connection, since the protocols do not specify how various user
events, such as keyclicks, should be handled.  All the protocols do is
to relay the events across the network, where the server decides how
to handle them, then sends new information back to the client in
response.  A data-oriented network, which specifies the entire
behavior of the web page in a block of HTML, does not require a
real-time connection to the server.  Having retrieved the data to
describe a web page, the connection can be severed and the user can
browse through the page, scrolling, filling out forms, watching
animations, all without any active network connection.  Only when the
user moves to another web page is a connection required to retrieve
the data describing the new page.  Furthermore, since the data
describing the pages is completely self-contained, no connection to
the original server is required at all if a copy of the web page can
be found.  A copy, stored anywhere on the network, works as well as
the original.  As the network becomes more data-oriented, fewer and
briefer connections are required to carry out various operations,
reducing overall network load.

A data-oriented network is also more resilient to failures and
partitions than a connection-oriented network.  Consider the
possibility of a major network failure, such as the hypothetical
nuclear strike that originally motivated the Defense Department to
build the packet-based network that evolved into the Internet.  Modern
routing protocols would probably do a fairly good job of rerouting
connections around failed switching nodes, probably in a matter of
minutes, but what if the destination server itself were destroyed?
The connection would be lost, and no clear fallback path presents
itself.  The obvious solution is to have backup copies of the server's
data stored in other locations, but creating and then finding these
backups is currently done by hand.  Existing routing protocols can
reroute connections, but are woefully inadequate for rerouting data.

A more mundane, but far more common scenario is the partitioned
network.  Simply operating in a remote area may dictate long periods
of operation without network connectivity.  In such an environment,
it'd be convenient to drop any information that might be needed on a
set of CD-ROMs.  That works fine until the first search page comes up
that connects to a specialized program on the web server, or a CGI
script that presents dynamic content, or an image map.  Solutions have
been developed to put web sites on CD-ROMs - none of them standard,
most of them incomplete.  A more data-oriented design, that didn't
depend on connections to a server, would be far better suited to such
situations.

HTML, the workhorse protocol of the web, was never designed with use
as a network GUI in mind, even though this is the role it has evolved
into.  It's the HyperText Markup Language (HTML), and hypertext is not
a GUI.  Hypertext is text that includes hyperlinks.  Perhaps we can
expand the definition somewhat into a "hyperdocument" that can include
colors, diagrams, pictures, and even animation.  A GUI is much more
than a hyperdocument, however.  A GUI is a complete user interface
that provides the human front end to a program.  Not only can it
include dialog boxes, pull down menus and complex mouse interactions,
but more than anything else it provides the interface to a program,
which could perform any arbitrary task, and is thus not just a
document.  The program could be a document browser, a document editor,
a remote login client, a language translator, a simulation package,
anything.  What was pioneered by Xerox PARC, deployed by the Apple
Lisa, marketed by the Macintosh, and brought with such stunning
success to the masses by Microsoft was not hypertext, but the GUI.
The GUI is what
we are trying to extend across the network, not hypertext, and thus
HTML just isn't very well suited for the task.

Since it wasn't designed to provide a network GUI, HTML doesn't
provide the right primitives for the task it has been asked to
perform, and thus we've seen a long series of alterations and
enhancements.  First there was HTML 2, then HTML 3, then HTML 4, now
HTML with Cascading Style Sheets, soon XHTML, plus Java applets,
Javascript, CGI scripts, servlets, etc, etc...  The fact that HTML has
had to change so much, and that the changes require network-wide
software updates, is a warning sign that the protocol is poorly
designed.  The problem is that HTML has been conscripted as a network
GUI, though, to this day, it has never been clearly designed with this
goal in mind.  Part of what is needed is a replacement for HTML
specifically designed to act as a network GUI.

In addition, one of the great challenges to a data-oriented model is
dynamic pages.  Presently, web caching protocols provide means for
including meta information, in either HTTP or HTML, that inhibits
caching on dynamic pages, and thus forces a connection back to the
origin server.  While this works, it breaks the data-oriented metaphor
we'd like to transition towards.  To maintain the flexibility of
dynamic content in a data-oriented network, we need to lose the
end-to-end connection requirement and this seems to imply caching the
programs that generate the dynamic web pages.  While cryptographic
techniques for verifying the integrity of data have been developed and
are increasingly widely deployed, no techniques are known for
verifying the integrity of program execution on an untrusted host,
such as a web cache.  Barring a technological breakthrough, it seems
impossible for a cache to reliably run the programs required to
generate dynamic content.  The only remaining solution is to cache the
programs themselves (in the form of data), and let the clients run the
programs and generate the dynamic content themselves.  Thus, another
part of what's needed is a standard for transporting and storing
programs in the form of data.

An important change in moving to a more data-oriented network would be
to replace HTML with a standard specifically designed to provide a
data-oriented network GUI.  The features of this new protocol (a
minimal client-side sketch follows the list):

  1) It must be data-oriented, not connection-oriented.  Thus, the
     protocol must define a data format that can describe GUI behavior
     on a remote system.  HTML already basically does this.

  2) It must be programmatic.  The whole point is to eliminate the server
     and replace it with a standard, network-deliverable specification
     of the GUI behavior.  The exact behaviors of GUI interfaces
     vary dramatically, and simply providing an escape mechanism
     to connect back to the server violates the data-oriented
     design goal.  Thus, the protocol must implement powerful enough
     primitives to describe arbitrary GUI behaviors without escape
     mechanisms, i.e. it must support arbitrary programming constructs.

  3) It must be secure.  Since the program may be untrusted to
     the client, it must be limited from performing arbitrary
     operations on the client system.

  4) It must provide GUI primitives, and cleanly interact with other
     GUI applications, such as window managers, and provide features
     such as drag-and-drop functionality between windows.

  5) It must provide backwards compatibility with the existing web.
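
Taken together, these requirements suggest that the client-side
contract of such a protocol could be remarkably small.  The interface
below is purely hypothetical, a sketch of the shape rather than a
proposal:

   import java.awt.Container;
   import java.net.URL;

   // A browser downloads a (signed, sandboxed) class implementing this
   // interface and hands it a window; layout, editing, link handling,
   // and everything else live in the downloaded code.
   public interface NetworkGui {
       void present(URL url, Container window);
   }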

The programmatic requirement fairly well dictates some kind of
virtual machine architecture, and the obvious candidate is therefore
Java, but Java may or may not be the best choice.  Netscape began
work on a 100% Java web browser, but abandoned this effort in 1998.
Commenting on the demise of "Javagator", Marc Andreessen quipped - "it's
slower, it will crash more and have fewer features... it will simplify
your life".

This misses the point.  We're not trying to build a Java-based HTML
web browser that would simply achieve cross-platform operability.  The
goal is to build a web browser that, as its primary metaphor, presents
arbitrary Java-based GUIs to the user.  HTML could be displayed using
a Java HTML browser.  The difference is that the web site designer
controls the GUI by providing the Java engine for the client to use
for displaying any particular page.  Switching to a different web site
(or web page) might load a different GUI for interacting with that
site's HTML, or XML, or whatever.  Unlike Andreessen's "Javagator",
the choice of GUI is under the control of the web server, not tied
into a Java/HTML web browser.

For example, consider if a web site wants to allow users to edit its
HTML pages in a controlled way.  Currently, you have a few choices,
none completely satisfactory.  First, you could put your HTML in an
HTML textbox, and allow the user to edit it directly, clicking a
submit button to commit it and see what the page will actually look
like.  Alternately, you could allow the HTML to be edited with
Netscape Composer or some third party HTML editor on the client,
accepting the HTML back from the client in a POST operation.  This
provides the server very little control over exactly what the user can
and can't do to the page.  Since parts of the page might be
automatically generated, this isn't satisfactory, nor do we really
know much about this unspecified "third party editor".  On the other
hand, with a Java browser, the web site could simply provide a
modified HTML engine that would allow the user to edit the page, in a
manner completely specified by the web designer, prohibiting
modifications to automatically generated parts of the page, and
allowing special cases, such as spreadsheet tables within the page, to
be handled specially.
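
A crude approximation of the idea can be built from the standard
Swing HTML component alone; per-region edit control would require a
custom EditorKit, so the sketch below shows only the shape of the
approach, with the class name invented:

   import java.awt.BorderLayout;
   import java.net.URL;
   import javax.swing.JEditorPane;
   import javax.swing.JFrame;

   // The point is that this code ships from the web site, so the site,
   // not some unknown third-party editor, decides what is editable.
   public class EditableHtmlEngine {
       public static void main(String[] args) throws Exception {
           JEditorPane pane = new JEditorPane();
           pane.setEditable(true);    // the site-supplied engine decides
           pane.setPage(new URL(args[0]));
           JFrame frame = new JFrame("site-controlled editor");
           frame.getContentPane().add(pane, BorderLayout.CENTER);
           frame.setSize(640, 480);
           frame.setVisible(true);
       }
   }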

Another advantage to this proposal is that it provides a solution to a
problem plaguing XML - how do you actually display to the user the
information you've encoded in XML?  This is left glaringly unaddressed
by the XML standards, the solution seeming to be that you either use a
custom application capable of manipulating the particular XML data
structures, or present the data in two different formats - XHTML for
humans and XML for machines.  A Java-based web browser addresses this
problem.  You ship only one format - XML - along with a Java
application that parses and presents it to the user.
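
As a minimal illustration, the "presenter" below simply shows an XML
document as a tree, using only the standard JAXP and Swing APIs; a
real site would ship something far richer in the same slot:

   import java.io.File;
   import javax.swing.JFrame;
   import javax.swing.JScrollPane;
   import javax.swing.JTree;
   import javax.swing.tree.DefaultMutableTreeNode;
   import javax.xml.parsers.DocumentBuilderFactory;
   import org.w3c.dom.Document;
   import org.w3c.dom.Node;

   public class XmlPresenter {
       // Build a tree of element names mirroring the XML structure.
       static DefaultMutableTreeNode build(Node node) {
           DefaultMutableTreeNode t =
                   new DefaultMutableTreeNode(node.getNodeName());
           for (Node c = node.getFirstChild(); c != null;
                   c = c.getNextSibling())
               if (c.getNodeType() == Node.ELEMENT_NODE)
                   t.add(build(c));
           return t;
       }

       public static void main(String[] args) throws Exception {
           Document doc = DocumentBuilderFactory.newInstance()
                   .newDocumentBuilder().parse(new File(args[0]));
           JFrame frame = new JFrame("XML presenter");
           frame.getContentPane().add(new JScrollPane(
                   new JTree(build(doc.getDocumentElement()))));
           frame.setSize(400, 600);
           frame.setVisible(true);
       }
   }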

On the other hand, let's keep Andreessen's criticism in mind.  Java may
not be suitable for such a protocol, for either technical or political
reasons.  The speed issues seem to be largely addressed by the current
generation of Just-In-Time (JIT) Java runtimes, but whatever the
standard is, it should be an RFC-published, IETF standard-track
protocol, and if the intellectual property issues around Java preclude
this, then something else needs to replace it.  Alternatives include
Parrot, the as-yet-unfinished Perl 6 runtime, and Microsoft's .NET
architecture, based around a virtual machine architecture recently
adopted as ECMA standard ECMA-335.

PDF also deserves consideration.  Though it lacks the generality to
provide a network GUI, its presentation handling is vastly superior to
HTML's, giving the document author complete control over page layout,
and allowing the user to zoom the document to any size for easy
viewing.  It is also easier to render than HTML, since its page layout
is more straightforward for the browser to understand.

A definite metaphor shift is required.  Rather than viewing HTML as
the primary standard defining the web, the primary standard must
become Java or something like it that provides full programmability.
Browsing a web page becomes downloading and running the code that
defines that page's behavior, rather than downloading and displaying
HTML that might contain an embedded applet.

Backwards compatibility can be provided along the lines of HotJava,
Sun's proprietary Java-based web browser, which implements HTML in
Java.  To display an HTML page, Java classes are loaded which parse
the HTML and display it within a Java application.  The browser
provides little more than a Java runtime that can download arbitrary
Java and run it in a controlled environment.  Initially, 99% of the
pages would be HTML, viewed using a standard (and cached) HTML engine
coded in Java.
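
The browser shell itself then reduces to little more than a class
loader.  This sketch reuses the hypothetical NetworkGui interface
from the earlier sketch; in practice the load would happen inside the
restricted security environment discussed above:

   import java.awt.Container;
   import java.net.URL;
   import java.net.URLClassLoader;

   public class BrowserShell {
       // The browser knows nothing about HTML: it downloads a rendering
       // engine and delegates the page to it.
       public static void browse(URL engineJar, String engineClass,
               URL page, Container window) throws Exception {
           URLClassLoader loader =
                   new URLClassLoader(new URL[] { engineJar });
           NetworkGui engine =
                   (NetworkGui) loader.loadClass(engineClass).newInstance();
           engine.present(page, window);
       }
   }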

Notwithstanding the creeping featurism present in Java, adopting this
approach would avoid the creeping featurism so grossly apparent in web
browsers.  Even the casual observer will note that mail, news, and
teleconferencing are simply bloat that results in multi-megabyte
"kitchen sink" browsers.  Will the next release of Netscape, one might
ask, contain an Emacs mail editor with its embedded LISP dialect?  And
if not, why not?  Only because the majority of users wouldn't use
Emacs to edit their mail?  Why should we all be forced to use one type
of email browser?  Why should we have Netscape's email browser
packaged into our web browser if we don't use it?  Like the constant
versioning of HTML, the sheer size of modern browsers is a warning
sign that the web architecture is fundamentally flawed.  A careful
attempt to standardize "network Java" would hopefully result in
smaller, more powerful browsers that don't have to be upgraded every
time W3C revs HTML; you simply update the Java GUI on those particular
sites that are taking advantage of the newer features.

Another tremendous advantage is the increased flexibility provided to
web designers.  HTML took a big step in this direction with Cascading
Style Sheets, but CSS doesn't provide the power of a full GUI.  For
example, if a web page designer wanted to, he could publish an HTML
page with a custom Java handler that allowed certain parts of the HTML
text to be selectively edited by the user.  This simply can't be done
using CSS.

Network-deliverable, data-oriented GUIs aren't a panacea, of course.
For starters, one of the advantages of the present model is that all
web pages have more or less the same behavior (since they are all
viewed with the same GUI).  The "Back" and "Forward" buttons are
always in the same place, the history function always works the same
way, you click on a link and basically the same thing happens as
happens on any other page.  Providing the web designer with the
ability to load a custom GUI changes all that.  Standards need to be
developed for finding and respecting user preferences concerning the
appearance of toolbars, the sizing of fonts, the operation of links.
The maturing Java standards have already come a long way towards
addressing issues such as drag-and-drop that would have to be
effectively implemented in any network GUI.

Hurdles need to be crossed before we can reach a point where web
designers can depend on Java-specific features.  One possibility would
be to migrate by presenting newer web pages to older browsers using a
Java applet embedded in the web page.  Performance might suffer, but
clever design would hopefully make it tolerable.  For starters,
consider that the web page data presented to the applet need not be
the source HTML, but could be a processed version with page layout
already done.  Newer, Java-only browsers should be leaner and faster.

In summary, I recommend the following steps:

1. Recognize the importance of data-oriented design, as opposed to
   connection-oriented design.  Break the dependence on special server
   configurations and realize that the client has to do almost all the
   work in a scalable, cached, redundant web architecture.

2. Migrate the web towards being based on a model of a network GUI,
   rather than a massively enhanced hypertext system.

3. Select a standard for the network delivery of executable content,
   Java being the most likely candidate

4. Develop a Java-based HTML browser along the lines of HotJava, but
   completely open, allowing existing HTML-based websites to be
   browsed via Java.  Provide an applet version that allows web
   designers to specify a custom Java applet to browse their HTML
   sites using conventional web browsers.

5. Develop a lean, fully Java-based web browser, with Multivalent
   being the most obvious candidate.

6. Recognize the transient nature of HTML/HTTP and specify their
   operation in terms of a generic API, based on the network
   executable content standard (probably Java), for finding and
   delivering the GUI presented by a specified URL.

With all the inertia built up behind the present web design, one needs
to question the wisdom of abandoning HTML and completely re-vamping
the structure of the web, even if a migration path is in place.  The
promise of data-oriented networking is a leaner, more reliable, more
efficient network.  If this analysis is correct, the cost of
migrating away from a flawed design will ultimately be less than the
cost of constantly shoehorning new requirements into it.  However, in
his essay,
"The Rise of 'Worse is Better'", Richard Gabriel suggests "that it is
often undesirable to go for the right thing first.  It is better to
get half of the right thing available so that it spreads like a
virus."  Bearing this in mind, I've proposed an alternate set of
recommendations, aimed at something more immediately practical, in a
companion essay, "Standardized caching of dynamic web content".  At
the same time, I think it's time to take another look at the
Java-based web browser, and to seriously ask if Java isn't a better
choice than HTML for a network standard GUI.




REFERENCES

Hypertext Markup Language (HTML) / Extensible Markup Language (XML)

   HTML has a long and sordid history.  HTML version 2, specified in
   RFC 1866, was one of the earliest (1995) documented HTML versions.
   Later revisions added tables (RFC 1942), applets (HTML 3.2),
   JavaScript and Cascading Style Sheets (HTML 4.01).

   http://www.w3c.org/MarkUp/

Uniform Resource Locators (URLs)

   RFC 1630 - Universal Resource Identifiers in WWW

   RFC 1737 - Functional Requirements for Uniform Resource Names, a
      short document notable for its high-level overview of URN
      requirements

   RFC 1738 - Uniform Resource Locators, a technical document of more
      importance to programmers than architects

Authentication

   International standard X.509 (not available on-line)

   http://www.openssl.org/

   http://www.openssh.org/

DNS Security

   RFC 2535 - Domain Name System Security Extensions

   RFC 2536 - DSA KEYs and SIGs in the Domain Name System

   RFC 2538 - Storing Certificates in the Domain Name System

   RFC 3110 - RSA/SHA-1 SIGs and RSA KEYs in the Domain Name System

Postscript/PDF

   Postscript specification
      http://www.adobe.com/products/postscript/pdfs/PLRM.pdf

   PDF Reference: Adobe portable document format version 1.4
   (ISBN 0-201-75839-3)
   http://partners.adobe.com/asn/developer/acrosdk/docs/filefmtspecs/PDFReference.pdf

   Ghostscript - freely available Postscript interpreter that also
      reads and writes PDF and thus can be used to convert PS to PDF

   Multivalent (see below) includes a Java PDF viewer

   html2ps - a largely illegible Perl script written by Jan Karrman
      to convert HTML to Postscript.  Yes, it can be done.

X Windows

   Developed by an MIT-led consortium, X Windows is one of the most
   successful network GUIs
   http://www.x.org/

VNC (Virtual Network Computing)

   Similar in concept to X Windows, but radically different in
   design - an absurdly simple protocol combined with various
   compression techniques to achieve decent WAN performance
   http://www.uk.research.att.com/

Java

   Java Virtual Machine (JVM) specification

   Bill Venners' excellent Under the Hood series for JavaWorld
   is a better starting point than the spec for understanding JVM.
   He also has written a book - Inside the Java Virtual Machine
   (McGraw-Hill; ISBN 0-07-913248-0)
      http://www.javaworld.com/columns/jw-hood-index.shtml

   Java 2 language reference

   Java languages page
      http://grunge.cs.tu-berlin.de/~tolk/vmlanguages.html

   Criticism of Java
      http://www.jwz.org/doc/java.html

Other virtual machines

   Perl 5 runtime

   Parrot - Perl 6 runtime
      http://www.parrotcode.com/

   Microsoft's .NET architecture includes the Common Language
   Infrastructure, based around a virtual machine, now adopted as
   ECMA-335
      http://msdn.microsoft.com/net/ecma/
      http://www.ecma.ch/ecma1/STAND/ecma-335.htm

Various Java-based web browsers

   HotJava, Sun's Java browser, but with binary-only licensing
      http://java.sun.com/products/hotjava/

   Multivalent, an open-source web browser written totally in Java,
   with an extension API to add "behaviors" similar to applets
      http://www.cs.berkeley.edu/~phelps/Multivalent/

   NetBeans, an attempt to develop a "fully functional Java browser"
      http://netbrowser.netbeans.net/

   Jazilla, a now defunct attempt to carry the "Javagator" project
   forward under an open source banner
      http://jazilla.sourceforge.net/

Java Servlets/WARs

   "Tomcat is the servlet container that is used in the official
    Reference Implementation for the Java Servlet and JavaServer Pages
    technologies."
      http://jakarta.apache.org/tomcat/

   Java Servlets - server-side Java API (CGI-inspired; heavily
   HTTP-based) The Java servlet specification includes a chapter
   specifying the WAR (Web Application Archive) file format, an
   extension of ZIP/JAR
      http://java.sun.com/products/servlet/

Caching

   RFC 3040 - Internet Web Replication and Caching Taxonomy
      broad overview of caching technology

   RFC 2186 - Internet Cache Protocol (ICP), version 2

   RFC 2187 - Application of ICP

   Squid software
   http://www.squid-cache.org/

   NLANR web caching project
   http://www.ircache.net/

   Various collections of resources for web caching
      http://www.web-cache.com/
      http://www.web-caching.com/
      http://www.caching.com/

   IETF Web Intermediaries working group (webi)
      http://www.ietf.org/html.charters/OLD/web-charter.html

   IETF Web Replication and Caching working group (wrec)
      http://www.wrec.org/

   RFC 3143 - Known HTTP Proxy/Caching problems

   Cache Array Routing Protocol (CARP) - used by Squid
      http://www.microsoft.com/Proxy/Guide/carpspec.asp
      http://www.microsoft.com/proxy/documents/CarpWP.exe

   RFC 2756 - Hypertext Caching Protocol (HTCP) - used by Squid

Napster and its variants

   Napster, the original peer-to-peer file sharing service, has been
   fraught with legal difficulties, having recently entered bankruptcy
      http://www.napster.com/

   Napster's protocol lives on, even if the service is dead.  It's
   basically a centralized directory with distributed data
      http://opennap.sourceforge.net/
      http://opennap.sourceforge.net/napster.txt

   Gnutella has emerged as the leading post-Napster protocol,
   employing both a distributed directory and distributed data
      http://www.gnutella.com/
      http://www.gnutelladev.com/
      http://www.darkridge.com/~jpr5/doc/gnutella.html

   Several popular clients use the Gnutella network and protocol
      http://www.morpheus-os.com/
      http://www.limewire.org/
      http://www.winmx.com/

   Other proprietary peer-to-peer systems
      http://www.kazaa.com/

   Other free peer-to-peer systems
      http://www.freenetproject.org/

Richard Gabriel, "The Rise of 'Worse is Better'"
   http://www.jwz.org/doc/worse-is-better.html

Brent Baccala, "Standardized caching of dynamic web content"
   http://www.freesoft.org/Essays/dynamic-content-caching/