MIT Computer Science and Artificial Intelligence Laboratory
Working papers: under development and pre-publication

Pre-publication papers:

The end-to-end argument and application design: the role of trust. David D. Clark, Marjory S. Blumenthal

The end-to-end argument was first put forward in the early 1980s as a core design principle of the Internet. The argument, as framed then, remains relevant and powerful: the fundamental architecture of the Internet endures, despite changes both in underlying technologies and in applications. Nevertheless, the evolution of the Internet also shows the limits of foresight, and we now see that the Internet, the applications that run on top of it, and the interpretation of the end-to-end argument itself have all greatly evolved.

This paper concerns the evolution in thinking around the end-to-end argument, the design principles for modern Internet applications, and what the end-to-end argument has to do with application structure. We argue that while a literal reading of the early end-to-end argument does not directly offer guidance about the design of distributed applications and network services, there is a useful interpretation of the argument that both informs thinking about application design and offers a perspective on the end-to-end argument in general that remains valid in today’s world.

Working papers (under development):

The expressive power of the Internet design. David Clark. Version 5.0, April 2009.

The present Internet is not defined in terms of its semantics, at least at the packet level. The loose packet carriage model of “what comes out is what went in” is intentionally almost semantics-free. The packets just carry bytes. Packet boundaries can have some limited semantics, but not much. The original design presumed some constraints on the semantics of packet headers, such as global addresses, but these constraints have been violated over time and the Internet keeps working. This paper argues that what defines the Internet, and the range of behavior that is available in the Internet, is the expressive power of the packet header, which has more to do with its format than with any semantics.
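The distinction between a header's format and its semantics can be sketched in code. The example below uses a toy header layout (all field names and widths here are hypothetical, not any real Internet format): the network can act only on the fields the format exposes, while the payload is opaque bytes that come out exactly as they went in.

```python
import struct

# A hypothetical fixed header layout, in network byte order:
# version (1 byte), flags (1 byte), payload length (2 bytes),
# source (4 bytes), destination (4 bytes).
HEADER_FMT = "!BBHII"
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 12 bytes

def pack_packet(src, dst, payload, version=1, flags=0):
    """Prepend the header; the payload bytes are carried untouched."""
    header = struct.pack(HEADER_FMT, version, flags, len(payload), src, dst)
    return header + payload

def unpack_header(packet):
    """A router-like view: only the header fields are interpretable."""
    version, flags, length, src, dst = struct.unpack(
        HEADER_FMT, packet[:HEADER_LEN])
    return {"version": version, "flags": flags,
            "length": length, "src": src, "dst": dst}

pkt = pack_packet(0x0A000001, 0x0A000002, b"what goes in comes out")
hdr = unpack_header(pkt)
payload = pkt[HEADER_LEN:]  # delivered unchanged: no semantics imposed
```

The expressive power of this toy network is exactly the set of distinctions its header format can make (version, flags, length, two addresses); nothing about the meaning of the payload is visible to it.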

A Multi-Theory Analysis of Long-lived Networks. David Clark. Version 2.1, April 25, 2009.

In comparison to many artifacts of computing, the Internet has lived to an old age—it is over 35 years old. Opinions differ as to the extent to which it is showing its age, and among some researchers there is a hypothesis that the Internet of 15 years from now might be built on different principles. Whether the network of 15 years from now is a minor evolution of today’s network or a more radical alternative, it should be a first-order requirement that this future Internet be designed so that it, too, can survive the test of time. The objective of longevity is easy to understand, but the principles that one would use to achieve it are less well understood. In fact, there are a number of different theories about how to design a network (or other system) that survives for a long time: theories of change (evolution), theories of stability, and theories of innovation. This paper takes the point of view that many of these theories are relevant, and that one can achieve a long-lived network in different ways, by exploiting various combinations of these theories to different degrees. While some theories are incompatible, many are consistent with one another.

Toward the design of a Future Internet. David D. Clark. Version 6.0 of July 8, 2009

(Long paper--61 pages. Sorry.)

This document is a very preliminary proposal for the design of a Future Internet—an outline of requirements and architecture. This document should only be seen as a first step in such a proposal; there are many parts that remain to be considered and elaborated. But it does try to offer a rationale for making key design decisions.

This document draws on a number of sources for its insights. First, it draws on our collective experience with the current Internet—what works, what has survived, and what has eroded or broken down under the pressures of evolving requirements. Second, it draws on the reasoning to be found in many of the projects in the NSF FIND program, and the overall philosophy of that program. FIND researchers are expected to justify their ideas based on requirements, theories of design, and experience—ideas that will prove right in the long term, whether or not the idea fits well into the current Internet. This document tries to follow that design philosophy. Third, and more broadly, the document draws on the wide range of architectural research that has been done in the networking community, including some prior projects with long-range architectural objectives, such as the DARPA NewArch project. I acknowledge the wide range of argued reasoning on which I have drawn.

The various sections of this document are organized around recognized clusters of requirements, such as security and management. Each section starts with a discussion of what is known about how to deconstruct the requirement into component parts, followed by a summary of what is generally accepted as the right way to address the requirement. Each section then lists some Points of View (POVs), perhaps conflicting, about paths to the future, and tries to argue in favor of particular architectural preferences in order to meet these requirements. This document represents a serious attempt to work from requirements to mechanism. In cases where requirements do not seem to imply any need for architectural consistency, the document tries to recognize that fact and “de-architect” the issue. In other words, this document tries to derive architecture from requirements, rather than from examination of mechanism.

The discussion covers the range of traditional layers from technology to application. The traditional view of Internet architecture has a focus on the packet layer and addressing. Perhaps more interesting is the section at the end on application design—a topic that received relatively little consideration in the design of the original Internet. It is my claim that proper application design is at least as important to a successful Future Internet as the mechanisms at the packet level. There is some discussion of the traditional packet or “pipe” level, but in many cases the point of the discussion is not to put forward an architectural proposal, but to argue that architecture at this level is not as important as we might have thought. For example, I assert that one of the traditional debates about addressing, whether there should be a global address space, poses the wrong question. Based on the requirements and the design principles I put forward, I argue that a discussion about the scope of addressing is important, but the question of global addressing is misplaced.

There are several ways to go forward from this document, all appropriate. First, there are many places in the document where the reasoning and conclusions are incomplete. Additions and amendments are welcome. Second, there are many forks in the road—points where there are diverging points of view, where designers could have taken a different path. In the context of the FIND program, NSF and the FIND leadership have always hoped that multiple ideas might emerge for the design of a Future Internet. Perhaps the initial discussion here will inspire alternative proposals. Finally, researchers may find herein suggestions for specific research projects they might want to undertake. All of these paths forward are welcome.