The Future of Trespass and Property in Cyberspace
 
 
10 December 1998

Benjamin Adida  
Massachusetts Institute of Technology
Enoch Chang  
Harvard Law School
Lauren B. Fletcher  
Massachusetts Institute of Technology
Michelle Hong  
Harvard Law School
Lydia Sandon  
Massachusetts Institute of Technology
Kristina Page  
Harvard Law School
 
Table of Contents
 

  1.  Introduction
    1. Overview
    2. Outline of Paper
     
  2. Real Space Trespass
    1. Common Law Trespass Actions
    2. Statutory Trespass Actions
    3. Trespass Case Law
     
  3. Cyberspace Trespass
    1. Computer Fraud and Abuse Act of 1984
    2. State Statutes Regarding Computer Crime and Trespass
     
  4. Technical and Legal Introduction to Cybertrespass Cases
    1. Element I: Intent
    2. Element II: Entry
    3. Element III: Property
    4. Element IV: Permission
     
  5. Examples of Trespass in Cyberspace
    1. Spam Email
    2. Active Email
    3. Hacking a Web Page
    4. Breaking Into a Personal Computer
    5. Conclusion
     
  6. Current Strategies for Dealing With Trespass in Cyberspace
    1. Technology
    2. Legislation and Government
    3. Social Norms and Markets
     
  7. Metaphors
    1. The Trespass Metaphor in the Abstract
    2. Specific Trespass Metaphor: Street of Doorways
    3. Specific Trespass Metaphor: Direct Marketing
    4. The Usefulness and Limits of Metaphors
    5. Conceptualization: A More Helpful Approach
     
  8. Goals
    1. Internet Goals
    2. Access Goals
     
  9. Definitions
  10. Proposed Architecture
    1. Containers
    2. Ownership
    3. Barriers and Points of Entry
    4. Permissions
     
  11. Evaluations
    1. Examples of Trespass Revisited
    2. Feasibility
     
  12. Projections
    1. Code
    2. Norms
    3. Law
    4. Markets
     
  13. Conclusion
Endnotes

Appendix A: Acknowledgements

Appendix B: Credits

Appendix C: Executive Summary
 
 

1    Introduction

Web page hacks, spam email, and network break-ins, though technically very different in nature, have one unifying thread - they are all trespass. Current law treats these cyberspace occurrences under one common legal doctrine, but an evaluation of this relatively new field raises the question: should trespass be applied to cyberspace?

It is important to keep in mind that trespass doctrine was designed for real space and deals with physical property.  Moreover, trespass is only a metaphor in cyberspace.  On a superficial level, this metaphor may in fact be appropriate.  Internet terminology including "home" pages, web "sites," and "visitors" supports this analogy,[1] and it is easy to equate elements of cyber-trespass with their real space counterparts in some instances: firewalls can be thought of as virtual fences and web page notices can be viewed as virtual signs.

Upon closer inspection, however, this metaphor breaks down.  In trying to fit cyberspace into the box of common law trespass, some of the meanings are lost in the translation. Are we going too far when we carry this analogy down to the flow of electrons?  Why should we constrain ourselves to the framework of common law when we have the ability to adapt the technical and legal architecture at this relatively early stage?  What other regimes would more efficiently and effectively deter trespass in cyberspace? This paper is devoted to exploring these very questions in proposing a new way of dealing with trespass in cyberspace.

1.1    Overview

Values and interests of Internet users conflict in the debate on controlling access to information in cyberspace. Individual users, both private and commercial, system administrators, and governments strive to keep data and systems secure from hackers, crackers, and insiders attempting to view, collect, or damage stored information. Many of the early members of cyberspace communities and bulletin board operators want the Internet to remain a resource where one can freely roam, gather, and share information, and want to ensure that there are some public spaces in cyberspace. Individual users wish to preserve the privacy of some data while they connect to the Internet, or to make other data available for free public access. Hackers, computer techies who try to access data and systems on the Internet in order to test their technical abilities, want the freedom to test the security and privacy features of systems and software but do not want to risk a criminal or civil action when they do not damage the systems they enter. In order to preserve the commercial value of proprietary or copyrighted data, commercial entities, entrepreneurs, system administrators, Internet service providers (ISPs), and owners of network hardware wish to control the uses of their data or systems. Individual users would like to prevent commercial solicitations from reaching their private spaces on the Internet. Entrepreneurs and commercial entities want to use the Internet for low-cost, bulk direct advertising through unsolicited commercial email.

Current approaches to regulating access in cyberspace rely on rules derived from real space. Victims of unauthorized access currently use common law doctrines such as trespass and trespass to chattel and statutes specifically aimed at preventing unauthorized access to computer data or systems to punish those who access data or systems without the owner's consent. These doctrines, based on real space unauthorized access, focus on intent, entry, property, and permission. Problems arise when unauthorized access in cyberspace challenges or conflicts with traditional understandings of real space trespass law.

Unfortunately, the property metaphor which underlies current legal regulation of access is inadequate to meet the continuing challenges which unauthorized access in cyberspace will present.

Technical advances increase both the potential scope of unauthorized access to data and systems in cyberspace and the number of access attempts. Traditional legal doctrines must stretch to accommodate these changes in technology if they are to effectively punish unauthorized access. This expansion of the common law understanding of trespass in order to apply it to cyberspace illustrates the limits of current legal approaches to the problem.

Society must determine what combination of technology, legislation, social norms, property law, contract law, tort law or alternative concepts confronts and resolves the special challenges of enforcing rules regarding unauthorized access in cyberspace. The special challenges which unauthorized access presents relate to problems applying common law trespass doctrine and detecting wrongdoers and victims. Efforts to address the problems created by unsolicited commercial email, commonly referred to as spam, illustrate how technology, legislation, and social norms establish and enforce rules which regulate access in cyberspace. Property law, contract law, and tort law provide additional tools for addressing the problem of unauthorized access in cyberspace and for determining which competing interest will prevail in the unauthorized access controversy. Other concepts may provide more insight on how society can achieve its goals of regulating access in cyberspace.

1.2    Outline of Paper

The first part of this whitepaper provides the necessary background information and leads up to the current status of trespass in cyberspace. Sections 2 - 4 provide an overview of current trespass law and how this common law framework has been applied to cyberspace. Through discussion of legislation and case law, these sections analyze technical and legal aspects of real space and cyberspace trespass. Section 5 presents a technical discussion of how trespass in cyberspace may occur, followed by a legal analysis of each scenario presented. Section 6 then outlines strategies that are currently employed to prevent cyberspace trespass. Next, Section 7 explores the uses and limitations of metaphors and suggests that cyberspace trespass be abstracted to a conceptual level in order to extract the underlying principles.

The remainder of the paper presents a new architecture, combining law and technology, to deal with trespass in cyberspace. Section 8 establishes the goals to be realized by the proposal, and Section 9 defines the precise language of the conceptual terms to be used. Section 10 proposes a legal and technical architecture utilizing the concepts of entities, control, and containers to prevent and govern unauthorized access in cyberspace. In Section 11, the merits of this regime are evaluated in terms of meeting the stated objectives as well as the feasibility of implementation and acceptance. Finally, Section 12 conjectures as to how the implementation of this architecture will influence the world in terms of technology, law, markets, and social norms.

2    Real Space Trespass

The right to exclude others has been described as the most fundamental and important right of property owners.[2] Because the right of exclusion "determines what men shall acquire. . . [and] determine[s] the mode of life of many," it involves de facto political sovereignty.[3] Thus, as one commentator has argued, property law should be treated with the same "considerations of social ethics and enlightened public policy which ought to be brought to the discussion of any just form of government."[4] This section will explore traditional trespass law from the vantage points of common law, statutes, and case law, focusing particularly on the common law actions of trespass and trespass to chattel.

2.1    Common Law Trespass Actions

Under traditional common law, several different trespass actions are available to remedy offenses to an owner's exclusive use of his property. These include conversion, continuing trespass, nuisance, trespass to land, and trespass to chattel. Conversions are defined as "those major interferences with chattel, or with the plaintiff's rights in it, which are so serious, and so important, as to justify the forced judicial sale to the defendant."[5] Examples include stealing another person's hat, or selling wine discovered in one's basement while mistakenly thinking it was abandoned.[6] However, the damage must be serious or substantial; it is not conversion to ride someone else's horse after getting permission only to feed it.[7]

A continuing trespass is an invasion that is continued by a failure to remove it. Building a structure and dumping trash on another's land are examples of continuing trespasses.[8] Nuisance is very similar to trespass, and the line between them is both fuzzy and crooked.

A nuisance action involves an offense to the owner's use and enjoyment of his property, rather than to his possession of it.[9] For example, a dog that barks all night or blasting vibrations that damage a house have been found to be nuisances because they disrupt the owner's quiet enjoyment of, but do not challenge his possession of, his land and house.[10]

All three of these actions can be interpreted as having cyberspace analogues. For example, the sender of commercial spam email does not try to challenge the recipient's possession of his account, but often affects his use and enjoyment by clogging his in-tray and forcing him to expend time and effort to sort through and delete the unwanted emails. A nuisance action might therefore seem appropriate in this case. However, in practice courts have limited the cyberspace application of real property law to the common law actions of trespass to chattel and trespass to land.[11]

Trespass to chattel is defined as the intentional "intermeddling" with someone else's chattel.[12] Trespass to chattel is found in four general cases: (1) X removes a chattel from Y's possession; (2) X impairs Y's chattel as to condition, quality, or value; (3) X prevents Y from using his chattel; or (4) X causes harm to Y or to something in which Y has a legally protected interest.[13]

Trespass to land is generally an injury to possession, as opposed to a challenge to title.[14] To prove trespass, one must show that there was: (1) Intentional (2) Entry (3) Onto the property of another (4) Without their express or implied permission to so enter.[15] Each of these four elements is open to interpretation.[16]

2.2    Statutory Trespass Actions

Trespass can be either criminal or civil. Because of its importance, the right to exclude is protected by all three arms of the state: it is codified in legislation,[17] enforced by law enforcement agents, and vindicated in the courts.[18]

State statutes deal with trespass in a variety of ways. Some statutes are complex, such as Massachusetts':

Whoever, without right enters or remains in or upon the dwelling house, buildings, boats or improved or enclosed land, wharf, or pier of another, after having been forbidden so to do by the person who has lawful control of said premises, whether directly or by notice posted thereon, or in violation of a court order pursuant to section thirty-four B of chapter two hundred and eight or section three or four of chapter two hundred and nine A, shall be punished by a fine of not more than one hundred dollars or by imprisonment for not more than thirty days or both such fine and imprisonment. Proof that a court has given notice of such a court order to the alleged offender shall be prima facie evidence that the notice requirement of this section has been met. A person who is found committing such trespass may be arrested by a sheriff, deputy sheriff, constable or police officer and kept in custody in a convenient place, not more than twenty-four hours, Sunday excepted, until a complaint can be made against him for the offence, and he be taken upon a warrant issued upon such complaint.

This section shall not apply to tenants or occupants of residential premises who, having rightfully entered said premises at the commencement of the tenancy or occupancy, remain therein after such tenancy or occupancy has been or is alleged to have been terminated. The owner or landlord of said premises may recover possession thereof only through appropriate civil proceedings.[19]

Other statutes are extremely simple. New York's trespass law simply provides that "[a] person is guilty of trespass when he knowingly enters or remains unlawfully in or upon premises."[20]

There is no federal statute covering general trespass[21]; general federal trespass is covered under common law only.[22] Most trespasses punishable under federal law involve federal lands or property involving a significant federal interest, such as national parks, nuclear reactor sites, and Indian reservations.[23] In general, "trespass to property, normally within the exclusive cognizance of the states, . . . [was not to be] a matter of national concern."[24] This is true even where the trespasser is a government actor.[25]

2.3    Trespass Case Law

Because trespass is such an important and well-established right, there has been no shortage of case law. Two classic cases are Bradley v. American Smelting and Refining Co.[26] and Cleveland Park Club v. Perry.[27]

In Bradley, the plaintiff homeowners were suing a copper smelting factory for blowing gases and microscopic particles onto their land. The court found that the defendant factory had had the requisite intent to commit intentional trespass to land. The court also found that the intentional deposit of microscopic particles could give rise to action for trespass as long as there was proof of actual and substantial damages.[28]

In Cleveland Park Club, a child had placed a tennis ball into the drain pipe of a swimming pool, and had thereby forced the swimming club to close the pool and make repairs. The court held that the child had the requisite intent to commit trespass, even though he did not foresee the damage that would be caused. However, the lifeguard on duty had passively watched the child insert the ball without stopping him. The court therefore remanded the case for a trial on the question of whether there had been implicit consent by the swimming club to the child's actions.[29]

These two cases illustrate the complexity of proving the elements of trespass, even in real space. These complexities multiply when traditional law is applied to cyberspace. Although the law has developed to cover much of what might be considered trespass in cyberspace, analogies to traditional trespass law have often been helpful in those areas outside the scope of established cyberlaw. The next section will discuss the development and current boundaries of the law of cyberspace. This paper will then explore the application of traditional trespass metaphors to the liminal cases arising in cyberspace.

3    Cyberspace Trespass

The metaphor of trespass from real space to cyberspace seems like a natural one.  As computers and the Internet have proliferated, the words we use to describe them indicate that people already invoke this analogy quite frequently.  When computer crime began to appear in the early 1980s, prosecutors and courts tried to punish entry into a computer using the large body of law governing real space trespass.  However, the following two cases illustrate instances in which effective prosecution was possible only on tangential grounds, because real space trespass law did not squarely apply.

In United States v. Seidlitz,[30] the defendant stole confidential software by tapping into the computer system of a previous employer.  As in several of these cases, the defendant was prosecuted under federal law solely because of a tangential issue.  Two of the fifty telephone calls that he made to access the company's computers were made over state lines.  Had he not done this, there would have been no basis for federal prosecution, and he would have had to be tried under one state's wire fraud statute.

In the second case, United States v. Langevin,[31] a similar crime was perpetrated by a former employee of the Federal Reserve Board.  In his position as a financial analyst at the time of the crime, he attempted to access the Board's money supply information file.  As in Seidlitz, there would have been no basis for federal prosecution had the telephone access calls not been made across state lines.

3.1    Computer Fraud and Abuse Act of 1984

Congress realized that its position with respect to computer crimes was weak when it made the following statement in 1984:
"[Congress is] bound by traditional legal machinery which in many cases may be ineffective against unconventional criminal operations. Difficulties in coping with computer abuse arise because much of the property involved does not fit well into categories of property subject to abuse or theft; a program, for example, may exist only in the form of magnetic impulses."[32]
In this spirit, Congress enacted the Computer Fraud and Abuse Act of 1984 (CFAA).[33]  First, the legislation protects certain information which the government deems sensitive.  Next, the Act defines the term "classified computer" and imposes punishment for unauthorized or misused access to one of these protected computers.  Finally, the Act punishes those who commit specific computer-related actions, such as trafficking in passwords or extortion by means of threats against a computer.

Perhaps Mark Rasch describes the law best when he says:

"The act ... enhanced penalties for six types of computer activities:  the unauthorized access of a computer to obtain information of national secrecy with an intent to injure the United States or give advantage to a foreign nation;  the unauthorized access of a computer to obtain protected financial or credit information;  the unauthorized access into a computer used by the federal government;  the unauthorized interstate or foreign access of a computer system with an intent to defraud;  the unauthorized interstate or foreign access of computer systems that results in at least $1,000 aggregate damage; and the fraudulent trafficking in computer passwords affecting interstate commerce."[34]
In 1992, Congress amended the CFAA to cover malicious code, a major omission in the original version.  The amended Act also punishes reckless conduct as well as malicious conduct.  Indeed, it extends the term "protected computer" to all computers involved in interstate communication; but does that not include all computers on the Internet?  It would appear that it does.

However, the Act makes certain decisions regarding the definition of a computer which may quickly become obsolete.  For example, it specifies that a portable hand-held calculator is not a computer, although one can easily imagine the line between computer and calculator blurring enough to make the two indistinguishable.

It is also important to realize that, while there are other statutes that relate in some way to computers, this is the only federal statute pertaining to computer trespass in particular.[35]

3.2    State Statutes Regarding Computer Crime and Trespass

As Rasch points out, every state but Vermont has passed some type of computer crime legislation, many of which are modeled on the Computer Fraud and Abuse Act.[36]

New York, deemed a fairly typical state in terms of computer trespass statutes, covers several kinds of crimes including "unauthorized use, computer trespass, computer tampering, duplication of computer-related material, and criminal possession of computer-related material."[37]

There are several elements to this typical statute.  The first is the punishment of unauthorized use, a fairly clear issue in the computer arena, preventing curious but basically harmless hackers from sniffing around computer systems where they do not belong.  The second, labeled computer trespass, requires the entering party to commit a felony while on the system, or to obtain computer material.  The third, computer tampering, involves the deletion or reformatting of information on a computer system without authorization.  The fourth, duplication of computer material, requires the hacker to gain significant financial benefits from the duplication in order to be found guilty.  The fifth and final element makes criminal the possession of certain computer material, which covers possession of illegally copied materials or programs.

4    Technical and Legal Introduction to Cybertrespass Cases

In order to decide how to treat computer trespass and crime in the future, one needs to look at how they have been prosecuted in the past.  Even after the passage of the Computer Fraud and Abuse Act, the traditional elements of trespass (intent, entry, property, and permission) remain complex in cyberspace.  In an attempt to investigate how well traditional trespass and existing cybertrespass case law and legislation have been applied, it is necessary to examine several representative cases from both technological and legal viewpoints.  For clarity, this investigation will focus on each element individually, discussing particularly illustrative cases.

4.1    Element I: Intent

Intent is a challenging issue in cyberspace.  In real space, for example, there is only one of each person.  When he walks or drives to a particular place, he may not know his exact location, but he knows that he is on a bridge or in a field.  He cannot be somewhere he did not intend to be; he may get lost, but he won't mysteriously find out that overnight he drove into the CIA headquarters in Virginia without his own knowledge.

In cyberspace, these two fundamental assumptions (the singularity of one's person and the need to be somewhere in particular) are challenged.  That is, a virus that one person creates can replicate without his knowledge, effectively placing him (in the form of a program he wrote) in more than one place at one time.  Indeed, it could also copy itself onto computers he had not intended it to be on, all over the world; in this sense, it is in places he never intended it to be either.

Technical Discussion

This was precisely the case in United States v. Morris.[38] To see how many computers he could infect, a Cornell University doctoral student in computer science wrote a program called a "worm."  The worm would try to replicate itself and spread throughout the Internet.  One in every 15 worms (statistically) was supposed to contact his Cornell machine,[39] thereby allowing him a rough count of how many computers his program had found and installed itself on.

The worm was a small C program (only about 90 lines of original code) which, upon installation, would first try to find out whether other worms were present on the computer.  If there were none, it would replicate itself.  If there were other worms, the program (theoretically) would not replicate but shut itself down. The program would then try to deduce the password of the user currently on the machine.  It utilized bugs in three major services that run on most computers using the Unix operating system (fingerd, rsh, and sendmail) to gain access to the computers.

The program was fairly clever in certain ways, but failed in others.  It was very clever in the way that it hid itself from the computer's view.  That is, the system administrator could not detect the worm's presence except by noticing the performance problems that it was causing.  However, the program failed in one very important respect: it did not properly detect the presence of other worms, so it would replicate even if there were other copies of the program executing on the machine!  Thus, some copies of the worm never shut down on their own, spread to too many computers, or copied themselves until the computer's memory was exhausted.  This disabled many machines, although Morris did not intend to do so, nor did he even intend to infect so many machines with the program.  Although his intent was complex and the longest running copy of the program ran for only four days, the Computer Fraud and Abuse Act was used to successfully prosecute felony trespass charges against him.
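
To make the replication flaw concrete, the following is a highly simplified sketch, written here in Python rather than the worm's original C, of the kind of check-and-replicate logic at issue.  The function names, detection rate, and loop counts are illustrative assumptions rather than details of Morris's actual code; the point is only that an unreliable check for existing copies turns a self-limiting program into one that multiplies without bound.

    import random

    def existing_copy_detected():
        # Hypothetical stand-in for the worm's check for another running copy.
        # In the real worm this check was unreliable; here it simply fails to
        # notice an existing copy half of the time (an illustrative rate only).
        return random.random() < 0.5

    def should_replicate():
        # Intended logic: replicate only when no other copy is present.
        # When detection fails, the program replicates anyway.
        return not existing_copy_detected()

    copies = 1
    for _ in range(10):                 # ten rounds of spreading on one machine
        copies += sum(should_replicate() for _ in range(copies))
    print(copies)                       # far more than the single copy intended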

Legal Discussion

Under traditional trespass analysis, the required intent is not necessarily a hostile intent, or even a desire to do any harm. Rather, it is merely an intent to be at the place where the trespass allegedly occurred. It does not matter whether the defendant intended the specific harm that occurred.[40] Because Morris deliberately created and distributed the worm, he would have had the requisite intent under traditional trespass law.

Morris also possessed the intent required by the CFAA. Section 1030(a)(5)(A) of Title 18 covers anyone who "intentionally accesses a Federal interest computer without authorization, and by means of one or more instances of such conduct alters, damages, or destroys information in any such Federal interest computer, or prevents authorized use of any such computer or information, and thereby causes loss to one or more others of a value aggregating $1,000 or more during any one year period. . . ."[41] During his trial, Morris argued that this section required the Government to prove both that he intended to access protected computers and that he intended to prevent others from using them, thus causing a loss.[42] The Second Circuit rejected this argument and interpreted the relevant section as saying that the intent only had to go to the access, and not to the loss.[43] Therefore, the result under the CFAA in this case is the same as what would be predicted under traditional property law.

However, under traditional property law, there is a limit on how far down a chain of causation liability can attach. The relevant case is Palsgraf v. Long Island Railroad,[44] which stands for the proposition that defendants in negligence actions are only liable to foreseeable plaintiffs. In Palsgraf, a train conductor pulled on board a man who was trying to jump onto the moving train. This caused the man to drop a package, which exploded and caused scales at the other end of the platform to fall down and injure a woman standing underneath. The New York Court of Appeals held that although the conductor was negligent in pulling aboard the late passenger, the woman was an unforeseeable plaintiff who was outside the reasonable range of expected risk, and the company was not liable.[45]

In the Morris case, the damage caused by the Internet worm should have been sufficiently foreseeable such that the intent to spread the worm should constitute sufficient grounds for liability. However, that same damage would not have been foreseeable if Morris had been tricked into releasing the worm, or if an evil prankster had told him to send an email message that -- unbeknownst to Morris -- indirectly released the worm. The damage also would have been unforeseeable if Morris had been throwing a superball around a friend's office and had accidentally hit the keyboard of a computer, thereby unleashing the worm. Under the Palsgraf principle as applied to these circumstances, Morris would not be liable.

4.2    Element II: Entry

What is entry in real space?  This issue is slightly more complex than intent, but still understandable.  To have entered onto another's space is fairly easy to understand.  The same cannot be said of cyberspace.  What does it mean for a child to sit in her own home and type on her own computer, without ever leaving the confines of her parents' home?  Can she enter onto another person's physical property with her own person?  Of course not.

Technical Discussion

But can she enter into the computer system of another entity, even doing damage during her "visit?"  Absolutely, as has been shown in many cases already decided in the United States, notably Thrifty-Tel, Inc. v. Bezenek.[46]

In this case, the Bezenek children obtained access code information for the computer system of a small telephone-service provider, Thrifty-Tel.  After gaining access, they tried some random numbers for use in generating long-distance telephone calls without being charged.  They made a few telephone calls using this slow, manual system of finding random numbers.  After stopping for several months, they once again accessed the system, this time using a random-number-generating piece of software to make 1,300 telephone calls!

It was much easier for the Bezenek children to "break in" to the Thrifty-Tel computer system and find random numbers on their computers than it would have been to break into the company's real space offices.  In that case, they would have been under greater physical surveillance and would have (obviously) needed more sophisticated or powerful tools of access.  What makes the computer world so accessible and interesting is also what makes it so easy to trespass.  "Entry" of all kinds is simply easier in the cyberworld than in the real one.

In the end, the court defined trespass as the electrons' touching the Thrifty-Tel computers, but that is perhaps a short-sighted argument.  Could trespass be just looking over someone's shoulder while they view sensitive information?  The Computer Fraud and Abuse Act is also short-sighted in this manner.

Legal Discussion

Traditional common law used to require direct physical touching or entry.[47] However, the modern common law rule recognizes indirect touching or entry, for example by microscopic particles or even sound waves.[48] Following this line of reasoning, the Thrifty-Tel court held that "the electronic signals generated by the defendant were sufficiently tangible to support a trespass cause of action."[49] See also 18 U.S.C. § 1030(e)(1) (1998), which defines a "computer" as

an electronic, magnetic, optical, electrochemical, or other high speed data processing device performing logical, arithmetic, or storage functions, and includes any data storage facility or communications facility directly related to or operating in conjunction with such device, but such term does not include an automated typewriter or typesetter, a portable hand held calculator, or other similar device.

The permutations of this concept can easily become more complicated. Consider, for example, a person who looks over another's shoulder at the display of a digital wristwatch: like the hand-held calculator, such a device arguably falls outside the statutory definition, and mere looking involves no touching at all. What if instead of a wristwatch, the device in question were a Palm Pilot, which arguably would fall within the scope of the CFAA? What if instead of looking at the device, the transgressor were breathing on it, which would arguably constitute "touching" by molecules of air? More insidiously, what if a computer emitted infrared rays that interfered with the operation of another person's Palm Pilot? These hypotheticals demonstrate the unsatisfactory and incomplete manner in which real world analysis transfers to cyberspace scenarios.

4.3    Element III: Property

The next issues, those of property and permission, seem clearer than the first, although they have not been argued extensively to date.  They will be discussed in greater detail in Section 5, which deals with possible trespasses.

Technical Discussion

Some cases seem clearly to be trespass. In American Computer Trust Leasing v. Jack Farrell Implement Co.,[50] the plaintiff used a modem to access the defendant's computer system for his own purposes.  This seems to clearly have been "entry" as described above.  It is possible to draw from Bezenek and from Morris that just having computer-to-computer communication is enough to have access.  Unfortunately, as described below, the Minnesota court did not see it that way.

In State v. Olson,[51] a campus police officer was convicted of trespass when he used police computers to track female students at the University of Washington.  Although he had permission to use the property, he did not have permission to use it for that purpose, and was found guilty because of it.  This is clearly a complex issue, although not a complex one technologically.  It is easy to argue that some property in cyberspace (that is, the computers themselves) belongs to the person who purchased it or owns it physically.  The data, on the other hand, is a slightly more complex problem, as this case shows.

Legal Discussion

"Property" for trespass purposes is usually understood to be real property or chattel. The philosophical underpinnings of property law, as set forth by John Locke,  Jeremy Bentham, Garett Hardin, and Margaret Radin, also help to define what should and should not be protected as private property.

Specifically, the Lockean approach of labor-desert would argue that property laws should apply most strongly to that with which one has mixed one's labor.  Bentham would argue that property rights should be conferred in such a way as to maximize social utility. Hardin's "tragedy of the commons" theory postulates that private ownership is necessary to avoid the overuse of property held in common, and that property that is particularly susceptible to such "overgrazing" should thus be protected. Finally, Radin's personality theory recommends that property rights should vary proportionately with the level of the individual's personal attachment.[52]

However, most states simply define what constitutes "property" for the purposes of their trespass statutes. For example, Minnesota limits the scope of its trespass statute to "wood, timber, lumber, hay, grass, or other personal property of another person. . . ."[53] In American Computer Trust Leasing v. Jack Farrell Implement Co., the federal court interpreted this statute as covering only property that is "produced by and grown upon land."[54] The plaintiff argued that even though the computerized accounting and inventory records in question had not been "produced by and grown upon land," the scope of the general trespass statute should be extended by analogy to protect such fruits of computing. However, the court rejected the plaintiff's soil-computer analogy, and declined to extend the scope of the general trespass statute to cover computer trespass.[55]

Several commentators disagree with the American Computer result and argue that real space laws should be extended by analogy to cyberspace. For example, Trotter Hardy and Ethan Katsh argue that web pages, email accounts, databases, and other intangible entities constitute property in the world of cyberspace.[56] Some state legislatures and courts seem to have already adopted this method of thinking; for example, the Washington state legislature has explicitly used the term "computer trespass,"[57] and Washington courts have recognized the legislative intent to analogize. Specifically, the court in State v. Olson acknowledged that the traditional trespass metaphor has been extended to computers such that "premises" in the general trespass statute corresponds to "databases" in cyberspace.[58]

The question of property also raises the issue of standing. So far, most of the cases have been brought by the ISPs and the actual owners of the hardware.[59] But under traditional trespass law, the property in question need not be owned by the plaintiff.  Rather, use or possession of that property may invoke sufficient property rights to merit the protection of the law. Most jurisdictions permit users of property to bring actions for trespass and trespass to chattel, even if the owners of that property choose not to.[60] Applying this principle to cyberspace, if an email account is to be considered property, the victim of a spamming attack (i.e., the user of that email account) should be permitted to bring an action on common law trespass grounds even if her service provider (i.e. the owner) were to decide not to.

Some courts already permit this. In fact, one recent case involved an ironic reversal of roles: In Cyber Promotions, Inc. v. America Online, Inc.,[61] Cyber Promotions -- a notorious spammer -- brought suit against America Online (AOL) -- a national ISP. Cyber Promotions claimed that "email bombs" sent by AOL,[62] which were intended to "reverse spam" Cyber Promotions, caused two of Cyber Promotions' ISPs to terminate their relationships with Cyber Promotions. Cyber Promotions asserted that AOL had violated the CFAA. Although the court did not reach the trespass issue, this case represents the judicial recognition of a user's property rights in her email account.

4.4    Element IV: Permission

The final issue is that of permission.  In some sense, permission is a fairly simple concept.  In the abstract, it asks: is someone allowed to do what they are doing?  In the technical sense, it states: for this particular file or entity, this person has the permission to do these particular things.  In the legal sense, however, permission seems more complex, especially when one deals with difficult issues such as email and other Internet-based services.

Technical Discussion

Some technical permissions, like file permissions in the Unix operating system, are very clear and explicit.  If I have "read" access to a directory but not "write" access, I am allowed by the owner of those files to read (look at, or browse on the Web) those files.  I am not, however, allowed to try to alter those files or delete them.  This is a very understandable protocol for assigning permissions to files or directories.
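
As an illustration, the short sketch below uses Python's standard os and stat modules to inspect exactly the kind of permission bits described above; the choice of the current directory is simply a convenient example.

    import os
    import stat

    path = "."                                      # any file or directory of interest
    mode = os.stat(path).st_mode

    others_can_read = bool(mode & stat.S_IROTH)     # the "read" bit for other users
    others_can_write = bool(mode & stat.S_IWOTH)    # the "write" bit for other users
    print("others may read: ", others_can_read)
    print("others may write:", others_can_write)

    # os.access() asks the same question for the current user directly.
    print("I may write here: ", os.access(path, os.W_OK))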

But what about email?  Do I have permission to email anyone I want, for any purpose?  It stands to reason that I should be able to, unless my emails are harassing or otherwise disturbing to the individual.  I can send snail mail to anyone I want, so why not send email?  There are two cases in which courts have determined that this reasoning does not hold.

At Monmouth University in 1995, one student chose to "spam" two school administrators with several thousand emails.[63]  The emails overloaded the university's email handling system.  This might be a problem that a university student could have foreseen, but did he not have permission to email whomever he wanted?  What if he had wanted to send thousands of postcards through the U.S. Postal Service?  Is there something which implicitly forbids him from doing this any more than emailing?  In a more recent case, CompuServe, Inc. v. Cyber Promotions, Inc.,[64] the court determined that Cyber Promotions did not have permission to send system-debilitating numbers of emails to CompuServe's customers.

Permission in cyberspace is further complicated by the fact that it is so easy for spammers and other computer users to conceal or falsify their identity, location, and path through the network.

Legal Discussion

Under traditional trespass law, the question of permission revolves around whether the supposed permission was express or implied, whether it was revocable, and whether there was a condition of limited use. For example, knowledge of and acquiescence in an ongoing activity may constitute implied permission.[65] Regardless of whether it is express or implied, permission to enter may be revoked subsequently by the property owner.[66] And finally, even an invitee can commit trespass if he or she exceeds any conditions of limited use imposed by the property owner.[67]

When analyzing permission in cyberspace, courts and commentators have compared these real space situations to actions such as granting authority to use a computer or account, granting authority to use a computer or account for certain purposes, maintaining an email-accessible account, and even connecting oneself to the Internet. The CompuServe case involved analysis at all three levels of the permission issue. First, the court held that making the business decision to connect to the Internet did not constitute implied permission by CompuServe for Cyber Promotions to spam its users.[68] Next, the court found that CompuServe had revoked whatever implied permission may have existed when it notified Cyber Promotions that it no longer consented to the defendants' spamming. The defendants' continued use thereafter constituted trespass.[69] Finally, the court found that CompuServe had expressly limited the consent it had granted to Cyber Promotions to use its equipment. CompuServe's posted policy statement asserted that it would not "permit its facilities to be used by unauthorized parties to process and store unsolicited e-mail." Thus, the court found that trespass had occurred.[70]

5    Examples of Trespass in Cyberspace

Trespass is an issue that scares numerous Internet users. Rumors abound about what damage can actually be done and how "hackers" break in and steal files, yet very few people understand the real scope of the problem, and the dangers involved are often exaggerated. A thorough examination of the different means of cyberspace trespass is necessary to make sense of it all.

5.1    Spam Email

Technical Discussion

Spam email is one of the most salient downsides of the Internet. Email, an otherwise extremely useful medium, is abused by direct marketers and others to sell their products, get their messages across, or just generally annoy netizens. The fact that the marginal cost of sending such unsolicited messages is practically zero has magnified this problem to the point that significant time and energy is lost in trying to prevent "spam" from reaching users' email accounts.

The technical aspect of how email is sent will help in ascertaining how spam fits into the domain of cyberspace trespass, if at all. If the email is anonymous in any way, the issue is somewhat different than the one explored here, given that the right to send anonymous commercial messages has not yet been precisely defined and has even been denied in certain states like California.[71] In other cases, the steps involved in sending an email are roughly as follows: (1) the sender composes the message in a mail client, which connects to a mail server and hands off the message using the Simple Mail Transfer Protocol (SMTP); (2) that server relays the message, possibly through intermediate servers, to the mail server that accepts mail for the recipient's address; (3) the message then sits on the recipient's mail server until the recipient's mail client explicitly requests it; and (4) only at that point is the message transferred onto the recipient's own computer, where it occupies disk space.

One particularly difficult situation that arises within this process is spam relaying. This situation occurs when the sender maliciously connects to a mail server other than his or her own in order to send the email message. If this mail server is not configured to prevent such cases, it will effectively relay the email, thereby consuming processing power for someone else's use. In numerous cases, spammers have abused such misconfigured mail servers to send hundreds of thousands of emails per day, literally bringing the mail servers to their knees under the load.
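
The sketch below, which uses Python's standard smtplib module, shows how little is involved in handing a message to a mail server; every host name and address in it is a hypothetical placeholder.  Nothing in the exchange verifies that the sender information is truthful, and a misconfigured server will relay the message whether or not either party has any relationship with it, which is precisely what makes relay abuse so easy.

    import smtplib

    # All addresses and host names below are hypothetical placeholders.
    message = (
        "From: bargains@marketing.example\r\n"       # supplied by the sender,
        "To: recipient@destination.example\r\n"      # and trivially forged
        "Subject: A commercial offer\r\n"
        "\r\n"
        "Body of the unsolicited message.\r\n"
    )

    server = smtplib.SMTP("relay.example.org")       # someone else's mail server
    server.sendmail("bargains@marketing.example",    # envelope sender
                    ["recipient@destination.example"],
                    message)
    server.quit()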

Getting back to the recipient's problem, though, it is important to note once more that the offending email arrives onto the recipient's computer only via a direct request by this recipient. Furthermore, the mail server acting on behalf of the recipient is inherently open to all connections from other mail servers because of the very open nature of email and the protocols that support it. In the end, the email sits on the recipient's computer, taking up a certain amount of disk space, a parameter which the user can easily discover before agreeing to download the message locally.

Legal Discussion

With spam, the key elements are property and permission. Since spam travels via multiple servers from its sender to the mail server, where it resides for an indefinite amount of time until it is transferred to the user's computer, there are actually two levels of property where spam may cause a trespass. One is the server space of the ISP, whether it is the recipient's mail server or some intermediate server that serves a function such as relaying messages. The other incident of trespass occurs at the level of the end user's hard drive space on the personal computer or space on her account.

The issue of property is important in determining who has "standing." Since spam travels through these two levels of property, there are several places where the spam sender may create liability. The two levels of property also pose questions about the permissions associated with each level.

The question of permission on the mail server level is an interesting one given the existing architecture of the Internet. Since the Internet is based on the concept of packet-switching and compatible protocols, one can argue that putting a mail server on the Internet implies permission to receive any data intended for the server. However, since spam usually burdens the server more severely than it burdens an individual recipient, the interference with property rights is also greater on the server level. In this case, there is a question as to whether the permission to receive data is limited in some way. In the two spam incidents mentioned in Section 4.4,[72] the current analysis seems to take the approach of limited implicit permission in the case of mail servers. The argument is that the mail server implicitly agrees to receive most of the data sent to it except in instances where such data interferes with the quality of the server's operation.

On the user level, an individual usually does not know whether a piece of email belongs to the spam class. She simply sets her computer to download all messages from her mail server. In this case, there is implicit permission for all these messages, including the spam, even though there is no explicit or implicit permission for any particular message. The question with regard to a user's permission is difficult since the interference with property is less than on the server level. A user may have a hard time claiming that the implicit permission is limited so as to exclude a particular piece of email, since it is hard to distinguish emails on their physical, non-content-related attributes. Thus, it will be harder for an individual to argue that there is no implicit permission to receive such an email, since implicit permission based on content is hard to determine.

5.2    Active Email

Technical Discussion

In today's world of HTML-extended email where HTML has become a truly active component (with add-ons like Java and Javascript), one can no longer consider email messages completely passive. Downloading an email message as in the previously explained sequence of events can lead to very unexpected consequences.

Javascript, recently revamped and standardized as ECMAScript,[73] is a scripting language embedded in HTML, which allows automatic actions to be taken, mostly in an event-driven way. Java, a programming language and platform-independent virtual machine invented by Sun Microsystems, takes this active client side even further by enabling arbitrary code to run on a machine that downloads a web page (with Java explicitly enabled in the user's set of preferences). Java is built with security concerns in mind, such that code downloaded from an arbitrary web page, or included in an arbitrary HTML email, cannot access local files or perform direct network operations. The program can, however, use up CPU power for its own purposes. This may even be done in a hidden manner such that the user is not aware that his or her spare CPU cycles are being used for the Java applet's author's benefit. In the case of bugs in the mail client's implementation, or in the case of unprotected technologies like ActiveX (although that particular technology seems to be dying off), there are even ways of having the active email component read local files and upload them back to the email's author.

The problem is to determine what a user implicitly accepts by reading HTML-enabled email with active extensions (such as Java) turned on. More specifically, it is extremely difficult to draw the line between a useless animation that takes up computing power, and a "stealth" encryption-key cracker that uses the same amount of power, but for a useful purpose (even if the usefulness is only visible to the sender of the email, and not to the person whose computing power is being used).

Legal Discussion

The use of active emails poses additional legal questions beyond the example of spam. In the case of active emails, there is a better justification for limiting permission than in the traditional spam example. There are more ways to differentiate these emails based on non-content-related attributes such as their particular use of CPU cycles. However, there are still difficulties with this approach since emails are becoming more "interactive" and may contain music, animation, and other digital data that requires CPU processing. As technology increasingly moves email towards multimedia, the line between traditional spam and active emails will not be so clear.

5.3    Hacking a Web Page

Technical Discussion

Web pages are becoming increasingly important in today's society. A web page is a public window into the lives of companies, universities, organizations, and individuals. In a number of cases, a web page is the only form of existence of a particular organization or company. For example, Egghead Software, which has long been a central software/hardware resource for personal computers, has completely moved out of normal retail stores, and is now an online-only company.[74] Their web page is their business.

Naturally, when a web page is "hacked" by intruders - modified in any way without authorization - enormous damage can be done. Unfortunately, very few people understand how a web page can be cracked. This leads to a fear of the Internet, a fear so imposing that some companies simply refuse the risk of having an Internet presence at all. It is important to understand that cracking a web page is a purely logical process, one that is almost always enabled by a mistake or security bug on the web server machine.

The CGI hack is one of the more interesting and classic examples of just such a security hole. The problem starts because most web pages want to be more interactive than a simple static web page. There is a need for user input to be processed by a server-side program which then returns a customized output to the web surfer on the other end. A common process for accomplishing this task is through the Common Gateway Interface (CGI), an agreed-upon method of having a web server execute a program with user input. The problem is that, very often, the user input is fed directly to the CGI program, and the CGI program does not appropriately check for potentially malicious inputs. In fact, a large number of CGI scripts are written in such a way that an average hacker would be able to execute an arbitrary command on the server-side, including replacing a web page, changing a password, or even deleting necessary system files on the server which would cause a total breakdown of the web server. This can often be done simply by entering specifically formulated inputs on certain web page forms.
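
The deliberately simplified Python sketch below illustrates the mistake described above; the form value, file names, and injected command are hypothetical, and real CGI programs of the period were as often written in Perl or C.  The vulnerable version pastes the user's input into a shell command line, so a crafted value smuggles in a second, attacker-chosen command; the safer version passes the input as an ordinary argument and never invokes a shell.

    import os
    import subprocess

    # Imagine this string arrived, unchecked, from a field on a web form.
    address = "nobody@destination.example; echo defaced > index.html"

    # Vulnerable pattern: everything after the semicolon runs as a second
    # command, here overwriting the site's front page.
    os.system("grep " + address + " requests.log")

    # Safer pattern: the whole input is treated as literal text to search for.
    subprocess.run(["grep", "--", address, "requests.log"])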

In short, hacking a web page is not necessarily difficult. However, it is hardly unintentional, and remains a logical, fixable problem.

Legal Discussion

Using the traditional trespass metaphor, web page hacking is similar to planting a tree in someone's backyard or spray-painting someone's storefront. The perpetrator changes the content of the web page by adding files to the web server. This is a more fitting case of trespass since the boundary of the property is clearly defined (the web server) and the owner neither implicitly nor explicitly grants permission. Although one can argue that web page hacking is only possible because of security loopholes, a hole in the fence is still no defense to trespass in real space.

There is a more interesting question of when the "trespass" occurred. In real space, there is a crime of burglary, which punishes breaking and entering with intent to commit a felony. The crime of burglary occurs at the point of entry; it is irrelevant whether the intended felony was actually committed. Similarly, the trespass can occur without the web page being successfully replaced; mere "entry" onto the web server where such "entry" was not allowed is enough. However, this approach is also difficult since the CGI loophole is not exactly the same as a broken fence or an unlocked backdoor. The loophole does not allow the intruder to enter a forbidden area; it merely allows the intruder to do more than what is permitted within an area. In this sense, there is no "entry" in the clear trespass sense because the CGI script carries out the act of "entering" or executing commands. The only act of the user is feeding the CGI script "malicious data."

5.4    Breaking Into a Personal Computer

Technical Discussion

As more and more personal computers run more advanced operating systems like Windows NT or Linux, the possibility of remotely accessing those machines increases. It is in fact quite ironic that the danger comes from using a more advanced system!

The number of known security holes in these operating systems grows every day, as more and more hackers find ways of logging into the machines remotely. Numerous attacks are documented at the Rootshell website, http://www.rootshell.com, a site which, ironically enough, was recently the target of a web page hack (as described above). The site publishes numerous known attacks and programs that implement and automate these attacks, making a cracker's job extremely simple, even though the site clearly states that it does not promote such behavior.

The process of attacking a personal computer is similar to that of hacking a web page, except that it is usually considered more difficult. Indeed, while the existence of a web page implies at least one potential method of entry into the computer (the web service itself), a properly-configured personal computer can be made to refuse any network connection. With the advent of the Internet and the popularity of distributed computing, however, most machines with the capability of accepting remote connections are configured to do so in some way or other, in order to allow authorized users to access the computer remotely. At that point, bugs in the service that provides the remote connection can be used to perform unauthorized operations, which can eventually lead to a cracker giving himself full access to the machine.
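
A brief sketch of the first step such an attack usually takes, written with Python's standard socket module, appears below: it simply asks which common service ports on a machine accept remote connections at all.  The host name and the list of ports are hypothetical examples.

    import socket

    host = "workstation.example.edu"                 # hypothetical target machine
    ports = {21: "ftp", 23: "telnet", 25: "smtp", 79: "finger", 80: "www", 513: "rlogin"}

    for port, service in ports.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2.0)
        try:
            s.connect((host, port))
            print(service, "accepts connections")     # a potential point of entry
        except OSError:
            print(service, "refused or unreachable")  # nothing listening, or filtered
        finally:
            s.close()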

Legal Discussion

In this example, the facts usually satisfy all four elements of trespass. The issues here are the scope of punishment and the different approaches to the problem. The Computer Fraud and Abuse Act makes such behavior a crime, with certain limitations, once the requisite criminal intent is met. The criminal intent in this case need only be the intent to enter. There are also many other state laws that deal with this type of computer hacking.[75] These statutes are usually designed with this specific act in mind, and they contain very detailed definitions of what constitutes each element of the crime.

The question is what the best approach to such a problem is. As with web page hacking, there are other recourses for the owner of the server. The CFAA makes such behavior a felony. Given the seriousness of the problem and the potential for great damage and abuse, Congress has chosen to impose criminal sanctions upon these behaviors. But it is still unclear whether there will be, or should be, civil liability for what the hacker has done. If the statute provides adequate remedies for the owner of the computer, then a common law trespass action will probably not be the best way to approach the problem.

There are also questions of possible identity fraud (or attempted identity fraud), since the compromised computer usually contains highly personal data. Because computer hacking can lead to other damages or criminal acts, trespass may not necessarily be the best way to deal with the problem. If common law trespass is still to apply in this case, it should serve as a last resort for the owner of the computer when all other means have failed.

5.5    Conclusion

An interesting question regarding trespass upon, or intrusion into, computers arises as individual computers merge into larger networks. There are areas and files that a person might wish to keep private while still allowing access to other parts of the computer. Under the trespass approach, a doctrine based on the concept of property, there is an increasing "zoning" of storage spaces. There are some "private zones" where the individual permits very limited access, and there are "public zones" where access is permitted and encouraged. There are also zones where different permissions are granted, as in the case of CGI, where certain access (posting data) is allowed and changing information (hacking the web page) is not.

Technology provides means of potential trespass far beyond those of real space. It is possible for one individual to have a much bigger impact with a lot less time and a lot less effort than would be necessary in the physical world. Whatever legal structure is adopted to deal with cyberspace trespass, it must take into account the sizeable motivation behind the act of trespass and thus provide an adequate remedy.

6    Current Strategies for Dealing With Trespass in Cyberspace

Many steps are being taken through technology, legislation, and social norms to combat trespass in cyberspace.  Many piecewise solutions, the effectiveness and the legality of which are questionable, have been proposed by governments, organizations, and outraged individuals.  Due to the strong individual and communal sentiment against it, an especially large effort is being undertaken to stop unsolicited commercial email, or spam.

6.1    Technology

Just as many companies sell software to facilitate the sending of bulk email, several organizations offer software applications, available both commercially and as freeware, to filter out unwanted email.  Filters currently work based on either origin or content.  Origin-based filters block all incoming email that is sent from domain names on an opt-out list.  Alternatively, opt-in origin-based filters allow the reception only of email that originates from recognized and trusted domains.  For example, a user may enable a filter to allow messages only from friends and relatives on an opt-in list.  Likewise, filtering based on content may use an opt-out or an opt-in approach.  For instance, a content-based filter may block all messages containing the phrase "make money fast" or may allow only messages containing a certain code word to enter the recipient's mailbox.  Filters may be implemented at different levels of application: by individual users, by network administrators, or by ISPs.
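
A minimal Python sketch of these filtering approaches follows; the domain names, phrases, and message format are chosen for illustration only and do not correspond to any particular product.

    BLOCKED_DOMAINS = {"bulkmail.example.com"}      # origin-based, opt-out
    TRUSTED_DOMAINS = {"friends.example.net"}       # origin-based, opt-in
    BLOCKED_PHRASES = {"make money fast"}           # content-based, opt-out

    def sender_domain(message):
        # Take everything after the last "@" in the From address.
        return message["from"].rsplit("@", 1)[-1].lower()

    def accept(message, opt_in=False):
        domain = sender_domain(message)
        if opt_in:
            # Opt-in: only recognized and trusted origins get through.
            return domain in TRUSTED_DOMAINS
        # Opt-out: refuse known bulk-mail origins and flagged content.
        if domain in BLOCKED_DOMAINS:
            return False
        return not any(p in message["body"].lower() for p in BLOCKED_PHRASES)

    message = {"from": "deals@bulkmail.example.com", "body": "Make money fast!"}
    print(accept(message))          # False under the opt-out rules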

Unfortunately, the effectiveness of filtering is severely limited by the questionable integrity of the messages being sent.  Commercial emailers often forge their sender information and use misleading subject headings in order to avoid unwanted replies and to circumvent filters.  In an effort to improve the effectiveness of filters, other technologies have been suggested to work in conjunction with them.  For example, digital signatures could be used to verify the domain names of senders.  Labeling standards, similar to PICS,[76] could be implemented to categorize all email.  Under this regime, all commercial email would be labeled as such.  Filters would then be better able to identify and sort out unsolicited email.  Additionally, commercial emailers could maintain remove lists and discontinue sending unsolicited email to users who request to be taken off of their distribution lists.  However, this technology only works after a user has received spam and, again, relies on the integrity of the sender to honor the recipient's request for removal.  From a different angle, the Ad Hoc Working Committee has proposed a system that would channel mail into dynamically generated accounts, as well as a Bulk Mail Transfer Protocol (BMTP), an alternative to the Simple Mail Transfer Protocol that would use a separate channel for email sent in bulk.[77]  In order to achieve universal acceptance, though, these technologies would need to be implemented in a fashion that would require their use, either through law or code, as an accepted standard for all email communications.
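
The labeling and remove-list ideas can be sketched in the same spirit; the header name, label value, and address below are assumptions made for illustration, since an actual labeling standard would fix those details.

    REMOVE_LIST = {"user@example.org"}      # recipients who asked to be removed

    def recipient_accepts(message, accept_commercial=False):
        # With mandatory labels, a filter can sort commercial email reliably
        # instead of guessing from sender lines that may be forged.
        if message.get("X-Message-Class") == "commercial":
            return accept_commercial
        return True

    def sender_may_mail(recipient):
        # The sender's obligation under a remove-list regime: never mail an
        # address that has asked to be taken off the distribution list.
        return recipient not in REMOVE_LIST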

Technologies to prevent forms of cyberspace trespass other than spam can also be employed.  On the World Wide Web, sites can require digital certificates before allowing users to view certain pages, and cancelbots can be used to delete multiple postings to online discussion groups.  For networks and individual computers, software such as CyberSoft's VFind can be used to scan for computer viruses and to detect trojan horses.  To protect against attacks, local routers can be programmed to filter all packets from an attacking source.  Network administrators can also build firewalls around their systems to increase the difficulty of gaining access from outside of the network.
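
The router- or firewall-style filtering mentioned above amounts to a simple rule applied to every packet. The Python sketch below, with invented addresses, drops anything arriving from a source that has been identified as hostile.

    BLOCKED_SOURCES = {"198.51.100.7"}       # addresses identified as attackers

    def forward(packet):
        # Drop the packet silently if its source is blocked; otherwise pass it
        # along to the machines behind the filter.
        if packet["src"] in BLOCKED_SOURCES:
            return None
        return packet

    print(forward({"src": "198.51.100.7", "payload": b"..."}))   # prints None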

It is important to point out that unauthorized access frequently occurs when a hacker exploits a defect in a security system.  Although many of the previously mentioned methods are effective in deterring intruders, they may be expensive or require extensive technological abilities in order to be implemented.  Moreover, network administrators and individual users must consider the trade-off between cost and the level of security they wish to install.

6.2    Legislation and Government

In 1991 both the Federal Bureau of Investigation (FBI) and the Department of Justice (DOJ) established computer crime units.  While the CFAA is the only federal legislation that addresses trespass in cyberspace, it does not specifically address unsolicited commercial email as an offense.  Many states, including California, Connecticut, Massachusetts, Nevada, and Washington, have already passed bills to place restrictions on spam.[78]

At the federal level, anti-spam bills are pending.  The Anti-Slamming Bill (HR 3888), a telecommunications act, includes an amendment to prohibit forged return addresses on email and to require that commercial emailers maintain remove lists.[79]  A more stringent bill, the Netizen's Protection Act (HR 1748), seeks to ban all unsolicited advertisements by email and to require correct sender information on all electronic mail.[80]  The requirement of accurate sender information would certainly help individual users to filter out unwanted email from known spammers; however, this legal approach fails in two respects.  First, the constitutionality of this legislation is questionable on First Amendment grounds, since these proposals may be interpreted as infringing upon free speech and privacy.  Second, U.S. laws would only affect email sent within the United States and would have a yet-to-be-determined influence over email sent from or to foreign users.

6.3    Social Norms and Markets

Due to the intrusion of spam into the established culture of the Internet, an intense anti-spam movement based on social norms has emerged among much of the Internet community in recent years.  Using a vigilante approach, some annoyed email recipients fight spam with spam by sending numerous, and often vehement, replies back to the sender.  At a higher level, groups have been organized for the sole purpose of eliminating spam.  For example, the Coalition Against Unsolicited Commercial Email (CAUCE) advocates a legislative approach and supports the Netizen's Protection Act.[81] Other groups have created communal databases, such as SAFEeps, of people who do not wish to receive spam.[82]

The debate over spam has even brought together parties with conflicting interests in hope of reaching some sort of agreement.  Comprised of representatives from ISPs, on-line advertising companies, software companies, anti-spam organizations, and civil liberties groups, the Ad-Hoc Working Group submitted a report to the FTC which called for certain technical and legislative reforms to reduce significantly the amount of spam.

A more extreme grassroots approach is embodied in the Realtime Blackhole List (RBL).[83] Clearly, the legality of the RBL is questionable due to its overbroad protection, which blocks more email than just spam.  It may also violate the Sherman Antitrust Act if it is interpreted as a conspiracy in restraint of trade.  Nonetheless, although the RBL may not be the most desirable method for eliminating spam, it has proven effective in many respects and has shown that the Internet community will not stand for spam.

In a less drastic way, individual users may also fight spam by reporting the receipt of unsolicited email to ISPs, mail providers, and network administrators with the hope that these authorities will implement or enforce anti-spam policies.  Additionally, a market approach could be taken by charging, in terms of cash or computation time, commercial emailers for each unsolicited advertisement they send.

In addition to attempts to reduce spam, efforts can also be made to prevent other forms of unauthorized access.  Users can choose their network passwords carefully to reduce the risk of hackers falsely logging in as them.  Network administrators can implement policies to deter potential hackers.  For instance, many network administrators log the number of failed log-in attempts to their system.  If the network administrator notices three successive failed log-in attempts, he could send a message to the IP address where this failure occurred to let the user know that he is being watched for suspicious activity.  Such action may prevent a potential hacker from continuing an attempted network break-in.  The bottom line is that users and network administrators should use common sense and implement policies, when possible, so that they do not make breaking into a network any easier for hackers.
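
The logging policy described above reduces to a small amount of bookkeeping. The Python sketch below counts successive failures per address and issues the warning after the third; the notification is reduced to a print statement for illustration, where a real system would use its own alerting channel.

    from collections import defaultdict

    failed_attempts = defaultdict(int)

    def record_login(ip_address, succeeded):
        # A successful log-in resets the count; the third successive failure
        # from the same address triggers a warning to that address.
        if succeeded:
            failed_attempts[ip_address] = 0
            return
        failed_attempts[ip_address] += 1
        if failed_attempts[ip_address] == 3:
            print("Warning to %s: repeated failed log-ins are being monitored."
                  % ip_address)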

7    Metaphors

In general, metaphors can be a helpful means of analyzing novel situations. For example, hacking a web page on the World Wide Web can be likened to entering a door marked as private on a busy city street. Similarly, sending spam email can be compared to engaging in traditional direct marketing. Ultimately, however, these metaphors are limited both in their descriptive and in their normative powers. Criminal activity on the Internet, or at least, activity which seems like it should be criminal, is rarely as simple as the prototypical trespass case. Cyber-trespass often involves interactions that are more complex and nuanced than real space trespass, and cyber-trespass often raises questions about the scope and relative importance of the elements of trespass. More general property metaphors also fail. For example, hacking a web page and sending spam email are closely bound up with notions of property and violation of ownership. However, such traditional textbook analyses often fall short and raise more questions than they answer.

This section reviews the inadequacy of the trespass metaphor in the abstract, then explores two specific applications of the trespass metaphor. When all three fail, this section proposes breaking down the analysis of trespass to a conceptual level and distilling important principles. After establishing the definitions of important terms and concepts, this paper will explore the legal and technical architectures that can help implement these principles in cyberspace.

7.1    The Trespass Metaphor in the Abstract

As discussed earlier, courts agree that trespass can be applied to cyberspace.  However, certain problems arise from the differences between real space and cyberspace in this application.  First, a broad range of definitions and interpretations exists in deciphering the language involved.  For example, the cyberspace terms "property" and "entry" are inherently different from their real space counterparts.  Does cyberspace "property" include both hardware and software on personal computers?  Who, if anyone, owns data that is stored remotely or while it is being transferred via network connections? A consistent use of these terms relies upon a common, but currently nonexistent, understanding of their translations into cyberspace. Unlike the real space language of trespass, however, the definitions of these terms may become quickly obsolete.

Currently, the CFAA defines a computer as:

...an electronic, magnetic, optical, electrochemical, or other high speed data processing device performing logical, arithmetic, or storage functions, and includes any data storage facility or communications facility directly related to or operating in conjunction with such device, but such term does not include an automated typewriter or typesetter, a portable hand held calculator, or other similar device.[84]
The accuracy of this definition is questionable even today. How long will it be until it is obsolete? With the incredibly fast rate of change in technology, new devices that cannot be construed under this definition will force adaptations of this model.

Second, detecting computer trespass may be significantly more difficult in cyberspace than it is in real space.  Electronic signals that trespass onto a computer are intrinsically more difficult to detect than a person who trespasses upon another's property.  In cyberspace, a person less often knows when his computer is being trespassed upon, and sometimes no evidence exists to prove that trespass occurred.  For instance, trojan horses and active Java applets may copy or borrow a computer's resources without the computer's owner ever knowing.

A propagation effect of trespass may also be more pronounced in cyberspace.  In real space, a trespasser can only be on one piece of property at any given time.  Through network connections, though, vehicles of trespass may be passed along from one computer to another quite easily.  For example, an animated email that steals computer resources can easily be forwarded from one computer to another.  Viruses and Internet worms that travel through networks also contribute to the rapid spread of trespass instances.

Finally, enforcement of legislation against trespass also becomes more difficult in cyberspace.  Certainly, it is impossible to punish illegal behavior that cannot be detected or proved, but even known trespass may be difficult to prosecute due to the current status of laws and the nature of the Internet.  Cyberspace trespass transcends the element of location which physically exists in real space.  Location and property are key elements in enforcing legislation against trespass, but they are not well defined in cyberspace. Thus it appears that the trespass metaphor fails to adequately describe actions in cyberspace.

7.2    Specific Trespass Metaphor: Street of Doorways

Hacking a web page lends itself naturally to comparison with physical entry. As Trotter Hardy wrote, "[m]any of the words used to describe Web sites have a basis in real property: the word 'site' itself is one, as are such expressions as 'home pages,' 'visiting' web sites, 'traveling' to a site and the like."[85] This property metaphor envisions the World Wide Web as a series of interconnected loci; a network of doors, both public and private, which the surfer can visit -- or at least, try the doorknobs. Unauthorized access or modification of a web page is therefore comparable to unauthorized entry of a physical space.

Because of the intermingled character of the World Wide Web, this metaphor envisions the geography of a New York City block, where sequential doors can lead to a private residence, a grocery store, a members-only social club, and a fee-charging museum. These entities represent the varying levels of privacy and exclusion that can be applied in cyberspace. The doorway analogy also lends itself to the varying levels of protection that an owner can erect; an open door suggests a different assumption of permitted entry than a door that is closed, but unlocked, which again leads to a different assumption than a door that is closed and deadbolted.

The doorways metaphor relies on traditional trespass laws to the extent that once a door is defined as public, private, or something in between, unauthorized access is determined to be trespass. The owner of a private residence can exclude others without cause; a reason as simple as "I don't like the way you look" will suffice. The owner of a social club also has significant discretion in choosing whom to admit and exclude; as long as it is a private club, users who do not fit the club's membership requirements can be excluded and, theoretically, charged with trespass.[86] However, the owner of a grocery store or movie theater has significantly less freedom to exclude. Title II of the Civil Rights Act of 1964,[87] as well as state laws,[88] prohibit exclusion on the basis of factors such as sex and race. Ejectment must be on the basis of grounds such as violent or disruptive conduct.[89] Otherwise, the establishment will be held liable for violating the ejectee's civil rights.[90]

The difficulty with applying this analogy to cyberspace is that the "doorways" do not always identify what lies inside. Because web pages often feature hyperlinks without describing the linked pages, the user frequently does not know the nature of a site until he or she enters it.[91] Sometimes, the access is the result of confusion on the part of the web page creator.[92]

Conceivably, unwelcome access could also occur if the creator of a web page were to link her site to an otherwise unavailable web site[93] without that site's consent. Users might also stumble upon a confidential web page after guessing at a URL.[94] While the real space metaphor presumes an element of intent on the trespasser's part, these scenarios illustrate the way in which inadequate labeling in cyberspace can lead to ambiguity over whether the intruder intended to trespass, and thus whether it makes sense to punish her. Labeling issues aside, the real space "doorways" metaphor does accurately map the intuition in cyberspace that the owner of a personal homepage should seek to install a lock or at least close the front door[95] before bringing a lawsuit for trespass. Owners should not have an expectation of privacy in property which they leave out in the open, accessible to all who happen by. However, there is a strong social norm in cyberspace permitting users to visit numerous doorways and to try multiple doorknobs. There is also much less social condemnation for hackers who enter a computer upon finding a security hole than there is for burglars who enter a building.[96]

Thus, for all the reasons stated above, the metaphor comparing the World Wide Web to a street of doorways is a poor fit at times.

7.3    Specific Trespass Metaphor: Direct Marketing

Spam email can also be thought of in terms of an analogy -- specifically, the analogy to direct marketing. Whether door-to-door sales, telemarketing, junk mail, or fax advertisements, direct marketing shares several characteristics with spam email: They are both unsolicited, they are both commercial in nature, and they are both directed towards a known and finite audience. These similarities have led some courts and commentators to think of and treat spam email as yet another form of direct marketing,[97] subject to the same principles of trespass to land and trespass to chattel.[98]

These principles vary depending on the medium of direct marketing. Because of its intrusiveness, the Supreme Court has held that commercial door-to-door advertising may be regulated as to time, place and manner by state and federal governments.[99] However, because of important First Amendment considerations, governments may not prohibit such solicitations altogether.[100] Similarly, the Ninth Circuit has upheld restrictive regulations on telephone solicitation requiring callers to identify themselves, to call only during reasonable times, to allow individuals to choose not to receive future calls, and to refrain from using automated calling machines[101], but has limited the range of permissible limitations to content-neutral time, place, and manner restrictions that are narrowly tailored to meet important government objectives.[102]

Because of their less intrusive nature, direct mail solicitations are subject to less restrictive regulations than are door-to-door and telephone solicitations.[103] Individuals may still filter junk mail by making requests of the postmaster,[104] and governments may require the labeling of unsolicited commercial mail as such.[105] However, the Court has held that governments may not themselves filter or prohibit such commercial mail.[106]

Finally, fax solicitations have been treated in an entirely different manner. The Federal Telephone Consumer Protection Act of 1991 (TCPA) totally bans all unsolicited advertisements by fax.[107] Because fax solicitations force the conversion of the recipient's paper and toner for the advertiser's use, and because receiving unsolicited faxes can tie up the recipient's resources, preventing her from receiving other, wanted communications, courts have upheld the constitutionality of the TCPA.[108]

Drawing from these analogies, one commentator has argued that spam email should be regulated by the government through a labeling requirement for junk communications, restrictions on the sale of personal information, or a complete ban on electronic solicitations.[109] Carroll asserts that unless spam email falls under the "cost-shifting and scarcity" exception of fax solicitations, a complete ban would treat all commercial solicitations as trespasses per se, even if some recipients may have welcomed the solicitations. Analogizing to the door-to-door solicitations that were under consideration in Project 80's, Inc. v. City of Pocatello,[110] he argues that the government may not supplant private decision making as to whether the solicitations are welcome.[111]

However, the validity of the direct marketing metaphor is questionable. Although spam email is like the other forms of direct marketing, it differs from each form in a significant manner. Unlike door-to-door, telephone, and direct mail advertising, spam email comes at almost no cost to the advertiser. After the initial purchase of computer equipment and mailing lists, the marginal cost to the spam emailer of sending one more email is virtually zero. This enables the spam emailer to send many more messages to many more people at essentially no cost. Thus, spam emailing lacks the built-in cost deterrent present in other forms of direct advertising. Considering that studies have found direct mail to be profitable with only a 1% response rate,[112] even with the significant costs of printing and postage, there seems to be enormous potential for spam email to flood and paralyze the Internet. This would suggest more stringent regulations on spam email.

Another disjuncture in the analogy concerns the disruptiveness of the communication. Receiving an email is not as attention-demanding[113] or as intrusive[114] as a knock at the door or a ring on the telephone. In this aspect, it is most similar to direct mail and fax solicitations. This feature suggests less stringent regulations on spam email.

However, spam email differs from fax solicitations yet again in the areas which distinguish fax solicitations from door-to-door, telephone, and direct mailing: cost-shifting and scarcity. Although there may be some cost-shifting for those who subscribe to pay-by-minute commercial online services,[115] a significant -- if not greater -- number of users have free or flat-rate Internet access.[116] Cost-shifting is therefore not as compelling an argument as it is in the fax transmission context.[117] Moreover, spam emailing has not yet reached the scale where other types of email are precluded, whether because of the timing of the transmission or because of the volume of memory occupied. The absence of cost-shifting and of resource scarcity militates in favor of a less stringent regulatory regime.

Moreover, there are several aspects of spam email that were never even contemplated in the analogous media. Most users obtain email accounts through third-party ISPs; maintenance of these email accounts is subject to contractual agreements between the recipient and the ISP (such as subscription agreements), and between the sender and the ISP (such as policy statements). Although analogous contractual agreements exist in real space (renting a post-office box, for example), the application of contract law exceeds the normal boundaries of the direct marketing metaphor. Courts have begun to consider the application of contract rules to spam email situations,[118] but many questions remain unexplored and unanswered. Who has standing to sue the spam emailer -- the recipient or the ISP? Could the ISP be held liable for negligently failing to filter obviously commercial email? Alternatively, is ownership of an email account an implicit signal that commercial mail is welcome? Is the email inbox public or private space? Does the answer to this depend on whether one is an ISP, or whether one contracts with an ISP? As one commentator has stated, "A home in the real world is, among other things, a way of keeping the world out. . . . An online home, on the other hand, is a little hole you drill in a wall of your real home to let the world in."[119] Such a view suggests that commercial email is to be tolerated and even expected -- yet another conflicting recommendation resulting from the trespass metaphor.

7.4    The Usefulness and Limits of Metaphors

As demonstrated above, the trespass metaphor fails both in the abstract and in two specific applications. It thus appears that trespass-esque crimes in cyberspace, such as hacking web pages and spamming, are their own animal -- so much so that they should not be stretched or squeezed to fit pre-existing analogous molds. Nevertheless, the shortcomings of the specific metaphors have led commentators such as Carroll to pick and choose from the extant options to fashion the solution that "feels" the best, without articulating a coherent principle for the rules and exceptions proposed.[120] This problem is compounded by the natural tendency to draw connections among metaphors. For example, when envisioning an architecture for dealing with cyber-trespass in general, it is tempting to add the "feel-good" solution for spam email to the "feel-good" solution for web page hacking and to find an answer in the average. However, the limitations of metaphors as an analytical tool suggest that a different kind of analysis is required to evaluate a regime as different and unprecedented as cyberspace.

Metaphors serve a useful purpose, and are especially favored by lawyers, who are constantly challenged by bizarre and unforeseen fact patterns that stretch the limits of legal precedent. Comparing interactions and relationships that exist only in cyberspace to real-world situations is helpful to the extent that it extends core principles to new and complex situations where the novelty itself is distracting. This helps to preserve common values and ensure that they survive the development of a new legal regime. Using metaphors also "places new modes of processing and interacting with information in a familiar framework and . . . make[s] users feel comfortable with the new technologies."[121] This helps to promote the spread and assimilation of the new technology.

However, using metaphors can lead to counterintuitive or undesirable results. For example, some opponents of spam email argue that the metaphor of direct marketing ignores the strong cultural norms of the Internet that condemn commercial communications. Thus, someone who is indifferent to, or even enjoys, receiving direct mail might find spam email utterly offensive. Similarly, the strong cultural acceptance of hacking in cyberspace may lead to a user tolerating or supporting the hacking of web pages while at the same time condemning trespass in real space. For such users (who at one point constituted the majority of Internet users), application of the real space metaphor would lead to an undesirable result in cyberspace.

Using metaphors can also result in inconsistency and confusion. Depending on which medium of direct advertising is selected, the recommendations for regulating spam email differ considerably. Comparisons to door-to-door and telephone solicitation suggest time, place, and manner restrictions. Comparisons to direct mail solicitations suggest user regulation and some government regulation. Comparisons to fax transmissions suggest a complete ban. Instead of recognizing that the comparison to direct marketing is apt because spam email is like -- and equal to -- the other types of commercial solicitation, commentators and courts are tempted to categorize spam email into one of the four types and make adjustments according to what "feels right." Similarly, comparisons between hacking and trespass tend to get caught up in the categories and exceptions of real space. Is the web page like a commercial storefront? A living room? A bedroom? A bedroom with a "Do Not Disturb" sign on it? Although such exercises can help define an appropriate course of action, conflicting recommendations are ultimately unhelpful.

More insidiously, using metaphors poses the risk of gazing myopically on a new frontier. Indeed, some commentators would argue that the risk of shortsightedness is near certainty. As Ethan Katsh writes, "Particularly during the early phase of the development of some new technology, differences in how some task is conducted are not necessarily easy to recognize and, as a result, the qualitative differences between the old and the new technologies tend to be neglected. It is almost to be expected that how the new media differ from the old will be glossed over. Thus, early films were labeled 'moving pictures' and were not immediately understood to be an art form."[122] As another commentator has observed, "the first cars were called 'horseless carriages' and looked as though they were designed to be pulled by a horse. It took many years to realize that a good shape for a car is quite different. Radio was originally called "wireless telegraphy"; it took years to realize that the great application of radio was broadcasting."[123]

Thinking about trespass-like behavior in cyberspace solely under the rubric of trespass can lead to this same myopia. For example, one commentator recently evaluated spamming by presenting two conflicting metaphors, discussing related case law, and glibly reducing the inquiry to "determin[ing] if receiving unwanted email is more similar to Consolidated Edison and Bolger, or to Frisby."[124] The practice of hurriedly latching onto a metaphor runs the risk of limiting potential solutions to variants of that metaphor, and may subsequently ignore entirely different metaphors that prove more illustrative.

Moreover, an over-preoccupation with property lines and defining "mine" vs. "yours" could hamper the development of the Internet in directions that do not depend on barriers and zones. One such zone-free possibility is the conception of the Internet as a world without hard drives. Rather than being stored in any physical locus, information would be stored in constant motion over a worldwide network from which information may be recalled instantly. Thus, the best uses of the Internet may be yet unseen, and may turn out to be premised on concepts that are totally unreliant on notions of property.

7.5    Conceptualization: A More Helpful Approach

A more helpful and robust approach is to extract important principles and concepts from the obvious metaphors, and then determine how best to preserve those principles and concepts in the new regime. In the context of real space trespass, several important concepts emerge. These are explored below.

Containers

One of the most important concepts in the realm of trespass is the concept of containers as the barrier between what is private and what is public. Whether the container is an envelope that affords a letter all the protection of the federal postal statutes, the four walls of a building that define the space within as private and subject to Fourth Amendment protections, or a purse which can neither be opened and rummaged through, nor bent and worked to determine the shape of what is inside, containers are the basis of trespass and property law as the thing which bestows and defines ownership. In cyberspace, the concept of containers could mean the architecture of a personal folder on a shared hard drive; a user id and password request required of all who access a particular page; or even encryption itself, which admits only the intended recipient and excludes all others. More importantly, the concept of containers is just that -- a concept -- and is far more open to unprecedented technological developments than is an analogy that is tied to one manifestation of a container.

Signaling

Another important concept is signaling. Although signaling is not required in common law trespass, most states have adopted some sort of signaling requirement for the plaintiff who wishes to claim trespass protections. For example, the Arkansas state statute requires that "No Trespassing" signs be posted on farmlands and reservoirs before intruders can be charged with trespass.[125] Signaling requirements have also been established for various forms of direct marketing. A telemarketer is free to make commercial telephone calls except to those recipients who have requested that further calls not be made to them.[126] In states that permit recipients to opt out of commercial mail, direct mailers may legally send their advertisements to everyone except those who informed the post office to filter out all such mail for them.[127]

Generally, signaling is important because it ensures that the defendant had some sort of malicious -- or at least, careless -- intent. Imposing liability even in the absence of effective signaling would result in strict liability, which does little to deter because the defendant could not have knowingly avoided the crime. Strict liability is therefore unattractive in every case, except where the trespasser is the least-cost avoider. That is, where fault lies with neither party but where the trespasser can take avoidance measures more cheaply than the property owner can, strict liability should lie with the trespasser. Signaling is also important because it imposes a barrier on achieving privacy that ensures to some degree that users will think before designating something as private.

The importance of signaling has already been suggested through the social norms in cyberspace. There is a strong intuition that a user who stumbles upon an easily accessible and utterly unprotected web page should not be deemed to have trespassed just by having accessed it; the owner should have signaled in some way or at least erected minimal barriers to indicate that access is not welcome. This intuition is partially related to the notion that cyberspace is much more for exploring and trying doorknobs than is real space. As a result, responsibility is shifted towards web page owners to expect visitors and to plan accordingly. Signaling would also have the effect in cyberspace of minimizing the amount of information kept private. It would thereby help to maintain the Internet as a forum for the free exchange of information.

Self-determination

Self-determination is closely related to individual sovereignty; commentators identify it as the most important stick in the bundle of property rights.[128] As Locke and Radin have argued, controlling the disposition of what one owns is an essential tenet to property and even to personhood.[129]

Self-determination runs counter to paternalism; the owner's wishes must be respected, regardless of what course efficient social planning would otherwise dictate. This principle is evident in the context of direct mail, which a state or local government may permit a resident to refuse if she so desires. Similarly, the TCPA requires that recipients be able to remove themselves from telemarketers' lists.[130] Owners of private property can erect "No Trespassing" signs which must be respected by would-be door-to-door salesmen.[131] Just as importantly, however, courts have held that an outright ban on direct mail, telemarketing, and door-to-door solicitation is improper because it assumes that all recipients dislike commercial solicitation.[132] "The government may not supplant private decision making as to whether the solicitations are welcome."[133]

As applied in cyberspace, this principle of self-determination dictates that whether a user wants to keep the information inside her container public, private, or semi-private, or whether she wants to sell or lease her container to others, such disposition should be her choice, and should be effectuated by the law.

Although the metaphor of trespass is useful in initially grasping and thinking about certain types of crimes in cyberspace, over-identification with particular analogies can lead to inconsistency, incorrect results, and shortsightedness. Breaking down the analysis of cyber-trespass to a conceptual level helps to ensure that differences with real space analogues are not ignored as inconvenient disjunctures. Rather, conceptual thinking encourages the careful consideration of and the dismissal of irrelevant differences. Conceptual thinking also encourages unfettered problem-solving and untraditional solutions. Having presented the basic concepts relating to trespass and property, this paper will now define terms that will be used to explore the legal and technical architectures that will help facilitate these principles.

8   Goals

As discussed earlier, the metaphor of common law trespass is neither sufficient nor appropriate in its application to cyberspace. Before developing a new architecture for dealing with trespass, or unauthorized access, in cyberspace, we must first define the goals we hope to accomplish with the proposed architecture. This section maps out the objectives, for the Internet in general and for access rules in particular, that we hope to achieve. Furthermore, these goals will provide the basis for the creation of an architecture that is both effective and desirable.

8.1    Internet Goals

1. The Internet should promote a free exchange of ideas and information.

The Internet has emerged as a medium, like no other, which allows for the exchange of ideas and information among a global community. The concepts and technology behind the Internet enable this exchange to occur easily and at minimal cost. It is amazing to think of the potential offered by cyberspace, and the Internet particularly, in terms of communication, information, and technology.

The accelerated pace of technological development makes it difficult to establish and maintain responsible use of this potential. Although it may be necessary to place restrictions on Internet use in order to ensure an effective and fair utilization of Internet resources and to protect conflicting interests, these limits should be minimal so as to support the primary goal of preserving and promoting the free exchange of ideas.

The architecture proposed in this paper, first and foremost, should contribute to and encourage the free exchange of ideas and information. In doing so, it should move with societal norms to preserve the open nature of the Internet.

2. Commercial uses of the Internet should be available and viable.

With the potential to reach large audiences at low marginal costs, the Internet has become a valuable commercial resource for both producers and consumers. Existing businesses have expanded by using technology and Internet communications to communicate with business associates via email, advertise products online, provide company information to potential customers and employees, and facilitate the distribution of products to a larger base of customers. Entrepreneurs have seized opportunities to create online businesses such as virtual bookstores and newspapers. Additionally, the Internet itself has contributed to a new industry with the emergence of Internet service providers, search engines, software such as browsers and email applications, and hardware necessary to use the Internet and its multimedia applications.

Similarly, consumers have enjoyed the conveniences brought about by Internet commerce. Individuals can now save time and money by accessing services and purchasing goods online, with transactions ranging from subscribing to online newspapers to purchasing airline tickets. Internet users can enjoy personalized shopping on the Web and can instantly confirm purchases via email.

The proposed architecture should continue to serve and facilitate these business purposes on the Internet. It should allow businesses and consumers to take advantage of the Internet as a commercial resource while maintaining a certain amount of protection for their businesses, public images, and privacy. Further, the Internet should encourage online commercial uses and transactions that benefit both businesses and consumers in terms of convenience, efficiency, and profitability.

3. Individual users should feel safe on the Internet.

In order to promote the free exchange of ideas among as large a population as possible, people should not be inhibited from using the Internet due to concerns over safety. Rather, the Internet should be developed to promote a feeling of comfort with precautions available to protect users' safety interests.

The Internet originated as a tool utilized only by a technically elite class of individuals. This community established the rules of etiquette, known as Netiquette, which were used to govern behavior on the Internet. They collaborated to achieve common goals and did not have to worry about online safety. However, as the Internet expands in scope and individuals with bad intentions, unfortunately, do enter the online world, these self-imposed rules are no longer sufficient to police the Internet. A well-defined framework to discourage and prevent unauthorized access will contribute to increasing safety on the Internet.

Individuals should feel comfortable in using services available on the Internet, and each individual's sense of privacy should be upheld while doing so. For example, users should feel safe in sending email to and receiving email from strangers as well as acquaintances. Individuals should feel safe browsing the Web without fear of their every move being tracked and their privacy being invaded. People should feel safe participating in online discussion groups and posting to online bulletin boards. Web page authors should feel safe publishing content online. Parents should feel safe allowing their children to use the Internet at home, at school, or at friends' houses. Customers should feel safe purchasing items from businesses on the Internet without divulging financial information to unintended third parties. Users should feel safe downloading software without fear of downloading viruses to their personal computers.

The proposed architecture should work to minimize unauthorized access in a manner that improves Internet security and safety. In doing so, this architecture should encourage individuals, commercial entities, and governments to use the Internet, thus supporting the primary goal of a free exchange of ideas.

4. The Internet should be easy to use.

Many people have a fear of technology, and this fear often prevents them from enjoying the benefits available through the use of computers. The Internet began as a tool for only technically knowledgeable "nerds" who understood the structure and the language of computers and the Internet. However, as the range and usefulness of the Internet grow dramatically, a significantly larger segment of the population logs on to the Internet.

Parents, children, businessmen, and "nerds" alike, independent of technical knowledge, should be able to access and use the Internet. Using the Internet should be as simple and commonplace as operating a television. The newness of Internet technology certainly creates the perception of a barrier to users who are unfamiliar with computers; however, technology and innovation can be used to reduce this obstacle. For instance, graphical user interfaces and multimedia applications improve the ease of using the Internet and entice a broader spectrum of individuals to take advantage of this technology.

In order to include as large a population as possible in the Internet community, the proposed architecture should not restrict users on the basis of technical knowledge. Instead of adding a layer of complication to the Internet, the architecture should use signals to guide users through the Internet. Moreover, the implemented technology should continue the trend of improving Internet usability and convenience.

8.2    Access Goals

1. Everyone should know what the rules of access are.

Each Internet user should know whether or not he has permissions to access a particular section of cyberspace. In the spirit of the speed limit rule in real space, each user should be responsible for knowing the rule, and ignorance should be no defense. Each Internet user should be educated as to the default rules of access, and exceptions to these rules should be obvious. If a user does not know what access permissions he possesses, he should be able to find out easily and undeniably.

On the other side, access providers should work to achieve this clarity of access rules as well. System operators should make efforts to allow only authorized users to access their networks or computers and should work to prevent unauthorized users from accessing their systems. Cyberspace owners, including owners of computers, networks, web pages, and email, should be encouraged to take precautions against attempted violations of the integrity of their entities, but an unreasonable burden should not be placed on these owners.

The proposed architecture should use signals to clearly identify the rules of access. Further, this structure should establish guidelines for punishing individuals who violate these access rights.

2. Publicly accessible space should exist on the Internet.

Some spaces on the Internet should be available to all users and should serve to benefit the general population of the Internet community. These spaces should be designated as public, and individuals should be able to access these spaces easily and freely.

Since the Internet is not subject to a centralized body of control, public spaces need not be owned or approved by a governing body. Rather, they are public with respect to access rights, not ownership. Private groups or individuals may own public spaces, and they may determine and maintain the level of interaction with these spaces. Although the Internet is a valuable commercial resource, overprivatization of the Internet should be avoided. It is assumed that markets will help to avoid this problem since commercial entities will encourage individuals to use their resources on the Internet.

The proposed architecture should allow for publicly accessible space on the Internet. Additionally, this regime should clearly define public spaces and the access rights to these spaces. It should also make provisions for individuals who violate these access privileges.

3. Private space should be allowed on the Internet.

Private space, which permits access only to authorized users, should be allowed on the Internet. Again, these spaces are private in regard to access and not ownership in the sense of real space property. Private spaces may be owned by public or private entities, such as commercial organizations, educational institutions, governments, and individuals. Ownership implies some right to establish privacy. If an entity owns space or elements of space on the Internet, that entity has a right to establish and maintain private space. This privacy may be established by denying access to some or all others.

The proposed architecture should allow for private spaces on the Internet and should facilitate the implementation of private spaces. This structure should clearly define private spaces and the access rights to these spaces. Further, this structure should contribute to maintaining the integrity of private spaces and should make provisions for individuals who violate these private access rights.

4. Privacy on the Internet includes the right of self-determination.

An owner of private space on the Internet should be allowed to determine how his space is to be used. Namely, the owner may decide the contents of this space, may designate who may access this space, and may determine the type and level of interaction with this space. For example, a web page owner may choose to publish a page with commercial content, allow all users to browse this page, and arrange for celebrity chats in this forum. Self-determination also allows owners to determine transactions involving the ownership and the use of their space.

The architecture set forth in this paper should extend the right of self-determination to owners of private space. Further, this right should be protected by law.

5. The rules of Internet access should meet and withstand the changing needs of technology.

The proposed architecture should be adaptable to accommodate future technologies. It is reasonable to predict that the Internet of the future will network all "computers" in a manner that is unimaginable today. New computers will be developed, and the protocols for storing and transferring data will change as well. The regime proposed for determining and encouraging proper cyberspace access should be flexible enough to withstand these changes in technology and to accommodate the Internet, or whatever medium may develop, in the future.

9    Definitions

Before developing the new architecture, it is important to gain an understanding of the language that will be used. The following provides more precise definitions of the key terms to be used in the remainder of this paper:

The concept of embedded containers is also permissible, and access to an inner container first requires access to all of the surrounding containers. For example, consider a private file that exists in a user's personal account on a network. The network firewall is the outermost container, which encloses all of the named entities and presents a barrier to entry by users outside of the network domain. The next container is the user's personal account which requires a username and password in order for the user to access the individual account. This container protects all of the entities, such as files and mailboxes, within the user's allotted space and provides a barrier to entry by those users who do not have permission to access the account. The innermost container in this example is the read/write/execute setting of the individual file as set by the user. (In this case, only the author has permission to view or modify the private file.) This container encompasses the file data and presents a barrier to entry by any user who does not have the proper access permissions for that particular file. In order to access the file, a user must have permissions to (1) enter the network, (2) log in to the personal account, and (3) view the file.
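
The embedded-container example above can be restated as three nested checks, sketched here in Python; the domain name, account fields, and permission representation are invented for illustration.

    def inside_network(user):
        # Outermost container: the network firewall admits only users within
        # the network domain.
        return user["domain"] == "lab.example.edu"

    def logged_in(user, account):
        # Middle container: the personal account requires a username and
        # password.
        return (user["name"] == account["owner"]
                and user["password"] == account["password"])

    def may_read(user, file_entry):
        # Innermost container: the file's own read permission, as set by its
        # author.
        return user["name"] in file_entry["readers"]

    def access_file(user, account, file_entry):
        # Access to the inner container requires access to every surrounding
        # container first.
        return (inside_network(user)
                and logged_in(user, account)
                and may_read(user, file_entry))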

10    Proposed Architecture

The definitions of entities, control, and containers lead the way in proposing a technical and legal architecture to prevent unauthorized access of all kinds.  The term access replaces the term trespass, since trespass is a broken metaphor.
Figure 1: Flowchart showing the relationship of law and code.

The overall philosophy of the architecture works as shown in Figure 1.  If unreasonable access is attempted, it should first be thwarted using code.  Should code prove imperfect, law should defend someone who tried to protect his property using reasonable measures.  Should law fail, the law should be reconsidered.  It is the genuine hope of the authors of this whitepaper that the flexibility of its suggestions means that it almost never fails on a legal level.

Moreover, this architecture will provide conceptual guidelines and, through the use of signals and law, will work to deter, prevent, and remedy unauthorized access in cyberspace.

10.1    Containers

What exactly is a container?  A container is created by its defining properties.  While the abstract concept is outlined in the Metaphors and Definitions sections, its exact technical properties need to be spelled out in order to provide the framework for our proposal.

Each container has an owner, whether an individual or organization, who has control over the contents and the permissions of the container.
Each container consists of a barrier or set of barriers between its contents (entities) and the outside world.  Each barrier has potential points of entry, either physical or virtual, through which users access the contents within it.  The degree of protection provided by a container is measured by how effectively its barrier prevents unauthorized outside parties (users) from gaining access to its contents through those points of entry.
Each container has specified permissions for interaction between the outside and the inside through the points of entry, such as whether or not a particular individual might read or modify the entities in the container.  It is important to note that, when not specified, it is assumed by the law and the user that a particular access attempt is allowed.
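
These properties can be captured in a small data structure, sketched in Python below; the field names are invented, and, following the default just stated, an access attempt that the owner has not addressed is treated as allowed.

    class Container:
        def __init__(self, owner):
            self.owner = owner                # an individual or organization
            self.points_of_entry = {}         # e.g. {"http": ..., "login": ...}
            self.permissions = {}             # (user, action) -> True or False

        def set_permission(self, user, action, allowed):
            self.permissions[(user, action)] = allowed

        def is_allowed(self, user, action):
            # When the owner has not specified a permission, the access
            # attempt is assumed to be allowed.
            return self.permissions.get((user, action), True)

    box = Container(owner="alice")
    box.set_permission("bob", "modify", False)
    print(box.is_allowed("bob", "read"))      # True: not specified, so allowed
    print(box.is_allowed("bob", "modify"))    # False: explicitly denied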

10.2    Ownership

What does it mean to own a container without the existence of physical property?

In Property Rules, Liability Rules, and Inalienability: One View of the Cathedral,[134] Guido Calabresi and A.  Douglas Melamed describe a framework which helps illuminate the options which exist for allocating and protecting rights related to unauthorized access in cyberspace.[135]  Their framework is particularly helpful in establishing ownership and access rules in cyberspace because there is the opportunity and freedom in cyberspace to design technical and legal architectures that meet the goals which society  wants to achieve.

Ownership reflects a bundle of rights or entitlements. An entitlement is a 'right' to have a particular interest protected.[136] Ownership of containers includes the entitlement to exclude third parties, the right to possess and control access to a container, and the right to determine what actions or behaviors are allowed (e.g. to read or to write to) with a container. Entitlements conflict because they represent competing interests, such as the entitlement to exclude third parties v. the right to access containers freely or to attempt to gain access to a container by hacking or cracking. Because resources are scarce and individuals may have conflicting interests regarding the same container, society must decide which entitlement will prevail and how that entitlement will be allocated and protected. Property rules, liability rules and inalienability are ways of allocating and protecting entitlements.[137]

An individual can purchase a container or receive a container as a gift.  In either case, he can sell the container as well.  Property rules control the allocation of access to containers in cyberspace.  The market allows owners to obtain and dispose of containers.  The majority of containers are governed by these property rules.

Most containers can also be rented, even without cost, to other people.  This second situation is facilitated by contracts which respect the property rules of the Calabresi and Melamed framework.  An example of this is any email service which currently allows users a specific mailbox on their server in which to store email.  Unauthorized access into a renter's container is no more acceptable than into an owner's, and the architecture should reflect that in a generic sense.

Calabresi and Melamed discuss how inalienability can be used by society to allocate and to protect entitlements. Inalienable entitlements involve the greatest amount of state intervention because the state forbids sale of the entitlement in some circumstances (for example, transactions made while drunk or incompetent are void) or in all circumstances (for example, one cannot sell a kidney or sell oneself into slavery).[138]

The third entitlement scenario, that of inalienability, does not currently exist.  The government has not specified any containers which are inalienable, but the situation is not so far-fetched.  A private encryption key given to individual citizens by the United States government can be called a container.  If the government were in the business of handing out private keys as it hands out passports, as it may one day do, then it clearly would want to make that identity inalienable and forbid its sale.  In that sense, a container may one day be identified as inalienable.

Finally, and perhaps most importantly, the owner or renter of the container is fully responsible for all signaling and securing of private areas.  If a violation occurs, legal recourse is available only if the owner has reasonably secured the points of entry into his container, as defined later.  In order to maintain the assumption of public access, the burden of proof must be on the owner of the container.  Since ownership is a legal concept in its entirety, the only issues to address are legal ones.

Legal components: Access is permitted unless technical barriers attempt to prevent it

The default legal rule for access to data and systems in cyberspace is "access permitted."  In order to make a container private in cyberspace, an owner must erect a barrier to limit access to the space and to clearly indicate that one attempting to enter the space is not authorized to enter or to obtain the requested information.  Erecting barriers to create private space occurs by using the technical architectures described below.[139]  Circumvention of "container" architectures is a crime under both federal and state criminal statutes and creates a civil cause of action for the owner.  To implement the default rule, our regime repeals all provisions in federal and state criminal or civil statutes which provide owners a cause of action against third parties who access private data or systems without authorization where the owner does not erect a technical barrier to bar access by third parties.  State legislatures and Congress must enact criminal statutes which require owners to erect barriers to access by third parties before third parties can be prosecuted for the crime of unauthorized access.  Only by erecting such a barrier, and thereby clearly indicating that the space is private, does an owner acquire an enforceable property right.

Making "access permitted" the default rule retains the central characteristic of the precursors to the current Internet -- an open data source and communications channel for the free exchange of ideas and information and ensures that this benefit will continue in the future. Providing privacy and enforcing the entitlement to exclude only for owners who tag or label their space imposes a cost on owners who go against the social norm of early Internet users by seeking to limit access to information accessible via the Internet.  This cost reflects the difficulty and high costs of enforcing restrictions on access where owners erect no access barrier.  In order to obtain society's resources to protect private space, the user must incur the cost of labeling or tagging the space and thereby, reduce the cost of enforcement.  The default rule for access does not force owners to give up their right to regulate access to their data or system as the price of a connection to the Internet.  After erecting barriers to access by third parties, owners should feel safe from authorized access and confident that the law provides a remedy if a third party violates their entitlement to regulate access to containers under their control.   While a rule which does not protect a user's determination of what should be private deters both personal and commercial users, it is particularly harmful to the goal of encouraging widespread commercial use of the Internet.  Enforcing owner-specified access restrictions on spaces labeled "private" using technical architectures which create barriers promotes use of the Internet for commercial purposes.

Where owners have erected access barriers, third parties who enter private space without the owner's permission are subject by statute to criminal prosecution, with fines and jail time set at levels sufficient to deter such acts, and to civil liability for any damage caused.  When no damage occurs, owners can still bring a civil suit based on the third party's entry and can recover punitive damages in addition to nominal damages.  It is important to note that the mere imposition of criminal sanctions will not be effective if these penalties do not provide a deterrent sufficient to keep perpetrators from bypassing existing property rules, like the entitlement to exclude others from private space.  If (1) property rules exist to determine how an entitlement is allocated and (2) the law provides the victim with compensation only for damages when the entitlement is taken,[140] then a perpetrator can take the entitlement without negotiating for it and transform property rules into de facto liability rules.  If there is underdeterrence (i.e. a criminal penalty which is not set high enough), there is a substantial risk that property rules will be converted to liability rules by default.  Richard Posner discusses how the criminal law functions to discourage market bypassing.[141]  This is accomplished by setting criminal penalties above the level necessary to compensate a victim for her loss.  The antisocial act then becomes less attractive because the costs associated with the act are higher than those existing in the market.  The problem of concealment of crimes should drive the penalty even higher because there is a probability of less than 1 that the perpetrator will be caught.[142]

All existing Fourth, Fifth, and Fourteenth Amendment doctrines should apply to private containers in cyberspace.  Strong procedural safeguards and judicial oversight must accompany the cultivation of an expertise in hacking by law enforcement agencies.   This can be accomplished by procedures similar to those existing in the federal wiretapping statute,[143] by warrant requirements and by doctrines like the exclusionary rule.

Legal components: Users have a right to public space

All current and future users of cyberspace have the entitlement to the existence of a certain amount of public space on the Internet.   Just as with private containers, we rely on the market and the social norms of the early cyberspace pioneers -- which our regime bolsters -- to provide this public space.  Government regulation may be necessary in the future if society feels there is not enough public space in cyberspace.  It is important to note that the entitlement to public space is not synonymous with the right to access to specific content in cyberspace or to specific permissions (i.e. to read or to write to or to execute).

Retaining public space on the Internet ensures cyberspace will continue to facilitate the free exchange of ideas.  Requiring all users to "purchase" privacy or access restrictions to their data or system should help ensure that over-privatization of containers in cyberspace does not occur.  The market, not government, is the best means of providing public space on the Internet.  The market is providing public space now in the form of chat rooms, news groups, bulletin boards, and web sites on the World Wide Web.  While it is possible that the market will provide an insufficient amount of public space, the Internet's ethos of free information sharing and the relatively low cost of creating a publicly accessible space in cyberspace (all one needs is a computer, communications software, access to a server, and a phone line) make it unlikely that social norms or prohibitive costs will limit the provision of public space.[144]

Government provision of public space requires an infrastructure to create and to maintain public spaces.  Such an infrastructure is costly and inevitably would have difficulty responding to the demands and preferences of the public.  The destruction of geographic boundaries in cyberspace creates a free-rider problem with public space provided by government.  No specific level of government (local, state or national) or government entity has strong incentives to provide public space since, if any space is provided by any government entity, other government entities (and anyone else anywhere in the world, for that matter) can merely free-ride and use the space which already exists.  The government entity which provided the public space could limit access to only its residents, but then this space would not be truly public space.  It would become private space owned and operated by a government entity.  Some governments may still choose to create public spaces intended for use by their citizens, even though the benefits of this public space will be enjoyed by all accessing the Internet.

It is important to note that access to public space does not ensure that users can access specific content.  The public spaces accessible may have largely commercial content.  This is inevitable in a system where individual users retain the right to decide what information is private.  In addition, the cost of public access provided by private owner/operators may be $0, but owners may require users to give up some personal information.  This cost is necessary if markets, not the government, provide public space.

10.3    Barriers and Points of Entry

In general, a barrier is what stops users from exceeding their authorized access to a container through its points of entry.  The points of entry through the barrier are the container's physical and network-based openings.  Points of entry through a container's physical barriers are the openings into the physical location which houses the container, such as the lock on the door of the room in which a computer resides.  The network-based points of entry into a computer include its TCP ports, the openings which accept TCP/IP packets into the computer.  Finally, there are a limited number of other points of entry into the computer, such as its UDP ports.
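
As a rough illustration of auditing one's own network-based points of entry, the following Python sketch tests which TCP ports on a machine currently accept connections; the host address and the port list are arbitrary examples, and a real audit would of course cover far more than this.

    # Sketch: list the TCP points of entry currently open on one's own machine.
    import socket

    def open_tcp_ports(host="127.0.0.1", ports=(21, 22, 23, 25, 79, 80)):
        """Return the subset of `ports` that accept a TCP connection on `host`."""
        found = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    found.append(port)
            finally:
                s.close()
        return found

    if __name__ == "__main__":
        print("Open TCP points of entry:", open_tcp_ports())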

While it is possible to enumerate the points of entry into a container available now, the idea is to extend the concept into containers which we have not yet identified.  If it is possible to divide containers into physical and virtual, then it is possible to describe the theoretically-possible points of entry as a function of that distinction.  The physical containers have physical and software points of entry into the system; the virtual containers have software points of entry but no physical ones, as there may be no physical manifestation of a particular virtual container.

Having identified the points of entry into different types of containers, it is possible to now define the barriers preventing unauthorized entry into each of them. The technical and legal concepts of encryption, trusted agents, and signaling are aided by the use of market pressures for encryption products as well as social forces toward the use of encryption in general. The following are technical options; it is assumed that for any given container, one or more of the options will be utilized. Market forces create a demand for encryption products and architectures such as trusted agents. Market forces also create incentives for owners[145] to erect barriers to the entry of third parties into containers or to erect barriers to control the behavior of authorized users.[146]

Technical components: Encryption as a container

The first technical concept is that of encryption as a virtual container, or more specifically as the barrier of the container.  Encryption is used primarily when the other options, such as the trusted agents outlined below, are unavailable.  A virtual container cannot defend itself against attack, so it must be externally protected.  This is done here using encryption.

The architecture proposed in this whitepaper relies heavily on the use of encryption, primarily because encryption is the very thing which enables virtual containers.  An encryption key in this way defines two realms: the data encrypted with that key, which can be considered inside the container, and data not encrypted with the key, which can be considered outside the container.  Different types of encryption (public versus secret key encryption) define different permissions, as the next section will show.  In addition, it seems that encryption will become more widespread and less controlled through law as time goes on.

Public key encryption defines a public key and a private key used in the encryption and decryption, respectively, of data.  In a world where a public-key infrastructure is established, one can imagine an Internet white pages of sorts, where a sender of email looks up a recipient's public encryption key.  The sender has just learned how to write to the recipient's container: encrypt a message with the recipient's public key.  Even out on the network, an encrypted email or other entity is protected by the encryption, assuming it is strong enough.

Once the email is encrypted with the recipient's public key and sent out onto the network, it becomes the property of the recipient, within the confines of the container called his mailbox.  Anyone who can find the recipient's public key can write to the container.  The only person who can read from the container, however, is the owner, because he is the only one with the private key.  In this way, encryption is a container which anyone can create and write to and which only the owner can read or access.
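
The following Python sketch illustrates this "encryption as a container" idea using the third-party cryptography package; the package choice and the message are assumptions made for the example, and in practice public-key encryption would wrap a symmetric session key rather than the message itself.

    # Sketch of "encryption as a container": anyone with the recipient's public
    # key can write to the container; only the private-key holder can read it.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # The recipient generates the key pair and publishes only the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Any sender can "write to the container" with the public key...
    ciphertext = public_key.encrypt(b"Lunch at noon?", oaep)

    # ...but only the owner of the private key can read from it.
    assert private_key.decrypt(ciphertext, oaep) == b"Lunch at noon?"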

Due to the security inherent in certain encryption algorithms, an encrypted connection to a container would not be considered a point of entry into the container as much as an extension of the container to include the connection.  An unencrypted connection or unencrypted virtual container, however, is totally unsafe, as a postcard is in today's postal system.

Technical components: Trusted agents

The second technical concept, that of the trusted agent, applies more to the physical container than the virtual one.  The concept is most easily explained using an example.  A government computer containing the most classified information will likely be in a very secure environment.  Physically, it is probably located in a room protected by security guards, electronic identification devices, and mechanical locks.  It is probably not connected to any network at all; if it is, then most of the ports are shut off and all allowed connections are made into ports responding with software daemons using encrypted connections.  The number of people allowed to access the computer is probably quite low.  All in all, the security of the computer is entrusted to different sorts of agents: whether they be security guards, identification devices, locks, or secure software daemons.  The access to the machine is controlled by guards of all kinds: hardware and software.

In a general sense, a trusted agent is something which the container trusts to allow users into the container and to prevent them from exceeding their authority once inside.  The "guard at the gate" -- a piece of software or hardware, or even a person -- has been trusted for decades by different operating systems to do this very job.  It is not a grandiose or new idea.  The Unix operating system, for example, has a salient example of this guard: telnetd.  A user can telnet into a computer in an attempt to access his account.  If he has the proper authentication information, he will be allowed inside the system to complete only those operations consistent with his level of authority in the situation.  While the trusted agent is helpful in understanding permissions, its role here is to create a barrier between the outside of the container and the inside.  Assuming that the owner employs reasonable software agents, such as fingerd on the appropriate port of his computer, he will be protecting his container in a legal sense.  If someone were to exploit the finger daemon in order to obtain improper access to the computer, the owner would have total justification in claiming improper access.
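
A toy version of such a guard might look like the following Python sketch; the usernames, passwords, and actions are invented, and a real agent would also salt its password hashes, log attempts, and handle many other details.

    # Sketch of a "guard at the gate": a trusted agent that authenticates a user
    # at a point of entry and limits what he may do once inside.
    import hashlib

    class TrustedAgent:
        def __init__(self):
            self._users = {}   # username -> (password hash, set of allowed actions)

        def register(self, username, password, allowed_actions):
            digest = hashlib.sha256(password.encode()).hexdigest()
            self._users[username] = (digest, set(allowed_actions))

        def authorize(self, username, password, action):
            """Admit the user only with correct credentials and a permitted action."""
            record = self._users.get(username)
            if record is None:
                return False
            digest, allowed = record
            return (hashlib.sha256(password.encode()).hexdigest() == digest
                    and action in allowed)

    agent = TrustedAgent()
    agent.register("alice", "correct horse", {"read", "write"})
    assert agent.authorize("alice", "correct horse", "read")
    assert not agent.authorize("alice", "wrong password", "read")
    assert not agent.authorize("alice", "correct horse", "delete")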

Technical components: Sandboxing

Sandboxing, a third component of the technology, is the technique in which a container reinforces itself by creating a thicker barrier around itself.  The Java programming language has used sandboxing to prevent Java applets from, for example, erasing a user's hard drive.  Unlike Microsoft's ActiveX (TM) technology, which relies on the legal liability of the authors of malicious code, sandboxing actually prevents a program from leaving its container and extending into the parent container -- for example, preventing a program carried in an email from affecting the recipient's hard drive.

Technically, one would essentially need to use Java right now to implement sandboxing.  Should sandboxing of all containers take off, it is entirely conceivable that more programming languages and software products would emerge to meet the need for sandboxing in a more general sense.
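
The following Python sketch illustrates the sandboxing idea only -- handing untrusted code a restricted interface instead of the full filesystem.  It is not a genuine security boundary of the kind the Java verifier provides, and the directory and function names are invented for the example.

    # Illustration of the sandboxing idea: the untrusted routine is handed a
    # restricted interface, so it cannot reach outside its container.
    import os

    class Sandbox:
        def __init__(self, root):
            self.root = os.path.abspath(root)
            os.makedirs(self.root, exist_ok=True)

        def write(self, name, data):
            path = os.path.abspath(os.path.join(self.root, name))
            if not path.startswith(self.root + os.sep):
                raise PermissionError("attempt to write outside the sandbox")
            with open(path, "w") as f:
                f.write(data)

    def untrusted_attachment(box):
        box.write("notes.txt", "harmless output")      # allowed
        box.write("../../etc/passwd", "malicious")     # blocked by the barrier

    try:
        untrusted_attachment(Sandbox("/tmp/mail-sandbox"))
    except PermissionError as err:
        print("sandbox stopped the program:", err)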

Technical components: The whole picture

While the whitepaper presents several options for creating barriers between protected entities and the rest of the Internet, it is assumed that, for each container scenario, one or more of the options will be chosen.  Sending email securely, for example, requires only the use of encryption as a container.  Maintaining the sanctity of one's mailbox may require trusted agents for secure access to the mailbox as well as sandboxing to prevent improper access outside of that container.  Maintaining the security of a file server account requires the use of software agents as well as hardware protection in the form of a safe location for the machine.

On the whole, some combination of the three options presented here needs to be used to provide a secure container on a generic level.  In addition, some form of signaling must be used to inform users of their right to enter a container.  Whether that means returning the appropriate header information to a web browser (an HTTP 401 error, in this case) or presenting a username and password dialog box to prompt a user for security information, it must be quite clear to the user that he needs the appropriate rights to access a given container.
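
As a minimal illustration of such signaling, the following Python sketch answers unauthenticated requests with an HTTP 401 status and a WWW-Authenticate header.  It only demonstrates the signal itself; a real server would also verify whatever credentials the visitor then presents, and the realm name here is an invention.

    # Sketch: a server signals that a container is private by answering with 401.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ContainerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.headers.get("Authorization") is None:
                self.send_response(401)
                self.send_header("WWW-Authenticate", 'Basic realm="private container"')
                self.end_headers()
                self.wfile.write(b"This space is private; credentials required.\n")
            else:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"Welcome to the container.\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ContainerHandler).serve_forever()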

Legal components

Preventing third parties from accessing private containers in cyberspace or private spaces on a computer not connected to the Internet requires users to take affirmative steps to protect themselves, such as the use of encryption, trusted agents, or sandboxing.

Under this regime, hackers are subject to criminal sanctions and punitive damages in civil suits, whether or not damage to an owner's system or data occurs.  This may mean beneficial information currently produced by hackers must be obtained from other sources, such as contractors with an owner's permission to test a system for security flaws.  Unsolicited commercial email would not be affected by these rules if recipients do not take steps to bar access by these solicitations.  Unsolicited commercial email which enters private space by circumventing access barriers subjects the sender to the criminal and civil sanctions above.

A  barrier erected by an owner must be reasonably difficult to defeat.  For instance, if an  encryption algorithm is not strong enough, then the barrier is not one that the law respects.  Government encryption standards could be used to set a minimum standard for algorithms which the law will enforce when owners erect barriers to access by third parties.

But there are always holes in programs which provide heretofore unforeseen entry points.  Consequently, one must distinguish between circumvention of architecture, as occurred in the Robert Morris incident, and "holes" in architecture or bugs in software which provide a point of entry to a container.  Our regime uses an unreasonable entry standard, implemented by statute, which looks at the following factors: 1) did the third party use a program outside of its common or reasonable use in order to gain access to the container?  2) did the third party "try to beat the system" to get into the container (e.g. try to crack a password or give false credentials to a trusted agent)?  3) did the third party steal an authorized user's password or assume the identity of an authorized user?  If any of the three factors exists, then the third party's access of the private container is illegal and the owner has a civil cause of action against him, regardless of whether the third party caused any damage.

The unreasonable entry standard implemented in a criminal statute would require the specific intent of intentional, purposeful or reckless acts before the elements of the crime are met.  No criminal liability exists for negligent acts because our regime places the burden on owners to erect a barrier and prevent unauthorized access.  Unintentional access is not a violation under our regime.  Unintentional access by a third party suggests that the owner did not erect a sufficient access barrier.

Although the unreasonable entry standard interferes with the goal of ensuring that all users know the rule for accessing containers, the messiness here corresponds to the messiness of real life.  In difficult cases, a reasonableness standard applied by judges and juries is required.

Other components

There are other tools at our disposal, outside of legal and technical ones.  Social norms and market forces can be used to influence public behavior.[147]  In that spirit, this whitepaper recommends the following actions toward the use of the container concept and its associated technical innovations.  First, social pressure can create an environment friendly to the widespread use of encryption.  If the attitude toward unencrypted email moves toward the current attitude about the security of a postcard, then people will be more likely to use encryption to safeguard their email.  Second, education can increase social awareness about the standard rules of access: you are allowed to access a container if you do not have to hack any part of the security mechanism to obtain that access.  Third, market demand for encryption will, it is hoped, provide freely available and easy-to-use encryption software.  There will also be increased market demand for increasingly stable and bug-free trusted software agents.  Overall, the legal and technical proposals create a market need for, and social pressure to use, the technologies suggested.

10.4    Permissions

What are permissions?  What makes a container public or private?  If a container is physically secured and only the owner can access it, the container is fully private.  A container is fully public if any user can access and modify its contents via any connection to it.

Most containers, it seems, fall somewhere in between fully private and fully public.  The owner will probably define a limited set of users who can modify the entities within the container and a larger set of users who can view them.  At the point of defining the permissions for the container in a traditional sense, the owner will also have the option to define what he wants from each user in order for the user to interact with the container. Perhaps the owner doesn't want users to send anonymous email into his mailbox or wants only animated graphics but no Java applets accepted into his mailbox.  Law and code can be used to encourage the use of permissions as outlined here.  The code can provide three solutions, described below.

Technical components: Encryption defines permissions

To some extent, the use of encryption as a container, which was described in great detail above, defines the permissions of that container.  That is, anyone in the world can write to the container (see Figure 2) using the public encryption key and only the owner can read from it.  While this may be a simple definition of permissions, it is also the most secure one, since "hacking" into the container in most cases is a problem even for government agencies and certainly for the local 10-year-old.

Figure 2: How the encryption container works

Technical components: Operating system style permissions, or trusted agent-defined permissions

The second option for the definition of permissions is trusted-agent or operating-system style permissions, as described above in the discussion of barriers.  This relies on the use of guards at each point of entry into a container.  Each guard can have a list of users and their associated permissions with respect to each container.  The system could be more complex, of course, but the general idea is that the guard is entrusted to decide, at each step, whether a user has the right to perform a particular access on a particular entity within a container.  If the agent is corrupted, then there clearly is a problem; however, reliable agents exist today in many forms.
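
A minimal sketch of such guard-consulted permissions follows; the users, entities, and actions are invented, and a real agent would back a table of this kind with authentication of the kind sketched earlier.

    # Sketch: the guard consults an access-control list mapping each
    # (user, entity) pair to its allowed actions before honoring a request.

    ACL = {
        ("alice", "report.html"): {"read", "write"},
        ("bob",   "report.html"): {"read"},
    }

    def guard(user, entity, action):
        """The trusted agent's decision at the point of entry."""
        return action in ACL.get((user, entity), set())

    assert guard("alice", "report.html", "write")
    assert guard("bob", "report.html", "read")
    assert not guard("bob", "report.html", "write")
    assert not guard("mallory", "report.html", "read")   # unknown users get nothing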

Technical components: Labels and filtering of certain interactions

The third option, and the only one not previously described, relies on labels and filters based on those labels.  For each access into a container, the user wishing to access the container would label his access in certain predefined ways.  The owner of the container can set up filters in code to allow or disallow the access based on the labels.  The law, described later, will protect users from fraudulent labeling.

A perfect example of this labeling and filtering scheme deals with spam email.  A standards organization can establish a set of labels, such as those identifying "commercial" versus "non-commercial" emails.  A container owner can require that any email seeking admission to his mailbox container carry a "commercial" email header -- a boolean value indicating that status -- and can set up a filter to deny access into the container if the label contains the wrong values.  A response email can be sent, if the recipient desires, informing the sender why his email was denied access.
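
A sketch of such a filter, in Python, might look like the following; the header name "X-Commercial" and the label values are invented here, since the whitepaper assumes a standards organization would define the actual vocabulary.

    # Sketch: admit an email into the mailbox container only if its label is
    # not on the owner's reject list.
    from email import message_from_string

    REJECT_LABELS = {"commercial", "bulk"}

    def admit(raw_message):
        """Return True if the message may enter the mailbox container."""
        msg = message_from_string(raw_message)
        label = (msg.get("X-Commercial") or "non-commercial").strip().lower()
        return label not in REJECT_LABELS

    wanted = "From: friend@example.org\nX-Commercial: non-commercial\n\nHi!"
    spam   = "From: seller@example.com\nX-Commercial: commercial\n\nBuy now!"

    assert admit(wanted)
    assert not admit(spam)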

Legal components: Preventing falsification of labeling information.

The enforcement of barriers erected by owners covers most of the legal issues regarding permissions.  In our regime, a state or federal statute would make intentional, purposeful, or reckless falsification of identification credentials given to a container a criminal violation.

11    Evaluation

The architecture put forward in this document is one that suggests a number of important changes to the technical landscape of the Internet. Such changes should not be taken lightly, and need to be evaluated independently of the reasons that led to the conception of this new architecture. This architecture is not expected to deal perfectly with every issue at hand. One can expect, however, that it will prove to be quite compatible in a number of ways, and will progressively and subtly change the world to the point of making this architecture quite adequate.

11.1 Examples of Trespass Revisited

In the evaluation of the proposed architecture, it is important to ask whether there has been an improvement in the treatment of potential problems.  That is to say, has it addressed the issues identified as difficult potential trespasses in previous sections?

Spam Email

The architecture was designed specifically with spam in mind, as spam was one of the largest problems dealt with in the arena of trespass.  Following the philosophy diagram put forth in the Architecture section, it is possible to walk through the same line of reasoning, and hence the same questions.

First, is the spam unwanted access? In some cases, the answer is no. Certain people are unbothered by spam, and using the proposed architecture would not force them to implement a solution stopping the spam.

What if the spam is indeed unwanted, say by someone downloading mail on a slow, expensive connection?  In this case, does the code attempt to stop it?  Yes, the owner of the mailbox can set up a filtering mechanism allowing him to filter out "bulk" or "commercial" email labeled as such.  While such a labeling infrastructure is not currently available, one can easily imagine a small but utilitarian set of labels including "commercial" and "bulk," or "text-only" and "contains-programs," as described here.

In this case, the owner can, with relatively little time, cost, and effort, set up labels blocking commercial email from entering his mailbox.  A sender is required to label his email correctly.  In the ideal case, then, the recipient would never receive email which he considers spam.  While a certain amount of effort is required in the owner-side labeling and filtering, this maintains the goal of default access that is so important to the architecture.

What if the sender of the spam succeeds in getting email into the recipient's mailbox, and thus the code has failed? As the scenario progresses down the flow chart of the philosophy, it arrives at the place of greatest weakness: the unauthorized access has already occurred.

It is hoped that the architecture has correctly done its job, and that the sender of the email which got through is one who actually made fraudulent claims about the content and nature of his emails.  In most of those cases, it is possible to track down the sender of the email by one of three techniques: (A) the content of the email points to a particular commercial organization, (B) the sender identified himself in the labels of the email and relied on the recipient's lack of interest in prosecuting, or (C) the recipient can use the return headers of the email to trace the sender.  The existence of anonymous and pseudonymous remailers makes certain situations virtually untraceable, but such is the risk that the architecture takes in maintaining the Internet as a free place.  In most cases, however, it will be possible to find at least the organization from which the fraudulent email came, and to prosecute as necessary for fraudulent labeling and identification.

Overall, the architecture is more successful than the status quo at keeping out unwanted spam, and more successful in prosecuting senders who falsify return information.

Active Email

The second scenario, active email, relies on the same labeling scheme used in the spam scenario. However, it adds to its arsenal the use of sandboxing, if desired, to create a safe place for the inclusion of some programs.

Is the active email unwanted access?  In some cases, no.  It is possible to imagine a recipient downloading a program to run on his local machine which needs access to his local files outside of his mailbox.  In this case, it is possible for the recipient to correctly identify the sender of the program (relying on the same legal ramifications for false identification) and to open the sandbox to that particular sender.  This technique is generally called the "signing" of particular programs, and it can be used in conjunction with sandboxing.  In this case, it is assumed that the recipient knows the sender well and trusts that the sender will not create malicious code.

What if the active email is indeed unwanted at the outset? Then the recipient can (A) set up a sandbox around his email inbox in addition to (B) using labels to defend against incoming mail which is labeled as "containing a program." Should he wish to receive "safe" programs but not "unsafe" ones, he can use only (A) but not (B). This will allow him a relative amount of security in the knowledge that it is unlikely that his computer will be accessed. And he has done essentially nothing costly or time-consuming in this defense of his computer.

What if the code fails, and the unwanted, malicious email gets into his computer?  First, this condition is unlikely given the current state of sandboxing technologies such as Java, which prevent most unwanted access.  In the quite rare case, however, it is possible to download code which harms one's computer and which was also labeled incorrectly.  In this case, unfortunately, the only recourse is once again to rely on punitive law to scare people away from doing this to other people's containers.

Overall, the architecture is more likely than the status quo to prevent damage to a computer from unwanted access.  But in the small chance that it does not adequately prevent it, the architecture relies on law to punish the sender of the email, and to determine whether the email was indeed malicious.  Since it was incorrectly labeled, that is a strong signal that it was malicious.

Hacking a Web Page

The concepts of web page hacking and computer-system hacking fall into largely the same category, because the techniques of access are so similar.

Is the "hacking," or access to the computer followed by changing of information presented there, unwanted? Perhaps not. Perhaps the computer is left somewhat open for "gentleman's agreements" not to harm the other computer. This falls into a different category altogether, and law needs to deal with that appropriately.

What if the hacking is unwanted? Then, did the owner of the computer make reasonable attempts to keep the computer's points of entry safe? Did he keep the computer physically in a safe place, and all of the open ports protected by widely used software? If so, then he probably protected himself from hacking.

What if the code fails, if the reasonable protection on the points of entry is not enough?  Then, having attempted to protect himself, the container owner has the legal right to try to prosecute the intruder, should he (A) want to do so and (B) have the proper identification information, frequently found in access logs.  And in the end, the money and time spent went toward preventing unwanted hacking into the computer.

Overall, the architecture clarifies the legal recourse possible where hacking is involved.  It does not seem to harm the ideal of free exchange of ideas on the Internet, and it still allows for privacy and commerce.

Other Cases

It is important to note that the architecture also deals with borderline cases of computers such as wristwatches and so-called "palmtops."  Since the main point of entry there is someone peeking into the viewable area of one of these small devices, it becomes incumbent upon the owner to protect that point of entry.  If he leaves it open on his desk in a public space, displaying private information, he has not done enough.  If he uses it on a subway without checking who can see it, he has not done enough.  If he tries legitimately to protect that point of entry and fails because of the sneakiness of someone desiring access to his palmtop, then he has tried hard enough and has legal recourse as necessary, with no cost and very little effort on his part.

11.2 Feasibility

Usability

The first issue at hand is whether or not the architecture defined is usable. For this one needs to distinguish two sides: the creators of information (web publishers, and senders of email), and the consumers of information (web surfers, and readers of email). Given the state of the Internet, any given person will probably be both a creator and consumer of information at one point or another. Separating the two roles, though, provides a better perspective on the distinct issues.

For creators of information, the first and immediate issue is public information, such as world-accessible web pages, or unencrypted emails: is it easy to create? How much effort is involved? Given that the architecture strongly suggests public as the default status for information, there is very little involved in creating such information. Unencrypted email is by far the standard today, and thus would remain extremely simple to produce. Putting up a public web page would remain as simple as renting space on an ISP's web server and building a few HTML pages. Making a piece of information public involves a complete lack of barriers in this new architecture. Implementing this "lack" is trivial.

Of course, this implies more difficulty on the side of private space setup. Private, encrypted emails require special software, and a solid, widely-adopted public-key infrastructure to allow any user to find any other user's public key. Today's cyberspace does not have such a public-key infrastructure, which suggests at least some growing pains in the adoption of encrypted email. On the web side, however, the situation is much easier. Adding a username and password protection scheme to a web page - a solution which the architecture deems enough for legal defensibility - requires some effort (and possibly the right ISP that allows for such protection on personal web pages), yet remains relatively easy. For large Internet players that run their own web servers, it is trivially more difficult than normal web pages.

Once private space is possible, the issue then becomes one of flexibility: how easy is it for the information creator to select different sets of permissions for different users of the system?  In the case of web-based restrictions, the situation is reasonably well solved already: web server administrators can easily set very specific, file-based permissions on their systems.  Read, Write, and Execute privileges can all be set (note that the Execute privilege in this case is a "read" privilege on a web page that performs an action).  In terms of email, the only privilege to worry about is the ability to read.  Currently, the theoretical techniques for allowing a number of different people to read a given encrypted email do exist, but they are not implemented in any significant manner in existing software.  In general, one would expect the task of setting up explicit, varied permissions for encrypted email to be somewhat more difficult than setting web page access control rules.  It is, however, something that can eventually be made easier with the right software.

The architecture presented here seems to maintain a very usable level for information creators.  For information consumers, things are even easier!  By default, a user is allowed to do anything on the Internet.  In terms of usability, there is very little that can compete with such a setup.  The architecture further stipulates that access is disallowed when a clear barrier is presented to the user, unless she knows she is authorized to enter.  Given that these clear barriers must involve a username-password dialog, a certificate request, a denied connection, or strong encryption, the user can hardly be confused as to what parts of cyberspace are private!  There is little or no usability issue with browsing public space and identifying disallowed private space.

Finally, there is the issue of the ease with which an authorized user can access a piece of information intended for her but inaccessible to others.  If the information is inherently protected (i.e. by encryption), the problem is usually relatively simple in that the user need only use her private key to decrypt the information.  This may require special software, but that step is relatively easily overcome.  For information that is protected by a trusted agent (like an operating system or a web server), the authorized user must prove proper access permission to the agent.  In some situations, this is as easy as providing a password; in others, it may involve providing a digital certificate proving the access permission.  Digital certificates necessitate a public-key infrastructure similar to the one already discussed for encryption.  There is also the problem of the distribution of these digital certificates by servers that want to assign access permissions to users.  This is complicated enough that using digital certificates for widespread identification remains a somewhat tedious task for the user.  It can be expected, though, that such technology will only be used by those information providers who truly need to provide extremely strong barriers.  The tediousness of access can then be compared to that of accessing one's safe deposit box: it is somewhat of an annoyance, but a necessity to ensure serious security.

Cost

Beyond simple usability, there is the issue of the cost involved in creating or consuming information in cyberspace.  With a major goal remaining the free exchange of information, it is imperative that both creating and consuming, but especially consuming, remain relatively inexpensive.  On the producer end, creating public information remains as easy under the new architecture as it is today: almost any individual has the budget to set up a web page on an ISP, or to set up an email account from which to send email.  Yet public space on the Internet also involves the possibility of having consumers become contributors, thereby creating collaboration space.  One means of collaboration is through Usenet newsgroups.  On the web, however, given today's technology, creating and maintaining collaboration space is still relatively expensive.  There exist some companies working in that area, and one can very much expect that the cost of such systems will go down as they become more common and better understood.  One should note, however, that serious web-based collaboration remains scarce and expensive for now.

A slightly trickier task involves making all of this information private.  For encryption-based privacy, the need for a certified key pair adds a yearly certification cost that can be somewhat high for web sites (while it remains reasonable for email address certification).  For trusted-agent-based privacy, the costs are far higher: a seriously secure Internet-connected computer can only be kept secure today by constant expert supervision.  The reason is that no trusted agent is perfect: security holes in software can often allow attackers to take control of the trusted agent, thereby gaining unauthorized access.  Because the architecture recommends that inherent protection (i.e. encryption) be used as often as possible, this cost is minimized.  However, one must realize that there is no way to completely eradicate this expense: a safe fortress requires full-time guards.

Thankfully, the cost on the user's side when attempting to enter a private space for which she is authorized is quite low.  For password-based systems, it is almost nonexistent, while for digital certificate methods, it is a modest yearly fee (comparable to maintaining a passport as a form of identification) for some systems, or even free for others.[148]

The final question related to cost is one that concerns the way in which the architecture affects markets. There is a risk that, with the architecture recommending certain technology, the marketplace becomes skewed by the demand for a patented technology that only a few companies can provide. This is precisely the case of encryption: RSA Data Security owns a number of key patents in the area of encryption and, given requirements of compatibility, could easily gain somewhat of a monopoly in the business of providing encryption. This is a situation to watch carefully, hoping that the patent time limit will be enough to prevent such a counter-productive outcome.

Viability

Even if a system is usable and cheap, it may only seem so for now. A piece of technology may very well scale extremely poorly, quickly become incompatible and outdated, or simply be blocked by an unexpected factor: good technology does not necessarily survive.

Encryption, for example, is one of the main elements of the system.  The architecture does not specify a particular type of encryption, so one must judge whether the concept of encryption itself is viable as a widespread technology, today and in the future.  The concept itself is clearly one that will last: the encoding of private information such that only authorized individuals can decode it is a key necessity of the information age for the foreseeable future.  Since no particular type of encryption is required by the architecture, future safety requirements can simply drive upgrades to the available encryption schemes so that they always provide serious security.  Today, military-grade encryption is easily available.  The only obstacle barring immediate wide-scale adoption of encryption technology is the need to put a public-key infrastructure in place.  This is somewhat of a chicken-and-egg problem, where adding more users increases the number of public keys, but users want to participate only if there are enough public keys to make it worthwhile.  There is no question, though, that after a short adaptation period, encryption technology is perfectly viable, today as well as for years to come.

Beyond inherent protection implemented through encryption, trusted agents perform a second type of control.  These trusted agents (operating systems, web servers) have been around for some time.  They are always changing, adding and removing security holes as they go, never quite reaching a truly secure status.  Yet the presented architecture does not require perfect security from these trusted agents, since the law steps in to help out where technology fails, protecting container owners from attacks attempting to bypass the trusted agents.  Thus, in the role requested by the architecture, trusted agents are a perfectly sensible and viable choice, even with only the expectation that they will remain as secure and useful as they are today.

On the issue of publicly-writable containers (such as email inboxes), the technology proposed is somewhat new, or at least different enough that it can be considered new. Labeling has been used recently with technologies such as PICS,[149] but it is currently inconclusive whether such labeling works, given that it has not yet been shown how honest the labeling would be.  However, the architecture here does make room for legal consequences if labeling is not done properly. This combination of technology and legal measures makes for quite a viable system in this particular case.

The final technical issue is that of the sharing of untrusted programs, whether via the Web or by email.  The architecture recommends a strong reliance on the technical barrier of sandboxing, a system that actively prevents programs from performing certain potentially harmful actions.  The system also specifies ways for such programs to request user permission, given some certification of liability on the part of the programmer, to perform riskier actions.  This type of idea is not completely new in theory, but it is relatively new in widespread implementation.  Java is, in fact, the first serious implementation of such a system.  It has taken some time for Java's sandbox model to mature and become secure enough to work within the architecture described here, yet the system now works according to the planned specifications.  Whether this scheme will remain valid for years to come is as yet undetermined, given its youth.  It seems likely, however, that any new computational capability could be added to the list of controlled actions that the sandbox regulates.  Still, it is not well known how gracefully this technology will age.  It can be considered relatively safe for now, but one might want to watch out for possible future incompatibilities and complications.

Norms

In terms of viability, one also has to worry about the pressure the architecture puts on norms. The main normative push that the architecture involves concerns people's expectation of privacy. Currently, most netizens feel that the email they send is relatively safe from prying eyes. Yet the architecture in question here advocates that all plaintext email sent on the Internet can be viewed by packet sniffers without any legal consequences. This may cause some initial adoption problems, given that users want privacy in almost any situation.

However, given that the architecture also pushes for active use of encrypted email, a mode which would be completely legally protected (as well as technically difficult to crack), one can expect that the user discomfort at having to change her habits will only be temporary.  This proposal is different from the CDA[150] in that it does not take away any user's particular right; it only prescribes new ways through which a user validates these rights.

Of course, one has to consider the world-wide aspect of normative issues.  In France, for example, almost all encryption is illegal.  While this may change with the upcoming European Union regulations, these issues are definitely strong obstacles to the architecture's full-blown implementation.  Without expounding on this problem, which has been discussed quite extensively, one can expect that encryption's adoption will go the way of the printing press: while very controversial at first, it was eventually adopted as a widespread necessity.  While it may take another ten years, one can expect that the architecture and the normative changes it requires will indeed happen and prove this technology truly viable.

12    Projections

How will the presented architecture be received by society? What are the implications for the future of cyberspace?

12.1    Code

The first important consequence on technology will be caused by the high reliance of the architecture on encryption. Currently, the United States government regulates the export of cryptographic software and hardware much like it does munitions (although it is not actually classified as such). Recently, the Wassenaar Arrangement has expanded this policy to 33 other countries, with a future possibility of regulating domestic encryption.[151]

Numerous experts, however, claim that encryption regulation and key-escrow systems simply cannot work.[152] With that in mind, one can expect that encryption software and hardware will become much more wide-spread, given enough time. With the spread of the software and the need to use encryption, a comprehensive public-key infrastructure will be built to support the sharing of trusted public keys.

Because encryption software will be used by almost all online users, one can also expect this type of software to become much easier to use, given that today's encryption software is directed mainly at professionals and tends to confuse most casual netizens.

Furthermore, an open, public network where packet-sniffing and wire-tapping are perfectly legal will lead to the production of extremely advanced versions of packet sniffers and other such network analysis tools. While this will be of obvious interest to those individuals truly analyzing Internet traffic, this will also probably lead to a few key cases where an unencrypted, yet sensitive message will be revealed to the public. This may challenge the new architecture, but will most probably lead to even stronger use of encryption, given its availability and  users' realization that unencrypted data is available for public browsing.

Other technologies involved, mainly filtering and sandboxing, will make great strides to fill the gap left by the lack of legal support.  This concept of technology stepping up is quite interesting and useful, given that where code can make a difference, it usually makes a stronger one than the law does.  The maturity of these technologies will bolster users' sense of privacy and protection on the Internet, while keeping the freedom of information exchange strong.

12.2    Norms

The way people react and eventually change with respect to the proposed architecture is extremely important.  Most of the goals expressed earlier in this paper dealt with desired norms, because one can judge the quality of a solution by the norms that it brings about, under the assumption that existing norms do not simply reject the proposal altogether.

It has already been shown that this architecture would probably pose very few problems on the normative side. However, this does not imply that norms would not change as a consequence.

One interesting example of this is linked to the clear establishment of private spaces, with an equally clear default of public status.  Given this default, and given the forceful boundaries imposed to fend off unauthorized users from private space, one can expect that private space will be better respected than it is today.  This respect will most probably come from the feeling that users, the browsers of information, have been treated fairly, that the Internet norm of free information exchange has been respected, and that private space, where it is set up, is in fact set up for users' own good.  This type of prediction is somewhat optimistic, but it very much reflects the community aspect of the Internet, where users treat others very much in the same way that they feel they have been treated, and where users contribute surprising amounts of time and effort to help a cause which they understand and support.[153]

Another interesting normative change which will be forced by the open, public network involves users' perception of privacy. Unencrypted emails will no longer be regarded as safe, given that any eavesdropper will have the perfect legal right to "sniff" the network and record information that is transferred. In a sense, unencrypted emails will become very much like postcards. While a postcard may, in many cases, never be read by anyone except the intended recipient, there is a good chance that someone else will read it in transit. The same will apply to the way people think about unencrypted email.

Finally, there is the issue of spam.  One cannot rationally expect that spam will be completely eradicated, and such a situation would not necessarily be good: some users actually want spam!  However, one can expect that spam will become less and less of a nuisance for users, given the strong technical and legal barriers erected to protect against this annoyance.  Spam traffic will most probably settle at very acceptable levels, much like the junk mail one receives through the Post Office.  Eventually, while spam will remain, it will do so in small quantities and will be much more something that one quietly throws away before reading the important mail.

12.3    Law

Effects on the Encryption Debate

Given the architecture's strong reliance on cryptography, the architecture will increase the incentive to settle the issue of encryption.  The fact that law enforcement does not have an easy way of breaking encryption will be an obstacle to the implementation of the proposed architecture.  The exceptions in the architecture will probably not satisfy the law enforcement needs of the government.  Although there are exceptions in the architecture that allow for the breaking of encryption codes, the difficulty of breaking codes efficiently and cheaply gives the government a strong incentive to move toward a key escrow system.

However, as mentioned before,[154] public norms and technical shortcomings of the key-escrow scheme will keep it from becoming a reality.  The debate over encryption will probably intensify because of the greater use of encryption, but it will also favor more uses of cryptography as it becomes a more necessary part of the Internet.  The law enforcement concern will have to be addressed in another way.  The result will be that law enforcement will either have to crack the key or locate it in some other way.  Although this may seem to weaken the ability of law enforcement to intercept communication, it also pushes the need for technologically innovative solutions to the problem of encryption.

Fourth Amendment Implications

The wide use of encryption in the architecture raises interesting Fourth Amendment questions.  Traditionally, the Fourth Amendment debate has centered on the exclusionary rule, which excludes illegally searched or seized evidence.  Before the exclusionary rule, the remedy for a Fourth Amendment violation was legal action against the law enforcement agency, which proved ineffective.  In the proposed architecture, however, civil liability is clearly defined and enforced as an integral part of the architecture.

The legitimate use of private space on the Internet will allow the courts to look at the Fourth Amendment outside of a purely criminal context.  A Yale Law Journal article presents a hypothetical net-wide search, which uses a worm-like program to search for specific files (such as a well-known piece of child pornography or a pirated Microsoft program).  The container- and permission-based architecture may allow users to block such a program technically and also create a legitimate legal cause of action.  In this case, the Fourth Amendment analysis will occur before the evidence is obtained or the search occurs, rather than after the fact, where there is strong pressure to keep the incriminating evidence and thus to make decisions that erode the expectation of privacy.

12.4    Markets

Software Market

The first impact on the market will be an increase in demand for inexpensive encryption products. One can hope that healthy competition will produce better and cheaper encryption software. Filtering and labeling technologies will see a similar impact.

Impact on e-Commerce

The protection of private space and the increased safety of a more open network will allow for a greater variety of products and services on the Internet. Because the architecture makes the Internet more secure, it will strengthen the already diverse use of the network for financial services and for future applications where confidentiality is of great concern.
 

Containers as New CyberProperty

The concept of the container also allows for a more flexible arrangement of spaces on the Internet. Several arrangements for accounts already exist: rental or ownership of an ISP account, a mail account, or web space (e.g., an AOL account, a Hotmail account, or GeoCities). But the architecture gives "container" a more flexible meaning. One can envision renting an "Internet safe deposit box" protected by several layers of encryption, each with its own key. One could also rent "testing" spaces in which programmers can safely try out their programs. Uses of the Internet will thus become more innovative, fitting the varied needs of its users.
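
A minimal sketch of such a layered "safe deposit box," again in Python and using the third-party "cryptography" package for illustration only, might look as follows: each layer of the box is sealed with its own key, and opening the box requires every key, applied in reverse order.

    # Hypothetical sketch of an "Internet safe deposit box": contents wrapped
    # in several layers of encryption, each protected by a different key.
    from cryptography.fernet import Fernet

    contents = b"deed to the house; stock certificates"

    # Each layer gets its own key, perhaps held by a different party or service.
    layer_keys = [Fernet.generate_key() for _ in range(3)]

    # Seal the box: innermost layer first, outermost layer last.
    sealed = contents
    for key in layer_keys:
        sealed = Fernet(key).encrypt(sealed)

    # Opening the box requires every key in reverse order; a party holding
    # only the outer key can pass the box along but never read its contents.
    opened = sealed
    for key in reversed(layer_keys):
        opened = Fernet(key).decrypt(opened)

    assert opened == contents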

13 Conclusion

Over one hundred pages later, what have we learned? The law and technology of cyberspace are intertwined in such a way that it is impossible to solve the problem of unauthorized access using only one or the other. However, finding the right combination of these elements is, to say the least, a difficult task. As we have shown, common law trespass imposes too many unnatural restrictions when applied to cyberspace. Instead, a regime based on the concepts of containers and control allows for a more efficient and flexible treatment of these issues. Moreover, an architecture that seeks to uphold certain underlying principles, rather than common law doctrine, provides a more appropriate and flexible framework for combating unauthorized access in the rapidly evolving realm of cyberspace.

Appendix A: Acknowledgements

We owe our first thanks to Ms. Joanne Costello, Coordinator for IT Support Planning at MIT Information Systems and our project advisor. She consistently went above and beyond the call of duty: reading and critiquing our papers, traveling long distances to attend practice presentations, and even ordering Chinese food for meetings. In the end, we have a better whitepaper because of Joanne, but equally importantly, we had an easier and more enjoyable time creating it because of her help and guidance.

Professor Larry Lessig of Harvard Law School and Professor Hal Abelson of the Massachusetts Institute of Technology put together a fabulous course taught jointly to students of both schools. They then granted us a fascinating subject on which to focus our whitepaper, hard-working and talented partners with whom to research, and Joanne as our advisor! As if that were not enough, their guidance and input throughout the entire project made the process much easier and more illuminating for us.

Finally, we would like to acknowledge the fine cooking and delivery staff of Bertucci's Pizzeria, Pu Pu Hot Pot, Joyce Chen, Three Aces, and many other take-out establishments in the Cambridge, Massachusetts area. We couldn't have done it without you!

"This paper is brought to you by the number 6... and the letter A."

Appendix B: Credits

Credit by Section

Lauren Fletcher, Style Editor
Michelle Hong, Citations Editor

1. Introduction
    (Kristina Page, Overview; Lauren Fletcher, Introduction and Outline of Paper)

2. Real Space Trespass
    (Michelle Hong)

3. Cyberspace Trespass
    (Lydia Sandon)

4. Technical and Legal Introduction to Cybertrespass Cases
    (Lydia Sandon, Introduction and Technical sections; Michelle Hong, Legal sections)

5. Examples of Trespass in Cyberspace
    (Benjamin Adida, Technical sections; Enoch Chang, legal sections)

6. Current Strategies for Dealing With Trespass in Cyberspace
    (Lauren Fletcher)

7. Metaphors
    (Michelle Hong)

8. Goals
    (Lauren Fletcher)

9. Definitions
    (Lauren Fletcher)

10. Proposed Architecture
    (Lydia Sandon, Technical sections; Kristina Page, Legal sections; Combined, Introductory sections)

11. Evaluations
    (Lydia Sandon, Examples Revisited; Benjamin Adida, Feasibility; Enoch Chang, contributions)

12. Projections
    (Benjamin Adida, Code and norms; Enoch Chang, Law and markets)

13. Conclusion
    (Lauren Fletcher)

Executive Summary
(Lydia Sandon)

Credit by Author

Benjamin Adida
    5      Examples of Trespass in Cyberspace (Technical sections)
    11    Evaluations (Feasibility)
    12    Projections (Code, Norms)

Enoch Chang
    5     Examples of Trespass in Cyberspace (Legal sections)
    11   Evaluations (Contributions)
    12   Projections (Law, Markets)

Lauren B. Fletcher: Style Editor
    1     Introduction (Introduction, Outline of paper)
    6    Current Strategies for Dealing with Trespass in Cyberspace
    7    Metaphors (Trespass Metaphor in the Abstract)
    8    Goals
    9    Definitions
    13  Conclusion

Michelle Hong: Citations Editor
    2    Real Space Trespass
    4    Technical and Legal Introduction to Cybertrespass Cases (Legal)
    7    Metaphors
 
Kristina Page
    1    Introduction (Overview)
    10  Proposed Architecture (Legal, Combined)
 
Lydia Sandon
    3    Cyberspace Trespass
    4    Technical and Legal Introduction to Cybertrespass Cases (Technical, Introductions)
    10  Proposed Architecture (Technical, Combined)
    11  Evaluations (Examples Revisited)
         

Executive Summary

Endnotes