Sovereignty on the Internet:
Ways to Prevent Cyberanarchy
The Internet is truly at a crossroads right now in that more and more elements of society are recognizing its potential significance. Cyberspace is no longer composed simply of academic scientists and computer hobbyists. Instead, major newsmagazines have declared that this holiday season will be the first in which e-commerce plays a major role. Users can watch the grand jury testimony of a President on their computer screens, and web journalists like Matt Drudge can scoop more traditional television journalists such as Tom Brokaw. It is even a time when many a romance can be formed on AOL, and a major motion picture can be made that deals with such possibilities. Quite simply, cyberspace is, at least in some ways, becoming more like real space. A broad range of people from around the world are beginning to see cyberspace as a place where they can do just about anything, including buying gifts, watching "television", becoming informed, and even finding love.
But oddly enough, as cyberspace becomes at least in some respects more like real space, more real-space problems arise online. Individuals have to deal with things like fraud, defamation, and privacy. Companies have to worry about intellectual property. Nations must worry about preventing harms to their citizens, including harms their citizens may not want to avoid, such as gambling. In this context, numerous questions arise. For example, what is one to do if the e-merchant from whom you just bought a sweater happens to be nothing more than a few web pages on a server in Antigua designed to steal your credit card number? What should your state do if the e-merchant you happen to deal with is instead a gambling site in Antigua? What can and should be done if you decide to forego an e-merchant entirely and download songs from U2’s latest album at a free, unlicensed MP3 site? Can Iraq legitimately prosecute Matt Drudge if, by posting something on his web page, he breaks Iraqi law? Do I have any recourse if the web personals site I used to find love online sells my personal information to a marketer?
In the real world, governments have often dealt with issues like these by passing laws and by issuing regulations. More often than not, these laws prescribed a particular punishment to be meted out for a given kind of proscribed conduct. Such a system can work fairly well if it is clear who broke a particular law and which set of laws should be applied to a particular questionable transaction. However, for several reasons such a system may be much less effective in governing actions and interactions that take place on the Internet. To begin with, identity is still fluid on the Internet. It can be difficult if not impossible to link up an online identity with a corporeal being, and often it may even be hard to tell from which traditional nation-state an online user hails. So long as a user remains anonymous, she cannot easily be held accountable for her actions, at least not through the means usually employed by territorial governments. Second, there is little consensus as to what actions on the Internet should subject a user to the jurisdiction of a particular territorial government. There is some agreement that users should be responsible for the effects they produce in a particular country, but what does it mean to produce effects in a particular country when one is simply putting something in cyberspace? There is no simple answer to when posting something on the web should subject a person to liability in any country where that something can potentially be accessed.
None of this is to say that the above traditional model of governance has no place on the Internet. There are many situations where the above framework may still be workable. It is sometimes possible to determine the identity of a user, and future technology such as digital identification may make it easier. Nations may also come to some sort of agreement regarding what constitutes causing an effect in a particular country in certain contexts and thus who should have jurisdiction in those contexts. However, the reality is that both individuals and countries also often have very different conceptions about what type of conduct should be proscribed, and it is unlikely people from around the world will come to an agreement over this issue with regard to all of the types of conduct possible on the Internet.
In light of this fact, this paper proposes a couple of technology-based solutions to Internet governance problems that will hopefully give governments as well as users the ability to either block or avoid potential harms. The first solution this paper proposes can give a traditional territorial sovereign power to control what kind of activities its citizens participate in online. This solution relies largely on technology, but its successful implementation also depends on governments requiring service providers to implement it and may require governments to issue digital identification. Therefore, there will still be a need for more traditional avenues of rulemaking such as legislation and the treaty process.
Our second solution is an attempt to empower individuals and nongovernmental organizations. The hope is that this system will allow individuals to set up electronic zones on the Internet, each with its own set of rules created by the users and maintainers of the sites within the zone. Such a system will allow individuals to avoid what they consider harmful on the Internet and to find places where their conduct will be deemed acceptable. If successful, these zones might lessen the need for regulation by territorial sovereigns. A huge open question, though, is how well people can regulate themselves within these zones. When the Internet was small, self-regulation was reasonably effective, but it is not certain that a norms-based system can survive now that the Internet has such a diverse user base. Moreover, this system will still rely on territorial sovereigns to an extent. Again, some form of digital ID may be needed for zones to hold members accountable for rule violations, and if these zones are based on some form of contractual relationship, a traditional sovereign may be needed in extreme cases to provide remedies for breaches of those contracts.
Finally, it is important to note that the viability of any legal or technical solution this paper proposes also depends largely on who ends up shaping the technical contours of the Internet. Therefore, sovereignty over the architecture of the Internet is just as important an issue as sovereignty over what people can do on the Internet. In the last few months, a great deal has happened in this area, namely the formation of an organization called ICANN. There has also been a great deal of pressure to let the private sector play a significant role in determining the structure of the Internet.
A Brief Road Map
This paper begins with a brief overview of both the history of traditional sovereignty and the history of the Internet. It then explains how traditional sovereigns are trying to use existing legal structures to gain control on the Internet and some of the difficulties they are encountering in their attempts. A case study on gambling follows that illustrates some of these difficulties in greater detail.
The paper then deals with the significant question of who will have the power to shape the Internet and thus influence who can exercise control on it. This leads to an analysis of the roles ICANN and other current standards making bodies are likely to play.
From there the paper examines the two technical solutions to Internet governance mentioned above and considers both the feasibility and desirability of those solutions from the perspectives of current governments, engineers, and individuals. Finally, it looks at the advantages and disadvantages of letting a decentralized market take control of governance on the Internet and questions whether or not norms based Internet governance is still possible in some situations.
What is Sovereignty, Anyway?
For the purposes of this white paper, sovereignty will refer to control. Who, or what, is in control of what a person can do on a network, and what form that network takes? This paper will address not only who controls the Internet, but also who should be in control, in order to facilitate a workable and fair system of Internet governance.
A. Definitional History of Sovereignty
According to one possible definition, sovereignty refers to an attribute of a powerful individual. Historically, sovereigns controlled territory because of divine or historic authority; an authority that had little, if anything, to do with the consent of the people. Sovereignty was based primarily on geographical borders: another political power could violate a ruler's sovereignty by entering the territory of the sovereign without his permission. The American and French Revolutions pioneered systems under which sovereignty really stemmed from the desires of the people, and the definition of the word has never been the same since. During the framing of the U.S. Constitution, the debate raged over whether sovereignty over a territory and its inhabitants stems from the ruler or from the inhabitants themselves. Americans concluded that sovereignty originates in, and remains with, the people themselves. Most other nations never re-evaluated sovereignty in the intense way that the United States did; however, the definition of the term morphed over time. Now, far from its totalitarian roots, the term sovereignty is often used synonymously with self-determination, and the battle for sovereignty is the battle for independent control of one's territory or self. Still, sovereignty is "supreme authority, which on the international plane means legal authority that is not in law dependent on any other earthly authority," and international law still protects sovereignty, but the sovereignty of the people over themselves rather than the sovereignty of a so-called "sovereign." In recent years, commentators have suggested that the model of sovereignty has begun a shift from a horizontal model, with several sovereigns of somewhat equal power, to a more vertical, hierarchical model, where sovereigns are ruled by greater sovereigns and overlapping sovereignty exists at the national level as well as the personal level.
People are often the subjects of overlapping hierarchical sovereigns. Every American, for example, is subject to the sovereignty of at least two sovereigns: the U.S. and their state of residence. Most are also subject to even more overlapping sovereigns - their employer, their church, their family, their school. These overlapping sovereigns have differing levels of control over certain areas of life, but the hierarchy of control among these sovereigns is generally clear. On the Internet, the hierarchy of sovereigns is not so well-defined. A person's employer may have at least as much control over what he can do on the Internet (through the use of firewalls, proxy servers, and other mechanisms) as his national government might have.
B. The Internet as a Territory to be Governed
Because the fundamentals of sovereignty have historically been territory-based, the Internet subverts the traditional notion of sovereignty. As its name suggests, cyberspace may be interpreted as a "space" apart from geography. The location of information in cyberspace can be viewed as a location of its own, in the ether between two communicating computers. Although actions taking place in cyberspace may have effects in the locations of both the sender and receiver of information, the actions take place in a third "space," neither the location of the sender nor the location of the receiver. This conception is inconsistent with the traditional basis of sovereignty, however. Until now, national borders have almost always been the basis for the creation and administration of laws. The current system of legal governance is based on the geography of national borders. Nations make laws governing citizens of the nation, conduct taking place in the nation, and conduct having an effect in the nation. Although the people who use the Internet are at all times governed by their real-space sovereigns, the sovereignty, or control, of the Internet itself and actions taking place within it remains up in the air.
Since the first embryonic beginnings of the Internet, parties have vied for power over cyberspace and its inhabitants. Only relatively recently have national governments and their legal systems asserted control in the traditional way discussed above.
Computer Networking and Sovereignty: Historical Perspective
To really understand the answer to the question, "Who's in charge of the Internet?" it is appropriate to explore the history of computer networking in general. To varying degrees, the construction of the Internet we know today was a government research project, a scientific experiment, a commercial enterprise, and a spontaneous grass-roots effort. Many of the people and organizations that hold power over the physical and virtual architectures of the Internet do so because they inherited such power. On the other hand, some of the "reins" have changed hands over the years in very interesting ways.
In the Beginning
The most direct ancestor of the Internet was an experimental computer network called the ARPANet. ARPA (renamed DARPA in 1972) was the Advanced Research Projects Agency, part of the Department of Defense. Motivated by Cold War concerns, the DoD was funding networking research at universities and research centers across the country. The ideas behind ARPANet arose from three parallel research efforts, running from 1961 through 1968, at the Massachusetts Institute of Technology, Rand Corporation, and the National Physical Laboratory in the United Kingdom.
The equipment on the network was paid for by ARPA-funded research institutions; the connecting hardware was built under ARPA contract by Bolt Beranek and Newman, Inc. (now owned by GTE). In 1969, the first four nodes on the net were at universities in the Western states. By 1971, there were fifteen nodes at various U.S. research centers and universities, and in 1973, the first international connections were made to University College London, in England, and to the NORSAR installation, in Norway. The network was thus under the tight control of the U.S. Department of Defense and its partners.
ARPANet was not an "internet," because it was really only a single network of computers, all running NCP, the Network Control Protocol. The term "internet," of course, refers to a larger network that results from the combination of two or more smaller, clearly distinct networks. (We refer to _the_ Internet, with a capital "I," because it is the only network of international proportions that runs using IP, the Internet Protocol.) For our purposes, the slew of computer networks that followed came in three varieties.
In 1974, Telenet debuted from BBN, and in 1977, Tymshare introduced Tymnet. These allowed computer users (mostly academics, developers, and hobbyists) to connect their computers to distant servers via dedicated landlines. This provided a significant savings over making long distance phone calls.
Several years later, modern dialup services began to appear. TRINTEX was founded by IBM and Sears in 1984, becoming Prodigy in 1988. In 1985, Quantum Computer Services (later America Online) began doing business, making use of Telenet and Tymnet. Overseas, in 1982, France Telecom rolled out Minitel, a metered collection of commercial and information services. Two years later, there were over 800,000 users and 1,000 services from which to choose.
Many computer enthusiasts circa 1980 ran Bulletin Board Systems (BBSes) wherein anyone could connect to an individual's personal computer via modem. In 1983, amateur operators formed a self-governing, volunteer-run organization called FidoNet, which allowed for the efficient distribution of e-mail and news via the phone system.
Academic and internal corporate networks
As word of the utility of networking applications (like e-mail) made its way around corporate and academic circles, a plethora of acronym nets running a variety of different protocols emerged. It seemed that everyone wanted on the bandwagon -- there was everything from Asia's JUNet (the Japan UNIX Network) to academia's HEPNet (the High Energy Physics Network) to business's DECNet (of the Digital Equipment Corporation).
When considering who owned, operated, funded, and benefited from these networks, the arrangements seem somewhat complicated. For example, BITNET (Because It's Time Network) was a cooperative network founded at the City University of New York and Yale University in 1981. No doubt partly because it used IBM's mainframe platform and protocol, Big Blue funded the network from 1983 to 1987. In particular, IBM helped establish a gateway to the European Academic and Research Network (EARN). During this time, the Canadian government launched a successful effort, called NetNorth, to put all of its universities on BITNET.
Another collaborative effort between academia and industry, started in the same year as BITNET, was the Computer Science Network (later the Computer and Science Network). CSNet was built through a collaboration of academics at Purdue University and the Universities of Delaware and Wisconsin, and corporate scientists at Rand and BBN. Seed money was provided by the National Science Foundation, in an effort to provide connectivity to universities not on ARPANet.
Usenet was started in 1979 by three grad students at Duke University and the University of North Carolina at Chapel Hill, as a web of dialup connections using the Unix-to-Unix CoPy program (UUCP). It was characterized by a strong philosophy of open access and democratic anarchy; the network was sometimes referred to as the "poor man's ARPANet."
Usenet is defined by the reach of the newsgroups, which are created by the anarchistic process of convincing operators all over the network to carry them. Most early news readers were researchers at universities or corporations who found the information they carried useful or socially relevant. Sometimes, institutional equipment would be networked without official approval from management. In other cases, employees were able to convince their administrators that the intellectual benefits from participating in news exchange justified the telecommunications costs.
Something of an administrative culture clash arose when the new UC-Berkeley node on Usenet, which was also an ARPANet site, began (probably without "official" approval) forwarding ARPANet e-mail traffic in newsgroup form. Many Usenet readers became upset that they were not allowed to _send_ to these lists. Even NSF needed to obtain special permission from the Department of Defense before it could use the ARPANet to connect two remote parts of CSNet. The prohibition on receiving Usenet traffic was lifted in 1981.
The One True Internet
The Birth of the Internet
The concepts of modern internetworking technology were formalized in the research community by Robert Kahn and Vint Cerf at a meeting of the InterNetworking Working Group, held in 1973 at the University of Sussex, England. A decade later, on January 1, 1983, the ARPANet switched over to the TCP/IP protocols. (The Transmission Control Protocol is a reliability layer that runs on top of the routing-layer Internet Protocol.)
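The division of labor between the two layers can be made concrete with a small sketch: applications see a reliable, ordered byte stream (TCP), while the network underneath merely routes individual packets (IP). The sketch below, in Python, stands up a throwaway server on the loopback address and exchanges a message with it; all names, ports, and message contents are our own invention for illustration.

```python
# Illustrative sketch: TCP (the reliable byte-stream layer) riding on IP.
# A tiny loopback server and client; addresses and payloads are invented.
import socket
import threading

def run_server(sock):
    conn, _addr = sock.accept()      # wait for one client connection
    with conn:
        data = conn.recv(1024)       # TCP delivers the bytes intact and in order
        conn.sendall(b"ack:" + data)

# SOCK_STREAM selects TCP; the kernel handles the IP routing underneath.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # the TCP handshake happens here
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())                # ack:hello
```

Neither endpoint ever deals with lost or reordered packets; that bookkeeping is exactly what TCP adds on top of IP's best-effort delivery.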
Many of the networks mentioned heretofore were then connected to the ARPAnet. Prodigy eventually launched an Internet e-mail gateway; FidoNet was connected in 1988. NNTP, the Network News Transfer Protocol, was designed to carry newsgroups efficiently over TCP/IP on the Internet.
Many of the old networks were made obsolete by the explosion of new connectivity on the Internet. ARPANet (after spinning off MILNET) was decommissioned in 1990; CSNet was shut down in 1991, and BITNET was scheduled to be shut down in 1996.
NSFNet and privatization
By 1985, the National Science Foundation had realized the incredible benefits that e-mail and other forms of electronic communication had to offer the scientific community. It sought to bring connectivity to institutions of higher learning in a cross-disciplinary manner. NSF had also decided to include its project within the Internet framework built by DARPA, and coordinated this effort through a variety of governmental and technical bodies; still other bodies mediated international cooperation.
The agency contracted out responsibility for establishing and maintaining a high-speed (for the time) backbone for "NSFNet" to Merit Network, Inc., with IBM and MCI involvement. (The group later formed Advanced Network and Services, Inc.) NASA and the Department of Energy also contributed infrastructure to the new network. Once again, research institutions, public and private alike, enjoyed the benefits of operating pieces of the global network on the government dollar.
However, the government architects also had eventual privatization in mind the entire time. NSFNet policies were carefully tailored to achieve this end. The government-funded network provided a model for private development, showcasing both local and long-distance data transport mechanisms, high level peering structures (called Federal Internet Exchanges, FIXs), and interoperability with the rest of the Internet.
The agency also directly promoted privatization through the academic funding it provided. Government policy simultaneously encouraged local network operators to seek commercial customers (saving money through economies of scale) and prohibited commercial traffic on the NSFNet backbone. Long-distance data carriers like Performance Systems International, Inc., General Atomics, and UUNET Technologies, Inc. provided the commercial equivalent of NSF's "long-haul" services, forming the Commercial Internet eXchange (CIX) Association in 1991. Meanwhile, local ISPs began clustering around network access points and retailing capacity to individual and corporate customers.
By 1995, the commercial infrastructure had progressed enough to allow the privatization of the NSFNet backbone. The agency redirected its funding to enable regional academic customers to purchase long-distance connectivity from the private sector. (It also created the "new" NSFNet for high-bandwidth connections among supercomputing centers and universities, called vBNS, the Very high speed Backbone Network Service.)
"Killer apps" out of control
By the early 1990s, e-mail and newsgroups had exploded in popularity across academia, the corporate world, and limited segments of everyday society. In May 1991, the third "killer app" arrived on the cyber scene. The first web servers went online at CERN (the French acronym for the European Laboratory for Particle Physics), publishing the technical specifications for HTTP, HTML, and the URI (Uniform Resource Identifier: think "URL"), and soon providing sample client/server implementations.
Why CERN? Why 1991? The hypertext concept, so familiar to us today, was certainly not new. The term itself was coined in 1965 by Ted Nelson, though Vannevar Bush had described a hauntingly similar idea in a 1945 paper on the "Memex." In 1967, Andries van Dam of Brown University had built the Hypertext Editing System, which NASA used to help manage complex documentation in the Apollo Program. Networking had been around since at least 1968, but the combination of the two never really caught on until the CERN project.
Perhaps the world wasn't ready for HTTP - maybe people with the right sort of expertise had not yet encountered problems of the type that hypertext solved. More likely, the networked hypertext idea had simply never been presented in a form that a critical mass of users would adopt. Most relevant to the interests of sovereign powers, however, is that no matter who conceived the World Wide Web, once the technology began its inexorable spread, it would be very difficult to control. The technology introduced a new, much more decentralized paradigm for communication, which allowed individuals to elude the control of geographically constrained governments. Virtually anyone, from a multinational corporation to a private, middle-class individual, can feasibly produce web content, and even more people can view that content.
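Part of what made the web's spread so hard to control is how little machinery it takes to publish and fetch a page: a server answering HTTP requests, a client naming a resource by URL, and HTML in between. The sketch below illustrates that whole loop using Python's standard library against a throwaway local server; the page content, address, and handler names are invented for illustration.

```python
# Illustrative sketch: the web's publish/fetch loop (HTTP + HTML + URL).
import http.server
import threading
import urllib.request

PAGE = b"<html><body>hello, web</body></html>"   # invented sample HTML

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):                  # answer any "GET" request with our page
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)
    def log_message(self, *args):      # silence per-request logging
        pass

# "Publishing" is just running a server; port 0 lets the OS pick a free port.
httpd = http.server.HTTPServer(("127.0.0.1", 0), Handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# "Viewing" is naming the resource by URL and issuing an HTTP GET.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    body = resp.read()
httpd.shutdown()
print(body == PAGE)                    # True
```

Nothing in this exchange depends on where the two machines sit; replace the loopback address with any reachable host and the same few lines publish to, or read from, anywhere in the world.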
Even more importantly, the popularity of the web expanded far beyond the research community and, in the past few years, has enjoyed unprecedented interest from new sectors of society. The sudden wave of commercial competition for scarce resources, as well as the immense cultural, legal, and interpersonal conflicts created by the peoples of the world bumping elbows in cyberspace, have left us in the current state of institutional crisis.
Principles of Jurisdiction
One of the advantages of the Internet over other methods of communication and commerce is that it enables access to a much wider, even a worldwide, audience. Spatial distance and national borders are irrelevant to the creation of Internet businesses, many of which are conceived for the express purpose of expanding sales horizons across borders. In a sense, a person can be everywhere in the world, all at once. This ease of communication raises a vital legal question, however: when a person puts up a website on his home server and allows access to it from all points on the globe, does he subject himself to the governance of every law- and rule-maker in the world? Under the current system, in order to decide which state's or nation's laws govern disputes that arise over Internet issues, a court first must decide "where" Internet conduct takes place, and what it means for Internet activity to have an "effect" within a state or nation.
Even apart from the Internet, this border-centric view of the law creates certain difficulties in an economy moving toward globalization. Entire bodies of law have been developed by every nation to deal with the resolution of international conflicts of law, conflicts that arise when geography and citizenship would allow a dispute to be decided by the laws of more than one country, and the laws of those countries are not consistent with each other. Conflicts of law are particularly likely to arise in cyberspace, where the location of an occurrence is never certain, where ideological differences are likely to create conflicting laws, and where rules are made not only by nations and their representatives, but also by sub-national and transnational institutions.
II. The test currently in force
A. In the United States
A court does not have power over every person in the world. Before a court may decide a case, the court must determine whether it has "personal jurisdiction" over the parties. A plaintiff may not sue a defendant in a jurisdiction foreign to the defendant, unless that defendant has established some relationship with that forum that would lead him to reasonably anticipate being sued there.
In the U.S., the Due Process Clause of the Constitution's Fourteenth Amendment sets the outermost limits of personal jurisdiction. If a party has substantial, continuous, and systematic contacts with the forum, a court may exercise jurisdiction over that party for any dispute, even one arising out of conduct unrelated to the forum. This is known as general jurisdiction. For example, a corporation or person can always be sued in its state of residence or citizenship or its principal place of business, regardless of whether or not the claim arose there.
If a party is not present in the state or does not have systematic and continuous contacts with the state, courts may exercise jurisdiction over that party for causes of action arising out of his contacts with the state, or arising out of activities taking place outside the state expressly intended to cause an effect within the state. This "effects" test derives from the American Law Institute's Restatement (Second) of Conflict of Laws 37 (1971), which provides:
"A state has power to exercise judicial jurisdiction over an individual who causes effects in the state by an act done elsewhere with respect to any cause of action arising from these effects unless the nature of the effects and of the individual's relationship to the state make the exercise of such jurisdiction unreasonable."
To do this, the court must look to the state's "long-arm" statute, which sets the parameters for the state's exercise of its constitutional power to govern conduct by non-citizens (including both Americans and foreigners). Long-arm statutes vary widely from state to state. For example, Arizona grants the broadest possible freedom to its courts: "Arizona will exert personal jurisdiction over a nonresident litigant to the maximum extent allowed by the federal constitution." New York, on the other hand, gives a more restricted and specific charge to its courts with its statute, which allows personal jurisdiction over those who transact business or commit a tortious act within the state of New York, and over those who commit an act outside the state that could reasonably be expected to have a tortious effect within New York. The federal courts have the equivalent of a long-arm statute of their own, in Federal Rule of Civil Procedure 4(k) (Rule 4(k)), which provides three basic grants of jurisdiction. First, it authorizes federal courts to "borrow" the long-arm statute of the state in which the federal court is located. Second, Rule 4(k) authorizes federal courts to exercise grants of personal jurisdiction contained in federal statutes, such as the federal securities and antitrust laws, which have their own jurisdiction provisions. And third, Rule 4(k)(2) grants long-arm jurisdiction in an international context, within the boundaries of the Constitution, over parties to cases arising under federal law who are not subject to the jurisdiction of any particular state. The concept of being able to have minimum contacts with the United States as a whole has profound implications for the Internet and international jurisdiction. Users all over the world, without establishing contacts in any particular state, could establish contacts with the entire country with nearly every foray into cyberspace.
In order to be subject to personal jurisdiction in a state that is not his domicile, not only must a person fit under the ambit of the state's "long-arm" statute, but also the state's jurisdiction must be valid under the Due Process Clause of the Fourteenth Amendment. The Supreme Court set the standard for constitutional exercise of jurisdiction in International Shoe Co. v. Washington. Pursuant to the Due Process Clause, a nonresident defendant may not be sued in a forum unless it has first established sufficient "minimum contacts with [the forum] such that the maintenance of the suit does not offend traditional notions of fair play and substantial justice." In addition, the nonresident's "conduct and connection with the forum [must be] such that he should reasonably anticipate being haled into court there." This test relies on courts to decide, according to "traditional notions of fair play and substantial justice," what contacts are sufficient.
Courts will generally hold that contacts are sufficient to satisfy due process only if the nonresident "purposefully availed" itself of the benefits of being present in, or doing business in, the forum. According to the plurality of the Supreme Court in Asahi Metal Industry v. Superior Court, a connection sufficient for minimum contacts may arise through an action of the defendant purposefully directed toward the forum State. The placement of a product into the stream of commerce, without more, is not an act of the defendant purposefully directed toward the forum State, but advertising or marketing in the forum state may fulfill the purposeful availment requirement. There must be clear evidence that the defendant sought to serve the particular market.
If the minimum contacts test is met, a court may only exercise jurisdiction if it is "reasonable" to do so. In determining reasonableness, a court must weigh and consider the burden on the defendant to litigate in the forum, the forum state's interests in the matter, the interest of the plaintiff in obtaining relief, efficiency in resolving the conflict in the forum, and the interests of several states in furthering certain fundamental social policies.
In sum, under U.S. law, if it is reasonable to do so, a court in one state will exercise jurisdiction over a party in another state or country whose conduct has substantial effects in the state and whose conduct constitutes sufficient contacts with the state to satisfy due process. Because this jurisdictional test is ambiguous, courts in every state of the U.S. may be able to exercise jurisdiction over parties anywhere in the world, based solely on Internet contacts with the state.
There is little dispute that nation-states can prosecute Internet users (or anyone else, for that matter), whatever their location, for revealing national secrets, falsifying official documents, or inciting war. These activities threaten national security, wherever they are committed, and therefore fall under international standards for jurisdiction. Similarly, it is a universal crime to publicly incite torture or genocide. These universal offenses may be prosecuted extraterritorially by any nation, regardless of the citizenship or location of the user.
These are easy cases, however. Nations may also be interested in enforcing non-universal laws extraterritorially; for example, in Germany, it is illegal to import or distribute material espousing a Nazi or Neo-Nazi viewpoint. Such material is not difficult to find on Usenet or on the World Wide Web. German authorities may be interested not only in interpreting German laws to classify Internet viewing as "importation" of material, but also (in part because of the difficulty of locating those who break an importation statute without leaving their own homes) in prosecuting those who make such material available to Germans via the Internet. If German authorities attempted to prosecute a U.S. citizen or resident for such an offense, however, they would be met with great opposition by the U.S., which certainly would not enforce any judgment against the U.S. citizen in such a case, because the German statute violates U.S. Constitutional principles. Under U.S. law, it would be prohibitively difficult to prevent German users from viewing such a site, and therefore the result of such a prosecution would be to chill otherwise legal (if unpleasant) speech in the U.S. Under the current system, it is possible to envision that German courts may have jurisdiction over Americans who publish such material, even though the material may not be "purposefully directed" (one interpretation of the American standard) toward Germany in the way a mailing of flyers would be.
As discussed above, U.S. courts apply the same "effects" test to foreign parties as to American parties. If minimum contacts exist, parties from other countries may be haled into court in the United States just as parties from one state may be haled into another. Similarly, Americans may be tried by courts in other countries depending on the rules of that country. Although each country's laws are different, most rely on some sort of "effects" test resembling the U.S. test, whereby a party is subject to jurisdiction in a place where his conduct has an effect. This jurisdiction traditionally is subject to a "reasonableness" test. According to section 421 of the Restatement (Third) of the Foreign Relations Law of the U.S., exercise of jurisdiction is generally reasonable if the party is a citizen, resident, or domiciliary of the state, or if:
. . . (g) the person, whether natural or juridical, has consented to the exercise of jurisdiction;
(h) the person, whether natural or juridical, regularly carries on business in the state;
(i) the person, whether natural or juridical, had carried on activity in the state, but only in respect of such activity;
(j) the person, whether natural or juridical, had carried on outside the state an activity having a substantial, direct, and foreseeable effect within the state, but only in respect of such activity; or
(k) the thing that is the subject of adjudication is owned, possessed, or used in the state, but only in respect of a claim reasonably connected with that thing.
This standard differs somewhat from the U.S. standard for interstate exercise of jurisdiction; for example, transitory presence (known as "tag" jurisdiction), accepted in the U.S., is not generally accepted as a method of international jurisdiction.
Every nation has an obligation to exercise moderation and restraint in invoking jurisdiction over cases that have a foreign element, and should avoid undue encroachment on the jurisdiction of other States. Although countries are given great discretion in deciding whether to exercise jurisdiction over conduct in other countries, a country that exercises its jurisdiction in an overly self-centered way not only contravenes international law, but can also "disturb the international order and produce political, legal, and economic reprisals."
Based on this traditional moderation, and the relatively high threshold of the "reasonableness" standard discussed above, it is unlikely that foreign nations will have the sort of long-arm power over citizens of other nations as states have over citizens of other states within the U.S. today. Scholars have suggested that individual persons and small commercial entities whose only contacts with a nation are on-line are, in all likelihood, more insulated from international jurisdiction than they are from interstate jurisdiction. This is largely speculative, however, because international Internet jurisdiction cases have thus far been rare, and nations have not hesitated to pass laws conferring global jurisdiction for Internet activities.
III. Application of the "effects" test to the Internet
A. In the United States
The Supreme Court has not discussed the impact that technology might have on the analysis of personal jurisdiction. Lower courts, on the other hand, have explored the question of cyberspace jurisdiction. While most have held that merely creating and hosting a website available to all does not subject a person to general jurisdiction everywhere in the U.S., they diverge widely as to whether the presence of such a site will lead to specific jurisdiction over the party for the purposes of disputes arising from the website.
Some decisions suggest that a court may obtain personal jurisdiction over a non-resident defendant whose sole contact with the forum state arose through the Internet. Examples of these include: CompuServe, Inc. v. Patterson, Zippo Manufacturing v. Zippo Dot Com, Inc., Panavision International, L.P. v. Toeppen, and Maritz, Inc. v. Cybergold. In each of these cases, Internet contacts with the forum state exceeded those of a passive website: In CompuServe, the defendant knowingly reached out to and did business with CompuServe, knowing that CompuServe was an Ohio corporation. In addition, the dispute arose out of contacts with the forum state. In Zippo, the defendant's site required participants to submit address information in order to receive a news service; therefore, the site operators knowingly transacted business with residents of the forum state, where the plaintiff was headquartered. In Panavision, the defendant had set up his web site as part of a "scam" to make the plaintiff purchase the domain name from him, and as such had intentionally directed his actions toward the plaintiff's home state. In Maritz, the defendant's site invited users to send and receive information about services it offered, and the defendant company had sent information to over 100 users in the forum state. The court found that "[a]lthough [defendant] characterizes its activity as merely maintaining a 'passive website,' its intent is to reach all Internet users, regardless of geographic location."
Two other recent decisions, in declining to exercise jurisdiction, support the notion that passive Internet sites are not sufficient to support jurisdiction. In McDonough v. Fallon McElligott, Inc., a Minnesota defendant had displayed plaintiff's photographs on the Web without plaintiff's consent, in possible violation of California copyright and unfair competition laws. The Southern District of California held that: "Because the Web enables easy world-wide access, allowing computer interaction via the Web to supply sufficient contacts to establish jurisdiction would eviscerate the personal jurisdiction requirement as it currently exists . . . . Thus, [having] a Web site used by Californians cannot establish jurisdiction by itself." Similarly, in Bensusan Restaurant Corp. v. King, the Southern District of New York held that the operator of a small Missouri jazz club called "The Blue Note" did not subject himself to New York's trademark laws merely by erecting an advertising site on the Web.
The New York district court's holding in Bensusan is at direct loggerheads with the District of Connecticut's holding in Inset Systems, Inc. v. Instruction Set, Inc. In Inset, a party utilizing the trademark of another company for its domain name and "800" number was subject to jurisdiction in the home of the party whose mark was infringed. Also in seeming conflict with Bensusan and most other U.S. interstate Internet jurisdiction cases, the Federal Circuit found in Graphic Controls Corp. v. Utah Medical Prods., Inc., that a Utah corporation's activities, which included having an open-access website for ordering goods, having an "800" number, having meetings in New York unrelated to the cause of action, and sending "cease and desist" letters to a party in New York, did not constitute minimum contacts with New York. In similar conflict with the above cases, the Southern District of New York held in Hearst Corp. v. Goldberger that creating a commercial and interactive (though not yet operational at the time of litigation) website that was available to, and used by, New York residents was not in itself enough contact to subject a publisher to New York jurisdiction. The District court found that exercising jurisdiction would violate traditional notions of fair play, and noted that the site operator did not purposefully direct his activities toward New York.
The disagreements between the cases above illustrate some of the variety among courts as to the proper approach to take when dealing with Internet jurisdiction. Approaches differed greatly, even among some of the above cases having similar final outcomes. States have not regularized an approach to the Internet, preferring to analogize it to real space. Erecting a website has been compared to publishing in a widely distributed general-interest magazine or putting an item (with the capacity to travel) in the stream of commerce by selling it locally. As the above cases illustrate, courts seem to be taking an approach resembling that recently laid down by the Ninth Circuit Court of Appeals in Cybersell, Inc. v. Cybersell, Inc., which held that the mere presence of a passive website on the Internet does not constitute the minimum contacts needed to subject a person to the jurisdiction of every court and that "something more," either interactivity or purposeful direction, is needed to justify jurisdiction. What degree of interactivity is required to constitute minimum contacts, however, remains largely unclear from case law. Under the rule set forth in Cybersell, a court would decide whether a website creates minimum contacts by examining the degree to which the site is commercial and interactive, and the degree to which the site is directed at citizens of the forum state. The more interactive a site is (i.e., the more exchange of information is possible between the site and the user), and the more commercial the site's nature, the more likely a court is to find that contact exists between the site owner and the distant user. Similarly, the more the site is directed at an audience in the forum state or designed to harm citizens of the forum state, the more likely a court will be to find that purposeful availment has occurred. Still, the Supreme Court has not addressed the issue of personal jurisdiction in cyberspace, and many details remain unresolved.
U.S. courts have, basically, shoehorned Internet cases into the same jurisdictional rules that they use for non-Internet cases, with the result that U.S. courts lean toward limiting jurisdiction, regulating only sites that intentionally direct themselves into the U.S. in some way. Other countries have not so limited their courts. Several examples illustrate that jurisdictional issues are at least as severe and jumbled in the international context as they are within the domestic U.S.
In the United Kingdom, the Financial Services Act of 1996 makes it a criminal offense to place investment ads in the U.K. unless they are issued or approved by the Financial Services Authority (FSA). In early 1998, the FSA notified the national U.S. mutual fund association that mutual fund Web sites which can be brought up on a screen in the U.K. are considered to have been issued in the U.K. This could have had a profound impact on the way in which U.S. mutual fund sites operated; however, the FSA stated that it would not take enforcement action against U.S. companies that complied with certain FSA regulations, including placing disclaimers or warnings on their Web sites.
Germany has passed a sweeping law that subjects any Web site accessible in Germany to German law, holding Internet service providers (ISPs) liable for violations of German content laws if the providers were aware of the content and were reasonably able to remove the content. This followed the settlement of a well-publicized incident between Germany and CompuServe, in which German authorities threatened to prosecute CompuServe for allegedly pornographic news groups. In response to the German threat, CompuServe blocked access to those newsgroups for all of its users, approximately 4 million worldwide. Later, CompuServe restored access and distributed free software for blocking pornography. This led to CompuServe's indictment for aiding in the distribution of pornography and computer games; prosecutors charged that CompuServe did not do enough to block Germans from accessing the material.
Malaysia's new cyberspace law also extends well beyond the borders of Malaysia. The bill applies to offenses committed by a person in any place, inside or outside of Malaysia, if at the relevant time the computer, program, or data was either (i) in Malaysia or (ii) capable of being connected to or sent to or used by or with a computer in Malaysia. The offender is liable regardless of his nationality or citizenship.
IV. Conflicts of Law
As mentioned above, the Constitution and states' long-arm statutes may permit court jurisdiction over out-of-state conduct, depending on the specific long-arm statute and the conduct involved. This means that many states may have concurrent jurisdiction over the same conduct. A similar situation exists in the international context. Because it is generally accepted as a matter of international law that nations may govern conduct of citizens of the nation taking place outside the nation, conduct by non-nationals that takes place elsewhere but has significant and intended effects in the state or nation, conduct that threatens the sovereignty or security of the nation, and conduct that constitutes a universal crime such as torture and genocide, many situations may arise in which several nations' laws could govern the same conduct. To use a real-space example, imagine that A (an American shipping company) ships a batch of B's widgets from New York to B in Belgium, by way of France. The widgets are damaged during the French stopover, and this damage gives rise to a cause of action in tort between A and B. Assuming that A had significant enough contacts with both France and Belgium to warrant jurisdiction in both courts, B could sue A in the U.S., in France, or in Belgium, depending on which legal system would treat B more favorably. Additionally, B could sue in U.S. court but request that the court apply Belgian law to the dispute, or sue in Belgian court but request that the court apply French law, or any other combination of courts and laws.
The many applicable laws will not necessarily be substantively compatible. Different states and nations will have different interests and each will want its laws to govern each dispute. This situation becomes especially acute when laws are not only inconsistent, but also incompatible; for example, in some states of the U.S., it is illegal to provide or engage in Internet gambling, but in Liechtenstein, such gambling is government-sponsored. Although the situation of inconsistent laws occurs with moderate frequency now (especially in the antitrust and securities fields), it is likely to become even more common as cyber-commerce becomes more prevalent. This is because, in cyberspace, cross-border transactions are no more difficult than transactions with local parties.
When conflicts of law arise, courts must decide which law will govern. A court need not decide a dispute according to its own law; for example, a court deciding a dispute arising out of an automobile accident in another state would be likely to apply the driving standards of the state where the dispute arose, rather than of the forum state. Several methods exist to aid courts in the decision between laws. Historically, U.S. courts decided a dispute according to the law in the lex loci delicti, the "place of the wrong." In transnational cyberspace, however, the place of the wrong might be any of the nations that are on-line. There is no lex loci delicti.
The Restatement (Second) of Conflicts of Law rejected this historical formulation, preferring the so-called "most significant relationship" test, which values (1) the needs of the international system; (2) relevant policies of the nation in which the suit was brought; (3) the relevant policies of all interested states; (4) justified expectations of the parties; (5) certainty, predictability, and uniformity; and (6) ease of administration.
Several other approaches to choice of law have also been posited and accepted by some courts. The "center of gravity" approach, first adopted by the Court of Appeals of New York, might be characterized as a simplified version of the "most significant relationship" test of the Second Restatement. This approach authorizes courts to look at all the existing contacts between the various parties to a suit and various jurisdictions. Ultimately, the court should choose the law of whatever jurisdiction is most closely tied to the case.
Legal scholar Brainerd Currie espoused the "interest" approach, which encouraged courts to look to the history of the applicable laws and, if the laws of one state could be applied without impairing the other state's interests, those laws were to apply. In the case of a true conflict, in which one state's interests would always be impaired, Currie suggested using the law of the forum. California has accepted this approach, but instead of automatically applying the law of the forum in true conflicts cases, applies a "comparative impairment" analysis and applies the law of the state that creates the least impairment.
Finally, Professor Robert Leflar has devised a test in which courts consider (1) predictability of result, (2) maintenance of interstate and international order, (3) simplification of the judicial task, (4) advancement of the forum's governmental interests, and (5) application of the better rule of law.
Currently, U.S. states and the U.S. itself take a variety of approaches; none of the above approaches has been universally accepted.
Interestingly, most approaches other than the "place of wrong" approach eliminate the need to decide "where" the conduct in question occurred before deciding what law governs (although determining the location of an action may help create the list of nations' laws from which to choose). As the few reported cases show, however, courts may ignore traditional choice-of-law principles entirely and simply apply forum law to Internet-related disputes. Indeed, at least one state, responding to the problem of Internet-based gambling, has announced an intention to apply its own law to lawsuits resulting from in-state Internet contacts. The Minnesota Attorney General’s office has interpreted existing Minnesota law to prohibit all forms of on-line gambling, and has noted that "[g]ambling is just one example of illegal activity on the Internet" and "the same jurisdictional principles apply with equal force to any illegal activity." Courts have tended to apply the law of the forum state in Internet cases, without discussion.
It should be noted that many Internet activities are commercial and that many of these involve contractual transactions. These contracts may contain choice-of-law clauses defining what state's law will govern any dispute arising out of the transaction. Most ISPs, for example, include choice of law clauses in their service agreements; such clauses may greatly simplify choice of law questions on the Internet, as choice of law clauses are, for the most part, honored as a matter of international law. Many Internet activities are not commercial or even transaction-oriented, however, and choice of law clauses may not cure problems arising from non-transaction-oriented activities. Case law does not indicate what route courts might take in resolving true choice of law disputes arising from such activities. One commentator has suggested the creation of a choice-of-law treaty for the Internet.
V. Practical Implications of these laws: Looking to the future
Many questions remain about how courts will fit the Internet into the current system of jurisdiction. For this reason, people do not know what laws to live by. Although most people know the laws of their domicile state, many do not know the laws of states with which they will be interacting; therefore, under the current system, it remains entirely possible that a person could be haled into court in a foreign land for something that is perfectly legal in his domicile.
Simply put, the "effects" test doesn't work. It is true that states and nations are perfectly within their power in prosecuting foreign parties who provide illegal services (for example, a Nevada site that provides gambling to a Minnesota resident), and foreigners who commit crimes (for example, an American who posts names of businesspeople on a site available to Chinese dissidents). However, the architecture of the Internet makes it easy for people to obfuscate their identity and location and it is therefore impractical, under the current architectural regime, to make sites provide and deny service based on a person's identity; yet, that is exactly what the current legal system requires.
Most importantly, under the current regime, people can unwittingly open themselves to liability by posting information on the Web that they consider proper. For example, a local company could put up a website bearing the same name as a different company across the country and thereby expose itself to trademark liability; or an American could put up a site containing photos of women in short skirts, thereby exposing himself to criminal liability in countries under Islamic law.
If the current legal system is to maintain effective and fair control over the Internet, courts all over the world will have to make a clear move toward a new test for jurisdiction, and a consistent test for resolving choice of law disputes. If courts could agree to exercise jurisdiction based on an effects test with a much stronger element of purposeful availment than exists in the current system, the current style of legal governance might be able to serve the Internet in an effective and consistent way, without the excessive and unpredictable elements from which it currently suffers. For the purposes of fairness, mere awareness that a site could be accessed at a location would not be enough to trigger jurisdiction; rather, in order to be subjected to jurisdiction in a place other than his domicile or its primary place of business, a party would have to display intent to reach the audience in that location through advertising or specially targeted subject matter, or a positive awareness of an audience's location by way of interactions involving the exchange of information about realspace location. Under such a system, a party would still be subject to jurisdiction in its home state or nation, but not in a foreign jurisdiction unless the party sought out an audience in that foreign jurisdiction. Such a system would put the burden on state and local authorities to prevent the viewing of illegal material and to focus on laws regarding the use of illegal material, rather than laws regarding the provision of such material. As they are today, transactions would be susceptible to the jurisdiction of the domiciles of all parties involved and any jurisdiction in which the transaction was definitively intended to have an effect.
Treaties: a step up?
One method traditionally used to deal with jurisdictional conflicts and conflicts of law is to create treaties and conventions. Although treaties are not always effective at curbing behavior, they have been a somewhat effective means of establishing a baseline of agreement between nations and thus they may still play a role in Internet sovereignty.
Treaties are negotiated between nations, often with the aid of international organizations such as the United Nations. Once negotiated and signed, treaties must be ratified by way of implementing legislation in each of the individual agreeing nations.
Treaties take time to negotiate, and given the relative youth of the Internet, few treaties have been negotiated regarding the Internet. This section will consider treaties as a means of Internet governance by considering two situations in which treaties have been suggested as solutions to problems raised by the Internet: copyright and taxation.
a. The WIPO copyright treaties
The WIPO Treaties are not yet in force and are not binding on any state at present. For the WIPO Treaties to enter into force, thirty states or intergovernmental organizations must ratify them. Signing a treaty is an expression of an intent to ratify it. As of September 10, 1998, fifty-one states or intergovernmental organizations had signed the Copyright treaty, including the United States and the European Communities (Union), and four states had ratified it, including Belarus and Indonesia. In addition, the U.S. ratified the treaties with the passage of the Digital Millennium Copyright Act in October 1998. The Treaties were adopted on December 23, 1996, however, and there is no limit on the amount of time states may take to ratify them.
In order to be truly effective, these treaties must be unanimously accepted, to prevent the creation of copyright "havens" where people can go to violate copyright laws with impunity. In addition, the treaties do not answer all possible questions regarding Internet copyright. They do not definitively address enforcement, dispute settlement, Internet service provider liability, fair use, exhaustion of rights, place of publication, or the definition of a copy. As one commentator put it, however, "[d]espite these inadequacies, the resulting Internet Treaties are not an exercise in futility. They establish an international consensus on the application of copyright and neighboring right principles to digital technologies which can serve as the foundation for further legal infrastructure down the line."
b. The OECD taxation project
It has been said that the only two sure things in life are death and taxes. But due to recent rapid technological advancement, there appears, at least for the moment, to be one place where the latter is in question, especially for international businesses: the Internet. Taxing authorities are perplexed over which country should have taxation rights in complex international electronic transactions.
Most states impose some sort of income tax on money earned by a company in the territory. Where a company is taxed often depends on where its principal place of business is located. Many tax treaties currently in force, designed to prevent double-taxation, permit taxation of a company only in a place where it has a "permanent establishment." For this reason, an Internet business might choose to use server space in locations with little or no income tax, or to change servers frequently and, in so doing, have no principal place of business at all. Similarly, many states impose sales taxes on items purchased within the state. Given the difficulty of determining the location of any party to a transaction in cyberspace, it could be too great a burden to require that sellers of merchandise in electronic commerce discover the location of, and appropriate tax rate for, their customers; however, states have an important interest in seeing that the Internet does not become a tax-free zone.
For this reason, the Organization for Economic Cooperation and Development (OECD), which calls itself a "club of like-minded countries" (its membership consisting of 29 of the most affluent countries on the globe), has undertaken to devise a system for facilitating taxation of on-line transactions. The drafting took place in close consultation with the European Union and the World Customs Organization. On October 10, 1998, the OECD released a series of 28 "Implementation Options," suggestions of various overlapping methods for facilitating tax payment by the players in electronic commerce. These take the form of suggested actions that revenue authorities might take. They range widely in subject matter, and include creating automated revenue service hotlines (option 2), enabling electronic payment of taxes (option 5), allowing digital tracing of businesses that do not adequately identify themselves (option 11), allowing digital authentication of documents (option 14), cooperatively creating a set of rules for taxing the sale of intangible goods, as opposed to tangible goods (option 17), and monitoring developments in electronic commerce in order to continue developing effective taxation methods (option 25). The OECD plans to redefine the "permanent establishment" requirement mentioned above (option 21) and to monitor the application of existing norms in order to continue making proposals for improvement (option 22). The OECD is now calling for responses from the business community and from nations that did not participate in the drafting process.
The OECD options contemplate that nations will maintain individual control and administration of taxation, but in a coordinated way. They also contemplate that bilateral treaties may exist, under the framework of whatever taxation guidelines and/or conventions the project yields (option 27). This implies that any treaty created will have to allow for great variations in actual implementation from country to country. In addition, similarly to the Copyright treaties above, any taxation plan developed would, in order to be truly effective, have to be accepted unanimously among nations, to prevent the creation of tax havens.
c. Advantages of, and Difficulties Surrounding, Treaties as a method for Internet Governance.
Treaties are appealing as a method of Internet governance. Nations are accustomed to negotiating treaties and may be prepared to consider them more readily than dramatic and confusing technological "solutions." Treaties are also appealing because they may be brokered by familiar and well-trusted bodies such as the U.N., WIPO and the OECD. In addition, treaties create binding, regularized laws that enable all people to be aware of the laws governing them. In many cases, they eliminate or alleviate jurisdictional battles, either by containing jurisdiction clauses or by conforming laws enough that jurisdiction will not have an effect on the outcome of a dispute.
Treaties have significant drawbacks, however. It takes a very long time to create and ratify a treaty: the Berne Convention for the Protection of Literary and Artistic Works, for example, was first adopted in 1886, and has gained members steadily since then. Of the 133 states that have ratified the Berne Convention, only 15 had joined by 1919, thirty years after the treaty's adoption; approximately thirty states have joined since 1992 (the United States joined in 1989). The WIPO treaties, although less ground-breaking than the Berne Convention and therefore more likely to receive general acceptance in a shorter period of time, have taken two years to accumulate only four or five of the 30 ratifications needed to enter into force.
In addition, treaties are extremely difficult to create because it is often impossible, or nearly so, to reach consensus among national powers. For this reason, treaties may be a better solution to moderate- to low-controversy issues than to more controversial ones. The issue of gambling, for example, is unlikely to be resolved by creating a treaty, because of deep-seated and fundamental philosophical differences between nations over whether Internet gambling should be legal. In the event of such ideological differences, nations that choose not to sign a treaty may become "havens" for activity that is illegal in other states. Especially in the Internet context, where transmission of materials from one state to another is no more difficult than transmission of materials down the street, the existence of "havens" can completely undermine the objectives of any treaty.
This is not to say that treaties cannot be an effective method of conforming international laws and preventing conflicts of law, merely to suggest that they cannot be considered a panacea for all of the ills of Internet governance. In fact, treaties may well be the most effective means to handle certain Internet issues, such as copyright enforcement, and may be very effective in combination with technological approaches to handle other issues (for example, treaties, combined with a digital identification system, may be very effective in facilitating accurate taxation).
Internet Gambling: Will It Bring Down Our Sovereigns?
Internet regulation in the status quo world, absent some new technologies or governance structures, is challenging. The Internet jurisdiction issues, alone, are complex and can be difficult to grasp. In order to see how regulation of the Internet works today, this section of this paper will explore the question of Internet gambling, an issue that raises many of the problems and complexities of sovereignty on the Internet.
Internet gambling is a reality. The Internet gambling industry was a $60 million business in 1996 and is expected to be a $600 million business in 1998. Grand Dominican Resort & Casino was getting thousands of "hits" per day after only three months of operations. ICG Sports International made a profit of almost $250,000 in its first year. Jay Cohen, a former US stock trader, moved to Antigua in 1996 to set up the World Sports Exchange. His company has grown from only 20 customers to over 1,000 and receives 100,000 bets per month. Although customers log on from Belgium, Japan, and Russia, 95% of its bets are placed from computers located in the US.
Almost every type of gambling is available on the Internet: lotteries, casinos, sports wagering, etc. Many commentators acknowledge that the "question is not whether [Internet] gambling will be available. The real questions are whether it will be legal and whether governments have the necessary tools to prosecute offenders." Most governments have policies regarding gambling by their residents: some prohibit it, some regulate it, and some offer state-run gambling. Once a jurisdiction determines its policy on gambling, it must then implement that policy. Implementation includes not only the adoption of laws, but also their enforcement. It is this aspect of implementation that is most relevant, and most difficult to achieve, in the global Internet environment.
Gambling was first legalized in the US by Nevada in 1931. Then in 1946, after considering the economic benefits, New Hampshire became the second state with legalized gambling when it introduced the state lottery. Native Americans began bingo parlors in the 1970s and casinos opened in New Jersey in 1978. In a critical change to US gambling laws, Congress passed the Indian Gaming Regulatory Act in 1988, authorizing gambling activities on Indian lands. As various sovereigns have realized the economic potential of gambling, they have introduced a variety of legalized gaming activities. In the last two decades legalized gambling has increased dramatically. There are lotteries in 37 states. Casinos are authorized in 20 states. Only Hawaii and Utah prohibit gambling completely.
II. Reasons to Regulate
A. Taxation and Revenue
Critics of Internet gambling prohibitions complain that US states oppose gambling not for policy reasons but because they want to reap the financial rewards of regulation via taxation or licensing. Given that some form of gambling is now legal in all but two states, it seems ironic that the US is at the forefront of trying to prohibit Internet gambling. These critics claim that US state governments would like to prevent Internet gambling sites from drawing away the funds that the states currently reap through state-run or state-authorized gambling. "It is really hypocritical when states like Missouri, Minnesota and Wisconsin, which sanction gambling for their own profit, start taking a moral stand against people betting with regulated businesses on the Internet. It seems like their real interest is protecting their pocketbook, not their citizens."
While it might be easier to raise funds via traditional territorial gambling, states can run Internet gambling sites or authorize private Internet gambling within their jurisdiction. These states could then reap financial benefits from Internet gambling as well as territorial gambling. Finances can be raised through license fees from gambling providers, taxes on gambling operations, company profits tax, and income taxes on winnings. States may desire to maintain the supremacy of territorial gambling, however, because of the economic stimulation that results. Real space gambling can stimulate an economically depressed or remote area and creates a large number of jobs.
B. Policy justifications for regulation or prohibition.
While the argument that some sovereigns oppose Internet gambling for purely commercial reasons does carry some weight, there are a number of other policy justifications for the prohibition or regulation of Internet gambling. The actual impact of gambling, and particularly Internet gambling, is unclear. In 1996, Congress created the National Gambling Impact Study Commission to examine precisely this issue. The Commission has been "tasked with producing a comprehensive and factual study of the social and economic impacts of legalized gambling on states, tribes, communities and individuals" within the US. Its goal is to "provide elected officials and citizens with reliable data and information that could be a resource in future deliberations and decision regarding legalized gambling." The Commission conducted its last meeting the week of November 9, 1998 and will issue its final report by June 1999. Even in the absence of this comprehensive study, many sovereigns argue that there are a number of policy reasons to regulate or prohibit Internet gambling.
Some sovereigns believe that gambling is undesirable and government should not tolerate such activity. This argument does not work for most US jurisdictions because almost every state authorizes some form of real space gambling. States that authorize only lotteries, however, might try to distinguish between the morality of lotteries versus casino gambling, thus justifying a ban on one while allowing the other. Additionally, the two states that do prohibit gambling entirely may rely on this justification.
Increased gambling addictions
When gambling is legalized, gambling addiction increases by 100-550%. Because Internet gambling is so much more easily accessible, and because it is available in private, Internet gambling may increase addiction even more than legalized real space gambling does. In fact, Internet gambling has been referred to as the "crack-cocaine of gambling addiction." Beyond the desire to protect citizens from gambling addiction for its own sake, preventing such addictions also reduces other losses. Gambling addiction results in significant external harms: 47% of compulsive gamblers engage in insurance fraud or theft, and there is speculation that 40% of white collar crime is caused by addicted gamblers. In addition to economic crimes, total crime rates, including child abuse, domestic violence, and burglaries, have increased dramatically upon the legalization of gambling. The national price tag for compulsive gambling is estimated to be $56 billion per year.
While states may claim that gambling addiction justifies prohibition or regulation, Internet gambling providers disagree. Some sites, such as Cohen’s World Sports Exchange offer links to Gamblers Anonymous. As Cohen himself says, "can you remember the last time you were in a liquor store and they handed you information on Alcoholics Anonymous?"
Underage gambling
Current technology does not allow Internet gambling sites to effectively check the age of the gambler. Regulators are concerned that Internet gambling will therefore increase the trend of youth gambling.
Economic concerns
There is a fear that Internet gambling will significantly reduce consumer spending in other areas of the economy. Additionally, there is concern that increased gambling will result in decreased consumer saving, which could hurt the national economy.
Fraud prevention
Regulators are concerned about the ability of Internet gambling sites to defraud their customers. Without regulation, it is possible that Internet gambling operators will not run the games according to stated terms and conditions and that they may not pay winning gamblers. Regulators are also concerned about their ability to ensure that providers are financially solvent and that currency translation is fair. Regulation can prevent such fraud and abuse. Indeed, some Internet gambling operators have instituted forms of self-regulation to address these issues. For instance, one site has offered its algorithms to consumers for inspection, so its customers can verify that the algorithms are not designed to induce fraud. Additionally, Internet gambling sites can contract with accounting firms to certify the legitimacy of their games and their finances. While these issues do present an argument for regulation by a sovereign, they do not necessitate regulation by the US or US states. These issues are adequately addressed by countries that currently allow Internet gambling.
Prevent money laundering.
Law enforcement officers are concerned that offshore wagering accounts will help conceal criminal profits. For example, someone who wants to launder money could bet on both Florida and Georgia when they play each other. Obviously one bet will lose and one will win. If the bets are structured properly, the winning bet pays double, and the bettor ends up with exactly the same amount that was wagered, except now the money appears clean.
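The arithmetic of this hedging scheme can be sketched as follows. This is an illustrative sketch, not from the source: the even-money (2x) payout and the dollar figure are assumptions, and real sportsbooks charge a commission ("vigorish") that the sketch ignores, which is why some money is typically lost in practice.

```python
# Illustrative sketch of the offsetting-bets laundering scheme described above.
# Assumptions: even-money (2x) payout on both sides; no sportsbook commission.
def launder_by_hedging(dirty_cash: float, payout_multiple: float = 2.0) -> float:
    """Split the cash into two offsetting wagers on the same game.

    Exactly one bet wins; at a 2x payout, the winning ticket returns the
    full amount staked, now documented as gambling winnings."""
    stake_per_side = dirty_cash / 2        # wager half on each team
    losing_return = 0.0                    # the losing ticket pays nothing
    winning_return = stake_per_side * payout_multiple
    return losing_return + winning_return  # "clean" cash recovered

# Wagering $10,000 across both sides recovers the full $10,000 as winnings.
print(launder_by_hedging(10_000.0))  # 10000.0
```

At a real sportsbook, odds of roughly -110 on each side mean the launderer recovers somewhat less than was staked; the shortfall is, in effect, the cost of the laundering.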
Expectation of regulation
Because real world gambling is heavily regulated, the public may believe that on-line gambling is also regulated. If policy makers do not regulate this form of gambling, the public’s misconception could cause significant harm. This is only true, however, if most Internet gambling providers are not regulated. Instead, although most are not regulated by the US, they are generally regulated by the country in which they are based.
III. US Regulation
While the federal government has increasingly played a role in gambling regulation, gambling has traditionally been the province of state legislatures. Thus, "[a] canopy of federal gambling laws rests over the thicket of state gambling legislation." Although the US Department of Justice has officially claimed that Internet gambling is illegal, there is no current regulation designed to prevent Internet gambling per se. Instead law enforcement officials using federal law will have to apply laws designed to prohibit gambling over other media.
1. Existing US regulations:
The Transmission of Wagering Information Act, 18 USC 1084, is the federal law most applicable to Internet gambling. The statute provides that:
Whoever being in the business of betting or wagering knowingly uses a wire communication facility for the transmission in interstate or foreign commerce of bets or wagers or information assisting in the placing of bets or wagers on any sporting event or contest, or for the transmission of a wire communication which entitles the recipient to receive money or credit as a result of bets or wagers, or for information assisting in the placing of bets or wagers, shall be fined under this title or imprisoned not more than two years, or both.
Operating an online gambling site should qualify as being "in the business of betting or wagering," as required by the statute. Although commentators agree that the statute can be used to target Internet gambling, it is not a perfect fit. First, the statute specifically mentions bets on sporting events or contests. It is uncertain, therefore, whether the statute could be used against Internet gambling providers that operate casinos or lotteries. Additionally, the statute was intended to apply only to wire transmissions.
Two other provisions of this statute are important to note. It explicitly allows law enforcement to enjoin common carriers that knowingly allow their phone lines to be used by gambling operations. The statute also exempts transmissions in interstate or foreign commerce where the betting activity is legal in both jurisdictions.
Although the statute clearly applies to foreign businesses that supply gambling to the United States, these entities may be able to argue that they fail to meet the criminal intent requirement necessary for violation of Section 1084. This argument is not likely to succeed in a US court. First, Internet sites are available to virtually every country in the world, including the US. In fact, there are more US users of the Internet than there are users from any other country. Thus the likelihood is that a site (especially one available in English) will be visited by US citizens. The US Sixth Circuit Court of Appeals ruled that a site that approved a user's application for a password with knowledge of the user's location had knowledge that its product would be disseminated in the user's location. Offering a warning or disclaimer that Internet gambling may be illegal in the US does not negate the criminal intent element. A casino operator can reasonably foresee that the service may still be used illegally in the US despite such a disclaimer.
2. Proposed Federal Legislation:
The patchwork nature of existing federal regulation results from the traditional role of State and local governments in regulating gambling. Federal law, however, may be more appropriate for Internet gambling because of its interstate and international nature. Additionally, Internet gambling presents legal problems, such as extradition, that state governments are not prepared to handle.
Thus, in 1997, Senator Kyl introduced the Internet Gambling Prohibition Act of 1997. The bill passed the Senate on July 23, 1998 in a 90-10 vote. Although it was introduced in the House twice, it never made it to the floor for a vote. It is unclear whether this type of provision will pass next term, but the Senate vote indicates that the bill garnered significant political support.
The purpose of the bill was to amend Section 1084 of Title 18 so that gambling via the Internet or any other computer network would specifically be proscribed. Additionally, the bill was designed to expand the reach of Section 1084 beyond sports betting to all forms of gambling. Specifically, the bill would have made it illegal to engage in the "business of betting or wagering knowingly us[ing] a wire or electronic communication facility for the transmission in interstate commerce of bets or wagers or information assisting in the placing of bets or wagers…" The bill was also no longer limited to gambling operators; it would have criminalized betting itself.
One of the iterations of the bill, according to Fojut, redefined the meaning of "common carriers" to include ISPs. Under this provision, ISPs would be "obliged, along with other common carriers, to 'discontinue or refuse, the leasing, furnishing, or maintaining' of a facility in violation of th[e] statute upon notification 'by a Federal, State, or local law enforcement agency.'" The bill further provided that law enforcement agencies could enjoin carriers that refused to terminate access after being notified of an offending site.
Although the US is still working under the patchwork of laws in existence prior to the development of the Internet, federal law enforcement officials arrested several people for Internet gambling in March, 1998. The government appears to have carefully targeted the people arrested. All those arrested were Americans; some were living in the US. All claim to have been licensed by foreign countries. They all conducted some part of their operations within the United States. They were all operating openly, some even taking out ads in US magazines. Only operators and others involved in the business of sports gambling were arrested; no players were targeted. If any cases could be successfully prosecuted under the existing patchwork of laws, it should be these. Regardless of the long term legal results of these cases, the arrests are having an immediate impact. Many sportsbooks are barring bets from Americans and some are closing their doors completely.
Traditionally, most gambling regulations were passed by state governments. "Each state determines whether gambling will be permitted within its boundaries - and if it is permitted, what specific forms of gambling will be allowed." Existing federal law does not preempt state law from also applying to on-line gambling activities. Instead, federal gambling laws are designed to aid the states in enforcing their own laws.
1. State attempts to enforce their regulations:
So far, most state enforcement attempts have come against organizations that have a US presence. While the jurisdictional reach of the States over these entities is unclear, their jurisdictional reach over entirely foreign operated organizations is even less clear. Following are some examples of US State approaches to the Internet gambling issue.
Minnesota has been particularly active in trying to enforce gambling prohibitions on Internet gambling organizations. The Attorney General has issued a statement on Internet Jurisdiction. In this statement Minnesota asserts that "persons outside of Minnesota who transmit information via the Internet knowing that information will be disseminated in Minnesota are subject to jurisdiction in Minnesota courts for violations of state criminal and civil laws." The statement then claims that "services outside of Minnesota that offer Minnesota residents the opportunity to place bets on sporting events, purchase lottery tickets, and participate in simulated casino games…are illegal in Minnesota."
Not only are the gambling organizations subject to these laws, the AG claims that "credit card companies and Internet access providers that continue to provide services to gambling organizations after notice that the activities of the organizations are illegal would be subject to accomplice liability." Minnesota residents can also be penalized for Internet gambling. "[A]ny person in Minnesota who places a bet through one of these organizations is committing a crime." One of the penalties of the crime is forfeiture of the device, the computer, used to commit the crime.
Minnesota filed the first US Internet gambling lawsuit on July 18, 1995. The state is suing Granite Gate Resorts, Inc. of Nevada for deceptive trade practices, false advertising, and consumer fraud. Granite Gate, which intended to launch WagerNet, an Internet gambling service based out of Belize, claimed that its service was legal. The Minnesota Supreme Court ruled that the state has jurisdiction to press charges against the Nevada company. The defendant has said that it will appeal to the Supreme Court of the United States, which has not yet ruled on US jurisdictional reach over Internet providers.
Missouri law proscribes almost all types of gambling, including Internet gambling. Missouri is pursuing a suit against Interactive Gaming & Communications Corp. (ICG), a corporation based in Pennsylvania. An undercover investigator from the Attorney General's office placed bets with ICG totaling over $100. ICG is now enjoined by court order from accepting bets from Missourians. The AG also filed a civil lawsuit against ICG, and the company was ordered to pay more than $66,000 in penalties and costs to the state. If ICG is found guilty of "promoting gambling," the corporation could be required to pay up to $10,000 in fines. ICG's president, Simone, could be required to pay up to $5,000 and is subject to a prison term of up to five years.
Missouri has also issued a temporary restraining order preventing an Indian tribe and two Internet gambling businesses from promoting on-line gambling in Missouri.
The Wisconsin Attorney General entered into a consent decree with On-Line International, Inc. and its parent corporation, World Wide Web Casinos. The Wisconsin corporation, On-Line International, which distributed computer software to facilitate Internet gambling will be dissolved. Wisconsin claimed that distribution of Internet gambling products violated the State criminal gambling statutes. The State filed other lawsuits against different entities trying to operate gambling sites on the Internet.
Florida Attorney General Robert Butterworth has taken a completely different approach to Internet gambling. In response to questions about the legality of Internet gambling by Florida residents, Butterworth concluded that "[d]espite the prohibitions against gambling provided by federal and state law, at present the structure and operations of the Internet pose an extraordinary challenge." According to the AG, "[e]volving technology appears to be far outstripping the ability of government to regulate gambling activities on the Internet and of law enforcement to enforce such regulations." Butterworth indicated that regulation and policing of the Internet should be addressed at the national or international level and has therefore declined to prosecute any Internet betting cases, including those against in-state bettors.
Instead, Butterworth is using alternative means to shut down offshore Internet gambling sites. His goal is to make doing business offshore difficult by working with financial transaction providers. He negotiated with Western Union to cut off money transfers to and from Caribbean sports books. This enterprising AG is also targeting potential Internet gambling advertisers.
In an interesting twist on state Internet gambling law, one state has gone on-line itself. Internet gambling is no longer the sole province of private providers. New York State’s Off-track Betting (OTB) is joining the fray.
International Regulation
International regulation resembles that in the US in that there is no unified approach to gambling regulation. Unlike US jurisdictions, however, many international jurisdictions have explicitly legalized Internet gambling.
The Australian government has decided to support Internet gambling. It identified the key difficulty for regulators as "find[ing] a way to deal with a situation where gambling products can be delivered over a telecommunication network that has no cognizance or recognition of State borders or local rules." In this context, "traditional controls are rendered inoperable." The government therefore decided that regulation, rather than prohibition, would provide the best answer in this setting. According to Australian regulators, the introduction of legalized gambling has always, at least in Australia, resulted in a reduction in illegal gaming. The idea is that if the Australian government can provide trusted, well-regulated products as an alternative to other Internet gambling sources, its citizens will choose to use the Australian products.
Of course, Australians will still have access to foreign Internet gambling providers. However, providers not regulated by Australia will not be allowed to use traditional forms of advertising to penetrate the local market. In addition to the advertising ban, the Australian government thinks that Australian citizens will use providers licensed in Australia in order to ensure their own protection. Players will know that the games are fair, that the provider is not criminal, that the provider will be able to pay significant prizes, and that personal information and privacy will be respected.
Another goal of the Australian system is to prevent Internet gambling by minors. The system would require players to register before they are allowed to gamble. Registration requires proof of identity, age and place of residence. Although this is a laudable goal, and one that the US would want to achieve as well, it is not clear that the Australian regulatory system will help to achieve it. If Australian-licensed providers are off-limits to minors, minors will simply choose a foreign-licensed provider. These foreign operators may not provide the security options of the Australian sites, but they may be accessible to minors. Determined minor gamblers will likely sacrifice security for the ability to bet.
Importantly, Australia does not plan to enforce prohibitions from other jurisdictions, such as the US. Overseas players will be able to engage in activities that are available to Australians. Otherwise, Australia would have to attempt to apply conditions and restrictions "desired by thousands of individual different localities." Instead, Australia and New Zealand want to become major players in the emerging global Internet gambling industry. Additionally, they have determined that ISPs should not be liable for content for which they act only as a conduit.
Liechtenstein, one of the world’s smallest countries, operates a casino in six different languages, including Chinese. Liechtenstein claims that people coming to their web site are coming to Liechtenstein and thus are only subject to Liechtenstein’s laws. Despite this claim, the tiny country has agreed with its two larger neighbors, Austria and Switzerland, that it will not allow their citizens to participate in Liechtenstein’s on-line lottery.
Many Internet gambling providers operate out of Caribbean countries. Several of these countries heavily regulate these operators in an effort to prevent fraud and ensure the legitimacy of the games. In Antigua, for instance, annual licensing fees run between $50,000 and $75,000. In addition, providers must undergo rigorous personal and credit investigations. They also must post bonds, sometimes as high as $500,000, to ensure that they can pay winners. Despite these significant regulations, or maybe because of them, Antigua is a popular base for Internet gambling providers. Twenty-six sites were already licensed in Antigua in January 1998, and there were forty applications under review at that time. Similarly, Belize established a Computer Wagering Licensing Board, which supervises the on-line gambling industry. Licensing is required, and licensed providers must post bonds in order to operate an Internet gambling site out of Belize. Other countries that have legalized Internet gambling include Grenada, the Cook Islands, the Turks and Caicos Islands, St. Kitts, St. Martin, and the Netherlands Antilles.
While it is clear that governments may have logical justifications for prohibiting or heavily regulating Internet gambling, it is unclear how they will be able to enforce their laws. Many government officials concede that there may be no way to enforce prohibitive Internet gambling legislation. Senator Richard Bryan (D-Nev.) claims that "there is no way of regulating it." "International Internet gambling? We can't do anything about it," says John Russell, spokesman for the US Department of Justice. Senator Kyl, author of the Internet Gambling Prohibition Act of 1997, states, "I don't believe it can be regulated, so we have to prohibit it." It seems, however, that the opposite is true. Prohibition may not be possible, but some regulation of the majority of Internet gambling providers probably is.
Despite these doomsday statements, there are many targets law enforcement may attempt to reach in regulating Internet gambling. The primary target of real space gambling regulation is the provider because the player was typically viewed as the victim of the crime rather than a perpetrator. Because of the difficulty in enforcing laws against providers located in foreign countries, regulators may choose to regulate the gambling consumer or the facilitator in the Internet context. Enforcement against the consumer may prove more difficult than enforcement against the provider. While enforcement against the facilitator is easier to achieve, that option has some policy tradeoffs and thus is not necessarily the best answer.
The question of legal violation and thus the question of enforcement can only be asked once it has been determined where the activity occurred. "No discussion of the legality of an activity could be conducted without reference to which laws are being broken." Although the people on either end of an Internet gambling transaction are connecting via cyberspace, both are located in physical space as well. The company is run from a physical location and the gambler initiates gambling from a physical location.
Most Internet gambling occurs "on-site," meaning that the games are run on the gambling provider's server. Many operators claim that if the transaction takes place entirely on the operator's server, then the transaction has not occurred in the US and no US law has been violated. According to the Electronic Frontier Foundation, and many Internet gambling providers, the gambler/user travels into the jurisdiction in which the provider is located.
A key difference between Internet gambling and real space gambling is that the provider of Internet services does not need to be near the jurisdiction of the consumer. Thus, the provider can choose to locate anywhere in the world. As noted above, many jurisdictions specifically authorize Internet gambling operators. Because Internet gambling is legal in these jurisdictions, US law cannot require sovereigns to assist the US in preventing Internet gambling sites from setting up shop. However, US courts may enforce US laws against an operator who causes an act that produces a detrimental effect in the United States. Thus, operating an online gambling facility in a foreign jurisdiction that authorizes such activities does not entirely insulate these providers from US law.
US courts can enforce their laws against Internet gambling providers who cause a harmful effect in the US jurisdiction if they can show that the court has personal jurisdiction over the provider. Some US courts have granted jurisdiction over Internet providers located within the US but not in the state asserting jurisdiction. Because the courts seem to grant jurisdiction over web sites that offer interactive services, gambling operators are likely to be subject to these rulings. Asserting personal jurisdiction over international providers, however, may prove more difficult.
Once a court gains jurisdictional authority to prosecute an entity, it must jump yet another hurdle. In order to enforce US law, the US court must have physical custody of the violator. Gaining physical custody of a violator located off-shore, known as extradition, usually requires the existence of a treaty between the US and the nation where the provider is located. While the US has extradition treaties with some nations, no such treaty exists with many of the small nations that have legalized gambling. Additionally, criminal suspects can only be extradited for committing crimes that are specified in the treaty. In order to extradite an Internet gambling provider, therefore, the treaty would have to make gambling an extraditable offense. Since these nations have specifically legalized gambling, and some have stated that they will not enforce foreign prohibitions, it is highly unlikely that the US will be able to negotiate an effective treaty to ensure the extradition of Internet gambling providers.
Because the extradition picture is so bleak, some legal commentators have suggested measures as outrageous as obtaining custody by force. While such action is both impossible on a large scale and generally insupportable, it would create the necessary physical custody, and US courts have held that due process rights are not affected by abduction from a foreign country.
The US can prosecute individuals who are involved in the management, operation and ownership of Internet gaming sites, including officers, directors, shareholders and managers. Even if these people are US citizens, however, they are only subject to US jurisdiction if they are in the US, or return to the US. If any of these people, US citizens or not, come to the US, they will be subject to the laws of any US court that has asserted jurisdiction over them.
While federal law currently only criminalizes gambling operators, many state laws also prohibit betting. Bettors are located in the state that is trying to regulate them, and therefore are not subject to the jurisdictional problems inherent in offshore Internet gambling operations. Enforcing laws against Internet gamblers, however, will be very difficult and therefore will probably not sufficiently curb the targeted behavior. Obtaining evidence of the crime will be extremely difficult. The user is most likely to gamble from within the home, a location that is substantially protected by the US Constitution. Thus the means of collecting proof will be limited to 1) an insider report from a family member or friend who tells law enforcement - this will not happen often enough to prevent Internet gambling in any significant way; 2) setting up a sting gambling site - this could possibly be considered entrapment; and 3) surveillance of the home user via wiretap. Although a wiretap could be the most effective, it is likely to be illegal because it will be almost impossible to show probable cause. Additionally, wiretaps are extremely costly and will be ineffective if the gambling site uses encryption. Law enforcement probably will not be able to gain information from the Internet gambling providers either. Because operators are outside US jurisdiction, there is no way to gain access to their records.
C. Enforcement against facilitators
Because it is so difficult to enforce Internet gambling legislation on the bettor and on offshore gambling operations, law enforcement may turn to Internet Service Providers and other facilitators as the target of regulation. These entities are desirable targets because they are identifiable (in contrast to the bettors) and subject to personal jurisdiction (in contrast to the operators).
1. Internet Service Providers (ISPs)
One enforcement option is to hold ISPs liable for the gambling activity conducted over their networks, but doing so carries negative consequences. First, regulators who support the growth of the Internet fear that such liability will discourage the development of new technologies. Additionally, there is a concern that ISP liability will result in chilled speech. If ISPs are held liable for the content or actions of their users, they will have to monitor all messages, Internet traffic, websites, and so on. This is obviously an impossible task, and thus ISPs will discontinue those services for which they are most at risk.
Regulators have therefore examined the option of requiring ISPs to cut off service to specific offending sites, rather than holding ISPs liable for damages from Internet gambling operators. Thus regulators could effectively shut down offending sites, and yet avoid some of the negative implications that come with holding the ISP liable. While requiring ISPs to discontinue service to targeted sites may assist in enforcing US regulations, this will not be the case if the ISPs are located off-shore. ISPs located in foreign countries would be insulated from US action in the same way that Internet gambling providers located in those countries would be protected - courts do not have physical control over them.
The US could require AOL, CompuServe and other full service ISPs to disallow access to a particular site. Although this will only prevent users with such services from accessing prohibited sites, the reduced flow of traffic may still be significant. Additionally, the US could provide a list of all forbidden sites to all US ISPs - not just full service providers - with the requirement that they disallow access. In order to succeed under this tactic, the US would have to invest significant amounts of money in locating offending sites and providing updated lists to US ISPs. The government would also have to monitor the ISPs to ensure that they are disallowing access to these sites. If this system were successfully implemented, most sites would be shut down before they could establish a clientele. Additionally, those sites that did exist would not be easy for the average Internet user to find, since any site listed in a well-known search engine would be quickly found and targeted by the government.
2. Financial facilitators
As with ISP regulation, regulation of financial facilitators could enable enforcement of Internet gambling laws. Before bettors can start gambling, they must establish an account and deposit funds with an Internet gambling provider. If US customers do not have the means of transferring funds, they will be unable to gamble. The two most typical means of establishing an account are wire transfer and credit card cash advances. The major credit card corporations - Visa, MasterCard, and American Express - are subject to US jurisdiction and US law. Major wire service companies, such as Western Union, are also subject to US law. Not all financial transactions can be prevented, however. Some Internet gambling sites accept money orders or bank checks. In this case, the issuing bank may not know where the money will be going and is thus less susceptible to criminal liability for aiding a crime.
There are several means of using financial facilitators to prevent US customers from gambling. First, some states have indicated that these organizations will be subject to accomplice liability if they transfer funds to Internet gambling providers. If financial facilitators are subject to prosecution for aiding or conspiring to commit a crime, they may decline to facilitate the money transfers that Internet gambling operators require before players can start betting. This tactic is susceptible to the same issues discussed above regarding criminal liability. In order to establish criminal liability, the prosecutor would have to prove that the facilitator knew that the money transfer would facilitate illegal activity. Banks whose clients write personal checks would be the least susceptible, because this knowledge requirement is hardest to satisfy there.
While holding a credit card company liable for aiding a crime may not be possible, another option for deterring credit card companies from facilitating these transactions may be available. If credit card companies cannot recover funds from consumers who use them for gambling, they are unlikely to transact with Internet gambling providers. The ability of US bettors to decline to pay Internet gambling credit card charges is currently being litigated. A woman in California has declined to pay her credit card bills, claiming she does not have to pay because Internet gambling, the source of her debt, is illegal. The outcome of the case could have a huge impact on the Internet gambling industry. According to I. Nelson Rose, an American Bar Association expert on gambling law, Internet gambling is doomed without credit cards.
In addition, law enforcement has another option available to prevent funds transfer. Rather than holding these organizations liable, law enforcement officials can negotiate with them to prevent the transfer of funds. The Florida Attorney General has brokered such a deal with Western Union. This major wire transfer company has agreed not to transfer funds to Internet gambling providers. Thus the law can, through alternative mechanisms such as deal negotiation and non-enforcement of debts, disable some of the most typical means of facilitating the prohibited behavior.
Two other areas of possible regulation have not yet been explored by law enforcement. The government could attempt to lessen the prevalence of Internet gambling by regulating advertising on the Internet - prohibiting US advertisers from placing ads on Internet gambling sites. Additionally, government could regulate index providers or search engines that assist the consumer in finding these sites. Currently, Internet gambling sites are easy to find. In addition to typical search engines, there are a number of gambling Internet indexes available. These sites include: <placeyourbet.net/topsites>, <www.guardian.co.uk/gambling>, <www.gambling.com>. The information provided by these sites varies. Some merely provide content and links to other sites while others provide ranking systems of gambling operators. While both of these means are subject to some enforcement problems, they could, coupled with the above targets, assist in lessening the occurrence of Internet gambling.
The answer to Internet gambling is not to prohibit it on a federal level. A more reasoned approach is to determine the policy justifications for regulation and to implement reasonable rules that will ensure these policy goals are addressed while also creating an environment in which Internet gambling operators are likely to follow the rules rather than attempt to circumvent them. That way the state or country can ensure that local providers are regulated in a fashion desired and that a portion of the proceeds can be gleaned by the country through regulatory fees or taxes.
Operators of web sites have indicated that they would submit to US regulation if the US would allow Internet gambling rather than attempt to prohibit it. "If Senator Kyl thinks we’re not regulated well enough down here [in Antigua], bring us home. Whatever standards they set, we’ll live up to. Why not offer a federal license? They can regulate it, tax it. The real issue here is who is going to get the gamblers’ dollars." By allowing internet gambling in the US, subject to regulation, states could alleviate many of the concerns that they have raised in opposition to Internet gambling.
While many agree that international cooperation would be the best means of addressing Internet gambling, there has been no effort to organize such cooperation to date. One reason may be that many other countries are happy with the status quo and that the US has yet to adopt a formal policy on Internet gambling. Additionally, the position the US may be heading toward - prohibition of Internet gambling - is directly at odds with much of the existing world policy. Cooperation is difficult to achieve when policies and interests diverge so significantly. Despite this challenge, if the US decides to regulate Internet gaming, US Gaming Commissioner Leo McCarthy believes that the federal government will have to rely on diplomatic contacts and treaties with other nations. "If it is a regulatory situation where you are dealing with other governments and are trying to persuade them to respect the social attitudes and the individual states within our nation, it obviously has to have a lot of federal input."
International cooperation or treaties in this area would only be effective if a majority of countries agreed to abide by the treaty terms. As long as some countries decide to break with the agreement, Internet gambling providers can locate their operations in those countries. The issue of havens is only relevant, however, if a sufficient number of countries disagree with the treaty. St. Martin can house only a certain number of Internet gambling providers. Additionally, locating such an operation in a haven will prevent foreign courts from gaining physical control, but if any member of the operation travels outside of the haven, that person may then be prosecuted.
A group of Internet gambling operators have formed the Interactive Gaming Council. Their hope is to create model regulations, and possibly even establish an international regulatory agency. That way, Internet gambling providers could more easily operate in more than one country.
Even without this type of initiative, the Internet gambling industry is not the paragon of evil that some US and State officials fear. "The reputable sites are regulated by governments around the world. These are legitimate governments that are concerned about who they regulate. How arrogant is it for American politicians to say that those nations aren’t good enough to regulate this industry?" A representative for Antigua has expressed similar consternation over US posturing. "The issue in the United States should not be whether Internet gambling should exist in Antigua or not. Antigua is a sovereign state and [it] isn’t their concern. We are no banana republic." Instead, the Antiguan government has indicated to the US State Department that it would be best for everyone if they could all work together. "It is not America’s job to be the world’s police force on the Internet."
Under the current legal regime and technical structure, everyone who has access to the Internet is able to access all Internet sites. If one country tries to punish an Internet site operator in this context, the repercussions will be felt throughout the global Internet community. If this type of system emerges, the jurisdiction with the strictest laws will be able to impose its values on the rest of the world.
Sovereignty Over the Structure of the Internet
The Internet affords the opportunity to create alternative governance structures, in preference to the existing geographically-based sovereignty structure discussed above. One such structure is currently being implemented to govern the Domain Name System ("DNS") of the Internet: the Internet Corporation for Assigned Names and Numbers ("ICANN"). However, we must first explain how DNS has been governed in the past before we can analyze how that structure will change DNS governance.
DNS Architecture and Root Server Controversy
One of the most lively controversies that has been created by the commercial explosion of the Internet involves a technology that was originally designed to do nothing more than provide a convenient way of finding remote hosts on the early Internet. Domain names are the hot electronic real estate of the 1990s, and have quickly become a scarce resource. The controversy boils down to how the names should be allocated, and more importantly, how the namespace should be expanded. But as we shall show, the answer to the second question will have to be different from the usual anarchistic democracy of Internet "engineering."
Jon Postel and the Early Days
On the early ARPANet, a canonical list of all hostnames was maintained by one person, Jon Postel, in one file, "HOSTS.TXT". Working under government research grant at UCLA, Postel also edited the Request for Comments series, the "documents of record" of the ARPANet and, later, the Internet. RFCs were published on paper by the Stanford Research Institute (also under DARPA contract), though they soon found their way onto the budding network.
In the early 1980s, the explosion of LANs, PCs, and UNIX workstations, and the growing needs of internetworking necessitated the invention of a distributed name-to-numerical address translation system. And so, the Domain Name System was born, and Jon Postel and his colleagues, under the auspices of USC's Information Sciences Institute, were given control of the root servers. (SRI was responsible for seeing that domain names were dutifully registered with the servers.) The combined functions of Postel and SRI, which also included the allocation of IP numbers, were given the umbrella title of the Internet Assigned Numbers Authority (IANA).
In 1992, Network Solutions, Inc. (NSI) was awarded a contract from the National Science Foundation to take over the running of the DNS servers and registry of Top Level Domains (TLDs, like .gov and .edu). The eventual commercial and political implications of this decision - and subsequent NSI policy choices - were apparently not foreseen at the time. In order to appreciate the full complexity of the current crisis, some technical exploration of the DNS system is necessary.
How DNS works
[Figure: DNS architecture diagram. Ovals are DNS servers; these do nothing more than take in a hostname and return either an IP address or a referral to another DNS server. Rectangles connected to ovals represent DNS lookup tables; other rectangles represent users. Rounded boxes represent content (web) servers.]
In this example, the user's computer (beland.mit.edu) is configured to direct DNS queries to the local MIT servers. As the diagram shows, strawb.mit.edu has a database which includes information (the IP address) on where to find other computers in its domain, like www.mit.edu, in order to download content (in this case via an HTTP connection).
The user may also find the computer www.amazon.com by an indirect route. The request for the Amazon Web server's IP address is first sent to strawb.mit.edu, which determines the request should be referred to the next-closest machine likely to be knowledgeable about the name in question. In this case, the "D" root server handles the query, locates the ".com" database, and determines that a referral to the machine ns.amazon.com is in order. Finally, the Amazon DNS server knows where to find the desired web server, and sends the appropriate IP address (220.127.116.11) back to the user's computer.
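The referral chain just described can be sketched in miniature. The toy resolver below is an illustration only: the lookup tables, the server names, and the IP address are all invented, and real DNS resolution involves caching, time-to-live values, and a UDP wire protocol that this sketch omits.

```python
# Toy model of iterative DNS resolution, as in the www.amazon.com example.
# Each "server" maps names it can answer to addresses, and name suffixes it
# cannot answer to a referral (the next server to ask). All data is invented.

ROOT = {"refer": {"com": "gtld-servers"}, "answer": {}}   # root knows who handles .com
GTLD = {"refer": {"amazon.com": "ns.amazon.com"}, "answer": {}}
AMAZON_NS = {"refer": {}, "answer": {"www.amazon.com": "208.216.182.15"}}  # hypothetical IP

SERVERS = {"root": ROOT, "gtld-servers": GTLD, "ns.amazon.com": AMAZON_NS}

def resolve(name, server="root", trace=None):
    """Follow referrals downward from the root until some server answers."""
    db = SERVERS[server]
    if name in db["answer"]:
        return db["answer"][name], (trace or [server])
    # Otherwise find a known suffix of the name and follow the referral.
    for suffix, nxt in db["refer"].items():
        if name == suffix or name.endswith("." + suffix):
            return resolve(name, nxt, (trace or [server]) + [nxt])
    raise LookupError(f"{server} cannot resolve {name}")

addr, path = resolve("www.amazon.com")
print(addr)   # the invented IP address
print(path)   # ['root', 'gtld-servers', 'ns.amazon.com']
```

The `path` list mirrors the diagram's route: the query hops from the root server to the ".com" database to Amazon's own name server, which finally supplies the address.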
Divisions of Responsibility
Currently, the IANA, operated by ISI/USC (until 1998 under the direction of the late Jon Postel), still manages the RFCs and assigns classification numbers for assorted minor Internet protocols. IANA is also responsible for allocating unique series of IP addresses to ARIN (American Registry for Internet Numbers), RIPE (Reseaux IP Europeens), and APNIC (Asian Pacific Network Information Center), which sub-allocate addresses to large ISPs in North America, Europe, and the Asia/Pacific region, respectively. Addresses are then sub-allocated again to smaller ISPs and eventually individual users and hosts.
The InterNIC, the registry for generic Top Level Domains (gTLDs), is operated by government contract with NSI, which charges fees for registrants of .com, .org, and .net. (NSF sponsors .edu and .gov domains.) Coordination of subdomains is left to the registrants; country code Top Level Domains (ccTLDs) are generally managed by some agency in the country in question. However, there has been some controversy as to the qualifications that ccTLD administrators must have; in particular, they might not be affiliated with - or even approved by - the national government.
The thirteen root servers are owned and operated by a variety of organizations around the world, about half of which are agencies or research partners of the US government. The actual equipment in the United States is distributed among seven sites in California, Maryland, and Virginia. Foreign servers are located in Keio (Japan), London, and Stockholm. By mutual agreement, the databases are synchronized with NSI's master "A" root server database.
The Alternate Root Server Dilemma
"Alternate" root servers arose in 1996, partly in protest over the NSI "monopoly" over gTLDs. While there have been many proposals to extend the "official" gTLD list from the current six, NSF and InterNIC have not done so. However, nothing in the DNS protocols _requires_ all DNS software to defer to any particular set of root servers. Alternate servers seek to challenge this government-sponsored monopoly, but in doing so, threaten two essential features of domain space, as it was originally envisioned - uniqueness and universality.
The illustrated example, above, shows three popular alternative services, AlterNIC, eDNS ("e" for "enhanced"), and name.space. AlterNIC sports such new gTLDs as .bank, .asia, .games, .kids, .radio, .wired, and .xxx; name.space operates an installed base of DNS servers in New York, San Jose, Amsterdam, and several Scandinavian countries.
In order to be able to resolve domain names offered by these organizations, users must either operate their own DNS servers, or convince their local system administrator (most likely an ISP) to reconfigure the DNS software to point at the alternate servers.
To illustrate how this works, let us return to the example. As a general rule, to successfully resolve a hostname into an IP address, a DNS server must either have the information in its own database, or know (perhaps after a series of queries or referrals) where to find a server that does. In the diagram, the "reconfigured user" is a believer in AlterNIC, and has directed all DNS queries to its servers. This allows the user's computer to access some new AlterNIC-run domains, like .kids and .xxx, and also the traditional gTLDs like .com, because the AlterNIC servers can refer those requests to the conventional root servers.
This user may decide to create a hot new web site at celebrities.xxx, and advertise it via some mailing lists. Unfortunately, Joe User, a customer of a dialup ISP (like AOL, CompuServe, or any one of hundreds of local providers) that does not use AlterNIC (like the vast majority of Internet users), will not be able to access this new site, because neither the conventional root servers nor any other DNS server in Joe's chain points to AlterNIC. Moreover, Joe's favorite search engines can't find the site, because their equipment does not use AlterNIC either. Universality has been destroyed.
Now consider eDNS, a competing service. eDNS users (like our doomed theoretical friend) cannot use any domains AlterNIC has created, because eDNS will obviously not refer users there. In fact, eDNS might decide to (or accidentally) create some of the same gTLDs as AlterNIC. Even worse, eDNS may rather unintelligently try to undermine the NSI .com domain by constructing a conflicting registry. The doomed user in the diagram is thus directed to a "renegade" amazon.com instead of the "real" domain he or she was probably expecting. In both cases, uniqueness has been destroyed.
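Both failure modes can be made concrete with a small simulation. Everything below is invented for illustration: each "root" is reduced to a flat lookup table, and the names and addresses are placeholders, not real registry data.

```python
# Toy illustration of what splintered root namespaces do to DNS. A user
# resolves through whichever root table their resolver points at.
# All names and addresses are invented for illustration.

CONVENTIONAL_ROOT = {"amazon.com": "1.1.1.1"}   # the NSI-rooted registry
ALTERNIC_ROOT = {
    "celebrities.xxx": "2.2.2.2",               # new AlterNIC-only gTLD
    "amazon.com": "1.1.1.1",                    # .com reachable via referral
}
EDNS_ROOT = {"amazon.com": "9.9.9.9"}           # a conflicting .com registry

def lookup(name, root):
    """Resolve a name through a given root; None means 'host not found'."""
    return root.get(name)

# Universality destroyed: the AlterNIC-only site is invisible to everyone else.
assert lookup("celebrities.xxx", ALTERNIC_ROOT) == "2.2.2.2"
assert lookup("celebrities.xxx", CONVENTIONAL_ROOT) is None

# Uniqueness destroyed: the same name points to different hosts for different users.
assert lookup("amazon.com", CONVENTIONAL_ROOT) != lookup("amazon.com", EDNS_ROOT)
```

The two assertions capture the two properties at stake: a name registered in only one root hierarchy is unreachable from the others, and a name registered in conflicting hierarchies no longer identifies a single host.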
The splintering of the DNS hierarchy would probably be an economic disaster for businesses trying to establish name recognition online, not to mention an inconvenience to users. Fortunately, alternative domain name servers have not gained much popularity.
The United States government is now finding itself in the undesirable position of having to respond to demands to break the NSI monopoly to allow competition in TLDs, while simultaneously trying to maintain the uniqueness and universality of the namespace. The government's proposed solution is at the heart of the current controversy - and of the next section.
As is apparent from the history, domain names were initially administered by the US government. More recently, the government delegated that responsibility, through contracts, to the Internet Assigned Numbers Authority (IANA) and Network Solutions, Inc. (NSI). IANA has administered the Internet address system since Jon Postel helped to invent it over 30 years ago. IANA assigns the numbers that give domain names their address. NSI registers the actual names on top level domains of ".com," ".org," and ".net."
With the increasing globalization and commercialization of the Internet, it is clear that it is no longer appropriate, nor is it possible, for the US to retain control over the domain name system (DNS). Instead, the US government has been seeking an alternative form of Internet governance. While there are several possible governance options, the US government has proposed a unique and novel plan - the development of a private, not-for-profit organization representing the global Internet community. "The transfer of Internet assets and authority from the U.S. Government to the New Corporation represents a major departure for the private administration of a global resource." The US proposal calls upon the Internet industry to fulfill a daunting task: "build the international, nonprofit corporation that will eventually govern one of the most important resources for the next century."
While the proposal for the new organization might appear to create simply another "geeky technical standards group," it is actually "the beginning of something big - a unique form of government for the global Internet." Commentators have referred to the creation of this company as a Constitutional Convention. "If there is going to be this one entity that has a great deal of power, you’d have to say that the process of deciding how that power will be exercised is constitution-making." As with other constitution-making, this process has been fraught with debate and struggles for power.
II. Genesis of the proposal
As a means of resolving many of the policy issues raised by DNS, the US government decided that it would be best to privatize the management of Internet names and addresses. To begin the process of privatization, the National Telecommunications and Information Administration (NTIA) of the United States Department of Commerce (DOC) issued "A Proposal to Improve the Technical Management of Internet Names and Addresses" otherwise known as the "Green Paper." This Proposal was published on February 20, 1998 and held open for public comment from around the world until March 23, 1998.
The Green Paper received significant criticism during this comment period. Initial protests primarily addressed the US focus of the plan. Critics in the European Union and Australia complained that the US was "trying to hijack an international network." In response, the US government revised the Green Paper and then, on June 5, 1998, issued its current proposal for privatization in its Statement of Policy, Management of Internet Names and Addresses, known as the "White Paper." The White Paper included important changes for involving the international Internet community in the development and management of the entity that would be selected to govern DNS. While not accepted in its entirety, the White Paper has received much more support from the international community, and foreign governments have agreed to work within the framework presented by the White Paper.
III. From Genesis to Implementation: The creation of ICANN
The White Paper proposal does not specify the parties who will form the new private corporation. Instead it simply puts forth the principles under which the new entity should operate and the responsibilities of the entity. It challenges the Internet community at large to create the organization. Once the private organization is formed, the proposal envisions a process by which NTIA and DOC will review the organization to ensure that it fits within the required parameters. Then, the US government will begin transitioning the role of DNS management to the new organization.
The International Forum for the White Paper (IFWP) formed as a response to the government request that the Internet community create this new private company. The goal of IFWP was to create consensus regarding the future management of DNS by working as "[a] sectorally and geographically diverse group of Internet stakeholders." While IFWP began the process toward developing this new entity, the operation broke down before a company was formed. The process was then taken up in a less representative manner.
Jon Postel and IANA moved the process forward by selecting an interim board and proposing a new organization, ICANN - the Internet Corporation for Assigned Names and Numbers - to the government as the private entity called for in the White Paper. "ICANN was originally created by Postel on behalf of a broad coalition of Internet stakeholders in response to" the NTIA White Paper request that the Internet community create a global consensus non-profit corporation. Postel’s goal in choosing the Board was to find "respected people who had no history or stake in the years-long international debates over how to move Internet governance from the public to the private sector."
This private process of drafting the bylaws and selecting the Board has engendered significant criticism. The White Paper called for the international Internet community to work together to form the company that would administer the Internet name and address system. While the IFWP process represented a large selection of the Internet community, the IANA follow-up did not.
On October 2, 1998, Jon Postel presented ICANN, its interim board, charter and bylaws to NTIA as the entity that could serve as the private, non-profit company that could take over management of DNS as requested in the DOC White Paper. The ICANN submission was countered with several submissions from organizations that did not believe that the ICANN bylaws and structure met the requirements proposed for the corporation by the White Paper. These organizations included the Boston Working Group (BWG), the Open Root Server Confederation (ORSC), Ronda Hauben, and INEG, Inc.
After reviewing the IANA/ICANN submission, NTIA issued a letter indicating its support for the progress made by ICANN toward achieving the mandate of the White Paper. NTIA indicated, however, that ICANN needed to address issues such as accountability and transparency that had been raised by critics. "Because this organization is going to play a quasi-public function, it needs to have a level of transparency and fiscal accountability that’s commensurate with that." Following this initial submission, ICANN has worked with NTIA, and with some of its opponents, to resolve the issues that remained open. After several iterations and some significant changes to the initial bylaws, ICANN was selected by NTIA and DOC as the company that will help transition the management of DNS from the US Government to the private sector.
The development and evolution of ICANN is important because ICANN’s ability to carry out its responsibilities will depend not only on gaining authority from the US government, which it now has, but on the authority and legitimacy that it will be able to create within the global Internet community. Only if it has this authority will ICANN be able to fulfill its function of making difficult and contentious decisions that will be accepted by the Internet community it represents. While ICANN has come a long way in developing its bylaws to reflect some of the concerns of the community, there are still critics who are unhappy with ICANN as it is currently structured. In an effort to gain the requisite community support, ICANN has held public meetings in the US and Europe and will be holding a similar meeting in Asia in March, 1999. Additionally, the two-year transition process with the US Government may help to assuage some concerns.
Not only will ICANN have control of administrative decisions regarding DNS, but it will also be responsible for making public policy decisions in this area. This is considered by some to be a "herculean" responsibility. Mike Roberts, interim president of the ICANN board, has indicated that ICANN will have to answer five years of policy questions that have been backlogged in the absence of an entity that could make such decisions.
ICANN will have control over four principal areas: 1) oversight of the policy for domain names, 2) control and assignment of numeric addresses, 3) coordination of the assignment of technical protocols, and 4) oversight of the operation of the authoritative root server. We will examine just one of these areas, domain names, as an illustration of the type of control that ICANN will have and the decisions that must be made.
a. Competitive Registration. According to the White Paper, "[t]here is widespread dissatisfaction about the absence of competition in domain name registration." Private companies around the world have been demanding an opportunity to compete with Network Solutions in the lucrative registration business. To resolve this dissatisfaction, ICANN will determine how to introduce competition into domain name registration for gTLDs. In doing so, ICANN must balance the requirements of stability with the need to promote competition among registrars.
b. Introduction of new gTLDs. Additionally, ICANN will determine how many top-level domains should be added to the Internet, and when and how they will be added. Michael Roberts, the interim president of the ICANN board, has indicated that "[t]here is no more bitterly disputed area than that." While some commentators believe that introduction of new gTLDs should be limited, others think that the market for a large or unlimited number of new gTLDs should be opened immediately.
c. Control of ccTLDs. ICANN will also determine who will have control of two-letter country code domains. Country code domains are currently managed by a variety of entities, including private registrars. When asked how it would handle country code domains, ICANN indicated that it would "respect each nation’s sovereign control over its individual top-level domain." If countries are given sovereign authority, they may be able to put the current registrars out of business. Such a policy would "establish a new principle out of the blue and overturn the apple cart." Critics say that a policy granting each country sovereign control would not only impact the companies that have made investments to build country code domains, but would potentially invalidate the million-plus domain names that have been assigned to those country codes. ICANN has since responded that it will not make any immediate changes to the current ccTLD policy.
This type of controversy is bound to occur with each responsibility that ICANN wields. It is exactly this type of issue that ICANN was developed to handle. The question that critics ask is "what are the circumstances under which the current system can be changed. The real goal is to ensure stability and continuity." That is exactly what ICANN has been tasked to achieve. Its mandate is to develop and implement new policies while assuring the stability of this valuable resource.
d. Determination of dispute resolution policies for contested domain names. With the growth and commercialization of the Internet, trademarks and domain names have increasingly come into conflict. The US government has asked the World Intellectual Property Organization (WIPO) to develop recommendations for a uniform approach to resolving trademark/domain name disputes and to recommend a process for protecting famous trademarks in gTLDs. Additionally, WIPO will study the trademark effects of adding new gTLDs. While WIPO is tasked with providing ICANN with a set of recommendations, ICANN will have ultimate authority to adopt the policy to resolve these issues.
This interaction between ICANN and WIPO will be indicative of interplay between ICANN and other established international bodies. Although ICANN has ultimate authority, WIPO has the legitimacy of its established position as well as the authority of the signatory governments. It is just this type of coordination between entities that could help answer some of the most contentious issues of Internet governance that must be resolved.
It is clear that ICANN will wield a lot of power over the Internet. Although there are a number of issues that are not within the responsibility of ICANN, control of DNS amounts to control of much of the Internet as we currently know it.
Although it is a private corporation headquartered in the U.S., ICANN is supposed to represent the interests of the global Internet community. The specific means of doing so have not yet been determined, but a general structure has been outlined. ICANN will be made up of two segments: Supporting Organizations and Membership-at-large.
ICANN will have three Supporting Organizations (SOs), one for each area of control: one for addressing, one for protocols, and one for name registration. It is unclear exactly who these SOs will be. The bylaws specify a process for designating the SOs that resembles the process ICANN went through with NTIA. "The supporting organizations will have to create themselves - and convince the Board that they are truly representative of their communities. Only then can they be recognized by the Board as ICANN Supporting Organizations." Thus interested parties - including certain specified representatives - will determine their own qualifications for membership and apply to ICANN for recognition as an SO. Once approved and designated by ICANN, the SOs will have primary responsibility for proposing policy to the board.
Nine of the nineteen ICANN board members will be selected by SOs, with each SO appointing 3 board members. Because the SOs will elect board members and will propose policy to the board, there is fear that there will be conflicts of interest between the board and the SOs. In response to this concern, the bylaws require that board votes on policies recommended by a specific SO would have to receive a majority of the board representatives not elected by that SO.
An additional nine of the nineteen ICANN board members will be elected by the members-at-large. While the ICANN bylaws require the Board to create a representative membership structure, the system is undeveloped as yet. The Board will be creating an independent advisory committee to suggest possible membership structures. Parties interested in serving on this committee were requested to notify ICANN by December 5, 1998. Once ICANN determines the make-up of the advisory committee, it will begin deliberating various options for membership structures.
While determining the structure of ICANN is critical to determining who the organization is and how it will be run, representation of the global community requires more. ICANN must meet basic elements of geographic diversity. One of the goals of the organization is to eliminate the current US-centric governance of DNS and to ensure that the system is instead controlled by the global Internet community. To do so, ICANN itself must be geographically diverse. The current, interim board is only somewhat geographically diverse. Almost half (4 of 9) of the board members are from the US. Three are from Europe, one is from Australia, and one is from Japan. While this is better than the current entirely US-dominated governance, it is not fully geographically diverse. There are no representatives from Latin America or Africa.
The proposed structure for the permanent Board is designed to ensure more geographic diversity. The permanent Board must have at least one representative from each geographic region, and no more than half of the directors elected by the SOs may be citizens of any single geographic region. The geographic regions are defined as: Europe; Asia/Australia/Pacific; Latin America/Caribbean Islands; Africa; and North America. While this would increase the diversity that currently exists, it does not seem adequate for some regions. The Japanese government, for instance, has requested that the Asia/Australia/Pacific region be divided into more than one category. Additionally, it is unclear how countries and regions such as India, Russia, and the Middle East are categorized.
While the Interim Board is not fully geographically diverse, its U.S.-dominated membership echoes the currently disproportionate U.S. share of Internet use. This may change with time.
Because this organization will be governing in place of a traditional government, there are additional features that the White Paper called for beyond the basic structure of the organization. Particularly important is the accountability of the organization to the Internet community. While the organization is structured so that it will be made up of representatives of the Internet community, it is unclear what options community members have if they are unhappy with the actions of ICANN. The amended bylaws mandate the creation of an independent process for review. As with the membership issue, ICANN will develop an independent advisory committee to determine the mechanism for independent third-party review in cases where someone believes that ICANN or its staff have violated the bylaws or rules ICANN has set for itself. Because the Internet community will no longer be able to approach a government to request redress, this independent review process - a sort of private appellate process - will be critical.
A similar issue related to accountability is transparency. In its most recent bylaw amendments, ICANN significantly increased the transparency of the organization. While ICANN did not agree to hold open board meetings as requested by many critics, it will hold a separate open meeting in conjunction with each board meeting. At this open meeting the board will raise the issues to be decided at the board meeting. Beyond a formal comment-and-response process, board members will also make their opinions known and discuss the issues interactively, allowing genuine exchange with the community. The results of the board meeting, including individual votes by board members and their reasoning, will also be made public.
Another critical issue that ICANN must address is its fiscal accountability. ICANN will be supported not through government financing, but directly by the Internet community. It is unclear how this structure will be created so as to prevent conflicts of interest. The bylaws indicate that fees and charges will be subject to the full notice and comment provisions in the bylaws, but nothing more is known at this time.
Some are concerned that private management of domain names and addresses will magnify the commercial and political pressures that caused the current problems. Thus, rather than solving the DNS debate, the private organization might actually prevent the solution that could be achieved via internationally shared protection and administration of the DNS. While there is much criticism of the specific structure of ICANN and of ICANN’s secretive creation, only a small minority disparages the concept of a non-profit private organization as the new governance body for the Internet. Those who criticize the specific structure of ICANN, but not necessarily the concept that ICANN represents, have pushed for further transparency, accountability, and other features that resemble government-style protections.
So is this entity different from an international government agency? While the government decided to transfer governance of the domain name system to the private sector, it appears to be creating an entity that resembles government more than a private company. According to Mike Roberts, interim president of the ICANN board, the board is trying to balance the need to keep the public informed with the efficiency gained by private meetings. "We think the reason why the government wanted to transfer responsibility to a private corporation was because it wanted some ability to get the job done in a timely way." Introducing open meetings and other elements of transparency could hinder the flexibility expected of a private company. According to Roberts, there is "a great diversity of opinion" about whether the board will function more like a private company or like a quasi-governmental agency.
It is possible that these government-style protections are simply a protection mechanism needed because ICANN is such a novel entity. If ICANN demonstrates that the Internet community can trust this type of structure for Internet governance, future iterations of this type of entity may be able to avoid some of the bureaucratic protections. Until trust and legitimacy are earned, however, it seems appropriate for those whose interests are being represented in this new form to demand some means of accountability and redress.
While ICANN may include some components that look more like government than like a private company, it is by no means a government body. In fact, no government officials may serve on the ICANN board, nor may any officials of a multinational entity or treaty organization. ICANN will have, however, a Governmental Advisory Committee. Members of national governments, multinational governmental organizations and treaty organizations will make up this committee. The committee may consider and provide advice on ICANN activities that are related to concerns of the governments. ICANN will consider any response from the Committee, but is not bound to follow the committee’s requests.
The power that national governments will have over ICANN remains to be seen. When asked what ICANN would do if a government indicated a preference for a particular policy decision, Mike Roberts indicated that it is not what government thinks, but what industry thinks, that is relevant. Because ICANN is subject to US jurisdiction, it may not always be able to give that same answer. If the US government passed a law that required ICANN to act in a specific way, however, Roberts indicated that ICANN would rely on community support to dissuade the enactment or enforcement of such legislation. It is unclear what obligations ICANN would have if a different government passed such a law. The White Paper specifies that "incorporation in the United States is not intended to supplant or displace the laws of other countries where applicable." Despite this, it is unclear just whose jurisdiction ICANN will be subject to. The Japanese government has raised a parallel concern: if a party outside the US, not necessarily a foreign government, takes the new company to court in his home country, that court’s judgment may not be effective in the US.
VI. ICANN Expansion or Replication
Currently there are a number of issues that raise difficult questions for control on the Internet - issues such as libel, content control, copyright protection, and privacy. A quick and easy response for commentators is to suggest that ICANN take control of these issues. Despite these suggestions, ICANN’s current mandate does not include anything beyond DNS management. While this may be a considerably large task, it does not include management of all Internet issues. Even within this specified and limited realm, there are substantial concerns about transferring this significant authority to ICANN. Transferring additional authority or expanding the role of ICANN could cause sufficient concern about concentration of power that ICANN might not be able to succeed at its initial mission of DNS management - thus leading to the failure of this new structure of Internet governance. Expanding ICANN’s role, at least before it has a chance to prove its ability and legitimacy within the realm of DNS management, would be a mistake.
If ICANN is successful at gaining authority and legitimacy, not only with regard to the necessary governments, but also with regard to the global Internet community, then we may return to the question of whether such a body should be given expanded control over other areas of Internet governance. The question then becomes whether to have one centralized body that deals with all Internet governance issues or to allocate these issues to replica institutions. Assuming that ICANN would be an appropriate structure for addressing many Internet governance issues, we still need not necessarily answer that ICANN should take on this responsibility. Apart from concern over the concentration of power, delegating additional responsibilities to this company may cause structural problems that would lead to breakdown.
As ICANN is currently structured, it is not necessarily scalable to deal with expanded responsibilities. Half of the ICANN board is elected by Supporting Organizations. Each SO represents an area of responsibility - addressing, names and protocols. In order to add additional responsibilities outside of DNS management, additional SOs would have to be added as well. If each SO is granted three board slots, this would create an unworkably large board of directors. Reducing the number of board members elected by each SO, and possibly by the members at large as well, would keep the size of the Board at a manageable level, but might sacrifice some of the authority or legitimacy of the board.
As a means of addressing both scalability and concentration-of-power concerns, it might be better to take the model of governance that ICANN represents and replicate it for separate issues or responsibilities. Thus ICANN could exist side by side with a number of other possible companies, such as the Internet Corporation for Anonymity and Privacy (ICAP) and the Internet Corporation for Content Control (IC3).
Of course, there will be an overlap in the responsibilities that ICANN currently holds (DNS) and other issues that must be addressed. This is true of the trademark/domain name dispute issue. This is also true of any cyberzoning proposal that might rely on domain names for zoning content or privacy schemes (such as .xxx). Perhaps if the fears of concentration are mitigated, a means of addressing ICANN’s scalability problem might also be achieved so that the Internet community can take advantage of the synergies amongst these issues.
Then, the question will be which issues are appropriately handled by this new type of private corporation and which should be handled in a different way. Other entities such as treaty organizations or international governmental bodies might be more appropriate for dealing with certain issues. This is at least partially true of the trademark issue. While WIPO is better able to determine the issues that must be resolved and the processes that will best resolve them, in this instance, ICANN is perfectly suited to implement those policies. Unlike the trademark issue that is interrelated to domain names, other areas of Internet controversy may not be as appropriate for ICANN or a structure like it. Issues that are value driven such as content control, may be more appropriately handled at a much more local level. In the end, however, a governance structure such as ICANN may be the entity that enables the development of a system that allows individuals to implement their own values.
This new entity is seen as "a model for policy making for a medium that crosses geographical as well as socially constructed boundaries." If ICANN eventually overcomes the difficulties it is currently having and proves its ability to efficiently and fairly manage DNS, this structure might gain support as the solution to many Internet sovereignty dilemmas. At this point, however, the concept has a long way to go - including a two year transition process with the US government. It is up to the global Internet community to help make this project succeed. If this concept fails, nations around the world may have to create a government-based management scheme. "Almost nobody, including the US government, wants that."
Standards and Standards Bodies
ICANN is only one organization that may shape the future technical structure of the Internet. One of the main reasons that the Internet has succeeded as a new medium of communication has been, to put it simply, that it works, and it works well because of carefully thought-out standards adopted by various standards bodies. For example, the Transmission Control Protocol and the Internet Protocol (TCP/IP), which work together as a platform-independent base for the transfer of information, have made it possible for users to erect web pages, FTP servers, newsgroups, and email servers. The elegance of these protocols has also made it easy to use all of these services without substantial knowledge of how they work. This easy interoperability is not easy to achieve, however.
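The platform independence described above can be made concrete with a minimal sketch: two endpoints exchange bytes over TCP without either side knowing anything about the other's hardware or operating system. This is an illustration only, using Python's standard socket interface to stand in for any TCP/IP implementation.

```python
# Minimal TCP exchange over the loopback interface: a tiny echo server
# and a client, both written against the platform-independent byte
# stream that TCP/IP provides.
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))   # echo the received bytes back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # bind to any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, internet")
reply = client.recv(1024)
client.close()
print(reply)  # b'hello, internet'
```

The same pair of endpoints could just as well be a mainframe and a home PC; the protocol, not the platform, defines the exchange.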
When there is a choice of usable standards, the difficulty lies in convincing the majority of people to use the same system. This task is never easy, and many times the question of which standard to use prevents a technology from ever reaching its full potential. Division among standards often leads a technology to be perceived as "difficult to use" or "unreliable" simply because it is not used by a majority of people. This can lead to a rejection of the technology, despite the fact that each of the standards, in isolation, might be technically sound. However, if a standard is authoritatively established but lacks technical quality, it can create similar negative perceptions and hamper the actual technical development of the technology. Thus, it is not sufficient to simply proclaim any standard to be the "official" one; thought and care must be put into both selection and endorsement. For a standard to be successful, it must have both widespread acceptance and technical quality.
The question, then, becomes one of how to establish a standard without fragmentation of use or degradation of quality. The Internet faced this question at its beginning, when establishing the TCP/IP protocol. In the early 1980s, still early in the development of computer networking protocols, the United States federal government backed the Open Systems Interconnection (OSI) protocol suite as the communication language for computers. This suite was designed to be a complete, extremely robust communication system that would work on virtually any type of computer network. Because the government itself has such a large computer network, and because that network interfaces with many others, the government was, and continues to be, able to exert a great deal of authoritative and technical power in establishing a working protocol. In light of the government’s commitment to OSI, many overseas networks began to integrate the technology into their networks.
However, OSI proved to be a flop, with the current TCP/IP taking its place. The reasons for this are both technical and social. Because OSI was developed all at once, with little real-world testing, aspects of the system that might have been improved could not be effectively changed once it had been widely deployed. Additionally, because OSI had been designed to accommodate any type of computer, it was more complicated, and therefore less widely used, than TCP/IP. The final blow was that TCP/IP was already installed on a large number of systems and was free to use. This killed any chance OSI had of success as a standard.
Today, the solution for the Internet has been to entrust the development of standards to a series of organizations with the express purpose of formulating protocols for the Internet. Leading the way is the Internet Engineering Task Force (IETF). In its own words, the IETF is "a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is open to any interested individual." The IETF organizes professionals into groups that address various protocol issues on the Internet, coming up with potential solutions by actually developing and testing code. It holds very little formal power over the Internet. As an organization, it has very little structure and no formal membership (anyone who wants to participate is welcome). It is the quality of its standards that has made it the most prominent standards-creation body. And, thus far, its work has led to a period of great success for the Internet.
The IETF is part of a larger framework of bodies that control the Internet. At the top is the Internet Society (ISOC), which oversees the work of the IETF; the Internet Architecture Board (IAB), which works with the IETF on the larger structural issues related to the global network; and the Internet Assigned Numbers Authority (IANA). The ISOC is different, though, in that it has individual members as well as corporate and governmental ones. The ISOC tries to include among its membership as many of the parties with interests on the Internet as possible. While the ISOC is more formal than the IETF (membership dues range from $1000 for individuals to $50,000 for "Sustaining corporate membership"), the ISOC also derives its power from its ability to attract membership sufficiently prominent that its decisions will have authority.
As has been discussed earlier, the formation of the new ICANN body and its supporting organizations will, in all likelihood, fulfill the responsibilities attributed thus far to the IETF as well as to IANA. In practice, many of the current "members" of the IETF may find themselves asked to play a role in the new supporting organizations. The challenge facing ICANN will be to continue to produce standards with the high quality and broad acceptance of those that have served the Internet so far.
Current Efforts at Regulation Through Technology
As various entities seek to exert influence over cyberspace, many of them do now -- and will in the future -- utilize the hardware and software architectures of the Internet to seize control. First, we'll consider contemporary technologies and their applications, and then we'll move on to an exploration of new technological tools of power.
I. Existing Architectures
Currently, the technical means for regulating Internet activity are as decentralized and confusing as the legal regimes. Different parties, with different regulatory needs, currently use a variety of technologies in order to control some part of cyberspace.
Sovereign nations versus the private sector
The global nature of the Internet, as we have seen, creates some complicated dilemmas for sovereign nation-states struggling with each other for control over the global network. However, on the Internet of 1998, the conflict between governments and their own citizens is probably even more problematic.
Both governments and their individual and organizational constituents have a number of technologies at their disposal. These may be divided into two fundamentally different types of architectures.
Filtering architectures: Controlling incoming information
The first type involves placing barriers between the end user and the network at large, in order to exclude undesired content.
Power to the government
A government that wishes to protect its citizen from the corrupting influence of the outside world may choose to erect a national "firewall" around the country. This involves physically routing all incoming and outgoing Internet traffic through government-owned computers. Functionally, any web pages, e-mail, newsgroups, or other information that the government chooses to censor cannot be accessed from within the jurisdiction.
National (or local) firewalls face both legal and practical challenges. In the United States, for example, such systems would in most cases be unconstitutional if imposed by the government. Firewalls are also only effective if one entity can physically control the network connections between the jurisdiction and the outside world. Given that many Western countries, especially the United States, have a large number of international connect-points, this is no small problem. The situation is compounded when one considers that landline, cellular, and satellite telephone systems may be used to carry network traffic.
However, some countries -- Singapore, for example -- having sufficient control over the national information infrastructure, and being unburdened by constitutional constraints, are already actively using national firewalls.
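At its core, a national firewall of this kind reduces to a gateway check: every outbound request is compared against a government-maintained blocklist before it is forwarded. The sketch below is purely illustrative; the blocked domains are invented placeholders, not any country's actual policy.

```python
# Toy sketch of a national firewall's central decision: a government-run
# gateway consults a blocklist before forwarding traffic. The domains
# are hypothetical examples, not real sites or real policy.

BLOCKED_DOMAINS = {"banned-news.example", "casino.example"}

def gateway_allows(requested_host: str) -> bool:
    """Return True if the gateway will forward traffic to this host."""
    host = requested_host.lower().rstrip(".")
    # Block the listed domain itself and any of its subdomains.
    return not any(
        host == blocked or host.endswith("." + blocked)
        for blocked in BLOCKED_DOMAINS
    )

print(gateway_allows("www.banned-news.example"))  # False
print(gateway_allows("weather.example"))          # True
```

The practical weakness noted above is visible even in this toy: the check only works if all traffic actually passes through the gateway, which is why the number of international connect-points matters so much.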
Power to the private sector
Firewalls and other filtering technologies are also useful to individuals, corporations, and other organizations. Many corporations operate firewalls that block certain services or sites, both to censor content and to (supposedly) improve network security.
Parents concerned about exposing their children to inappropriate content on the World Wide Web can choose from a whole host of commercial web filtering programs like Cyber Patrol, NetNanny, Surf Watch, and X-Stop. Readers of newsgroups and e-mail can use filters that block messages based on the author, or the content. Choices include pre-fabricated systems that block spam (unsolicited bulk commercial messages), and third-party "killfiles" and rule-based filters.
Unfortunately, current filtering technologies are not nearly as accurate as end users would prefer. Filters may compensate with coarse granularity, blocking a large fraction of content (including some acceptable content) in order to ensure that no unapproved content can be viewed. On the other hand, firewalls may be circumvented by clever programmers, and finely discriminating filters may allow inappropriate content to sneak by. Protocols like PICS, however, may enable more accurate fine-grained filtering.
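The killfiles and rule-based filters mentioned above amount to a table of rules matched against message headers: block by author, or block by content keyword. The field names and rules in the sketch below are hypothetical, not the format of any actual product.

```python
# A minimal rule-based "killfile" sketch for mail or news: each rule
# names a header field and a substring; a message matching any rule is
# dropped. The addresses and keywords are invented for illustration.

KILLFILE = [
    ("from", "spammer@bulkmail.example"),   # block a specific author
    ("subject", "make money fast"),         # block by content keyword
]

def is_filtered(message: dict) -> bool:
    """Return True if the message matches any killfile rule."""
    return any(
        pattern in message.get(field, "").lower()
        for field, pattern in KILLFILE
    )

msg = {"from": "friend@example.org", "subject": "Lunch tomorrow?"}
spam = {"from": "spammer@bulkmail.example", "subject": "HELLO"}
print(is_filtered(msg))   # False
print(is_filtered(spam))  # True
```

The granularity problem in the text is easy to see here: a keyword rule like "make money fast" will also silence a legitimate message that happens to quote the phrase.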
There has also been some fear that national governments may mandate the use of filtering technology, whether commercial, PICS-based, or otherwise. Many feel this may eventually lead to increased government censorship, stealing discretion from the individual.
Shielding architectures: Controlling outgoing information
The second type of architecture involves the protection of data in hostile environments (generally through some form of encryption), as well as preventing information from leaking out in the first place. Firewalls, as discussed above, may of course be used to isolate a sub-network from the rest of the Internet. However, there are many more novel techniques for controlling outward information flow. Once again, they can shift control toward either sovereign states or individuals and organizations.
Power to the government
One rather compelling example of a territorial sovereign attempting to extend its power in the electronic realm is the United States federal government's move toward encouraging the domestic and global use of "key escrow" technology. With such systems, the government would be able to decode encrypted messages sent between private parties using a special second key. If, as the government hoped, U.S. key escrow encryption systems were used internationally, the technology would effectively extend the reach of American law enforcement, perhaps farther than international politics and diplomacy could go. Naturally, Washington's critics were wary of potential abuse of such a far-reaching technology.
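The "special second key" idea can be sketched in a few lines: the session key that encrypts a message is stored twice, once wrapped for the recipient and once wrapped under the escrow agent's key. In this toy, XOR stands in for real encryption purely for illustration; it is not secure and does not reflect the design of any actual escrow product.

```python
# Toy illustration of key escrow. A message is encrypted with a session
# key; that session key is then wrapped twice -- once for the recipient,
# once for the escrow agent (i.e., the government). XOR is a stand-in
# for real encryption here and is NOT cryptographically secure.
import os

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

recipient_key = os.urandom(16)
escrow_key = os.urandom(16)       # held by the escrow agent

session_key = os.urandom(16)
ciphertext = xor(b"attack at dawn!!", session_key)   # 16-byte message

wrapped_for_recipient = xor(session_key, recipient_key)
wrapped_for_escrow = xor(session_key, escrow_key)

# The recipient unwraps the session key and decrypts normally...
plaintext = xor(ciphertext, xor(wrapped_for_recipient, recipient_key))
# ...but law enforcement, holding the escrow key, can recover it too.
recovered = xor(ciphertext, xor(wrapped_for_escrow, escrow_key))

print(plaintext == recovered == b"attack at dawn!!")  # True
```

The critics' worry is structural, not cryptographic: whoever holds the escrow key can read every message, so the scheme is only as trustworthy as the escrow agent.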
Increase private control
Technology can also extend the control of individual and corporate content providers farther than government is willing - or able - to go.
DVD and DIVX
Two interesting examples of "trusted systems" in the video arena are the DVD (Digital Versatile Disc or Digital Video Disc) and DIVX (Digital Video Express) formats. DVD is a commercial data format that allows movies (as well as audio and other data) to be stored in compressed form on CD-like discs.
With the advent of digital storage, motion picture studios were concerned that consumers could potentially undermine the video market by making large numbers of (illegal) perfect copies. As a result, embedded in DVD discs and players are a number of technical copy protection features that make it much harder for consumers to download a disc's content to other digital media. An additional feature also addresses the problems caused by the global nature of the video marketplace.
DVD discs and players may optionally be encoded with a one-digit code that corresponds to one of six global regions. Discs encoded for consumers in one region will not play on devices sold in another region. Producers and wholesalers may thereby technologically enforce the timing and geographic extent of the distribution of new releases. Moreover, with these architectural barriers against unauthorized copies and exports, studios have much better control over their product in countries (like China, for instance) where enforcement of Hollywood copyrights is lax.
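Conceptually, the region check described above is a simple membership test performed by the player: the disc declares which regions it is authorized for, and the player refuses discs that do not include its own region. The data structures below are a simplified illustration (actual discs encode the regions more compactly, but the logic is equivalent).

```python
# Sketch of the DVD region-coding check: a player sold in one of the
# six global regions refuses discs not authorized for that region.
# The sets below are illustrative, not the on-disc encoding.

PLAYER_REGION = 1                # e.g., a player sold in North America

def player_accepts(disc_regions, player_region=PLAYER_REGION):
    """Return True if the disc is authorized for this player's region."""
    return player_region in disc_regions

us_release = {1}                 # encoded only for region 1
region_free = {1, 2, 3, 4, 5, 6} # authorized for all six regions

print(player_accepts(us_release))   # True
print(player_accepts({2}))          # False: a disc from another region
print(player_accepts(region_free))  # True
```

A few lines of logic in every player are thus enough to enforce, worldwide, the staggered release windows that contracts and customs officers could not.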
However, even the protections that DVD and U.S. law enforcement (perhaps soon to be bolstered by the Digital Millennium Copyright Act) provide aren't enough for some companies. A subset of the DVD industry has also developed the DIVX standard, which is an enhancement of the DVD architecture. DIVX uses Triple-DES encryption to secure the contents of the disc; the playback equipment (which also works with standard DVDs) connects to a central database via a modem. Consumers are billed on a pay-per-view basis, with the option to purchase the rights to unlimited viewing.
While the government certainly hasn't done anything to try to stop the rollout of this new technology, the viewing public has not necessarily been so kind. As DVD and DIVX begin to penetrate the home video market, it remains to be seen if consumers will be comfortable with this digital extension of corporate control into their lives. It also remains to be seen what impact society's experience with DVD and DIVX will have on future network-based media delivery systems. (DVD is designed to be interoperable with computers, and "DVD-ROM" may be used to store data, much like CD-ROM.)
Citizens may find refuge from prying government (or other) eyes behind encryption technologies like PGP, or behind anonymous or pseudonymous e-mail and anonymized web browsing. Unfortunately, these technologies can be abused, especially by criminals trying to evade law enforcement. Especially problematic, of course, are schemes that almost completely eliminate traceability by crossing into foreign jurisdictions not bound by reciprocal enforcement agreements.
The Internet can conceivably be used to perpetrate crimes that take place in the real world (robbery, murder, kidnapping), in addition to creating new categories of offenses. Software piracy, online speech-related crimes (libel, fraud, plagiarism, etc.), and e-commerce, the detergent of choice for hip money launderers, are creating new conflicts between governments and their cypherpunk citizens.
Pulling the plug
In addition to active filtering and shielding technologies, the various players in the great Internet power struggle also have the option of flipping the virtual "off" switch on both undesired services and undesired users.
Early on in the history of USENet, corporate operators of news servers sometimes exercised their sovereign right not to carry those newsgroups they felt were not of any (business-related) utility to their employees (hopefully keeping the workforce focused and the phone bill low). Later, with the proliferation of spam on the net came tough policies from many ISPs, who promised to kick off anyone using their services to flood users' inboxes with unwanted e-mail.
On the government side, in November 1998, Loudoun County, Virginia made the controversial decision to temporarily disconnect Internet service to its public libraries. In an interesting technological victory of sorts for home rule over federalism, officials were reacting to a court order that required them (on First Amendment grounds) to stop filtering Web access with X-Stop software.
Internationally, there is growing concern that the Chinese government may be using the Hong Kong-based China Internet Corporation to create a nation-wide network that would supplant the need for anything more than extremely limited Internet connectivity.
Indirect influence on technology
Of course, the Internet does not exist in a vacuum. A variety of legal, economic, and social forces affect what technologies are available to whom, and who is willing to use them.
The oft-cited United States government likes to throw its weight around the market, through narrow procurement specifications designed to encourage the development of the sorts of technologies (like key escrow) it wants to see. Of course, it also has the luxury of certain kinds of direct regulation, like regulation of strong encryption exports.
But the electronic business community is not to be outdone, foisting new anti-piracy advertising campaigns on the public, and maneuvering for market dominance. Competing strategies these days seem to include the "open source" approach (recently adopted by Netscape, in which you give away one product for free, charge for support, and sell related and more expensive products) and undermining technical standards (as Microsoft has allegedly done with the Java programming language).
Inadequacy of current solutions
As we have seen, the problem of regulation may be divided into two parts: the first is how to prevent unwanted behavior from happening in the first place; the second, how to make people accountable for their actions after the fact. Real-world governments have traditionally addressed the "ex post" half of the problem rather well, clearly defining crimes and some civil offenses, setting specific punishments, and creating a legal system to enforce and interpret them.
In situations where questions arose over which government had jurisdiction over a particular offense, three criteria could be used to decide - the location of the conduct, the effects of the conduct, and the citizenship of the perpetrators. Traditionally, cross-jurisdictional conflicts have been a relatively small problem, since the effects of a crime are usually limited to a short distance from the location of the perpetrators. However, the Internet has complicated this problem by allowing, on a regular basis, conduct in one part of the world to affect people anywhere and everywhere on the network.
Existing "filtering" technologies try to solve this problem by trying to limit the propagation of harmful content. Some "shielding" technologies, like copy protection, encryption, and anonymizing systems, prevent information from being abused. Others, like key escrow, introduce after-the-fact accountability.
However, it is becoming clear that these technologies are inadequate to fully address cross-border problems, especially between jurisdictions with irreconcilable (or irreconciled) laws. In light of the shortcomings of several of the architecturally based systems already mentioned, a couple of new architectures may lead to effective governance on the Internet.
New Architectures for Internet Governance
Proposed Architecture 1: Nation-Based Active Prevention (using Digital IDs)
Digital IDs can address the "ex post" part of the problem by tying virtual actions to real-world persons, allowing existing judicial systems to introduce accountability. However, this scheme is rendered impotent by a jurisdictional problem, as governments are not willing to enforce one another's laws in full. Accountability for online behavior, then, is limited by the ability of governments to forge successful international agreements.
In this section, we will propose a new architecture based on digital identification certificates that seeks to solve this problem by preventing harmful behavior outright, while still allowing real-world governments to disagree on what behavior should be regulated in the first place. For this architecture, our focus will be on applications where one party wants to provide some content or service to a large number of consumers, as on the World Wide Web.
The Existing Alternative: Logged Access
Current electronic infrastructures also provide little or no assurance that users are actually accessing the Internet from where they claim to be, or that they actually are who they claim to be. Perhaps the strongest methods currently possible to verify citizenship and location information are those used by Netscape on its download site for the version of Communicator 4.5 with strong encryption. (We will use this site as an illustrative example.) Users are required to fill out a form specifying their citizenship, name, address, and phone number. However, the only actual verification that takes place appears (after some brief experimentation with the system) to be a check that the phone number supplied and the IP address contacting the server are valid and located within the U.S.
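The kind of verification just described might be sketched as follows. Everything here is a stand-in (the area-code and IP-prefix tables are tiny invented samples, not real registries), but it illustrates how thin a phone-plus-IP check really is:

```python
# Hypothetical sketch of the check described above: approve a download
# request only if the claimed U.S. phone number is plausible and the
# client's IP address maps to a U.S. network. The lookup tables are
# stand-ins; a real system would consult telephone and IP registries.

US_AREA_CODES = {"212", "415", "617", "703"}      # tiny invented sample
US_IP_PREFIXES = ("18.", "128.103.", "192.12.")   # tiny invented sample

def verify_request(phone: str, ip: str) -> bool:
    digits = "".join(ch for ch in phone if ch.isdigit())
    valid_phone = len(digits) == 10 and digits[:3] in US_AREA_CODES
    valid_ip = ip.startswith(US_IP_PREFIXES)
    return valid_phone and valid_ip

assert verify_request("(617) 555-0100", "18.26.0.1")
assert not verify_request("+44 20 7946 0958", "18.26.0.1")
```

Note that both inputs are trivially forgeable: any U.S. phone number from a directory passes the first test, and a U.S.-based proxy defeats the second.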
While these checks may be useful to increase the security of the proposed active prevention system, they have serious drawbacks. IP checks can be confounded by proxy servers and dialup users. The additional information required of users may be considered intrusive, especially considering that all access attempts are logged. Also, many legitimate users (such as U.S. citizens abroad, Canadian citizens, and Canadian residents) are locked out of the system because Netscape has no way to verify their claims.
Though the logs that Netscape keeps are only released upon "lawful requests by government agencies" (a practice certified by TRUSTe), this leaves them prone to abuse by law enforcement, who may look upon users who need strong encryption with suspicion. These logs require Netscape to provide a significant amount of additional storage capacity, and the checks required of the users are extremely inconvenient. If some organization had a strong incentive to provide to the global community some service or content being regulated by the US government (perhaps even under some international agreement), and a similar system were used to contain it, there would be no mechanism to prevent the alteration of the log files. (Though the government might spot-check the server for violations using undercover web-browsing agents.)
Even if the logs were found intact, if a foreign user were able to circumvent the weak safeguards on the system (using proxy services and an electronic phone book) and were discovered by the government, the transaction might not be traceable to its source. In addition, the hacker would most likely be located in a foreign jurisdiction, and thus out of reach of domestic law enforcement. As a result, the government might decide to order Netscape to shut down the server that distributes the strong encryption-enabled versions of its browsers. Clearly, this is an unacceptable solution in the general case - any product or service which is illegal anywhere in the world would have to be excluded from the Internet.
To take an even worse example, as mentioned earlier, in Liechtenstein, the government sponsors Internet gambling. There is no way for foreign jurisdictions to shut down this service short of cutting off an entire country from the Internet. Even this tactic of last resort would be questionable at best, for nearly every country in the world prohibits something which is legal in other parts of the world. This creates quite a conundrum for jurisdictions with different values - like some states in the US - which wish to ban online casinos or other such sites in order to protect their citizens.
Description of Proposed Architecture
"Nation-based active prevention" is a potential electronic solution to the above problem. It would require different software for clients and servers.
Digital ID for Clients
Users of the system are required to have digital identification certificates that specify their citizenship and current location. IDs could theoretically be granted or authenticated by any trusted party. However, in practice, Internet Service Providers are in an excellent position to authenticate users' real-world locations, and may set up mechanisms for automatically granting "session" IDs. Real-world governments, on the other hand, may be the logical choice to certify citizenship.
Of course, these are not the only possible choices; some trusted third party (like Verisign) or parties may be delegated the responsibility of granting and/or authenticating IDs. It is debatable whether PGP's "web of trust" model would be workable for the purposes of this system. Also, for some applications, certification of other attributes, like legal status (minor vs. adult), might be necessary.
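A minimal sketch of what such a certificate might contain follows. The field names are our invention; a real deployment would presumably use a standard format like X.509, with a genuine cryptographic signature in place of the stub below:

```python
# Illustrative model of a digital ID certificate for the proposed
# system. Field names and contents are assumptions for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class DigitalID:
    citizenship: str    # certified by a government or its delegate
    location: str       # certified per-session, e.g. by the user's ISP
    is_minor: bool      # optional attribute for some applications
    issuer: str         # the trusted party vouching for the above
    signature: bytes    # stub standing in for a cryptographic signature

cert = DigitalID(
    citizenship="US",
    location="US-MA",
    issuer="Example Certifying Authority",
    is_minor=False,
    signature=b"...",
)
assert cert.citizenship == "US"
```

Note that nothing here identifies the user personally; as discussed under "Privacy and Security" below, only the issuer need know the real-world identity behind a certificate.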
Active Prevention Systems for Servers
Service and content providers will need to run special software in order to make the system work. However, such systems might be created by modifying existing server programs, like HTTPSd, which can be compatible with existing clients. So, consumers, for example, could use Netscape, Internet Explorer, Lynx, or any other existing browser that supports digital certificates. Users without digital IDs would also be able to browse any part of a web site that was not protected by active prevention without any special modifications.
The novel component of the server software we are proposing would be a "jurisdictional database." As users from many different countries attempt to access the provider's content or service, the system would consult the database to decide whether to admit or turn away users, or perhaps direct them to a version that is legal for them to use, given their national affiliations.
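A jurisdictional database lookup might be sketched as follows, using an online casino as the example. The policy table and its entries are invented for illustration; a real database would be far larger and keyed to actual national law:

```python
# Sketch of the "jurisdictional database" decision: given a user's
# certified citizenship, the server admits, denies, or redirects.
# Country codes and policies below are invented for illustration.

POLICY = {
    "US": "deny",                # e.g., jurisdictions banning gambling
    "LI": "admit",               # e.g., where gambling is state-sponsored
    "UK": "redirect:/uk-rules",  # a version legal for that jurisdiction
}

def check_access(citizenship: str) -> str:
    # Unlisted nations are denied by default - the conservative choice
    # when a jurisdiction's law has not yet been researched.
    return POLICY.get(citizenship, "deny")

assert check_access("LI") == "admit"
assert check_access("US") == "deny"
```

The deny-by-default rule reflects the coarse-granularity strategy discussed later: providers can simply turn away nations whose laws they have not yet clarified.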
Digital ID Layer
Digital IDs must be available to the citizens of every country, or else they will be unable to participate in any online transactions protected by the system. However, if for some reason, a particular government fails to provide such a service to its constituency, a trusted third party, such as a regional consortium, a corporation, or a non-profit organization, could most likely be used instead.
Privacy and Security
In the system described above, only two or three pieces of information (citizenship, location, and perhaps minor status) would need to be contained in, or revealed through, the identification certificates. These architectural choices would probably require physical verification (with the help of ISPs and IP traces) in order to help ensure the non-transferability of the certificates. For enhanced security, certificates might be tied to real-world identities, though only a trusted third party need know the details.
With this scheme, for enforcement purposes, the provider must somehow be held liable if any illegal transactions occur because of errors in the system's configuration. However, if personally identifiable information is revealed, the opportunity of holding users accountable for their actions is preserved. In other ways, including the danger of identity theft, the security of the system depends on the quality of engineering and safe operation by the entities that implement digital ID and active prevention technology.
As stated earlier, the current state of protocol formation is not necessarily organized, nor is it efficient. However, some international standards-setting body, such as the IETF, must design a universally acceptable format for the digital certificates, or an existing system must be agreed upon. Otherwise competing, incompatible systems may arise, perhaps supported by different segments of the international community. Most importantly, some party must advocate for the adoption of the system. (See "Constitutionality and Adoption Issues.")
Active Prevention Layer
Conflicts of Law
Any system that superimposes real-world government onto cyberspace will necessarily have to include some mechanism for resolving conflicts between jurisdictions with fundamentally irreconcilable laws. Fortunately, the active prevention system provides the option simply to deny questionable transactions.
Because the Internet can very efficiently put users in contact with a service or content provider anywhere in the world, the "havens problem" looms over the active prevention system. Unless all of the nations in the world agree to somehow require the adoption of the system in regulated industries, enterprising providers can set up shop in "haven" jurisdictions, evading prosecution and becoming a magnet for black market services.
On the other hand, almost every country would like to regulate some aspect of online behavior. This may lead to international "logrolling," eventually facilitating a widespread adoption of active prevention. Also, because the technology is modular, real-world sovereigns don't necessarily have to enforce adoption in all internationally regulated industries. This may provide sufficient flexibility to reach agreement on important issues.
If the number of haven nations is small, and the behaviors under consideration considerably problematic, the rest of the world may threaten to cut off Internet connectivity, pending rectification of the situation. This extreme tactic, however, is probably unlikely to be used in practice (and is potentially unconstitutional in the United States).
Most of the attention of law enforcement may be occupied in tracking down and eliminating rogue servers in participating jurisdictions. Moreover, even if some providers seek refuge in havens, much - if not most - of the undesired conduct will have been stopped.
The construction of the described "jurisdictional database" has the potential to be an overwhelming research task. If a given set of transactions must be investigated with regard to the laws of every single country on the planet, not to mention states and provinces, a tremendous obstacle is placed before the provider. Governments may ease this legal burden by providing simple guidelines (and making timely updates) so that regulated industries may properly configure their software.
On the other hand, service and content providers may not have such a large task after all, if they only wish to provide a limited domain of services. They may also choose to research the regulations with a coarse granularity, simply denying access to nations with confusing laws until they can be clarified.
Constitutionality and Adoption Issues
Perhaps the most critical challenge facing the adoption of active prevention systems is the constitutionality of requiring such systems in the United States, a global leader in Internet technology and a major negotiating player. The Communications Decency Act sought to effectively require age verification software on pornography web sites. The Supreme Court ruled parts of the law unconstitutional for several reasons. In the majority opinion, Justice John Paul Stevens wrote that because the CDA prohibits "the dissemination of indecent messages only to persons known to be under 18, the Government argues, it does not require transmitters to 'refrain from communicating indecent material to adults; they need only refrain from disseminating such materials to persons they know to be under 18.'"
But, the Court concluded, "This argument ignores the fact that most Internet fora--including chat rooms, newsgroups, mail exploders, and the Web--are open to all comers....[The CDA] would confer broad powers of censorship, in the form of a 'heckler's veto,' upon any opponent of indecent speech who might simply log on and inform the would be discoursers that his 17 year old child--a 'specific person . . . under 18 years of age,' would be present."
Stevens also cited the District Court's finding that "the burdens imposed by credit card verification and adult password verification systems make them effectively unavailable to a substantial number of Internet content providers," particularly non-commercial speakers.
Encouragingly, Justice O'Connor, in her opinion concurring in part and dissenting in part, wrote: "I write separately to explain why I view the Communications Decency Act of 1996 (CDA) as little more than an attempt by Congress to create 'adult zones' on the Internet. Our precedent indicates that the creation of such zones can be constitutionally sound. Despite the soundness of its purpose, however, portions of the CDA are unconstitutional because they stray from the blueprint our prior cases have developed for constructing a 'zoning law' that passes constitutional muster."
These opinions seem to indicate that requiring some providers to install active prevention systems (which could be quite useful even for domestic applications like keeping pornography away from children) is not outside the realm of possibility. However, several Constitutional constraints must guide any such decision.
The domains of regulation must be "narrowly tailored" to those areas where the government has a compelling interest to intervene. Of equal importance, the government must somehow avoid putting a substantial burden on service and content providers, lest the regulations run afoul of the First Amendment.
This might be better accomplished through laying the groundwork for the architecture, as opposed to careful wording of any statutes. It would be prudent for the government either to create on its own, or somehow to support or require the establishment (by private industry, perhaps) of, a free, universally available national digital identity infrastructure. Thus, no one who would otherwise have a right of access would be excluded by the system as a recipient of online speech or services.
The government could also contribute funding and expertise to the development of open-source software to run the servers, and to the extensive research necessary to create the supporting jurisdictional databases. It might choose to exempt small or non-commercial sites, or alternatively, subsidize or otherwise support installation of such systems.
Even before legislation can be passed, its imminent arrival may inspire industry self-regulation, as has happened in motion pictures, television, and even Internet pornography, to a small degree. The threat of lawsuits from home and abroad may provide additional motivation.
Proposed Architecture 2: Cyberzoning
In addition to the "ex ante" vs. "ex post" divide, in some ways, there is a schism between real-world and online "governments." "Cyberzoning," the partitioning of the Internet into identifiable, though not necessarily mutually exclusive segments, is in many ways an attempt to allow cyber-government to establish itself.
Neither of the two implementations we will discuss necessarily prevents harmful behavior in the same direct way that active prevention does. However, by allowing cyberspace to more effectively regulate itself, cyberzoning may reduce the frequency and propagation of the effects of undesirable conduct, as well as potentially introduce some amount of accountability for actions that primarily affect the electronic universe. Unlike active prevention, cyberzoning can be applied to both producer-consumer and peer-to-peer interactions.
Alternative 1: ZoneDNS
Figure 2 shows how the "ZoneDNS" system might work, with accessing an online casino over the World Wide Web as an illustrative example. The "ZoneDNS hierarchy" is nothing more than the conventional DNS system with some simple extensions that would allow DNS servers to store information about "zones."
(Note: Here, we are using the term "zone" in a sense which is not related to the "zones" used in technical descriptions of the DNS system. Also, "service provider" refers to the provider of content or some commercial or other service on the Internet, not to be confused with Internet Service Providers (ISPs), which provide connectivity to the Internet.)
Our "zones" may be conceived of as a group of related sites (web pages, for example) on the Internet that all adopt (by choice or by law) a common label, and establish and perhaps enforce some common set of rules. The ZoneDNS architecture allows for any one site to be virtually situated in any number of different zones.
The formation process and regulations of a given zone depend on its purpose. A group of pornographic sites might come together and establish rules allowing a high degree of freedom with respect to content control while putting strict limitations on the use and release of personal information. A group of personal web pages might join together in a zone to create rules about what may be said about other individuals, establishing a high degree of responsibility for remarks made about other people. A group of commercial sites might form a zone that offers copyrighted material, but sets strict rules for use of the material. Another group of pages might adopt a particular zone label in order to be easily located, but decide not to have or enforce any rules at all. Traditional real-world governments may also create zones for regulatory purposes, and other geographically based confederations (like all the ISPs in a given region) might be formed. In any case, groups of related sites will be able to create and advertise to users unified, broadly accepted policies, clarifying one's rights and responsibilities in a given corner of cyberspace.
To form a zone, interested service operators would register their affiliations with the ZoneDNS system, much as hostnames and their corresponding IP addresses are registered today. (The zone administrator, if any, might need to have oversight over this registration.) The illustration shows a user who desires to access a web site, in this case, www.lasvegas.com. In addition to returning the site's IP address - which allows the user's software to communicate with the web servers - ZoneDNS would also return the registered affiliations.
The user, presented with this information, would then be able to make an informed decision about whether or not to proceed into the site. Users might find it convenient to configure their browsers to avoid certain zones, or warn them before proceeding. In situations where some outside agency (like an employer, ISP, or even a government) has control over the user's software, access to particular zones may be involuntarily blocked.
Alternative 2: gTLD extension/competition
In considering the foreseeable future of the Domain Name System, we must also consider that the current effective moratorium on the creation of new Top Level Domains (TLDs) may be coming to an end. Domain names do carry some information about what users will find within. Country code Top Level Domains (ccTLDs) like .us and .uk convey the geographic location of a particular service. The six generic Top Level Domains (gTLDs - .com, .edu, .gov, .mil, .net, and .org) also provide some very abstract information. For example, one is certainly assured that a .mil address is owned by a U.S. military installation. However, given the extreme popularity of the Web and a lack of appropriate alternatives that would facilitate convenient domain names, other gTLDs have become somewhat diluted. For example, one finds many non-commercial sites in the .com domain.
However, if new gTLDs are created, service providers will be able to more finely differentiate themselves by choosing an appropriate domain. Compared to ZoneDNS, however, simultaneously occupying multiple zones is quite difficult.
The Domain Name System was designed to be extensible. New categories of information may be integrated into the system merely by defining a new type of "resource record." This sort of standard might be drafted by the IETF (as the Secure DNS extensions were), or by any other relevant standards-making body. Presumably, ICANN, in its role as official steward of DNS and assigned numbers for protocols, would need to assign the "zone resource record" a type number and handle its registration.
Then, all that would remain would be to convince DNS clients and operators to create zone listings in their database, and to use ZoneDNS-aware software, which would be interoperable with older systems. A similar movement is underway to convince equipment owners to register geographic location information in experimental "LOC" resource records. While DNS-LOC has not enjoyed widespread use, it is interesting both as a potential model for ZoneDNS, and as another system attempting to bring real-world geography into cyberspace.
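To make the idea concrete, a hypothetical master-file syntax for such a record (modeled loosely on the experimental LOC record) might look like the following. The record type name "ZONE" and its field layout are assumptions of ours, not any published standard:

```python
# Hypothetical master-file entry for a "zone" resource record, plus a
# toy parser. The "ZONE" type and its quoted-label fields are invented
# for illustration, by analogy to the experimental LOC record.

RECORD = 'www.lasvegas.com. IN ZONE "gambling" "adults-only"'

def parse_zone_rr(line: str):
    """Split a zone RR into its owner name and zone labels."""
    owner, rr_class, rr_type, rest = line.split(None, 3)
    assert rr_class == "IN" and rr_type == "ZONE"
    zones = [field.strip('"') for field in rest.split()]
    return owner.rstrip("."), zones

owner, zones = parse_zone_rr(RECORD)
assert owner == "www.lasvegas.com"
assert zones == ["gambling", "adults-only"]
```

Because unknown record types are simply ignored by older software, entries like this could be added to existing zone files without disturbing conventional resolution.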
DNS's flexibility also allows for an almost unconstrained set of new TLDs to be created. In fact, the only limiting factors in such expansion might be end-user demand and the political willingness of the powers that be - most likely, ICANN and its constituents - to do so.
Another important technical criticism of ZoneDNS is that it violates a fundamental philosophy of Internet architecture. In particular, low-level protocols should be general and extensible; application-specific protocols should be constructed at a high level of abstraction. However, it might be argued that "cyberzones" are actually on the same level of generality as DNS. Certainly, the ZoneDNS system makes the same sort of changes to Internet functionality as the much less controversial (at least with regard to the end-to-end argument) expansion of TLDs. Why not allow end users to take advantage of the existing DNS infrastructure?
Application to peer-to-peer transactions
While it's easy to see how cyberzoning can apply to client/server interactions, like the World Wide Web, such systems can also be utilized in communications between individuals. For instance, Internet e-mail addresses always contain the domain to which the sender or recipient belongs. (And while the technical details have not yet been worked out, Internet-wide "instant messaging" may employ a similar architecture.) Under one possible application of the system, users are essentially "affiliated" with a domain, which, in the second alternative, is equivalent to the zones we have been describing. In the first alternative, software enhancements might automatically retrieve and display zone affiliations when presented with an IP address or domain name.
In the e-mail example, senders would be expected to tailor their content based on the destination zone. A message that might be appropriate for .sex, for instance, would not be appropriate for .kids. Likewise, e-mail users may affiliate themselves with either sort of zone in order to express some preference to the sender - for instance, whether or not they wish to receive bulk commercial e-mails. This sort of information cannot easily be obtained given the existing Internet architecture.
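Sender-side tailoring could be sketched as below, assuming (purely for illustration) that a recipient's zone can be read directly off the top-level domain of the address, and that a hypothetical .nospam zone signals an opt-out from bulk commercial mail:

```python
# Sketch of sender-side tailoring by destination zone. The zone labels
# and the extraction rule (last label of the mail domain) are invented
# for illustration.

def recipient_zone(address: str) -> str:
    domain = address.rsplit("@", 1)[1]
    return domain.rsplit(".", 1)[1]  # treat the last label as the zone

def may_send_bulk_commercial(address: str) -> bool:
    # Users in the hypothetical .nospam zone have opted out of bulk mail.
    return recipient_zone(address) != "nospam"

assert recipient_zone("alice@mail.kids") == "kids"
assert not may_send_bulk_commercial("bob@home.nospam")
```

The key point is that the sender learns the recipient's preference from the address alone, with no extra lookup or negotiation required.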
Comparison to other architectures
Cyberzoning, in and of itself (and unlike the active prevention architecture), does not necessarily solve the problem of regulating behavior which crosses real-world boundaries. However, zone "governments" might use active prevention technologies to forestall involvement by real-world sovereigns, and incidentally preserve some amount of cyberspace "independence."
Zone boundaries might also be reinforced by technological barriers. For example, employing the Active Prevention paradigm described earlier, adult content providers might deny all requests originating from known under-18 zones. (Note that cyberzoning enables this finer discrimination even without a digital identity infrastructure.) In the peer-to-peer realm, e-mail servers in the .kids domain might scan incoming messages for inappropriate language. E-mails containing swear words might generate warning messages, or get re-routed to parents or system administrators. In general, cyberzoning enables users to have a clearer notion of their rights and responsibilities in particular online arenas. Currently, each ISP, web site, or individual author may define different policies regulating online behavior; cyberzoning might encourage unification and simplification of these rules.
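The .kids mail-scanning idea might be sketched as follows. The flagged-word list and the routing policy are invented for illustration; a real server would use a maintained word list and site-specific rules:

```python
# Sketch of the incoming-mail scan a .kids mail server might run:
# messages containing flagged words are routed to an administrator
# instead of the child's inbox. Word list and policy are invented.

FLAGGED_WORDS = {"damn", "hell"}  # stand-in for a real profanity list

def route_message(body: str) -> str:
    words = {w.strip(".,!?").lower() for w in body.split()}
    if words & FLAGGED_WORDS:
        return "admin-review"  # re-route for parental/admin review
    return "inbox"

assert route_message("See you at practice tomorrow!") == "inbox"
assert route_message("What the hell happened?") == "admin-review"
```

A zone-wide rule like this is exactly the sort of unified, broadly accepted policy that cyberzoning is meant to enable, in place of per-ISP or per-site idiosyncrasies.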
"Cyberzones" already exist on the Internet, though perhaps not in quite such an immediately obvious form as ZoneDNS would provide. For example, individuals can join particular online "communities" that allow them to engage in electronic discussions (as on The Well) or to create free web pages (as on GeoCities). And of course, each Internet Service Provider (including academic institutions) creates "zones" of interaction online for users, sometimes allowing the "cyberpublic" to join in (as with AOL's Instant Messenger).
These "zones" have been created by commercial entities in a "top-down" fashion. These companies created "areas" in cyberspace for certain prescribed sorts of interactions, complete with rules and enforcement. Community members must agree to abide by the regulations of these pre-existing structures, and everyone is subject to the wishes of the local cybersovereign.
There are also examples of more "bottom-up" cyberzones, like "web rings" which authors can use to connect their work and otherwise diffuse pages into a cohesive set, organized by topic. Newsgroups (especially before commercialism invaded) are another instance of self-forming (not to mention self-governing) cyberzones. Decentralized structures such as these are most often governed by normative forces, whether that be in the form of implicit expectations, or friendly reminders from vocal personalities.
ZoneDNS presents an explicit mechanism by which either sort of zone might be created; gTLD expansion, while almost certainly inevitable, will probably be biased toward one model or the other.
ZoneDNS and expanded gTLDs may aid in solving some cross-jurisdictional problems, leave many unsolved, and may even create some new dilemmas. As described in the "Existing Architectures" section, regulatory architectures deal with two main types of problems - filtering incoming information, and shielding sensitive information from discovery or leakage. As cyberzoning expands, we must be mindful of the implications in a world that cannot agree on a single set of laws.
Cyberzones (especially if coupled with Secure DNS and a secure transaction protocol like HTTPS) may be quite useful in helping users to differentiate "safe" transactions from unsafe ones.
For example, zones might be set up where only legitimate, certified businesses could operate, perhaps even with specific policies for handling personal information. Users might be warned - or even impeded by intelligently configured software - from doing business with anything other than approved sites.
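To make the idea concrete, the check such "intelligently configured software" would perform could be as simple as comparing a domain's zone against a locally configured allowlist. The sketch below is purely illustrative: ZoneDNS does not exist, and the zone labels and suffix-based extraction rule are invented assumptions.

```python
# Hypothetical sketch of a zone-aware client. The client is configured
# with an allowlist of approved commerce zones and blocks (or warns
# about) transactions with any other zone. Zone names are invented.

APPROVED_ZONES = {"shop.certified", "bank.certified"}  # hypothetical labels

def zone_of(domain: str) -> str:
    """Treat the trailing two labels of a domain as its 'zone'."""
    parts = domain.lower().rstrip(".").split(".")
    return ".".join(parts[-2:])

def transaction_allowed(domain: str) -> bool:
    """Permit a transaction only with an approved commerce zone."""
    return zone_of(domain) in APPROVED_ZONES

print(transaction_allowed("sweaters.shop.certified"))  # True
print(transaction_allowed("casino.example.ag"))        # False
```

In practice a client would also need to verify the zone assignment cryptographically (hence the pairing with Secure DNS above), since an unauthenticated label is trivial to spoof.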
Likewise, only a select list of zones (perhaps corresponding to some geographical location or an approved corporate intranet) might be approved for the export of strong encryption. (An authoritative domestic database might be needed to track and categorize foreign domains in this case, for security reasons.) Fortunately, even if these safeguards failed, the perpetrators (on the sending side, at least) would be within the reach of domestic law enforcement.
The assignment of descriptive zone labels to content and services on the World Wide Web introduces a potential for censorship. Not only might undesired content be filtered out by nations, organizations, or individuals, but providers might be blocked based on their online affiliations or practices. (This might be useful if the content is genuinely harmful, or destructive if the censors are overzealous.)
On the other hand, censors may find it difficult to fine-tune their filters enough for practical use. A single zone, basing membership on business practices or a broad content-based criterion, might contain a wide diversity of material. The presence of significant amounts of "unzoned" content may also make zone-based censorship impractical.
When used for content selection, cyberzoning raises many of the same issues as PICS. However, a given zone will probably encompass many web sites, whereas PICS labels are usually applied with a much finer granularity. Zones also produce many of the same benefits as PICS, like enhanced search and categorization capabilities. One might even go so far as to hybridize the system - a ZonePICS protocol might allow independent agencies to classify various sites on the web according to arbitrary criteria.
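A minimal sketch of what such a hybrid might look like: an independent agency publishes category labels for whole zones, and a client filter blocks zones whose labels intersect a locally chosen banned set. Everything here (the zone names, categories, and database shape) is hypothetical, since no ZonePICS protocol exists.

```python
# Hypothetical ZonePICS sketch: an independent rating agency's database
# maps entire zones to content categories; a client-side filter consults
# it. Zone names and categories are invented for illustration.

LABEL_BUREAU = {
    "news.zone":   {"journalism"},
    "casino.zone": {"gambling"},
}

def blocked(zone: str, banned: set) -> bool:
    """Block a zone if any of its labels fall in the user's banned set."""
    return bool(LABEL_BUREAU.get(zone, set()) & banned)

print(blocked("casino.zone", {"gambling"}))     # True
print(blocked("news.zone", {"gambling"}))       # False
print(blocked("unlabeled.zone", {"gambling"}))  # False: unlabeled zones pass
```

Note that the last case illustrates the weakness discussed above: unzoned or unlabeled content slips through unless the filter defaults to blocking the unknown.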
The active prevention system described in the previous section is a tool for responsible service and content providers; PICS- and cyberzone-aware browsers and filters are tools for responsible consumers and zone administrators.
With many illegal or regulated activities, local law enforcement can efficiently shut down significant dealers and providers, but finds it nearly impossible to catch a substantial fraction of individual consumers. This asymmetry is one major factor in the creation of the "havens problem" as it relates to certain problematic behaviors.
For example, the most practical approach to stopping Internet gambling might be to stop a few online casinos from connecting with users (rather than trying to stop a large number of users from accessing the sites). It may therefore be treated as a "filtering" problem, assuming effective law enforcement shuts down domestic casinos, causing the remaining outlaws to flee to offshore havens.
In this case, unfortunately, cyberzones may do nothing more than mildly exacerbate the problem by helping users find the casinos. We might hope that the industry would self-regulate, perhaps installing active prevention systems; consumer demand for these vendors' illicit services might be reduced by advertising campaigns and rigorous domestic surveillance (perhaps aided by tracing requests to casino-related zones). Or the government may make arrangements with financial institutions and ISPs to try to disrupt the underground economy by cutting off the flow of dollars, or even IP packets (though the latter would probably run afoul of the First Amendment). Aside from reaching reciprocal enforcement agreements with haven nations, however, law enforcement can do little to solve the problem.
Redeeming Middle Ground
While potentially realistic, the above scenario is probably a worst case. A more optimistic observer might predict that the ZoneDNS implementation, for example, would create a framework that encourages online industries to unify and self-regulate before they run amok. The current freedom of self-determination on the Internet is highly prized, and the nascent commercial cyber-community may be eager to avoid running afoul of real-world sovereign powers, even at the cost of restraining a few excesses.
This may be an especially effective strategy given the cross-border complication. From a government standpoint, effective regulation of the Internet involves high costs incurred in the negotiation, ratification, and universal enforcement of international treaties. Reasonable self-regulation seems like an attractive alternative.
Fueled by the electronic publishing revolution, companies are slowly but surely losing their ties to particular tracts of land, with online storefronts replacing traditional brick and mortar, domain names hotly contested, and gTLD expansion eagerly awaited.
Of course, none of the proposed architectures - or indeed, hardly any that might be imagined for the foreseeable future - can protect against real-world crimes committed with the aid of network technology. A digital ID might be useful in tracing an e-commerce fraud, but the worst the sysops can do to the user who mailbombs an e-mail pen pal on another continent is to delete the account. Clearly, international cooperation is required among real-world law enforcement agencies to ensure order on our rapidly shrinking digital planet. However, this is not to say that the architectures we have proposed cannot be useful in solving some of the problems the world faces in these areas. Our best guess is that successful technologies will need to be judiciously applied to the particular problems for which they are best suited.
Norms as the Historical Internet Governance Mode
Since the early days of the Internet, engineers and scientists such as the late Jon Postel, Vinton Cerf, and many other like-minded individuals have contributed to its development and growth. These engineers and scientists formed an informal technical community that shared communal norms of mutual trust, respect based upon professional competence, and open information sharing. Their shared value systems guided them to develop an informal system of Internet governance based upon self-emergent norms.
An Internet norm that has gradually evolved, and has proven very effective, is the Request for Comments (RFC) process by which an Internet standard proposal is reviewed, approved, and published. Anyone can submit an Internet-Draft for review. During the submission process, others can review and comment upon the Internet-Draft to improve the document. After several iterations, the Internet Engineering Task Force (IETF), or the RFC Editor, will evaluate the draft for its suitability as an RFC. Once approved, the document is legitimized as an Internet "norm" and is published on the Internet as an RFC.
However, compliance with any RFC remains a "norm" because there is no legal regulation requiring Internet users to comply with any RFC. Internet stakeholders voluntarily comply with the RFCs because the recommendations contained therein have been subjected to communal Internet review and approval. This "norm-based" attitude towards Internet governance has permeated many other areas of Internet usage, such as the Usenet groups, in which communities govern themselves through self-developed social norms. There is no legal enforcement of communal norms except through the sheer social peer pressure of desiring communal respect and avoiding communal disapproval. The norms-based Internet governance mechanism proved highly successful for many years.
Thus this section of the paper will identify what norms are, and how they develop within a communal context. As a form of self-emergent governance, the Internet's norm-based governance mechanism has its strengths and weaknesses; with the increasing commercialization of the Internet, its deficits are being increasingly exposed. This does not mean that norms cannot function to govern the Internet; it simply means that they can no longer be the dominant mode of Internet governance, but will have to allow other governance modes such as law, markets, or architecture to replace, supplement, or complement them.
Norms can still function as an Internet governance mechanism in many situations, and this section will also provide examples where norms still contribute towards governing the Internet, especially if the "norms" governance power is integrated with other governance mechanisms such as the law, market, or architecture.
What are Norms, and How They Develop
Norms develop within a community. Cooter describes a community as "a social network whose members develop relationships with each other through repeated interactions." A community has to organize and govern itself to obtain collective benefits. To maintain and sustain the community, its members often develop informal protocols for interacting with one another. According to Gibbons, norms are informal "regularities that individuals feel obligated to follow because of an internalized sense of duty, and because of a fear of external non-legal sanctions, or both". As people who share values and expectations form communities that include like-minded individuals and exclude those who do not share those values, they realize that the community needs a mechanism to maintain order and civil behavior. The community can develop its own norms, or borrow or adapt other communities' norms, to provide the needed communal guidelines. People will tend to adhere to norms that they have developed collectively and perceive to be fair.
Norms truly represent "consent of the governed" because norms are developed and enforced by the community that is governed by them. Norms are only successful to the extent that the community members voluntarily associate, self-govern, and self-enforce without external regulatory mechanisms. These norms generally acknowledge or reward community-enhancing behavior, accept ordinary behavior, and punish community-diminishing behavior. The community can use its norms as guidelines to exclude those who are unwilling or unprepared to comply with communal values and goals. Those who are unable or unwilling to comply with a community's norms can choose to avoid joining the community in the first place, or, having joined, can choose to leave. Thus the individuals who remain within a given community and adhere to the community's norms do so voluntarily.
Since there are usually low switching costs between Internet communities, individuals can choose to leave and seek other, more hospitable or compatible communities. The communal norms have legitimacy and are meaningful for the community's members only because they voluntarily accede to them. People also have to perceive that others are compliant with those norms; in other words, that there are no or very few "free-riders." Thus the community will monitor and sanction norm violators, with sanctions that could include expulsion, both to enhance communal harmony and to signal to the other community members that the community will collectively seek to maintain its value boundaries and rules.
Norms function as an effective governance mechanism for the portion of the Internet that is community-focused because they represent "consent of the governed." Given the highly decentralized nature of the Internet, new decentralized communities can easily form and adopt their own decentralized, self-emergent norms free from external intrusion. Furthermore, these norms are not hard-and-fast rules that are automatically imposed upon violators. The community will afford some flexibility to norm violators who have heretofore faithfully abided by the norms, and it can implement a graduated set of sanctions, adjusting its enforcement in response to varying levels of norm violation.
Building a sustainable Internet community with enduring norms takes time and energy, on the part of both the community members and new entrants. The existing members must identify their norms so that they can share them via Frequently Asked Questions (FAQ) documents with new entrants ("newbies"). However, not all norms are documented, and many remain tacit. Community members can misinterpret tacit norms, which can lead to communal dissension.
Helpful community members often play a key role in acquainting newbies with the community's norms. A well-organized community might even have a system operator who can guide new entrants in how the community functions. However, sharing the community's norms is insufficient. The newbies must also be willing to receive and assimilate the community's norms. Without assimilating them, these newbies can stress the community's social structure and norms. The ease of entry and exit also challenges the longevity and stability of any Internet community, because people might not wish to commit the time and resources to contribute to the community.
People, on the other hand, might seek to "free-ride" by taking from the community without contributing to it, which reduces the overall value of the community to its members. An example might be an Internet Usenet group where there are many passive observers who mine the Usenet group for information, without contributing to the community's pool of information. Eventually the active participants might weary of being the primary contributors of information and help to others, and thus withdraw from the community.
Furthermore, people can change their Internet identities very easily and cheaply to circumvent Internet norms. Like the classic cartoon of a dog proclaiming that no one on the Internet knows that he is a dog, any Internet user can change identity very easily. An individual who has been ejected from an Internet community can assume a different Internet identity and return to that community to cause mischief.
The most common and most established Internet application is electronic mail (e-mail). Thus the Internet community at large has developed several simple e-mail usage norms. These include the following usage guidelines.
As users become familiar with Internet e-mail, they often progress to Usenet groups, which can develop their own norms. These include:
Other Internet norms, sometimes known as netiquette, can be found at the Netiquette Home Page (http://www.fau.edu/netiquette/net/netiquette.html).
Maintenance and Dissemination of Norms
Once an Internet community has developed its norms to maintain communal order and civil behavior, it has to commit energy and time to maintain and disseminate those norms. A community often writes a FAQ to establish its communal identity, and to introduce itself and its norms to potential members, providing direction on the community's values, goals, and expectations. Any member who joins a community has implicitly, to some extent, accepted the norms, and will abide by those norms or be subject to norm-based sanctions. By documenting those norms, the community also provides a source of guidance for resolving any potential intra-communal disputes.
However, documented norms often form a small, though necessary, portion of the communal norms, because many other norms are tacit and remain undocumented. Existing members can then share their insights on what constitutes acceptable behavior within the communal context. Of course, members can only volunteer their guidance to the extent of their abilities and time. Thus an Internet community can resort to an individual (or a group of individuals) who has been generally accepted by the community as an authoritative and fair monitor to provide assistance to newcomers. For Internet Usenet groups, this guiding role is provided by the system operator (sysop), who helps to maintain the community's social structure, "show the ropes" to newbies, and guide the general direction of the community in accordance with the community's wishes.
Norms in Action
Major Internet service providers (ISPs) such as MCI/Worldcom, GTE and others own networks that comprise part of the Internet. An Internet user who accesses the Internet through GTE and communicates with another Internet user who accesses the Internet through another ISP relies upon these companies' peering agreements. These agreements emerge from the Internet norm of freely transmitting inter-network traffic consisting of IP packets.
In April 1994, the law firm of Canter & Siegel spammed 5,000 newsgroups with an online advertisement for their services. Net users who were outraged at the violation of the Internet norm to abstain from spamming in turn deluged the law firm's e-mail box with over 30,000 e-mail replies, at which point the law firm's ISP terminated the firm's Internet account. Arnt Gulbrandsen wrote a "cancelbot" that automatically deleted all Canter and Siegel spam from Usenet newsgroups within 20 seconds of the spam's posting.
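In principle, a cancelbot is a simple program: it watches the article feed for a known spam signature and issues a Usenet cancel control message for each match. The following toy Python sketch illustrates the idea only; it is not Gulbrandsen's actual program, and the message IDs and feed format are invented.

```python
# Toy cancelbot sketch (not the actual 1994 program). A real cancelbot
# would read a live news feed and post cancel control messages to the
# network; here we just generate the cancel lines for matching articles.

SPAM_SIGNATURE = "green card lottery"  # Canter & Siegel's widely reported subject

def cancel_messages(articles):
    """Yield a cancel control line for each article matching the signature."""
    for message_id, subject in articles:
        if SPAM_SIGNATURE in subject.lower():
            yield f"Control: cancel {message_id}"

feed = [("<1@spamhost>", "Green Card Lottery - Final One?"),
        ("<2@example>", "Re: knitting patterns")]
print(list(cancel_messages(feed)))  # → ['Control: cancel <1@spamhost>']
```

The same signature-matching design is also the cancelbot's weakness: it is a norm-enforcement tool that works only against spam whose form is already known.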
Many Internet users have used shared software and shared participatory norms to create virtual environments known as multi-user dungeons (MUDs). These users create online personalities (with descriptive identities) that inhabit and participate in real-time in these virtual communities. The software technology allows users to collaborate and participate to create the community's life, and history.
LambdaMOO is a MUD-based community that has learned to develop norms to maintain its social structure and communal civil behavior. In April 1993, a LambdaMOO user called "Mr. Bungle" engaged in virtual acts that resulted in significant individual and communal distress. Despite the community's non-intrusion norm, a system administrator summarily deleted him from the environment. Since the Internet's architecture allows a person to assume multiple virtual identities, "Mr. Bungle", once kicked out, returned as "Mr. Jest". Once "Mr. Jest" was identified as "Mr. Bungle" masked with a different Internet identity, the community once again ejected him.
Subsequently, the LambdaMOO participants collectively developed norms for voting on actions that could be mandated upon the system administrators, and also a dispute-resolution mechanism with binding punishments. The community members recognized that "real-space" laws did not exert any authority over the issue at hand, and that the community should regulate itself. "In general the citizens of LambdaMOO seemed to find it hard to fault a system more purely democratic than any that could ever exist in real life."
Beyond Norm-based Governance
Beyond the limitations of norms as an Internet governance structure, the Internet itself has changed in its nature. When the Internet was primarily used by the research and academic community, few knew, or even cared, about how the Internet was governed. Most RFCs tended to be focused upon technical standards, with the exception of a few that covered Internet usage norms and sundry Internet items. This informal, norm-based process resulted in voluntary technical standards and worked very well for many years, until businesses realized the commercial opportunities afforded by the World Wide Web (WWW). As the commercialization of the Internet increased, there was increasing recognition that the norm-based governance structure of the Internet, adequate as it was for the academic and research community, might not be able to cope with mass-market and commercial pressures.
As America Online introduced countless thousands to the Internet, these newbies often knew nothing, or cared little about existing Internet norms. Many veteran Internet users who initially took the time to patiently share their communities' norms with these newbies often grew weary of helping those who ignored their advice. As the Internet's user population shifted from being primarily a technical and research community to a more mass-market population, the existing Internet norms are being challenged.
In early 1998, Jon Postel, as the administrator of Internet addresses, requested that the operators of the other root servers redirect domain name information requests from the master domain (Root Server A) to a University of Southern California domain name server. Though Postel's request had no legal standing or review, these operators complied because they perceived Postel as a highly respected and authoritative Internet figure. This action caused many people within and outside the Internet community to recognize Postel's great administrative power over the Internet's workings. Many businesses became very concerned that Internet governance was being left to the technologists. As the American Bankers Association said, "the Net must serve individual consumers and businesses, rather than a narrow interest group of technologists." This furor accentuated business and government perceptions that Internet governance should advance beyond the norms-based approach that had worked for many years when the Internet was primarily a technical and research environment.
Markets as the Emerging Internet Governance Mode
In 1997, the United States Government released its Framework for Global Electronic Commerce. This document contained principles which the government believed were necessary, though not sufficient, for the successful introduction and growth of global electronic commerce. Though the United States government had funded the Internet's infrastructure, it understands that the private sector has been the primary driver of the Internet's expansion. Thus the government proposed the first principle that "the private sector should lead." The government's position, as excerpted from the framework document, is as follows:
For electronic commerce to flourish, the private sector must continue to lead. Innovation, expanded services, broader participation, and lower prices will arise in a market-driven arena, not in an environment that operates as a regulated industry. Accordingly, governments should encourage industry self-regulation wherever appropriate and support the efforts of private sector organizations to develop mechanisms to facilitate the successful operation of the Internet.
Other governments (e.g. Canada, Japan, New Zealand), and international bodies such as the World Trade Organization (WTO), have followed the US government's lead and have also supported the free-market approach to governing the Internet. As the WTO's director-general Renato Ruggiero said, "We need to avoid uncoordinated actions by governments. A combination of the market and self-regulation by business can do much to police the Internet without excessive government involvement."
This section of the paper will define the free-market approach to governing the Internet, and identify its underlying theoretical assumptions that make it a suitable governance mode for the increasingly commercialized Internet. The free-market approach certainly has its strengths, but it also has limitations, especially with respect to public goods. Nevertheless the free-market approach is becoming increasingly accepted and common throughout the Internet, and various examples are provided of its implementation on the Internet.
Free-Market Approach to Governing the Internet
The Internet does not represent one homogeneous network; instead it is a network of networks. The free-market approach represents another form of self-governance in which businesses regulate themselves by negotiating prices and undertaking actions that maximize their profit in response to consumer needs. By doing so, businesses and consumers decide for themselves how the Internet should be structured for everyone's mutual benefit. For example, businesses can construct subnetwork-based communities with different rules and regulations in which Internet users can safely and securely conduct their transactions, and communicate with other Internet users. By allowing companies to compete freely in determining how their subnetworks and communities are architected, the government is allowing consumers to freely choose among these different ISP providers in the marketplace of rulesets.
There are many underlying assumptions to the free-market approach to governing the Internet. It's necessary to identify and understand them to consider the suitability of the free-market approach to governance of different areas ("zones") of the Internet.
Market competition exists. If market competition does not exist, or is highly attenuated, companies might not seek to provide optimal goods to consumers. This does not necessarily imply that absent or weak market competition is caused only by a monopoly; an oligopoly that colludes can effectively limit market competition, thus limiting consumer choice and reducing buying power.
Switching costs are low. Without low switching costs, consumers will find it difficult to switch from businesses that do not satisfy their needs to other businesses that provide more suitable products. Switching costs consist of the exit costs associated with leaving an existing business and the entry costs of buying from another business. High switching costs lead to switching rigidity that could limit consumer choice and movement among different alternatives, and hinder free-market competition.
Profit maximization is the primary market motivator. The market will adjust to meet consumer needs only if doing so maximizes profit. This means that consumers can exert their preferences or purchasing behavior to influence businesses to provide desired goods by switching from less desirable alternatives to more desirable ones. By doing so, consumers properly exercise their market influence to guide companies to provide the right sets of products. Thus consumers guide business through their ability to choose between different alternatives, thereby achieving self-regulation.
Enforceable contractual relationships are possible. Without legally binding contracts, businesses will be reluctant to engage in relationships with consumers, or even with other businesses.
By definition, a free-market approach to governing the Internet should result in high responsiveness to market forces as exerted by businesses and consumers. This should allow innovation and price competition between companies to result in optimal solutions for structuring and governing the Internet.
The United States government has long influenced the Internet's infrastructure and direction. By espousing the free-market approach to governing the Internet, the government has provided its blessing to this emerging approach, thus providing it with institutional legitimacy. In addition to the United States, other countries, such as those represented by the European Union, and Japan have also expressed their support for the free-market approach. These countries recognize that the Internet transcends national boundaries, and thus deserves an approach that similarly transcends national boundaries. Few entities, whether other countries or non-governmental organizations, have mounted meaningful challenges to the free-market approach.
Differentiation between Public Goods and Private Goods
A public good is non-rival, which means that the provision of a public good may simultaneously benefit more than one person. A public good is also non-exclusive, which means that once a public good is produced, it is nearly impossible to prevent others from simultaneously benefiting from its production. Since a public good is non-exclusive, there is little incentive for anyone to pay for its production, leading to the free-rider problem, in which the optimal strategy for an individual consumer is to let someone else pay for production and, given non-exclusivity, free-ride on what is produced. On the other hand, private goods are generally defined by exclusion as being goods other than public goods. Thus normal consumer purchases qualify as private goods because they do not meet the requirements of non-rivalry and non-exclusivity.
However, the contradiction is that if everyone decides to free-ride, then no one will pay at all for the public good, and no public goods will be produced. The probability of free-riding is proportional to the community's size; the smaller the community, the more impact a person's action, or lack thereof, will have on the community's other members. Furthermore, knowledge of members' identities is important; once a community's members can identify the free-riders, the community can exert social pressure upon them to contribute back to the community. However, if only a small group of individuals participates in the enjoyment of the public goods, then private (i.e. without government intervention) provision of public goods is possible. An example of an Internet public good is the Domain Name System (DNS), which maps Internet domain names (e.g. xyz.com) to IP addresses. Without this public good, the Internet would not exist in its current form.
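The direction of the DNS mapping is easy to demonstrate: a resolver takes a human-readable name and returns an address. A minimal Python example, using "localhost" so the lookup needs no external network access:

```python
# The DNS (and the local resolver in front of it) maps names to
# addresses. "localhost" conventionally resolves to the IPv4 loopback
# address, so this works even without external network access.
import socket

ip = socket.gethostbyname("localhost")
print(ip)  # conventionally 127.0.0.1
```

Every user of the network benefits from this name service simultaneously, and no one can practically be excluded from it, which is precisely what makes it a public good in the sense defined above.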
Though the free-market approach might work well for private goods, it would not adequately address societal and communal needs for public goods. Given the propensity for free-riders to exploit public goods, government should regulate or delegate the regulation to ensure that public goods are being adequately supplied for the benefit of all. Governments might be relying excessively on the free-market to adequately appreciate and resolve the legal and public policy issues raised by Internet-based electronic commerce. Businesses generally recognize individuals primarily as consumers, and might fail to recognize non-financial issues unless these are framed within the business context. Such issues include the right to privacy, access to objectionable material, and free speech over the Internet.
The free-market approach assumes that all entities that use the Internet are driven by the profit motive, and does not recognize society's moral and ethical values. George Soros observed that "markets basically are amoral, whereas society does need some kind of morality -- a distinction between right and wrong. And by allowing market values to become all-important, we actually narrow the space for moral judgment and undermine public morality." Society has to decide whether Internet governance should also incorporate moral or ethical values, or allow the free-market approach to determine what is possible and available on the Internet. It is not clear how the free-market approach can easily incorporate societal values. This issue is being highlighted by American society's desire to restrict pornography, improve privacy, and increase access over the Internet.
If companies collude in the free-market to maximize profits, they could achieve higher profits at the consumers' expense. This holds true in real-space as well as cyberspace. As in realspace, the government will probably have to intervene to identify and regulate anti-competitive behavior. Other businesses might engage in predatory behavior. If victim consumers or businesses do not have recourse to the legal system, these predatory businesses might not honor their business obligations or contracts.
Computer businesses have lauded the US government's desire to allow the private sector to lead in governing the Internet. However if Microsoft represents a threatening force, these same businesses are willing to sacrifice their free-market ideals to allow the government to intervene in the free-market to address anti-trust issues. As T.J. Rodgers, President of Cypress Semiconductor, noted in referring to erstwhile free-market companies such as Oracle Corporation, Netscape, and other Microsoft competitors, "these guys give up their free-market principles to get a short-term edge over Bill Gates".
The free-market approach assumes that buyers have perfect information that allows them to make informed choices about different alternatives. However there is never perfect information, and there are also costs to searching for and acquiring information. Consumers usually find it difficult or costly to monitor whether businesses are actually providing optimal services, and might not know enough to switch suppliers. Thus the lack of perfect information, and the difficulty of acquiring information hinders consumers from exerting market discipline upon businesses. On the other hand, businesses can capture full gains from collecting user information by leveraging it for marketing purposes, or by selling the information to other businesses, but will not suffer full losses if discovered. This means that there is an incentive to use private information even though the customer might not have permitted it. It's difficult for individual consumers to participate and inform the marketplace such that consumers can collectively discipline the businesses within the marketplace.
Marketplace Interaction in Action
A person who wishes to access the Internet must enter into a contract with an Internet Service Provider (ISP), which agrees to provide Internet access in exchange for a fee and the user's commitment to adhere to the ISP's system rules. These rules may include acceptable use policies covering harassment, unauthorized system access, fraudulent account use, and unsolicited commercial advertisements (i.e. spam). America Online (AOL) provides an easy-to-use, safe, and secure cyberspace environment in which AOL users can easily retrieve their Internet e-mail, chat with others in AOL chat-rooms, and conduct transactions. Its success in providing a comfortable mass-market cyberspace environment has allowed it to grow its user base rapidly. Thus, by providing the right combination of services, AOL has prospered in the marketplace.
General Electric has established a secure trading system over the Internet that allows it to buy from its suppliers on a very competitive basis. Building upon this success with its own internal purchases, GE now allows other buyers and suppliers to use this secure trading system to expedite the online procurement process. By doing so, GE has lowered transaction costs for buyers and sellers and has earned a profit in enabling this electronic marketplace over the Internet.
Consumers can use Internet-based pricing services such as Priceline.com to place bids for airline tickets. Priceline.com forwards each bid to participating airlines, which then compete to meet it. In the past, the airline industry has occasionally been reviewed by the Department of Justice for collusive behavior on ticket pricing. Such services undermine collusive behavior by forcing the airlines to compete against one another for the consumer's business.
The computer industry has realized that Internet users are hesitant to engage in Internet commerce because of privacy concerns. The industry, in conjunction with the research community, has therefore developed the Platform for Privacy Preferences Project (P3P) for protecting Internet users' privacy. This system allows "users to be informed about Web site practices, delegate decisions to their computer agent when they wish, and tailor relationships with specific sites." This attempt at achieving societal privacy goals is still in its early stages, and it remains unclear whether it will succeed in ensuring Internet users' privacy. Computer companies have also established a private corporation called TRUSTe (www.truste.org), whose mission is to "establish trust and confidence in electronic transactions … by providing users with a trusted privacy mark (or brand)".
Norms and Markets: Cooperation and Conflict
Though the broader cohesiveness of the entire Internet community has declined, people still desire to congregate, albeit in cyberspace, with like-minded people. The Internet previously allowed people to form their own cyber-communities with minimal cost of entry and exit; with the ever-increasing influx of newcomers, however, people have realized that they need better control over those cyber-communities' boundaries. They therefore choose their cyber-communities of like-minded people for a price, and accede to each community's rules and norms. One recommendation, then, is for businesses to compete for consumers by forming sub-networks, offered for a price and subject to their own community rule sets. These communities can be secured from outsiders via software architecture. In this scenario, the three governance mechanisms of code, norms, and markets blend to form a solution that many consumers welcome, as reflected in AOL's high membership count.
Though the free-market approach to governing the Internet is gaining prominence, this does not mean that Internet norms no longer have any relevance. Witness, for example, the web server competition between Apache, which is developed by programmers who believe in open source code, and Windows NT, which is developed by the profit-driven Microsoft. Despite Microsoft's software hegemony on Intel-based machines, Apache's share of the web server software market surpasses that of Microsoft. Thus code freely developed by programmers who believe in the norm of sharing and distributing code is currently triumphing over commercially developed code.
The goal of this paper has not been to propose one simple solution to sovereignty on the Internet. Instead, it has attempted to consider a multiplicity of factors that will likely affect Internet governance. A one-size-fits-all solution would perhaps have a certain elegance, but we believe our approach is necessary for several reasons.
To begin with, although the Internet is new, it exists in a world with a long-established legal culture based on the idea of territorial sovereignty. It is unlikely that most territorial sovereigns will abandon such a culture any time soon. That is why we have considered ways, both technical and legal, that traditional sovereigns may be able to exercise power over what their citizens can do online.
Yet despite this long history of territorially based sovereignty, we fully realize that many people would like the Internet to be more libertarian in character and that some would like it to be governed solely by norms or agreements developed by individual users. We fully embrace this idea and are in no way advocating a totalitarian regime on the Internet. Our cyberzoning proposal is aimed at creating a structure that will enable users to create unique zones with widely varying rule sets.
We also realize that many traditional governments are arguing that the private sector should lead in regulating the Internet. Such an approach could empower individual users. As with our cyberzoning proposals, companies would likely create communities that provide services and regulations based upon individual user preferences and many different communities would likely form. However, as our paper points out, such a system might not be very effective in promoting public goods on the Internet.
It is clear that all of these means of Internet regulation and control are viable in at least some respects, and we believe that an amalgam of all of the above is most likely to exist for the immediate future. Such a system may be confusing at times, but it may be the only truly workable one. We hope the future of Internet sovereignty will look as follows.
Traditional territorial sovereigns will still play a somewhat significant role in Internet governance. Attempts by territorial sovereigns to block their citizens from accessing substantial portions of the Internet may succeed in countries that are already extremely repressive, but there is no reason to believe this will happen in democratic countries once legislators more fully understand the character of the Internet. Such legislators may certainly try to block conduct online that is already illegal in the three-dimensional world, but extending such legislation to the Internet does not suddenly make it inherently bad.
However, just as in the three-dimensional world, there will be room for self-regulation, but traditional governments will probably still play a role. Complete self-regulation by users could lead to total anarchy as the Internet continues to grow. Communities on the Internet may be able to create their own rule sets, but at least in some cases they will need the power of the state to deal with egregious violations of those rules. Traditional governments may also need to step in to ensure that users are fully informed before entering into agreements, and traditional sovereigns may likewise have a role in preventing some sorts of agreements from being entered into at all.
The private sector will respond to consumer preferences, but traditional sovereigns as well as users will continue to foster the production of public goods on the Internet. No one commercial entity will maintain such dominance on the network that consumer choice is substantially restricted.
Finally, governments will, through the treaty process and perhaps other means, make sure that no one territorial sovereign dominates decisions regarding the technical structure of the Internet.
Ultimately, we have presented here law-based and architecture-based approaches to creating governance systems, and emphasized the importance of norms and markets in governing behavior, but we do not expect any of these systems to be effective in isolation from the others. Traditional jurisdictional structures will likely continue to impose the principles of real-world justice on online activities. In combination, the systems we have presented may ameliorate the flaws in the traditional system, and different solutions may prove more effective for different problems. For example, problems of taxation may best be solved using a combination of treaties and digital identification; problems of gambling may be lessened through a system of zoning, digital identification, and treaties. Bodies such as ICANN will, we hope, be well suited to administering systems such as the domain name system, and possibly to administering zoning or digital identification systems.