Dissent in Reno v. ACLU: Seeds sown for a new generation of regulation?

Joanna Lu


Paper for MIT 6.805/STS085: Ethics and Law on the Electronic Frontier, Fall 1997
When the Communications Decency Act of 1996 was ruled unconstitutional in ACLU v. Reno, and later in Reno v. ACLU, there was great rejoicing among the various groups that allied in the effort to move the court toward those decisions. It seemed a ringing affirmation of First and Fifth Amendment rights and a forward step in governmental understanding of the unique technological position the Internet enjoys within the realm of communications media.

However, the June 26, 1997 ruling in the Supreme Court's Reno v. ACLU decision was not unanimous, as the lower court's had been. Justice O'Connor, joined by Chief Justice Rehnquist, agreed with the rest of the Court only in part on the constitutionality of the CDA; together they issued an opinion concurring in the judgment in part and dissenting in part. It's true that with the Supreme Court decision in Reno v. ACLU, one chapter of the Internet Censorship Saga came to a close. The dissenting opinion, however, leaves elements of the larger plot hanging and its characters redrafting their alliances. It leaves open the possibility of future Internet regulation based on the perceived need to protect minors from indecent material. The CDA may have died that day in June, but the seeds of a new generation of regulations may have been sown the very same day.

Let's consider: if there were a Son or Daughter of the CDA,

  1. ... what Constitutional grounds did the decisions concerning the CDA stand on?
  2. ... what rationale supported these grounds?
  3. ... what developments in technology support the rationale for regulation?
  4. ... apart from constitutional issues, what societal forces exert pressure to regulate the Internet?
  5. ... and how do I believe regulation can or should be enacted?

In answer to the first question, the Court deliberated long and hard to root its opinions in firm constitutional ground. To the second, I believe there is a legal rationale connecting constitutional principles to regulation, but that rationale is still debatable. To the third, there are Internet technologies developing that didn't exist at the time of either ACLU v. Reno or Reno v. ACLU, and they will aid regulation. To the fourth, there are forces in a democratic society (for better or worse) that come into play other than Supreme Court rulings on issues of constitutionality. And to the fifth question,

I do believe there are ways to put up guidelines that are consistent with the principles of the US Constitution and that, at the same time, answer the concerns of a US population with extraordinarily diverse needs, but I don't believe those guidelines need to be imposed through legal statutes.

1. What are the Constitutional grounds for the decision?

First, let's take a look at the sections of the CDA in question and the accompanying issues of constitutionality. Since it's primarily background to later parts of the paper, I'll try to zoom through the major points of the opinion as quickly as possible.
(a) Whoever  --
  (1) in interstate or foreign communications --
     (A) by means of a telecommunications device knowingly --
         (i) makes, creates, or solicits, and
         (ii) initiates the transmission of any comment, request,
              suggestion, proposal, image, or other communication
              which is obscene or indecent, knowing that the recipient
              of the communication is under 18 years of age,
              regardless of whether the maker of such communication
              placed the call or initiated the communication; ... and 
(d) Whoever --
  (1) in interstate or foreign communications knowingly --
     (A) uses an interactive computer service to send to a specific
         person or persons under 18 years of age, or 
     (B) uses any interactive computer service to display in a manner
         available to a person under 18 years of age, any comment,
          request, suggestion, proposal, image, or other communication
         that, in context, depicts or describes, in terms patently
         offensive as measured by contemporary community standards,
         sexual or excretory activities or organs, regardless of
         whether the user of such service placed the call or initiated
         the communication; or
  (2) knowingly permits any telecommunications facility under such
      person's control to be used for an activity prohibited by
       paragraph (1) with the intent that it be used for such activity ...
      
Shall be fined under Title 18, United States Code, or imprisoned not more
than two years, or both. (1)

The above statutory provisions were included in the CDA to protect minors from "indecent" and "patently offensive" communications on the Internet. Even though the Supreme Court was sympathetic to the congressional goal of protecting children from harmful materials, it still agreed with the three-judge District Court's ruling that the statute abridged "the freedom of speech" that the First Amendment protects for adults. It used the decisions in Ginsberg v. New York, FCC v. Pacifica Foundation, and Renton v. Playtime Theatres, Inc. to cast doubt on the constitutionality of the CDA. This is how the Court reasoned.

Ginsberg v. New York upheld a New York statute prohibiting the sale to minors under 17 years of age of material considered obscene as to them, even if it was not obscene as to adults. Reno v. ACLU, however, rejected its applicability to the CDA on the grounds that the CDA was far broader than the New York statute. The Justices declared the CDA's coverage wholly unprecedented and held that the breadth of the statute required the government to explain why a less restrictive provision would not be as effective as what the CDA proposed. (2)

In the FCC v. Pacifica Foundation decision, the Court upheld an FCC order declaring that the "Filthy Words" monologue, which included words referring to excretory or sexual activities or organs, was patently offensive "in an afternoon broadcast when children are in the audience," and concluded that the monologue was indecent "as broadcast." Reno v. ACLU rejected this decision's applicability to the CDA because the Internet is not comparable to broadcast media in the receiver's risk of accidentally encountering "offensive material." The justices cited the lower court's finding that the risk of encountering indecent material on the Internet by accident is remote because "a series of affirmative steps is required to access specific material." (3)

In Renton v. Playtime Theatres, Inc., the Court upheld a zoning ordinance that kept adult movie theatres out of residential neighborhoods. It was not a content-based restriction but one aimed at "secondary effects." In other words, the court was not trying to stop the dissemination of offensive speech; it was trying to control crime and deteriorating property values in the geographic neighborhoods of those theatres. With this distinction in mind, the justices evaluated the CDA. They saw it as a "blanket content-based restriction on speech" which, as such, cannot be "properly analyzed as a form of time, place, and manner regulation" in the way the Renton ordinance could. (4) In other words, the CDA aimed at the content of speech itself, and on the Internet there is no physical proximity whose secondary effects a statute could claim to regulate.

That was the majority opinion. But what did the dissent say? What issues were left to hold the door open for another generation of the Communications Decency Act? At the heart of the dissenting opinion was the concept of "zoning." O'Connor and Rehnquist argued that the creation of "adult" zones can be constitutionally sound. States have long denied minors access to certain physical establishments frequented by adults. They have also denied minors access to speech deemed "harmful" to them. These prohibitions rely on two concepts that are readily enforceable in the physical world: geography and identity. Geography is the place, such as an adult dance show. Identity consists of the attributes that make up an individual, such as age or gender -- aspects of "who someone is" that are hard to conceal in the physical world. A 14-year-old would have a difficult time passing through the doors of the Naked I Cabaret on her own. In the electronic world, however, the boundaries of geography and identity are not as easily enforced.

"The electronic world is fundamentally different. Because it is no more than the interconnection of electronic pathways, cyberspace allows speakers and listeners to mask their identities. " (5)
But things are changing, O'Connor and Rehnquist went on to say in their dissent. They contended that although cyberspace at the time of their decision may not yet have been amenable to zoning, now ...

"Cyberspace is malleable. Thus, it is possible to construct barriers in cyberspace and use them to screen for identity, making cyberspace more like the physical world and consequently, more amenable to zoning laws. This transformation is already underway. Cyberspace is moving ... from a relatively unzoned place to a universe that is extraordinarily well zoned." (6)
Can we carry the concerns we've attempted to legislate in the past into this developing medium of communication? Is a law -- enforceable in the "real" world of "adult" theaters with doors and bouncers, the "real" world of magazine stands that sell their wares to adults but not to minors -- now also enforceable in a cyberspace that can be "zoned" into adult sectors and children's sectors? As O'Connor stated her position,
"... the constitutionality of the CDA as a zoning law hinges on the extent to which it substantially interferes with the First Amendment rights of adults." (7)
Therefore, if it were technically feasible to create zones that would "protect" minors from indecent, patently offensive and obscene material without abridging adult First Amendment rights to access that same material, would it be constitutionally possible to legislate statutes delineating the boundaries of those zones while at the same time enforcing who crosses and who remains within those boundaries?

2. What rationale supports these grounds?

Before looking at whether such laws can be enacted in the technologically maturing realm of the Internet, let's look at the philosophical basis for attempting to do so. Citing Ginsberg, the Court affirmed the State's independent interest in the well-being of its youth, and also its "consistent recognition of the principle that 'the parents' claim to authority in their own household to direct the rearing of their children is basic in the structure of our society.'" (8)

Citing Pacifica, it also agreed with the government that there is a compelling interest in protecting the physical and psychological well-being of minors "which extended to shielding them from indecent messages that are not obscene by adult standards." (9)

And citing Miller v. California, the Supreme Court stated that even though the CDA provisions were vague in their attempts to regulate "indecent," "patently offensive," and "obscene" material, it believed that with a more carefully drafted substitute, the weaknesses of the CDA could be overcome by using the three-pronged Miller obscenity test properly. Let's take a closer look at these three parts. To judge something obscene, one would determine

"(a) whether the average person, applying contemporary community standards would find that the work, taken as a whole, appeals to the prurient interest; (b) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and (c) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value." (10)
The Court charged the government with borrowing only one limitation from the Miller test: the "patently offensive" language of the second prong, stated in (b), which the CDA used synonymously with "indecent." The government then assumed that because this standard comes straight from Miller, the CDA could not be unconstitutionally vague. But that second prong contains a critical requirement omitted from the CDA: that the proscribed material be "specifically defined by the applicable state law." This is what makes Miller's use of "patently offensive" specific -- it is tied to conduct defined by a state's law. The CDA borrowed the phrase but dropped the limitation that reduced the inherent vagueness of the term "patently offensive." In other words, if the people who drafted the CDA had been more careful about applying the principles of indecency and offensiveness outlined by Miller v. California, the Act might have withstood the vagueness charge. The justices had no beef with the aim of the CDA. They simply charged the writers of the statute with being sloppy.

Given that the Court understood Congress's compelling reason for drafting the CDA to protect minors from harm ... and given that the Court felt the CDA was overbroad in its reach because it abridged adult First Amendment rights in a then-unzoneable medium ... and given that Justices O'Connor and Rehnquist saw the Internet moving towards zoneability ... can minors now be segregated from adults according to a set of community standards? What would happen if what was once a community of standards in the form of a geographic state is now a cyberspace community of standards with its own set of boundaries? Is it now possible to narrow a statute to the least restrictive means? Has the technical feasibility of creating "zones" arrived on a wide scale? Enter PICS.

3. What developments in technology support the rationale for regulation?

Even before the CDA was written by Congress and signed into law by President Clinton, an effort to establish boundaries had already begun. In the summer of 1995, a number of groups joined in an effort to address the issue of content control without government regulation. Under the auspices of the World Wide Web Consortium, this effort, which included the Electronic Frontier Foundation and the Center for Democracy and Technology, sought to define a technology platform on which people could write their own labels. It would be a set of protocols for how ratings could be expressed, not a rating system in itself. It came to be known as the Platform for Internet Content Selection, or PICS. (Other options exist as part of commercial online services or as stand-alone software products, but PICS appears to be developing into the standard that most rating and filtering systems employ.)

How does it work? PICS is a technical standard for labels. Ratings from any source that employs PICS technical standards will work with any PICS-aware filtering software. Consider how it works from the two sides of the communication. The labels themselves are written either by self-rating publishers or by third-party services that rate other groups' sites. Once that is done, the standard allows any PICS-enabled tool to find and interpret the label on a Web site or a file within it.

How does this work for filtering the kind of material the CDA was concerned with? A browser or stand-alone software filter can be set to check labels for certain content. If a parent, library, or school is concerned with filtering out indecent material, it can set a filter to do just that. When an end user (a minor) asks to see a particular URL, the software filter fetches the document but also queries a label bureau for labels describing that URL. Depending on what the labels say (indecent or decent), the filter either blocks access to that URL or lets the material through.
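
To make the flow concrete, here is a minimal sketch in Python of the sequence just described: a filter looks up labels for a requested URL and compares them against settings chosen by a parent, library, or school. The label format, category names, and bureau data are simplified placeholders of my own, not the actual PICS label syntax or any real rating service.

  # Illustrative sketch only: a toy version of the lookup-and-compare flow
  # described above. Labels and categories are invented, not real PICS syntax.

  # Stand-in for the labels a label bureau might hold for particular URLs.
  LABEL_BUREAU = {
      "http://example.org/story.html":       {"language": 0, "nudity": 0, "violence": 1},
      "http://example.org/adult/index.html": {"language": 3, "nudity": 4, "violence": 2},
  }

  def fetch_labels(url):
      """Stand-in for querying a label bureau about a URL."""
      return LABEL_BUREAU.get(url)  # None means the URL is unrated

  def allowed(url, thresholds, block_unrated=True):
      """Apply the supervisor's chosen thresholds to the bureau's labels."""
      labels = fetch_labels(url)
      if labels is None:
          return not block_unrated      # unrated material: the supervisor decides
      return all(labels.get(cat, 0) <= limit for cat, limit in thresholds.items())

  # A parent's settings for a child's account.
  child_thresholds = {"language": 1, "nudity": 0, "violence": 1}

  print(allowed("http://example.org/story.html", child_thresholds))        # True
  print(allowed("http://example.org/adult/index.html", child_thresholds))  # False

The point of the standard is that the lookup and the comparison stay the same no matter who wrote the labels or what vocabulary they chose.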

By 1996, PICS conventions had caught on with a number of vendors, including Microsoft, Netscape, Surfwatch, and CyberPatrol. Networks such as AOL, AT&T WorldNet, CompuServe, and Prodigy provide free blocking software that is PICS-compliant. RSACi (of the Recreational Software Advisory Council) and SafeSurf, both third-party rating services, write labels in their own vocabularies but use online servers that produce PICS-formatted labels. CompuServe announced it would label all web content it produces using RSACi labels.

Proponents of the system point to its flexibility as a plus. By creating only a platform of technical standards, it leaves the actual labeling and filtering process in the hands of others. In the words of Paul Resnick and James Miller in "PICS: Internet Access Controls Without Censorship,"

"Not everyone needs to block reception of the same materials. Parents may not wish to expose their children to sexual or violent images. Businesses may want to prevent their employees from visiting recreational sites during hours of peak network usage. Governments may want to restrict reception of materials that are legal in other countries but not in their own. The "off" button (or disconnecting from the entire Net ) is too crude: there should be some way to block only the inappropriate material. Appropriateness, however, is neither an objective nor universal measure. It depends on at least three factors.
  1. The supervisor: parenting styles differ, as do philosophies of management and government.
  2. The recipient: what's appropriate for one fifteen-year old may not be for an eight-year old or even all fifteen-year olds.
  3. The context: a game or chat room that is appropriate to access at home may be inappropriate at work or school." (11)
What are the implications for another effort at Internet regulation? Even though the CDA was charged with being overbroad in its constitutional reach because it made content-based decisions for the entire population, a successor statute would now have the means to narrow that reach by targeting only minors. By installing blocking protocols on specific systems or stand-alone machines, a supervisor (parent, employer, or government) can keep "indecent," "patently offensive," or "obscene" material out of a computer in a "children's zone." A filtering system set for an adult user would allow the same materials into an "adult zone," or allow an adult access to an "adult zone." A least restrictive means may now be within reach: minors can be both shielded from inappropriate material and prevented from entering "zones" they are not mature enough to enter.
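
As a rough illustration of the zoning idea, here is a sketch -- again with invented category names and levels rather than any actual rating vocabulary -- of how a single machine could apply different filter profiles to the same labeled material: a "children's zone" profile with low ceilings and an "adult zone" profile that lets everything through, so adult access is not abridged.

  # Illustrative sketch: one set of labels, two zones enforced by profiles.
  # Category names and numeric levels are hypothetical.

  LABELS = {
      "http://example.org/artgallery": {"nudity": 2, "violence": 0},
      "http://example.org/news":       {"nudity": 0, "violence": 1},
  }

  PROFILES = {
      # A "children's zone": low ceilings in every category.
      "child": {"nudity": 0, "violence": 1},
      # An "adult zone": ceilings set high enough that nothing is withheld.
      "adult": {"nudity": 4, "violence": 4},
  }

  def may_view(profile, url):
      """Check the URL's labels against the ceilings for the given profile."""
      labels = LABELS.get(url, {})
      limits = PROFILES[profile]
      return all(level <= limits.get(cat, 4) for cat, level in labels.items())

  print(may_view("child", "http://example.org/artgallery"))  # False: blocked in the child zone
  print(may_view("adult", "http://example.org/artgallery"))  # True: the same material reaches adults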

In her dissent, O'Connor identified the "patently offensive display" provision of the CDA as in reality two separate provisions. The first, she says, makes it a crime to knowingly send a patently offensive message or image to a specific person under the age of 18. The second criminalizes the display of patently offensive messages or images in any manner available to minors. She goes on to reason that since neither of these provisions purports to keep indecent (or patently offensive) material away from adults who have a First Amendment right to obtain this speech, "the undeniable purpose of the CDA is to segregate indecent material on the Internet into certain areas that minors cannot access." (12) Later in her dissent she says,

"Gateway technology is not ubiquitous in cyberspace, and because without it there is no means of age verification, cyberspace still remains largely unzoned - and unzoneable ... Although the prospects for the eventual zoning of the Internet appear promising, I agree with the Court that we must evaluate the constitutionality of the CDA as it applies to the Internet as it exists today." (13)
Well, today has gone and tomorrow has come. And it has come with PICS. We can filter. We can block. We can zone. Should we as a society actually desire to do that?

4. Apart from constitutional issues, what societal forces exert pressure to regulate the Internet?

Many of the groups who rallied together to oppose government regulation of the Internet pointed to the development of blocking, filtering and labeling systems as a preferred alternative. However, here lies an interesting twist. Debate over the implementation of PICS-based labeling has become a battleground itself. Where some point to labeling as a way for empowered parents to take control of their families' online lives, others see it as a way for government and big business to join together in order to exercise an even more insidious control than writers of the CDA ever dreamed of wielding.

Let's look at the proponents' arguments. Many were collected from a statement on the NetParents homepage entitled "Businesses, Public and Private Groups Unite Behind Initiative For Family-Friendly Internet Online World."

"Companies in the Internet online industry joined today with organizations representing education, children, parents, consumers and law enforcement to support President Clinton's and Congress' call for an Internet online environment that's family-friendly and rewarding and safe for children." (Executive Summary Excerpt)

"The benefits of this emerging medium to our society, and especially to our children, are extraordinary. We are only just beginning to understand the power of interactive services to educate, inform and entertain our children. At the same time, parents need to get involved with their children online and help them get the most out of their interactive experience. Just like you wouldn't drive without buckling up your kids in seat belts, you shouldn't let them travel in cyberspace without the available and easy to use technology safeguards." (Steven Case, Chairman and CEO of AOL)

"Today, parents who want to ensure that their children are safe online have many different options, whether they are using one of the big Internet online services or a small ISP. These tools are widely available, effective and mostly free. If you can click a mouse, you can use kid-friendly tools. And across the industry, what you're seeing today is a real commitment to make these tools even more effective, even easier for parents to use and even more widely distributed than they are today." (Laura Jennings, VP, Microsoft Network)

"Parents need to know the difference between good software and not-so-good software. They need to know where they can buy it, how they can use it and what it will do. Government's role in this debate is to provide parents with the tools they need to make wise decisions -- not to introduce ineffective and meddlesome regulations." (Carole Shields, President, People for the American Way)

"Our customers tell us they want their children to have rewarding online experiences. They recognize that the Internet is a vast global frontier, and they want and need help charting their children's course. The industry in general and AT&T in particular are providing them that help today and are both cooperating and competing in finding even better ways to help parents child-proof the Internet." (Tom Evslin, president, AT&T WorldNet Service) (14)

And what stand does the president take? In a paper on the White House home page he pledged that
"In the wake of the Supreme Court's decision on the Communications Decency Act ... [t]he Administration will continue to enforce laws to protect children on-line. The Supreme Court decision on the Communications Decency Act did not affect US laws against obscenity, child pornography and on-line stalking. The president made clear that the Administration remains committed to the vigorous enforcement of federal prohibitions against the transmission of child pornography and obscenity over the Internet and other media, and the use of the Internet by pedophiles to entice children to engage in sexual activity." (15)
So, the Supreme Court's decision notwithstanding, the President will find a way to protect the children of the United States of America by making the Internet "family-friendly and rewarding and safe for children." (16) He is responding to a concerned, vocal and active constituency.

But as we embrace guardian empowerment to protect minors, perhaps we should recognize some caveats. Critics of blocking, filtering and labeling abound. Let's review some of their warnings.

The ACLU white paper "Fahrenheit 451.2: Is Cyberspace Burning?" raises the fear that the White House is moving away from the position Judge Dalzell staked out in ACLU v. Reno. Dalzell contended that with its low barriers of entry and wide-reaching exchange of communication, the Internet should be afforded the greatest protection of all media under the First Amendment. The ACLU states

"The White House meeting was clearly the first step away from the principle that protection of the electronic word is analogous to protection of the printed word. Despite the Supreme Court's strong rejection of a broadcast analogy for the Internet, government and industry leaders alike are now inching toward the dangerous and incorrect position that the Internet is like television, and should be rated and censored accordingly ... in the virtual world, one can ... easily censor controversial speech by banishing it to the farthest corners of cyberspace using rating and blocking programs." (17)
The ACLU also recounts the following events with the suspicion that government and big business may not need the CDA to lay the infrastructure for Internet censorship, since that process is already underway. At a presidential summit with industry leaders, Clinton encouraged Internet users to rate their own sites and at the same time urged industry leaders to develop and use tools for blocking inappropriate speech. Network industry leaders responded with pledges supporting the White House efforts. Netscape and Microsoft, with 90% of the browser market, have adopted PICS standards. With a grant from IBM and its use by Microsoft and CompuServe, the RSACi rating system has become the "de facto industry standard rating system." Four of the major search engines announced a plan to promote self-regulation on the Internet. Senator Patty Murray (D-Wash.) proposed legislation that would impose civil and ultimately criminal penalties on those who misrate a site. (18)

While collaboration between government and industry giants may sound laudable, control of the information industry is rapidly concentrating in just a few commercial hands. To borrow an old economic metaphor about the "invisible hand" guiding the free market economy ... the not-so-invisible hand of the government may be guiding the not-so-free industry of information exchange.

There is an additional angle to consider. Simson Garfinkel notes that access controls on the end user's machine can be turned off by a user who doesn't wish to be censored. But it's not this step that worries him. His concern is the next step: that controls can be made tamper-proof only by implementing them upstream from the end user's PC, at the ISP or on an organization's firewall. This would allow the network administrator to review all downloaded documents and identify the person who downloaded them. It would also mean that a user wouldn't even know what documents were blocked or filtered out, but the administrator would. (19) The ACLU white paper envisions the ultimate result.

"The Internet will become bland and homogenized. The major commercial sites will still be readily available. They will have the resources to self-rate, and third-party rating services will be inclined to give them acceptable ratings. People who disseminate quirky and idiosyncratic speech, create individual home pages, or post to controversial news group, will be among the first Internet users blocked by filters and made invisible by the search engines. Controversial speech will still exist, but will only be visible to those with the tools and know-how to penetrate the dense smokescreen of industry "self-regulation." (20)
There are other examples of blocking problems that have arisen. And these can't even be attributed to the government and big business censoring free dissemination of information. They are the result (I hope) of glitches in software programming still in the infancy of its development.

Here's one. Some programs tend to block entire directories of Web pages simply because they contain just one "adult" file. This results in perfectly appropriate material being blocked because of proximity to inappropriate material. On some occasions entire domains are blocked. This is an issue of granularity. How fine a filter will be used to sift material?

Here's another. It results from string-recognition software. AOL's program, looking for and blocking four-letter words, refused to let users register from the town of Scunthorpe in Great Britain. Using Surfwatch, users at the University of Kansas Medical Center couldn't see the Web page of the Archie Dykes Medical Library, part of their own medical facility.

And one more. The white-out feature in CYBERsitter not only blocks selected information but, in doing so, gives it new meaning. By "whiting out" the "inappropriate" material and leaving the "acceptable" remainder, a sentence like "President Clinton opposes homosexual marriage" becomes "President Clinton opposes marriage." (21)
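
Both of these failure modes come from treating text as raw strings. The toy sketch below, using an invented word list rather than any vendor's actual blocklist or algorithm, shows how naive substring matching flags innocent words (the Scunthorpe problem) and how "whiting out" a blocked word silently changes a sentence's meaning.

  # Illustrative sketch of the two failure modes described above.
  # The blocked-word list is hypothetical.

  BLOCKED = ["sex", "homosexual"]

  def is_blocked(text):
      """Naive substring check: flags any text containing a blocked string."""
      lowered = text.lower()
      return any(word in lowered for word in BLOCKED)

  def white_out(text):
      """Naive redaction: deletes blocked words and keeps the rest intact."""
      result = text
      for word in sorted(BLOCKED, key=len, reverse=True):  # longest words first
          result = result.replace(word, "")
      return " ".join(result.split())  # tidy up leftover spacing

  # Failure mode 1: an innocent place name trips the substring match,
  # just as "Scunthorpe" tripped a four-letter-word filter.
  print(is_blocked("Tourist information for Middlesex County"))   # True

  # Failure mode 2: deleting the blocked word silently changes the meaning.
  print(white_out("President Clinton opposes homosexual marriage"))
  # -> "President Clinton opposes marriage"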

Other examples highlight issues that go beyond growing pains in software development. These are human judgment calls made by raters. CYBERsitter tends to block anything that has to do with sex, including information on sexual orientation, simply because the word sex appears in its name. CyberPatrol blocks Usenet newsgroups including alt.feminism, soc.feminism, clari.new.women, soc.support.pregnancy.loss, and alt.support.fat-acceptance. It blocked the Electronic Frontier Foundation's censorship archive, the National Organization for Women's web site, and the Penal Lexicon, an encyclopedic British site concerned with prisons and penal affairs. With these examples in mind ...

5. Can new regulations be enacted?

Yes. I've already outlined the constitutional support and legal rationale that could be cited to enact future regulatory laws. I've also shown the considerable political and economic support among people and institutions for Internet regulation. Is regulation already developing? Yes. In fact, we've felt some of the growing pains already.

... and should they?

Should some form of regulation exist on the Internet? I believe yes. Why? Because as a society we need protections. Just as we need protection from being accosted on the street or having our homes invaded, regulation on the Internet affords us and our children protection. Who should do it? That's a trickier question. We are a diverse nation. And when we consider the Internet, we should probably say that globally we are a diverse group of nations, each made up of a multitude of people. We have different needs, different governments, and operate under different codes of ethics. A single governmental law would create a Procrustean bed of what is acceptable on the Internet. It would either cripple us by chopping off parts of ourselves we value, or deform us by forcing us into areas we don't naturally fit.

I believe the motive behind the Communications Decency Act -- to protect minors from indecent, patently offensive, or pornographic material -- stems from good intentions. But because the definition of what constitutes decent material is subject to a community standard, I'm leery of identifying which community standard gets applied. As someone has already said ... one man's meat is another man's sex organ.

But we have arrived at a time when we actually have the tools with PICS to define for ourselves what community we choose to belong to. (This doesn't apply only to issues of sex either. It's just that the CDA was consumed by that particular issue. Our standards of taste and tolerance vary with respect to views on religion, violence, animal use/abuse, respect for age, euthanasia, commercial solicitation, privacy, and political affiliations.) Without a doubt, there are problems with labeling, filtering and blocking software. I've mentioned only a few. I could have spent the entire paper throwing stones at rating systems and their potential for mistakes, misuse, and misapplied efforts at censorship.

But I'm an optimist. And I do believe in the vigor of a free market and the eventual wisdom of free choice as long as there are many choices. As the system matures, the sophistication of a rating vocabulary will evolve with our sophistication in using it, and as more rating services proliferate, we will find the communities we wish to join. We have not only the choice but the responsibility to take charge of our own lives and not simply balk at government attempts to set standards for us.

So what do we do? We accept a technical platform for content labeling. It's like labeling clothing. There's a place we always look to figure out the size ... usually on the neck or waistband. After all, it helps to know what size it is. Then we choose which size we buy because we know how we want our clothes to fit us. Likewise, there's a label for fiber content. Is it cotton, silk, or polyester/spandex? How does it fit with our taste and preferences? Do we like to machine wash only, or will we dry clean? We decide. The label simply gives us information. If it's something we reject, like a material that's not flame-retardant and could endanger the life of our child, that's our choice as well.

How does choice fit into labeling for network material? If a PICS-enabled browser can find the URL's label, a user accepts or rejects the material based on that labeling information. What about the anecdotes of CYBERsitter and CyberPatrol blocking perfectly legitimate sites at a whiff of a hint of a rumor of sex? This is where I believe we should take charge in choosing our community. Blocking should happen as far downstream, as close to the end user, as possible. The key is to keep control of filtering, and the choice of rating services, in the hands of end users. We should be reading our own labels and tailoring our filtering to our particular needs of age, maturity, taste, and tolerance. There will eventually be a multitude of rating services. We should not simply accept the package deal that comes with a browser or ISP. It is the user's responsibility to research how different services rate material and to choose the services compatible with her or his values and preferences.

Esther Dyson has envisioned a very near future where this kind of choice can and should take place:

"This is ideal ground for a proliferation of third-party raters, or trust and reputation services. We expect to see a lively marketplace of competing rating services -- just as in the real world there are restaurant ratings, seals of approval, best picks in magazines, reading lists from high schools, rankings of legislators by political groups, top-10 lists, special college issues and the like ... One can imagine rating systems for almost any characteristic that is specifiable -- or for any group's or individual's judgment. And they can rate locations or places or communities as well as static content. Some rating systems simply look at the words used, but the more sophisticated ones make judgments on a different basis -- more like editorial judgment. This site is filled with mature people with an interest in social action; that one is best-suited for teenage girls who want to talk about boys, models, and make-up; that one is for my teenage girl, with discussions about female pilots, doctors and executives ..." (22)
Garfinkel raises the possibility of a savvy youth altering filter controls, with the result that parents move the blocking protocols upstream to the ISP or network administrator so they can't be tampered with. That scenario isn't necessary if a guardian doesn't abdicate responsibility for a child. Just as a family or school decides what kind of print or broadcast material enters its building, or evaluates which movie to go out and see, that same judgment should be used to decide what online material enters or is accessed by a computer.

What about the agreements among the large ISPs, the developers of the major browsers, and the White House to make the Internet family friendly? In this case, I worry. I worry that the rights and responsibilities of adults who care for children can be abridged. I worry that the ability of minors to learn to judge for themselves the tolerable boundaries of acceptable language, sexual content, and violence is being abridged as well. I worry that in the rush to cater to the politically popular cause of "our children above all else," we may be forgetting the substantial portion of the population that is over 17 years of age. Since any rating system imposes its own set of values on the material being judged, and since the driving concern of the White House and the mainstream Internet industry is family friendliness, this partnership may have its cost. Since much of the population will likely view the Internet through filter-mediated access points such as employers, libraries, or state-funded community centers, there is a strong likelihood that material will pass through a politically defined sieve, or be strained through a mesh sized only for bland, widespread commercial appeal ... easy to digest, but without much texture.

So should we abandon PICS and its label-enabling technology? ... or should we abandon filter and blocking technology because of the potential for misuse? No. They have value, and the value resides in being able to nurture the future generations in the way families and communities have nurtured their young in the past -- by exposing them to the world with guidance and support appropriate to their maturity and the values of their chosen community. Should we be wary of abuse? Yes. But since we don't reject kitchen knives because of their potential for harm, and we don't rule out buying aspirin because it can be abused if taken in the wrong amount or in the wrong context, we shouldn't wring our hands over the censorship possibilities of rating. It certainly is preferable to abiding by a new version of the Communications Decency Act. If we acknowledge the communities we belong to, actively participate in creating our own rules and standards, encourage a diversity of rating systems, and step forward to take responsibility and control of the technology before us, then we have also taken a step in affirming our place in a technological democracy.

Notes:

1. Communications Decency Act, Title 47 U.S.C.A., 223(a) and (d), 1996.

2. US Supreme Court majority opinion, Reno v. ACLU, No. 96-511, 1997.

3. Ibid.

4. Ibid.

5. US Supreme Court dissenting opinion, Reno v. ACLU.

6. Ibid.

7. Ibid.

8. US Supreme Court majority opinion, Reno v. ACLU.

9. Ibid.

10. Miller v. California, 413 U.S. 15 (1973)

11. Paul Resnick and James Miller, "PICS: Internet Access Controls Without Censorship," Association for Computing Machinery, Inc., 1996. URL: http://www.w3.org/PICS/iacwcv2.htm

12. US Supreme Court dissenting opinion, Reno v. ACLU.

13. Ibid.

14. A statement from public interest and industry groups on White House announcement, "Businesses, Public and Private Groups Unite Behind Initiative For Family-Friendly Internet Online World," NetParents, 1997. URL: http://netparents.org/970716_stmnt.html

15. White House Statement, "A Family Friendly Internet," 1997. URL: http://www.whitehouse.gov/WH/New/Ratings/

16. Op. cit., "Businesses, Public and Private Groups Unite Behind Initiative For Family-Friendly Internet Online World."

17. Principal authors Ann Beeson, Chris Hansen, Barry Steinhardt; American Civil Liberties Union White Paper, "Fahrenheit 451.2: Is Cyberspace Burning?" 1997. URL: http://www.aclu.org/issues/cyber/burning.html

18. Ibid.

19. Simson Garfinkel, "Good Clean PICS," HotWired, February 1997. URL: http://www.hotwired.com/packet/garfinkel/97/05/index2a.htm

20. Op. cit., Beeson, Hansen, Steinhardt, et al.

21. Jonathan Weinberg, "Rating the Net," 1997. URL: http://www.msen.com/~weinberg/rating/htm

22. Esther Dyson, "Labels and Disclosure," Release 1.0, December 1996. URL: http://www.edventure.com/release1/1296body.html

References:

1. Beeson, Hansen, Steinhardt, et al., American Civil Liberties Union White Paper, "Fahrenheit 451.2: Is Cyberspace Burning?" 1997. URL: http://www.aclu.org/issues/cyber/burning.html

2. Center for Democracy and Technology, "Internet Family Empowerment White Paper: How Filtering Tools Enable Responsible Parents to Protect Their Children Online," 1997. URL: http://www.cdt.org/speech/empower.html

3. Communications Decency Act, Title 47 U.S.C.A., 223(a) and (d), 1996.

4. Dyson, Esther, "Labels and Disclosure," Release 1.0, December, 1996. URL: http://www.edventure.com/release1/1296body.html

5. Garfinkel, Simson, "Good Clean PICS," HotWired, February, 1997. URL: http://www.hotwired.com/packet/garfinkel/97/05/index2a.htm

6. Harvard Law Review Association, "The Message in the Medium: The First Amendment on the Information Superhighway," Harvard Law Review, 1994.

7. Marshall, Joshua Micah, "The Trouble With PICS," Feed, Inc., 1997. URL: http://www.feedmag.com/html/feedline/97.09marshall/97.09marshall.html

8. Resnick, Paul, "PICS, Censorship, & Intellectual Freedom FAQ," last revised June, 1997.

9. Resnick, Paul and Miller, James, "PICS: Internet Access Controls Without Censorship," Association for Computing Machinery, Inc., 1996. URL: http://www.w3.org/PICS/iacwcv2.htm

10. Shallit, Jeffrey, from a talk given at the Session on "Public Networks and Censorship" at the Ontario Library Association, 1995.

11. Statement from Public Interest and Industry Groups on White House announcement, "Businesses, Public and Private Groups Unite Behind Initiative For Family-Friendly Internet Online World," NetParents, 1997. URL: http://netparents.org/970716_stmnt.html

12. Tribe, Laurence, "The Constitution in Cyberspace," Keynote Address at the First Conference on Computers, Freedom and Privacy, 1991.

13. US Constitution, Bill of Rights, 1791.

14. Weinberg, Jonathan, "Rating the Net," 1997 URL: http://www.msen.com/~weinberg/rating/htm

15. White House Statement, "A Family Friendly Internet," 1997. URL: http://www.whitehouse.gov/WH/New/Ratings/

16. World Wide Web Consortium Homepage, Platform for Internet Content Selection, last updated 18 July 1997. URL: http://www.w3.org/PICS/