Hot Nasty PICS?

Cyrus Dolph
Paper for MIT 6.805/STS085: Ethics and Law on the Electronic Frontier, Fall 1997

Introduction

Following the Supreme Court's nullification of the Communications Decency Act, lawmakers and activists are striving to effect new methods of protecting our nation's children from harmful content on the internet. Government censorship is not a likely option now, except maybe in special cases such as child pornography, so new answers must be found in order to keep Junior away from cyber porn. The trendy solution to the problem now is the use of internet content rating schemes--such as the World Wide Web Consortium's Platform for Internet Content Selection, or PICS--that allow parents to block certain inappropriate web pages and newsgroups from their children's computers. The hope is that by popularizing voluntary rating of content, the internet will become a safer place for children, while still remaining a forum of free speech for adults.

PICS is a very flexible framework that allows for the labeling of internet content by both authors/publishers and third parties. It supplies a standard format for labels, thus supporting multiple labeling schemes and rating services. Its most obvious and original purpose is to allow parents and schools to control children's access to objectionable content on the internet, but it can also be used for other document classification purposes such as quality ratings and intellectual property rights management. There are currently a handful of rating service vendors selling software with names like CYBERsitter, Safe Surf, and Net Nanny, each using its own proprietary standards to assign ratings to web pages and newsgroups. Both Microsoft's Internet Explorer and Netscape Communicator support browser-level filtering of PICS-based rating systems; together they claim 97% of the web browser market share. One popular PICS-based rating system is that of the Recreational Software Advisory Council on the Internet, or RSACi, which is currently directly supported by Internet Explorer and by the Cyber Patrol blocking program.
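To make the label format concrete, a self-rating author typically embeds a PICS label in a page's HTML header as a META tag. The sketch below shows the rough shape of an RSACi-style label; the URL follows RSAC's published rating service address, but the category scores (n=nudity, s=sex, v=violence, l=language) and comment text here are invented for illustration:

```html
<!-- Hypothetical PICS-1.1 self-rating label; the four RSACi
     category scores below are illustrative, not a real rating. -->
<META http-equiv="PICS-Label" content='
  (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
   l comment "illustrative self-rating"
   r (n 0 s 0 v 1 l 2))'>
```

Blocking software that understands the RSACi vocabulary reads these four scores and compares them against the thresholds the parent has configured.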

PICS has received large amounts of both praise and criticism. Many see it as a reasonable and least restrictive way to protect the innocence of our children, while others worry that the costs of such a system will result in the loss of freedom of expression on the internet. Proponents of rating systems argue that they are an easily available, effective solution that gives parents the ultimate control over where their children are allowed to wander in cyberspace. Some supporters of rating systems even advocate government regulation, including restrictions on false ratings and the mandatory rating of all content. The American Civil Liberties Union has adopted a strong stance against PICS, fearing that even voluntary rating systems will ultimately result in the stifling of free speech due to the burden and unwieldiness of implementing such methods. The prospect of mandatory rating systems eventually emerging from the voluntary system is anathema to the ACLU. Despite the concerns of the ACLU and other detractors, voluntary rating systems are an excellent idea for the internet, provided they don't evolve into mandatory schemes. In this paper, I will defend rating systems against their criticisms, and attempt to show the reader some new reasons why they are the right thing for the internet.

A defense of voluntary rating schemes

There are many benefits of voluntary rating schemes. The most important of these is reflected in the word "voluntary"--everyone involved has the choice of participating or not. Authors may decide not to rate their own pages, and web surfers can choose what content is appropriate for them and their children. The responsibility for controlling web content thus rests on parents--not on authors, service providers, or the government. Another advantage is that rating schemes are inexpensive to implement. It costs little or nothing for publishers to rate their content. Blocking software is also cheap or free, and often available from internet service providers.

Rating schemes are also effective at marking inappropriate sites. The web is very large, so it is too early to say how thoroughly rating systems will cover cyberspace. But in light of the extensive cataloging of the web that already exists in search engines and directories such as Yahoo, it is reasonable to assume that a majority of the web could be rated within two years if PICS gains more popularity (currently, about 45,000 web sites are rated with RSACi). With several rating systems already on the market, competition among vendors will steadily accelerate the web page rating process.

Is voluntary rating censorship?

Rating schemes are not perfect, of course, and are subject to criticism. Opponents of internet ratings claim that the whole idea smacks of the kind of censorship that was trounced with the Supreme Court's nullification of the Communications Decency Act. Is PICS-based content rating censorship? Well, it depends on how you define the word "censorship". One very broad definition, quoted by Paul Resnick, is "any action by one person that makes otherwise available information unavailable to another person." Under this interpretation, voluntary rating schemes would be censorship, as would a mother preventing her child from viewing pornography magazines. A narrower definition preferred by the authors of PICS restricts censorship to controls placed on what information an author may publish or distribute. According to this definition, voluntary rating systems are not censorship, whereas the CDA's restrictions on content are. PICS-based blocking software would be better described by the term "access controls".

The ACLU's attack on rating

The ACLU has published a White Paper criticizing rating schemes entitled "Fahrenheit 451.2: Is Cyberspace Burning?", which predicts that the eventual result of rating schemes will be government enforced rules on mandatory self-rating--a scenario the ACLU claims is "for now, theoretical, but inevitable." They offer several reasons why self-rating schemes are "wrong for the internet", and I will address them here.

Controversial speech will be censored.

The ACLU claims that rating systems will result in the censorship of controversial content such as web pages about safe sex and domestic violence. What about animal rights activism pages that show pictures of animals mutilated in scientific research? Arguably, web pages that depict sexually oriented or violent material in an educational manner shouldn't be censored along with the common smut. Also, children are not the only ones affected by content labels. Would adults steer clear of a safe sex site upon seeing a label indicating that it contains sexual content?

Under most popular rating systems, there are several levels of rating that differentiate between pornographic material and sexual content presented in a technical or educational manner. This gives parents the ability to block content based on their own judgment of their child's maturity. As for adults avoiding web pages based on labels, it seems clear that it is a matter of personal choice to select content based on labels. Many people simply prefer not to see controversial speech that offends them. As long as it is the choice of the web surfer, and not some intermediate party or government censor who is regulating the distribution of information, then no terrible transgression on free speech rights has occurred.

Rating is burdensome, unwieldy, and costly.

Self-rating can be expensive to implement for some publishers such as non-profit organizations with large web directories. Rating thousands of pages on one site could require a lot of man-hours. What about the non-profit art collection that can't afford to rate its entire content body, and is thus inaccessible to a large segment of the on-line population? The ACLU contends that compelling netizens to rate their work will be an unfair burden in some cases.

The fact of the matter is that it is not extremely difficult to rate a web page, and the effort involved is much less than the work required to actually create the page in the first place. It takes but a few minutes to self-rate with Safe Surf and RSAC. In the case of large collections of documents, it is possible to assign one rating to entire directories where appropriate. I predict it won't be long before web-page authoring software packages will feature shortcut procedures to facilitate the entire rating process. If an organization has the resources to create a massive web site, then it likely has the ability to rate its content.

A further issue with self-rating is the problem of handling the diverse content of the internet consistently. How can authors across the globe be expected to make consistent judgments about web pages containing highly subjective content, such as art? What one person views as artistic nudity suitable for children, another sees as blockable pornography. Who gets to decide what is appropriate in such cases?

For a rating system to work, somebody will have to make such subjective judgments about content. Various rating systems set different guidelines for raters, with different levels of strictness. It would be most desirable for the parent to be able to make such decisions on a case by case basis, but the next best thing is for the parent to choose a rating system according to his or her preferences. There are really two types of subjective choices involved in access controls: the choice of how to label content according to a rating system's specifications ("is the content erotic or pornographic?"), and the choice of whether a particular rating is inappropriate for young viewers ("is my child mature enough to see foul language?"). Under government censorship, both of these decisions would be made by third parties. But thanks to customizable blocking software, at least the second choice can be made by the parent. Of course, for the system to work well, the honesty and cooperation of the content providers is required. I will address the issue of the accuracy of rating schemes in more detail below.

The issue becomes murkier when we consider how to rate news sites. Should a violent news story be rated differently than an action computer game site? Recently, there was a loud cry in favor of the creation of an additional "N" label for news for the RSACi rating system. But it soon became clear that deciding what exactly counted as news would be a tricky question. RSAC's Executive Director Stephen Balkam planned to assign the news rating to "legitimate, objective news" sites, but opinion magazines would not qualify. And no extremist groups would receive the rating. "If we came across a publication called the Nazi News, we would certainly, undoubtedly turn them down."

RSAC later canned the idea when several major news media companies including The Wall Street Journal, the New York Times, and MSNBC rejected the idea of a "news" label for their content. This was a wise choice, as it is a bad idea to allow any one body to define what is legitimate news. Of course, third party vendors will essentially be making this decision when they assign their own ratings to news web sites. If Safe Surf declares that ABC's on-line news site is acceptable for child viewers, but doesn't rate the less prominent local newspaper site, then it will be up to the parent to include the local site manually.

Conversation can't be rated.

A further issue of concern is the labelling of dynamic areas of the internet, such as chat rooms and discussion groups. It is absurd to label individual posts of participants, yet blanket ratings would also be inappropriate.

I don't expect a fully satisfactory method exists for rating internet conversations of these types. Blocking software can filter posts according to text-parsing rules, which would catch most instances of inappropriate language. The other option is to block entire chat rooms or discussion groups based on their content; restricting the speaking rights of participants in any way would be wrong. The chat rooms on family-friendly America Online are all monitored; it might be best for parents to restrict their children to such an arena for conversations.
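A text-parsing filter of the kind described above can be sketched in a few lines. The blocked-word list and sample posts here are invented for illustration; real blocking products ship much larger proprietary lists:

```python
# Minimal sketch of keyword-based post filtering, as blocking software
# might apply it to chat or newsgroup traffic on the fly.
# The blocked-word list below is hypothetical.
BLOCKED_WORDS = {"damn", "smut"}

def is_clean(post: str) -> bool:
    """Return True if the post contains none of the blocked words."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not (words & BLOCKED_WORDS)

posts = ["Nice weather today!", "Buy our smut here"]
print([p for p in posts if is_clean(p)])  # only the first post survives
```

The obvious limitation, discussed later in this paper, is that word lists both miss creative misspellings and wrongly catch innocent uses of flagged terms.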

Self-rating will create "Fortress America" on the Internet.

The ACLU White Paper argues that the United States' adoption of rating schemes will result in the isolation of the country from the rest of the global village, as people in other countries will be slower to learn about and use rating systems. There is also a concern that American rating standards will not be compatible with those of other cultures.

This argument reflects a naive viewpoint of the ACLU that the United States is the ringleader of the internet. According to Jim Miller of the World Wide Web Consortium, it is the rest of the world that is ahead of us in the game. Other countries are much more likely to agree to labelling standards. Australia, for example, has seen a large movement towards content rating, although it will unfortunately probably involve government control.

Self-rating as a mandatory policy in the US only would indeed be a bad policy. Voluntary rating on a world-wide scale, as is happening right now, is a good idea. The internet is adopting its own self-regulating measures to effect a global solution to a problem that individual national governments will never resolve satisfactorily by themselves.

Self-rating will only encourage, not prevent government regulation.

The threat of government regulation is a black cloud over the internet community. There is a concern that governments might require authors to rate their sites, or criminalize misrating. In fact, one bill that criminalizes misrating has already been proposed, and Safe Surf is pushing for legislation allowing civil suits in cases of false ratings.

In light of the recent Supreme Court decision striking down the Communications Decency Act, it is unlikely that mandatory rating laws will be upheld. This may not stop legislators from trying to pass censorship bills they know are unconstitutional. Yet, it is hard to claim that such obnoxious legislation will be the direct result of rating schemes--politicians will attempt to censor the internet anyway, as we know from the CDA case. Voluntary rating systems are an important component of the least restrictive means of providing access controls for internet content.

Rating systems will cause a commercialized internet.

The ACLU claims that under a self-rating system, many sites will find it is too expensive to rate all their web content, while large corporations will have little problem rating their own pages. Also, large corporate speakers will more likely be noticed by third-party rating services, resulting in the little guys' web pages going unnoticed. In the worst case scenario, the once cheapest, most participatory speech medium will become dominated by American corporate speakers.

This possibility seems a bit unrealistic, when considering that it really isn't that burdensome to rate a web page. Any author can easily rate his work. As for the prospect of a corporate dominated internet, it is already the case that many corporate web sites are more prominently featured in net directories, search engines, and through their own promotion on banner advertisements. But this hasn't stopped individual speakers and small groups from being heard.

Further criticisms of rating systems

Outdated pages might not receive ratings.

There are pages that haven't been updated since 1995 or earlier, because their authors have forgotten or abandoned them. These pages will not be rated by their authors, and will likely be overlooked by third-party raters.

It is still possible for unrated pages to be processed by blocking software using filters. It's obviously better for authors to rate their own work, but a simple parser that blocks pages containing offensive language would be an acceptable way to handle such cases. Most popular rating software vendors use this technique to assign ratings on the fly as customers load unrated pages. At least one company then personally reviews the sites as they are encountered to verify the rating.

PICS will be a tool for oppressive governments to censor incoming content.

It is true that foreign governments could use labels on internet content to filter out banned information. Paul Resnick points out that PICS is not the only way to accomplish this:
...for example, a national proxy-server/firewall combination that blocks access to a government-provided list of prohibited sites does not depend on interoperability of labels and filters provided by different organizations. While such a setup could use PICS-compatible technology, a proprietary technology provided by a single vendor would be just as effective.
It is unfortunate that some countries do not provide the same degree of protection for speech as the United States, but it's not right for the rest of the world to stifle its development of new technology for fear that an oppressive government might abuse it.

Parents are too computer illiterate to use the system.

In many households, the children are the ones in charge of the computer. Are parents too hopeless to install and configure blocking software? Also, will clever kids be able to defeat the software?

It is certainly true that many of today's parents are computer illiterate, but they are becoming more and more techno savvy as the world enters the information age. Installing and configuring blocking software is not a difficult task, and in many cases can be facilitated by the internet service provider. Many service providers have the capability to enable blocking software on their end of the wire, allowing computer illiterate parents to protect their children easily, and without the fear that their children will be able to circumvent the software.

What about inaccurate ratings?

An important problem with self-rating is that accuracy may suffer because methods of rating always rely on subjective judgments. As I mentioned earlier, one person's art for all ages is another's offensive pornography. The rating given to a site will depend on the nature of the questions the rating system asks about the content, and their interpretation by the person answering them.

There are two approaches to designing rating schemes: those that rely on standards, and those that use rules. Safe Surf is an example of a standards-based rating scheme. For instance, Safe Surf asks the rater to evaluate whether nudity on a web page is "artistic", "erotic", "pornographic" or "explicit or crude". Answers to such questions will obviously vary from person to person, resulting in imprecision in rating. Standards-based rating enjoys the advantage of flexibility, but suffers from inconsistency. RSACi, on the other hand, has much more detailed questions on its self-rating form, asking the author to supply specific information that isn't open to much interpretation. This type of rating relies on rules to assign a label to content. The advantage of rules-based rating is higher consistency; its drawback is that the rule set used will necessarily be an inferior approximation of the decision process needed to apply the standard its creators had in mind.
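The distinction can be sketched concretely. In the standards-based style, the rater supplies the judgment directly and the level follows from it; in the rules-based style, the level is computed from specific factual answers, so two honest raters should reach the same result. The questions, scales, and thresholds below are invented for illustration and are not the actual Safe Surf or RSACi rules:

```python
# Hypothetical rules-based rating: the level is derived from concrete
# yes/no answers about the content, minimizing rater interpretation.
def rules_based_level(nudity_present: bool, sexual_acts: bool,
                      explicit_detail: bool) -> int:
    if explicit_detail:
        return 4
    if sexual_acts:
        return 3
    if nudity_present:
        return 1
    return 0

# Hypothetical standards-based rating: the rater makes the subjective
# call ("artistic" vs. "pornographic") and the level simply records it.
STANDARDS_SCALE = {"none": 0, "artistic": 1, "erotic": 2,
                   "pornographic": 3, "explicit or crude": 4}

def standards_based_level(judgment: str) -> int:
    return STANDARDS_SCALE[judgment]

print(rules_based_level(True, False, False))   # 1
print(standards_based_level("artistic"))       # 1
```

The trade-off is visible in the code: the rules-based function is consistent but only as good as its hard-coded thresholds, while the standards-based function pushes the whole judgment onto the rater.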

It is the buyer's privilege to decide which style of rating system is better. No rating system will be perfect, obviously, but at least there are enough options that most parents will have a good chance of finding one that they like. It is expected that most participating authors will make a conscientious effort to accurately rate their page according to the specifications of the rating schemes they are using.

What about irresponsible or unfair rating services?

It is a worry that third-party raters will arbitrarily provide unfair, inaccurate, or inconsistent ratings to web pages. Some rating services have glitches or unfairly jump to conclusions without evidence, as when entire directories or even domains are blocked on the basis of a single document. Critics like to quote the deeds of CyberSitter, perhaps the most politically conservative rating service. For example, CyberSitter blocks the National Organization for Women web site, along with pages containing any references to lesbian or gay topics. It also blocks pages that criticize it for its blocking decisions. One critic of CyberSitter, the PeaceFire organization, decrypted CyberSitter's (proprietary) list of blocked sites and published it on the web. They also found a few bugs in the blocking software's text filtering algorithm. From the PeaceFire web page:
If you make a web page that says:

"Gary Bauer is a staunch anti-homosexual conservative who sees the gay movement as absolutely pure fascism and thinks movies of men with men are the greatest terror."

and view it through a web browser while running CyberSitter, it says:

"Gary Bauer is a staunch anti-conservative who sees the gay movement as absolutely pure and thinks movies of men with men are the greatest."
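The garbled output comes from naive phrase deletion: the filter removes each blocked phrase from the text and splices together whatever is left, which can silently invert a sentence's meaning. A minimal reconstruction of the effect (the phrase list is my guess at the sort of entries involved, not CyberSitter's actual list):

```python
# Sketch of naive substring-removal filtering. Deleting blocked
# phrases and joining the remnants can reverse a sentence's meaning.
# The phrase list below is illustrative, not CyberSitter's real list.
BLOCKED_PHRASES = ["homosexual ", "fascism ", "terror."]

def strip_phrases(text: str) -> str:
    for phrase in BLOCKED_PHRASES:
        text = text.replace(phrase, "")
    return text

print(strip_phrases("He is an anti-homosexual activist."))
# -> "He is an anti-activist." : the negation survives, the target is gone
```

This is why "anti-homosexual conservative" collapses into "anti-conservative" in PeaceFire's example: the filter deletes the flagged word without any awareness of the surrounding grammar.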

In response to this critical web site, CyberSitter threatened to block all sites on PeaceFire's internet service provider if it didn't shut down the page immediately, and added text to its licensing agreement forbidding the reverse engineering of its list of blocked sites.

CyberSitter and a couple of other rating services with such controversial standards are examples of companies that may lose their credibility in the competitive blocking software market. There are many other providers of blocking software with less extreme filtering rules. It is the parent's choice to buy a particular vendor's blocking software--if some companies employ unfair or irresponsible practices in their ratings, then they will lose the trust of the market. CyberSitter's policies may seem extreme to most people, yet there are many conservative families who support such heavy-handed blocking for their children. What is popular will sell--this is the nature of the free market economy.

There is also a concern that individual speakers will abuse the system by incorrectly rating their own sites to gain a larger audience. A ruthless person could label his raunchy smut server as acceptable for all ages, which would undoubtedly upset most parents using blocking software.

I believe that these situations will not be a huge problem, because users of blocking software can report mislabeled sites and have the label provider review them. There is no need for criminal penalties for false labelling, as rating providers will have the power to monitor the labels submitted to them. Furthermore, the existing rules of "netiquette" will discourage misrating. Internet service providers can enforce these standards by refusing to host mislabeled web pages, and by penalizing the accounts of offenders. In this way, the internet can police itself without the involvement of the government.

A case for voluntary rating

Now that I have answered the challenges of the ACLU and other opponents, I will present my case for why voluntary rating is the appropriate way to protect both the innocence of a world of young web surfers and the speaking rights of the rest of the netizenship.

Voluntary rating will not restrict speech on the internet, but increase the numbers of participants.

As voluntary rating is optional for all parties involved, it seems silly to claim that it is an infringement on anyone's rights. PICS-based blocking software does not censor internet speakers, it controls access to those speakers by children. Many of these children might not be allowed by their parents to access the internet without the supervision of blocking software. So rating systems are really increasing the amount of free speech on the internet, not the other way around! Most people don't want the internet to be an adult-only realm, and it's not fair to make it into a children's content-only realm, either. With access controls, we can have an internet that contains adult content, and is still child-friendly--the best of both worlds.

Voluntary rating fits in with the spirit of a self-governed internet.

One of the things that makes the internet a unique communication medium is that it has thrived for years across the world without any outside regulation. There is no central controlling body for the entire net; everybody just follows certain agreed-upon conventions for interoperability. The internet has an autonomous spirit--sysops take pride in dependably administering their networks, and users have developed a protocol of "netiquette" to govern their actions. The adoption of content rating is a natural extension of this protocol. Already, service providers are strongly encouraging their clients to rate their web pages. Further action by the electronic community that could stimulate self-rating might include web directories and search engines requiring that pages submitted for new listings be rated. If the push to rate content on the World Wide Web arises from within, then this is truly a triumph for an autonomous internet and a free economy.

There will always be a few anarchist independent servers who refuse to play along with the rest of the system, such as pornographic web sites that rate themselves as acceptable for children. But these dissenters will be blocked by third-party raters, and will feel pressure to comply from the rest of the electronic community.

Voluntary rating is the "American Way" to regulate the internet.

Americans are proud of our free market economy, and our rights to choose what products we buy, what TV programs we watch, and what newspapers we read. Voluntary rating systems fit perfectly with this way of life. At both ends, there is a choice of how to participate. The author of a web page can choose to self-rate, or decide not to and possibly not be heard by users blocking unrated material. Parents wishing to monitor their child's net exploration have a wide selection of blocking software to choose from, and can select which types of content their child will be able to access. All this is made possible by the PICS standard. This is far better than a system of government censorship, which involves a choice being made for everyone by a law.

Blocking software providers will compete to provide the best rating services, and parents will choose the software that best suits their needs. There will always be rating services with extremist political agendas who block pages based on the views of their authors; for example, it is possible to strike pages with pro-homosexual or anti-Christian content. Although many people are opposed to such opinion-based filtering, it remains the right of parents to control what their children see, and as long as a large number of parents desire the ability to block certain issues, rating companies will continue to provide it in their software.

The Supreme Court has held that the "least restrictive means" of controlling speech is appropriate in cases where there is a need to protect the public interest. The Communications Decency Act proposed an excessive method of speech control, and was thus struck down. The internet as a speech medium deserves a high degree of protection because of its widespread penetration and the high level of participation of its users. It would not be American for the government to enforce censorship on-line. However, it is very American for parents to use filtering software to protect their children on-line, and for the internet community to encourage voluntary rating of published content.

Voluntary rating is here already, and isn't going away.

While politicians, activists and people like me are busy writing papers about whether or not content rating is a good idea, there are a myriad of netizens who are using it right now. PICS-based access controls are not merely a theory, but are being used by many satisfied parents. Obviously, the system has not been perfected yet, but blocking software publishers are writing better programs, and the internet community is gaining awareness of the rating idea. I predict it will only be a matter of time until self-rating, probably through PICS, becomes the popular standard for providing internet content access control. If the system is popular, constitutional, and effective, then it is probably a good idea.

Why mandatory rating is wrong

With the rising popularity of content rating systems, there will come increased demand by some factions for their mandatory implementation. Legislators are already considering proposed bills criminalizing false rating, and there is a movement in favor of a law allowing civil suits for damages caused by false rating. Some people are even in favor of laws requiring all web page authors to self-rate their work. Such measures calling for government involvement in the rating process are a very bad idea, as they amount to unjust censorship.

Donald Haines of the ACLU's legislative counsel draws an interesting parallel:

Imagine being forced to wear a sandwich board that says "violent and sexual content" if you want to stand on the street and hand out a pamphlet on domestic abuse. This kind of content-based self-labeling is exactly what the Supreme Court opposed in its recent decision striking down censorship provisions of the Communications Decency Act.
It is already the case that distributors of unsolicited pornography are required to label their materials with warnings indicating that they contain possibly offensive content. This law is designed to protect individuals from being harassed by unsolicited porn distributed in an intrusive manner. So, in some special cases, it is right to force a speaker to declare the possible offensiveness of his content. Most of the time, however, it is not acceptable, as it is an infringement of free speech.

The Supreme Court has held that it is unconstitutional in most cases to require a person to say something against his will, as this is a violation of free speech. Forcing internet authors under penalty of law to publish a rating of their work is just plain wrong. Criminalizing false ratings isn't much better--the author shouldn't be forced to publish a particular rating if he disagrees with it.

Another argument against mandatory rating is that it is simply unnecessary. In the Supreme Court battle over the Communications Decency Act, the ACLU presented a case that filtering software was an already existing solution to the inappropriate content problem; this was one of the important reasons the act was overturned. In summary, if voluntary ratings are effective, then there is no need to mandate them.

Is filtering appropriate in libraries and schools?

PICS-based access controls for internet content are a great tool for parents who want to supervise their kids' net surfing at home, but there are cases in which the use of access controls is inappropriate, and perhaps even unconstitutional. Is it right for employers to restrict workers' internet access? Should blocking software be used in public libraries to protect minors? What about in schools? The first question is a matter of policy for a private enterprise, but the second two questions are important to the public, and I will address them here.

Libraries

The use of blocking software in public libraries is wrong for many reasons. To understand the issue better, it is necessary to consider the philosophy under which American librarians operate. The American Library Association states this philosophy simply in its Library Bill of Rights: "A person's right to use a library should not be denied or abridged because of origin, age, background, or views." Librarians do not act in loco parentis, meaning they do not accept responsibility for the content selection of minors in the library. For example, in most libraries, children may obtain library cards without their parents' signatures, and may borrow materials such as "R Rated" movies without restriction. This policy reflects a feeling that free access to information must be protected, and the acknowledgment that it would be prohibitively difficult for librarians to supervise minors reading in a library containing tens of thousands of unrated books. Parents are encouraged to play an active role in their child's reading, but librarians are not expected to assume that role in the absence of a parent.

Another important aspect of a librarian's job is the selection of books to include on shelves. A librarian chooses books based on many factors, including patron requests, critical reviews, and price, but not according to any value judgments. It is the patrons who decide what books are worth when they pick them off the shelf and read them. Ideally, the library would have enough space and money to include all books; yet even the Library of Congress is limited in what it can shelve. This philosophy of unabridged access applies to internet use in the library as it does to books, and the American Library Association affirmed as much in 1996. To install blocking software on library computers is a contradiction of everything for which this philosophy stands--why should the library restrict its content to what some third party believes is appropriate when it could just as easily, and at no additional cost, include the entire internet?

There are some who argue that libraries should install filtering software on all computers and disable it upon an adult's request. Not only does this place a needless burden on adult patrons, but it also violates the American Library Association's policy of unrestricted access for children. In cases where patrons' use of library computers to access pornography might offend others, the best solution is for librarians to enforce content-neutral usage time limits, and to place computers in more secluded spots.

Public Schools

Public schools, unlike libraries, do operate in loco parentis, because children are required to attend school and teachers are responsible for the well-being of their students. For this reason, it is acceptable for teachers to limit their pupils' internet access, although schools should be encouraged to enforce access limitations on a class-by-class basis rather than at a school-wide level.

Conclusion

In an ideal world, in which all parents had the time and inclination to supervise their children's web surfing, internet access controls and rating systems would be unnecessary. But parents and teachers aren't super beings--they cannot watch over every moment online, yet they want children to be able to experience the internet without encountering inappropriate content. There are only three ways to resolve this problem. The first solution, which was the status quo two years ago, is to do nothing to protect children from harmful content. With the internet growing exponentially, it is unfair to deny concerned parents a safe way for their children to participate. The second solution, government censorship of authors, is very undesirable because of the restrictions it places on adults, its unconstitutionality, and the impossibility of controlling foreign content. The third solution, which I advocate, is the implementation of access controls for children, using PICS-based rating systems to facilitate filtering. Of course, a solution involving rating systems isn't perfect, but it has far more merits and fewer drawbacks than the alternatives.

Detractors point out many harms that could result if PICS-based filtering becomes widespread. Nearly all of the arguments against PICS are rooted in the belief that someone--government legislators, large corporations, individual consumers--will use the system irresponsibly. The creators of PICS proclaim a stance of neutrality concerning its application by parents, teachers, governments, and others. PICS is nothing more than a protocol that can be used to label web pages. We must remember there is nothing good or bad about PICS itself--it is how we use it that matters.
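To see just how modest the protocol itself is, consider what a self-applied PICS label looks like in practice: a small piece of metadata embedded in a page's HTML header. The sketch below uses the RSACi rating vocabulary; the specific rating values are illustrative examples, not drawn from any actual page.

```html
<!-- Illustrative sketch of a PICS-1.1 label using the RSACi vocabulary.
     The rating values here (n = nudity, s = sex, v = violence,
     l = language, each on a 0-4 scale) are examples only. -->
<meta http-equiv="PICS-Label" content='
  (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
   l r (n 0 s 0 v 0 l 0))'>
```

A PICS-aware browser such as Internet Explorer reads this label and compares each value against the thresholds a parent has configured; the label itself carries no judgment about what should be blocked.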

Access controls have already become popular among concerned parents, and the use of rating systems is growing more widespread. We should welcome this development, but with a caution to buyers that access controls are not perfect. With a community full of active defenders of free speech, injustices in site ratings should not be tolerated, and marketers of blocking software will need to maintain a reputation of fairness and responsibility. As long as internet users understand the ramifications of restricted speech, the system can work for the wonderful benefit of allowing children to access the wealth of information available in cyberspace.

Bibliography