PICS is a very flexible framework that allows for the labeling of internet content by both authors/publishers and third parties. It supplies a standard format for labels, thus supporting multiple labeling schemes and rating services. Its most obvious and original purpose is to allow parents and schools to control children's access to objectionable content on the internet, but it can also be used for other document classification purposes such as quality ratings and intellectual property rights management. There are currently a handful of rating service vendors selling software with names like CyberSitter, Safe Surf, and Net Nanny, each using its own proprietary standards to assign ratings to web pages and newsgroups. Both Microsoft's Internet Explorer and Netscape Communicator support browser-level filtering of PICS-based rating systems; together they claim 97% of the web browser market share. One popular PICS-based rating system is that of the Recreational Software Advisory Council on the Internet, or RSACi, which is currently supported directly by Internet Explorer and the Cyber Patrol blocking program.
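To make the "standard format for labels" concrete: a PICS label is a short, machine-readable annotation that names a rating service and lists category/value pairs, typically embedded in a page's HTML or sent in an HTTP header. The sketch below parses a simplified label; the label string is an illustrative example in the spirit of RSACi's four categories (nudity, sex, violence, language), and real PICS-1.1 labels can carry many more options than this parser handles.

```python
import re

# A simplified PICS-1.1 label as it might appear in an HTML META tag.
# (Illustrative example; real labels may carry extra options and metadata.)
label = '(PICS-1.1 "http://www.rsac.org/ratingsv01.html" l r (n 0 s 0 v 0 l 4))'

def parse_ratings(label: str) -> dict:
    """Extract category/value pairs from the 'r (...)' section of a label."""
    match = re.search(r'r \(([^)]*)\)', label)
    if not match:
        return {}
    pairs = re.findall(r'([a-z])\s+(\d+)', match.group(1))
    return {category: int(value) for category, value in pairs}

# RSACi-style categories: n=nudity, s=sex, v=violence, l=language (levels 0-4)
print(parse_ratings(label))  # {'n': 0, 's': 0, 'v': 0, 'l': 4}
```

Because every rating service publishes labels in this one common syntax, a single piece of blocking software can honor labels from Safe Surf, RSACi, or any third-party bureau interchangeably.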
PICS has received large amounts of both praise and criticism. Many see it as a reasonable, minimally restrictive way to protect the innocence of our children, while others worry that the costs of such a system will result in the loss of freedom of expression on the internet. Proponents of rating systems argue that they are an easily available, effective solution that gives parents the ultimate control over where their children are allowed to wander in cyberspace. Some supporters of rating systems even advocate government regulation, including restrictions on false ratings and the mandatory rating of all content. The American Civil Liberties Union has adopted a strong stance against PICS, fearing that even voluntary rating systems will ultimately result in the stifling of free speech due to the burden and unwieldiness of implementing such methods. The prospect of mandatory rating systems eventually emerging from the voluntary system is anathema to the ACLU. Despite the concerns of the ACLU and other detractors, voluntary rating systems are an excellent idea for the internet, provided they don't evolve into mandatory schemes. In this paper, I will defend rating systems against their criticisms, and attempt to show the reader some new reasons why they are the right thing for the internet.
Rating schemes are also effective at marking inappropriate sites. The web is very large, and it is too early to say how completely rating systems will cover cyberspace. But in light of the extensive cataloging of the web that already exists in search engines and directories such as Yahoo, it is reasonable to expect that a majority of the web could be rated within two years if PICS gains more popularity (currently, about 45,000 web sites are rated with RSACi). With several rating systems already on the market, competition among vendors should steadily accelerate the web page rating process.
Under most popular rating systems, there are several levels of rating that differentiate between pornographic material and sexual content presented in a technical or educational manner. This gives parents the ability to block content based on their own judgment of their child's maturity. As for adults avoiding web pages based on labels, it seems clear that selecting content by label is a matter of personal choice. Many people prefer not to see controversial speech that offends them. As long as it is the choice of the web surfer, and not some intermediate party or government censor, that regulates the distribution of information, then no terrible transgression on free speech rights has occurred.
The fact of the matter is that it is not extremely difficult to rate a web page, and the effort involved is much less than the work required to actually create the page in the first place. It takes but a few minutes to self-rate with Safe Surf and RSAC. In the case of large collections of documents, it is possible to assign one rating to entire directories where appropriate. I predict it won't be long before web-page authoring software packages will feature shortcut procedures to facilitate the entire rating process. If an organization has the resources to create a massive web site, then it likely has the ability to rate its content.
A further issue with self-rating is the problem of handling the diverse content of the internet consistently. How can authors across the globe be expected to make consistent judgments about web pages containing highly subjective content, such as art? What one person views as artistic nudity suitable for children, another sees as blockable pornography. Who gets to decide what is appropriate in such cases?
For a rating system to work, somebody will have to make such subjective judgments about content. Various rating systems set different guidelines for raters, with different levels of strictness. It would be most desirable for the parent to be able to make such decisions on a case by case basis, but the next best thing is for the parent to choose a rating system according to his or her preferences. There are really two types of subjective choices involved in access controls: the choice of how to label content according to a rating system's specifications ("is the content erotic or pornographic?"), and the choice of whether a particular rating is inappropriate for young viewers ("is my child mature enough to see foul language?"). Under government censorship, both of these decisions would be made by third parties. But thanks to customizable blocking software, at least the second choice can be made by the parent. Of course, for the system to work well, the honesty and cooperation of the content providers is required. I will address the issue of the accuracy of rating schemes in more detail below.
The issue becomes murkier when we consider how to rate news sites. Should a violent news story be rated differently than an action computer game site? Recently, there was a loud cry in favor of the creation of an additional "N" label for news for the RSACi rating system. But it soon became clear that deciding what exactly counted as news would be a tricky question. RSAC's Executive Director Stephen Balkam planned to assign the news rating to "legitimate, objective news" sites, but opinion magazines would not qualify. And no extremist groups would receive the rating. "If we came across a publication called the Nazi News, we would certainly, undoubtedly turn them down."
RSAC later abandoned the idea when several major news media companies, including The Wall Street Journal, The New York Times, and MSNBC, rejected a "news" label for their content. This was a wise choice, as it is a bad idea to allow any one body to define what is legitimate news. Of course, third party vendors will essentially be making this decision when they assign their own ratings to news web sites. If Safe Surf declares that ABC's on-line news site is acceptable for child viewers, but doesn't rate the less prominent local newspaper site, then it will be up to the parent to include it manually.
I don't expect there exists a satisfactory method for rating live internet conversations such as chat rooms and discussion groups. Blocking software would be able to filter posts according to text parsing rules--this would be effective in catching most cases of inappropriate language. The other option would be to block entire chat rooms or discussion groups based on their content. To restrict the speaking rights of participants in any way would be wrong. The chat rooms on family-friendly America On-Line are all monitored; it might be best for parents to restrict their children to such an arena for conversations.
This argument reflects a naive viewpoint of the ACLU that the United States is the ringleader of the internet. According to Jim Miller of the World Wide Web Consortium, it is the rest of the world that is ahead of us in the game. Other countries are much more likely to agree to labeling standards. Australia, for example, has seen a large movement towards content rating, although that movement will, unfortunately, probably involve government control.
Self-rating as a mandatory policy in the US only would indeed be a bad policy. Voluntary rating on a world-wide scale, as is happening right now, is a good idea. The internet is adopting its own self-regulating measures to effect a global solution to a problem that individual national governments will never resolve satisfactorily by themselves.
In light of the recent Supreme Court decision striking down the Communications Decency Act, it is unlikely that mandatory rating laws will be upheld. This may not stop legislators from trying to pass censorship bills they know are unconstitutional. Yet, it is hard to claim that such obnoxious legislation will be the direct result of rating schemes--politicians will attempt to censor the internet anyway, as we know from the CDA case. Voluntary rating systems are an important component of the least restrictive means of providing access controls for internet content.
This possibility seems a bit unrealistic, when considering that it really isn't that burdensome to rate a web page. Any author can easily rate his work. As for the prospect of a corporate dominated internet, it is already the case that many corporate web sites are more prominently featured in net directories, search engines, and through their own promotion on banner advertisements. But this hasn't stopped individual speakers and small groups from being heard.
It is still possible for unrated pages to be processed by blocking software using filters. It's obviously better for authors to rate their own work, but a simple parser that blocks pages containing offensive language would be an acceptable way to handle such cases. Most popular rating software vendors use this technique to assign ratings on the fly as customers load unrated pages. At least one company then personally reviews the sites as they are encountered to verify the rating.
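The text-parsing fallback for unrated pages can be sketched as a naive keyword filter. The block list below is a hypothetical placeholder; commercial products use far larger and more nuanced rule sets, and then (as noted above) may have a human reviewer confirm the on-the-fly rating.

```python
# A naive keyword filter of the kind blocking software might apply to
# unrated pages. The block list is a hypothetical placeholder, not any
# vendor's actual list.
BLOCKED_WORDS = {"xxx", "explicit", "porn"}

def should_block(page_text: str) -> bool:
    """Block the page if any whole word on the block list appears in it."""
    words = set(page_text.lower().split())
    return not words.isdisjoint(BLOCKED_WORDS)

print(should_block("Welcome to our family recipes page"))  # False
print(should_block("XXX explicit content ahead"))          # True
```

Matching whole words rather than substrings avoids some classic false positives, but the crudeness of any such filter is exactly why author-supplied ratings remain preferable.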
...for example, a national proxy-server/firewall combination that blocks access to a government-provided list of prohibited sites does not depend on the interoperability of labels and filters provided by different organizations. While such a setup could use PICS-compatible technology, a proprietary technology provided by a single vendor would be just as effective. It is unfortunate that some countries do not provide the same degree of protection for speech as the United States, but it's not right for the rest of the world to stifle its development of new technology for fear that an oppressive government might abuse it.
It is certainly true that many of today's parents are computer illiterate, but they are becoming more and more techno savvy as the world enters the information age. Installing and configuring blocking software is not a difficult task, and in many cases can be facilitated by the internet service provider. Many service providers have the capability to enable blocking software on their end of the wire, allowing computer illiterate parents to protect their children easily, and without the fear that their children will be able to circumvent the software.
There are two approaches to designing rating schemes: those that rely on standards, and those that use rules. Safe Surf is an example of a standards-based rating scheme. For instance, Safe Surf asks the rater to evaluate whether nudity on a web page is "artistic", "erotic", "pornographic" or "explicit or crude". Answers to such questions will obviously vary from person to person, resulting in imprecision in rating. Standards-based rating gains flexibility, but suffers from inconsistency. RSACi, on the other hand, has much more detailed questions on its self-rating form, which ask the author to supply specific information that isn't open to much interpretation. This type of rating relies on rules to assign a label to content. The advantage of rules-based rating is higher consistency; its drawback is that the rule set used will necessarily be an inferior approximation of the decision process needed to apply the standard its creators had in mind.
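The distinction can be made concrete in code: a rules-based scheme maps concrete yes/no answers to a level mechanically, leaving no room for interpretation once the questions are answered. The questions and thresholds below are hypothetical illustrations in the spirit of RSACi, not its actual questionnaire.

```python
# A sketch of rules-based rating: the author answers concrete factual
# questions, and a fixed rule set maps the answers to a level.
# (Hypothetical questions and thresholds, not RSACi's real form.)
def violence_level(*, blood_and_gore: bool,
                   harm_to_realistic_people: bool,
                   fighting: bool) -> int:
    """Return a violence level from 0 (none) to 3 (most severe)."""
    if blood_and_gore:
        return 3
    if harm_to_realistic_people:
        return 2
    if fighting:
        return 1
    return 0

# The same answers always yield the same level -- the consistency a
# rules-based scheme buys, at the cost of a coarse approximation.
print(violence_level(blood_and_gore=False,
                     harm_to_realistic_people=False,
                     fighting=True))  # 1
```

A standards-based scheme like Safe Surf's, by contrast, would replace these factual questions with a single judgment call ("is the violence artistic or gratuitous?") that no rule set can fully capture.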
It is the buyer's privilege to decide which style of rating system is better. No rating system will be perfect, obviously, but at least there are enough options that most parents will have a good chance of finding one that they like. It is expected that most participating authors will make a conscientious effort to accurately rate their page according to the specifications of the rating schemes they are using.
If you make a web page that says:
"Gary Bauer is a staunch anti-homosexual conservative who sees the gay movement as absolutely pure fascism and thinks movies of men with men are the greatest terror."
and view it through a web browser while running CyberSitter, it says:
"Gary Bauer is a staunch anti-conservative who sees the gay movement as absolutely pure and thinks movies of men with men are the greatest."
In response to this critical web site, CyberSitter threatened to block all sites on PeaceFire's internet service provider if it didn't shut down the page immediately, and added text to its licensing agreement forbidding the reverse engineering of its list of blocked sites.
CyberSitter and a couple other rating services with such controversial standards are examples of companies that may lose their credibility in the competitive blocking software market. There are many other providers of blocking software with less extreme filtering rules. It is the parent's choice to buy a particular vendor's blocking software--if some companies employ unfair or irresponsible practices in their ratings, then they will lose the trust of the market. CyberSitter's policies may seem extreme to most people, yet there are many conservative families who support such heavy-handed blocking for their children. What is popular will sell--this is the nature of the free market economy.
There is also a concern that individual speakers will abuse the system by incorrectly rating their own sites to gain a larger audience. A ruthless person could label his raunchy smut server as acceptable for all ages, which would undoubtedly upset most parents using blocking software.
I believe that these situations will not be a huge problem, because users of blocking software can report mislabeled sites and have the label provider review them. There is no need for criminal penalties for false labeling, as rating providers will have the power to monitor the labels submitted to them. Furthermore, the existing rules of "netiquette" will discourage misrating. Internet service providers can enforce these standards by refusing to host mislabeled web pages, and penalizing the accounts of offenders. In this way, the internet can police itself without the involvement of the government.
There will always be a few anarchistic independent servers that refuse to play along with the rest of the system, such as pornographic web sites that rate themselves as acceptable for children. But these dissenters will be blocked by third-party raters, and will feel pressure to comply from the rest of the electronic community.
Blocking software providers will compete to provide the best rating services, and parents will choose the software that best suits their needs. There will always be rating services with extremist political agendas that block pages based on the views of their authors; for example, it is possible to block pages with pro-homosexual or anti-Christian content. Although many people are opposed to such opinion-based filtering, it remains the right of parents to control what their children see, and as long as a large number of parents desire the ability to block certain issues, rating companies will continue to provide it in their software.
The Supreme Court has held that the "least restrictive means" of controlling speech is appropriate in cases where there is a need to protect the public interest. The Communications Decency Act proposed an excessive method of speech control, and was thus struck down. The internet as a speech medium deserves a high degree of protection because of its widespread penetration and the high level of participation of its users. It would not be American for the government to enforce censorship on-line. However, it is very American for parents to use filtering software to protect their children on-line, and for the internet community to encourage voluntary rating of published content.
Donald Haines of the ACLU's Legislative Counsel draws an interesting parallel:
"Imagine being forced to wear a sandwich board that says 'violent and sexual content' if you want to stand on the street and hand out a pamphlet on domestic abuse. This kind of content-based self-labeling is exactly what the Supreme Court opposed in its recent decision striking down censorship provisions of the Communications Decency Act."
It is already the case that distributors of unsolicited pornography are required to label their materials with warnings indicating that they contain possibly offensive content. This law is designed to protect individuals from being harassed by unsolicited porn distributed in an intrusive manner. So, in some special cases, it is right to force a speaker to declare the possible offensiveness of his content. Most of the time, however, it is not acceptable, as it is an infringement of free speech.
The Supreme Court has held that it is unconstitutional in most cases to require a person to say something against his will, as this is a violation of free speech. Forcing internet authors under penalty of law to publish a rating of their work is just plain wrong. Criminalizing false ratings isn't much better--the author shouldn't be forced to publish a particular rating if he disagrees with it.
Another argument against mandatory rating is that it is simply unnecessary. In the Supreme Court battle over the Communications Decency Act, the ACLU presented a case that filtering software was an already existing solution to the inappropriate content problem; this was one of the important reasons the act was overturned. In summary, if voluntary ratings are effective, then there is no need to mandate them.
Another important aspect of a librarian's job is the selection of books to include on shelves. A librarian chooses books based on many factors, including patron requests, critical reviews, and price, but not according to any value judgments. It is the patrons who decide what the values of books are when they pick them off the shelf and read them. Ideally, the library would have enough space and money to include all books; yet even the Library of Congress is limited in what it can shelve. This philosophy of unabridged access applies to internet use in the library as it does to books, and the American Library Association affirmed as much in 1996. To install blocking software on library computers is a contradiction of everything for which this philosophy stands--why should the library restrict its content to what some third party believes is appropriate when it could just as easily, and at no additional cost, include the entire internet?
There are some who argue that libraries should install filtering software on all computers, and disable it upon an adult's request. Not only does this put a silly burden on adult patrons, but it also violates the American Library Association's policy of unrestricted access for children. In cases where patrons' use of library computers to access pornography might offend others, the best solution is for librarians to enforce content-neutral usage time limits, and to place computers in more secluded spots.
Detractors point out many harms that can be done if PICS-based filtering becomes widespread. Nearly all of the arguments against PICS are rooted in the belief that someone--government legislators, large corporations, individual consumers--will use the system irresponsibly. The creators of PICS proclaim a stance of neutrality concerning its application by parents, teachers, governments, and others. PICS is nothing more than a protocol that can be used to label web pages. We must remember there is nothing good or bad about PICS itself--it is how we use it that matters.
Access controls have already become popular among concerned parents, and the use of rating systems is growing more widespread. We should welcome this development, but with a caution to buyers that access controls are not perfect. With a community full of active defenders of free speech, injustices in site ratings should not be tolerated, and marketers of blocking software will need to maintain a reputation of fairness and responsibility. As long as internet users understand the ramifications of restricted speech, the system can work for the wonderful benefit of allowing children to access the wealth of information available in cyberspace.