Professors Hal Abelson and Lawrence Lessig
10 December 1998
Paul Covell
Steve Gordon
Alex Hochberger
James Kovacs
Raffi Krikorian
Melanie Schneck
Identity is a unique piece of information associated with an
entity. Identity itself is simply a collection of characteristics
which are either inherent or are assigned by another. The color of a
person's hair and whether or not another thinks he is attractive are
both part of a person's identity.
Interactions in real space inherently carry the identity of the
person originating the transaction. Generally, physical traits are
carried along in a transaction - for example when one purchases a book
from a book store, the book dealer may remember the buyer's face or
build.
The difference between real space and cyberspace is that the essence
of any digital transaction is unbundling. Ones and zeros do not
inherently carry any separate information along with them; a real
space transaction carries along inseparable secondary
information. Digital transmissions can only transmit; there is no
secondary information encoded in the transmission unless explicitly
put there. Thus, additional information needs to be carried with
cyberspace transactions for authentication and identity purposes.
Providing extra information in digital communication introduces the
possibility of identity theft, since nothing prevents the
transmission of false identity information or the duplication of
another's identity information. To prevent these problems, the actual
identity must not be transmitted along with the message; instead a
verification scheme needs to be used to convince the recipient that
the message was actually sent by the sender. This eliminates the need
to send one's actual identity. The concept of verifying instead of
revealing provides an extra layer of security to the sender.
The other point of insecurity lies in the digital certificates issued
to verify these characteristics. These certificates are meant
to be used only by their owner, but if they are obtained by another
party, then that party can falsify his identity, representing himself
as the individual for whom he has digital certificates.
Architecturally, we must decide how to store and use these
certificates. The certificates can be stored on a smart card for use
on a computer terminal, or the certificates can be stored in an
"identity server" locked via password or biometric
information and available for transmission over the Internet.
In real space, it is difficult to selectively verify or reveal
portions of one's identity: most forms of identification contain more
information than is needed for any transaction. The unbundling that is
possible in cyberspace allows portions of identity to be disassociated
and verified by a third party. This not only creates the ability to
verify via the least revealing means, but it also creates the
framework for anonymous transactions - it is possible to merely verify
the proper information without ever distributing the name
characteristic. Further, cyberspace users have control over the
strength of the link between their real world and cyber-identities.
That is, in cyberspace, users can unbundle identity from content and
transactions.
Therefore, designers of an identity system for the digital environment
need to consider whether or not to build a system that facilitates
traceability - i.e. whether or not to build a system in which it is
always possible to trace one's cyberspace content/transactions to
one's real world identity. This question is extremely important as it
will not only affect the architectural design of the system, but it
will also have side effects that disturb governmental, commercial, and
social environments in cyberspace.
Forcing a mandatory link to identity (i.e. mandating traceability)
provides properly authorized law enforcement with a crucial tool in
criminal investigations - the ability to determine with whom criminals
are interacting. Law enforcement also will have the ability to monitor
illegal activities and trade, and easily determine who is
involved. However, commerce may also leave the United States if other
nations provide anonymous transactions. The mandatory identity link
will also stifle dissenting speech: citizens will be afraid to voice
their opinions when everything they say can be traced to their
identities.
Not providing the link is detrimental to law enforcement, which will
have no means to track crime in cyberspace (especially once
encryption becomes more widespread and law enforcement loses the
ability to get the content of any transmission). Commercial and social
interests are served if there is no traceability, but both will then
have to coexist in an environment in which there will be a
significant amount of criminal activity.
Another issue that needs to be considered by those designing a digital
identity system is related to the ability to separate characteristics
from identity. This may create a market for personal
characteristics. A person may now have the ability to sell a personal
characteristic to another party in exchange for goods or services. It
may also be possible for companies to collaborate and share people's
characteristics in order to recreate as much of the identity as
possible for marketing purposes. Situations such as these need to be
fully analyzed and appreciated before the design of a digital identity
system can begin.
Any decision to unbundle characteristics in the creation of a digital
identity system cannot be made simply by choosing an architecture that
is simpler or more elegant to implement - there must be a
consideration of the ramifications this decision is going to have on
the cyberspace community.
No two identities are the same. Each identity maps to a unique set of
characteristics. Two people may share some of the same
characteristics, such as being old enough to drive or having the same
hair color, but that does not mean that they have the same
identity. One simply is not looking at enough characteristics. Upon
further inspection, it may be found that one of the individuals has
brown eyes and the other blue eyes. Therefore, when someone perceives
two identities as the same, he should search for new information
that adds details that distinguish the identities from each
other. Although this section speaks of identity as enabling the
ability to distinguish individual people, it should be noted that
identity can be used to discern individual corporations or fictional
characters as well.
Identity also evolves over time, with more characteristics becoming
evident every day. When someone purchases groceries, one of the
characteristics that may be added to that person's list of
characteristics is that he "bought bottled water from the
supermarket." As a child gets older more facets of his
personality may become apparent. Other characteristics may change
their state. A simple example would be a change in hair color.
One may vary the way in which one represents the characteristics that
make up an identity. One way to represent these characteristics is to
create a binary flag, consisting of either a "yes" or
"no," that states whether the person possesses the desired
characteristic. Although it is possible to define every characteristic
as a binary flag, it may not always be appropriate. Creating binary
flags that say whether an individual lives in each nation of the world
-- "yes" if true, "no" if false -- may not be as
efficient in answering the question "Where do you live?" as
creating a single flag that represents the nation in which the
individual does live, say for example, the "United States."
Different uses of identity call for different representations.
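To make the distinction concrete, the following sketch (purely
illustrative; the characteristic names and the Python representation
are our own hypothetical choices, not part of any proposed system)
contrasts the two representations:

    # Illustrative sketch: two ways to represent identity characteristics.
    # The characteristic names and values here are hypothetical examples.

    # Representation 1: binary flags -- one "yes"/"no" answer per question.
    binary_flags = {
        "old_enough_to_drive": True,
        "lives_in_united_states": True,
        "lives_in_canada": False,   # one flag per nation quickly becomes unwieldy
    }

    # Representation 2: a single multi-valued characteristic where appropriate.
    characteristics = {
        "old_enough_to_drive": True,               # naturally binary
        "country_of_residence": "United States",   # one field answers "Where do you live?"
        "hair_color": "brown",                     # characteristics may change state over time
    }

    def answer_where_do_you_live(identity: dict) -> str:
        """A single enumerated field answers the question directly."""
        return identity["country_of_residence"]

    print(answer_where_do_you_live(characteristics))  # -> "United States"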
The distinction between characteristics and identity is not
firm. Often, a unique characteristic serves as the representation or
identifier for identity. Consider social security numbers. Each social
security number is unique and can be used to identify an individual,
but no social security number itself contains all of the
characteristics included in that person's identity. In practical
terms, full names serve the same function for most people. So what
uses or purposes do our identities, and the characteristics from which
they are constructed, serve? For one, identity allows someone to
address an individual without confusing him with others. Identity
functions as a cue that allows us to access our memories for
information on someone.
Identity also is used in commerce when we purchase items from
retailers or sign documents, for example. In part, the needs of the
transaction will determine the amount of one's identity that needs to
be used. Some transactions do not rely on the ability uniquely to
identify an individual. For example, at the liquor store, all that is
required is some proof that you are of legal age to purchase alcohol.
The store only needs to know whether you have the characteristic that
includes you in the class of people eligible to purchase
alcohol. Other transactions depend on the unique identification of the
individual -- which requires knowledge of an identifier -- to work,
but do not require knowledge of the individual's full identity. For
example, for a postcard to reach your mother it will need to be
labeled with her mailing address. Finally, some situations may require
full knowledge of an entity's identity. To an extent, a successful
marriage may require each spouse to know the full identity of the
other. Similarly, a successful lawyer-client relationship may require
rather complete knowledge of an entity's identity, at least with
respect to a particular issue.
Businesses desire to advertise their products to the markets most
interested in them, and may even retool their products to be more
appealing to certain segments of a market. Knowing the preferences of
individuals allows a corporation to target its products precisely to
those who would prefer them and, thus, be most likely to purchase them.
Making a detailed survey of an individual's preferences, though, is
very difficult, if not impossible. Often an individual cannot specify
the exact motivation for her purchase of a particular product. From
the seller's perspective, determining which questions to ask
purchasers can be a daunting task. Further, certain questions,
despite their potential usefulness, are not likely to be answered by a
purchaser. To work around this problem, businesses use identity
information as a proxy for preferences. For example, rather than
trying to discover the exact reason why an individual purchased a Ford
Mustang, a car dealer might instead try to find out the purchaser's
profession or income level. Suppose the car dealer discovers that a
number of his customers who have purchased Ford Mustangs are lawyers.
Although the car dealer may not understand why they purchased Ford
Mustangs, he can assume with some level of confidence that there is
something about lawyers that leads them to purchase Mustangs instead
of Cougars.
Other products have clear markets. The obvious market for judicial
robes, for example, is members of the judiciary. In this case,
identity information serves as the means by which a business can
determine who is part of the market. In the abstract, all businesses
use identity information in this manner. One clear market for every
product is "paying customers." Knowing that an individual is
"credit-worthy" or a possessor of cash helps businesses
identify the members of this market.
As a result, many businesses collect information about identity as
part of their transactions. A purchase order form may ask for an
address, occupation, and income level. Stores may ask individuals to
relinquish portions of their identity in exchange for goods and
services. For example, customers may be offered special discounts or
free products if they complete a survey. Similarly, a customer may be
asked to complete a registration form detailing her reading and
television-viewing habits in order to receive a card entitling the
customer to a discount. The business creates a database using the
information from these forms and the names of the products that were
purchased. Hoping to develop a more accurate profile of their
customers -- in essence, hoping to learn the full identity of the
average consumer -- businesses sell or rent portions of their databases
to other businesses. Conceivably, if enough vendors collaborate, a
"profile" of buyers may be created without the consumers'
express permission or knowledge. The information is then used to
guide the direct marketing of other products to customers or the
retooling of current products. It also could be used to identify
those people who have a high probability of not paying their bills.
Such databases stand to threaten the privacy interests of consumers,
especially for those purchasing legal, but socially stigmatized,
products like pornography.
In addition to leveraging transactions in the marketplace, our
identities serve other purposes. For example, the formation and
preservation of our identities, as a collection of our characteristics
and traits, are important for our psychological and emotional
well-being. Throughout history, mankind has struggled with the
essential questions of life: Who am I? What am I supposed to do? Who
am I supposed to be? All of these questions are tied closely to how
we view our identity and how we construct it. For some individuals,
seeing their identity used as the tool of commerce -- important
because it helps a business identify how to make a buck from you -- is
damaging to their psychological health. Other people are
psychologically bothered by the idea that someone can find out
information about them without having to get all of the information
directly and explicitly from them. Depending on the power imbalance
between the parties, the commodity nature of identity may exacerbate
the sense of lacking control, importance, and purpose that many
individuals in modern society complain they suffer.
The Internet is currently the biggest network for linking computers,
but cyberspace as a concept is independent of the Internet. Cyberspace
communication began before the Internet and the World Wide Web, and
cyberspace interaction and communication will continue to take place
after the Internet is no longer the network of choice.
Two general metaphors are often used to explain and define
cyberspace. In the first, cyberspace is viewed as a geographic
"place" to which one can go. Much of the end-user
terminology relating to cyberspace is based on this metaphor. For
example, one can "visit" a Web site and "enter a chat
room." Even the company name "Netscape" highlights this
land-based, geographic metaphor.
The second metaphor focuses on communication, viewing cyberspace as a
conduit for information. This view emphasizes the actual network
technology rather than the community aspects of cyberspace. Despite
its multitude of cables, routers, and switches, and the wide variety
of applications layered upon it, the Internet is fundamentally a
means by which one computer communicates with another.
Neither of these metaphors perfectly encapsulates cyberspace, and both
are necessary to understand fully the range of issues that are raised
by interactions in cyberspace. The geographic metaphor is limited
because in fact at each end there is still a person sitting somewhere
in real space-people are not disembodied just by participating in
cyberspace. However, the geographic metaphor is useful because
interactions in cyberspace are more than just transitory voices on the
telephone-there can be a lasting digital record of the communication,
and people can return later in time to continue the same conversation
(e.g., through postings in an on-line discussion group). Because
cyberspace interactions can be saved and continued over time,
cyberspace communities can develop. However, the communications
metaphor lends a certain amount of realism to a complete understanding
of cyberspace. Because there is a person sitting in real space on each
end of the communication, at some level cyberspace communication is
just a "souped-up" phone call, and at this level the same
concerns and rules that apply to analog voice telecommunications could
apply to cyberspace. However, exchanges can be much more complex and
far more efficient in cyberspace. These differences in degree
ultimately may amount to differences in kind.
Cyberspace can facilitate an enormous range of uses. The two most
common applications currently are e-mail and Web browsing. Both of
these are flexible tools that can be used for almost any purpose. In
the social sphere, cyberspace can enable communication between two
specific individuals. Or, one individual can publish information
on-line for the general public to access. Businesses can, in addition
to exploiting these communications methods, use cyberspace for
transactions among businesses or transactions between businesses and
consumers. In some cases, the entire transaction can be completed in
cyberspace; in other situations, some elements of the transaction must
occur in real space. Additionally, cyberspace can, and perhaps must,
be used not only for business and social purposes, but for regulatory
purposes (e.g., taxation and law enforcement) as well.
A critical problem in cyberspace is knowing with whom you are
interacting. In essence, the problem is that "on the Internet,
nobody knows you're a dog." Currently, you cannot determine
accurately the identity of the person on the other side of an e-mail
message or know with certainty the source of any information in
cyberspace.
A digital identity system must serve several functions. First,
authentication-ensuring that when a message purports to be from Alice,
Alice sent it, not someone pretending to be Alice. Second, message
integrity-providing certainty that when a message arrives from Alice,
it is the same message that Alice sent, not modified en route in any
way. Third, non-repudiation-ensuring the inability of Alice later to
deny that she sent the message, and the inability of the recipient of
Alice's message to deny that the message was received. Finally,
establishing a digital identity architecture may have the beneficial
side effect of facilitating confidentiality through encryption-the
knowledge that no one besides Alice can read a message intended for
her. For our analysis in this paper, a digital identity system must
serve the first three functions, and may serve the fourth.
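Although this paper does not prescribe a particular technology, the
first three functions map naturally onto public-key digital
signatures. The sketch below is illustrative only and assumes the
availability of a signature library (here, the third-party Python
"cryptography" package); it shows how a signature provides
authentication and message integrity, with non-repudiation following
from the fact that only Alice's private key could have produced the
signature.

    # Illustrative sketch (not a prescribed design): a digital signature gives
    # authentication, integrity, and non-repudiation for Alice's message.
    # Assumes the third-party "cryptography" package is installed.
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # Alice holds a private key; everyone else knows only her public key.
    alice_private_key = ed25519.Ed25519PrivateKey.generate()
    alice_public_key = alice_private_key.public_key()

    message = b"Meet me at noon. -- Alice"
    signature = alice_private_key.sign(message)   # only Alice can produce this

    # The recipient verifies the signature against Alice's public key.
    try:
        alice_public_key.verify(signature, message)   # authentication + integrity
        print("Message came from Alice and was not modified en route.")
    except InvalidSignature:
        print("Forged or tampered message.")

    # If even one byte changes in transit, verification fails:
    try:
        alice_public_key.verify(signature, b"Meet me at midnight. -- Alice")
    except InvalidSignature:
        print("Tampering detected.")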
The technical problem with cyberspace in 1998 is that there is no
effective, widespread architecture to verify identity on the
Internet. There is no digital identity mechanism that meets the needs
for the diverse range of cyberspace interactions. While there are
currently systems that attempt to solve this problem, they all fail
for various reasons. Some fail because they are not secure or
reliable. Some fail because they only work in very narrow contexts and
are not interoperable. Some fail because they attempt to apply literal
translations of real world identity and do not seek to capture the
benefits of making identity digital, such as the unbundling of traits,
which will be analyzed below. Some fail because they mandate very
rigid rules and do not allow flexibility around the policy trade-offs
involved in any identity verification system. Finally, some fail to
be adopted because they are too expensive.
E-mail addresses are currently the most widespread form of digital
identity in cyberspace. People use an e-mail address as an identifier
because e-mail is the most direct and easy way to reach a person in
cyberspace. However, the current e-mail architecture has little
security and includes no reliable identity verification. The dominant
protocol for sending e-mail (SMTP) does not facilitate verification of
the sender's identity, and therefore does not facilitate
authentication: an e-mail message may purport to be from
"billgates@microsoft.com," but there is no certainty that Bill Gates
actually sent it. It is a trivial technical task to forge the source
of an e-mail message under the current architecture. Likewise, e-mail
is not safe from tampering en route and can be repudiated after it is
sent or received.
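The weakness is easy to demonstrate. In the illustrative sketch below
(the sender address is simply the example used above, and the
recipient address is a placeholder), the "From" header is ordinary
text chosen by the sender's software; nothing in the message format
or in SMTP checks it.

    # Illustrative sketch: nothing in the e-mail message format or in SMTP
    # verifies the "From" header -- it is ordinary text supplied by the
    # sender's software.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "billgates@microsoft.com"   # any string is accepted unchecked
    msg["To"] = "recipient@example.com"       # placeholder address
    msg["Subject"] = "An unverifiable claim of identity"
    msg.set_content("SMTP will relay this message without authenticating the sender.")

    print(msg)   # a syntactically valid message bearing an unverified identity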
In many cyberspace contexts, passwords are used to verify a person's
identity. However, passwords are easily shared or distributed.
Providing a correct password proves only that the user has knowledge
of the password, not that the user really is any particular person.
There is no certainty that after issuance a password remains only with
its intended holder and has not been distributed through innocent or
malevolent means. There is thus no secure link between a password and
any particular real world or cyberspace identity. Nonetheless,
passwords are easy to implement and perhaps are better than using no
security measures (although they may give false confidence), so they
have become by far the most widespread method for cyberspace identity
verification.
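A brief sketch makes the point concrete: a password check, however
carefully implemented, establishes only that the presenter knows the
secret. The code below is illustrative and hypothetical, not a
recommended design.

    # Illustrative sketch: a password check establishes knowledge of a secret,
    # not the identity of the person typing it. Values here are hypothetical.
    import hashlib, hmac, os

    def hash_password(password: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    # At enrollment the server stores only a salted hash of the password.
    salt = os.urandom(16)
    stored_hash = hash_password("correct horse battery staple", salt)

    def login_attempt(presented_password: str) -> bool:
        """True whenever the presenter knows the password -- whoever they are."""
        candidate = hash_password(presented_password, salt)
        return hmac.compare_digest(candidate, stored_hash)

    print(login_attempt("correct horse battery staple"))  # True: the owner, or anyone the password was shared with
    print(login_attempt("a wrong guess"))                 # False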
In the e-commerce realm, credit card numbers often are used as a form
of identification. However, this is an inappropriate form of
identification. It is over-revealing, and it is not intended to
identify any personal traits. A credit card number is only a payment
mechanism. However, many e-commerce vendors, particularly those selling
digital pornography, use credit card numbers as a proxy for proof that
an on-line user is over 18 years old. This is not a correct
assumption, as those under 18 legally can have credit cards. In
addition, providing a credit card number gives the vendor much more
information about the user than the mere fact that the user may be
over 18 (and the ability to charge the user an appropriate fee). With
a credit card number and access to consumer credit reporting
databases, a vendor could find out enormous amounts of personal
information about the user based on this one identifying number.
Internet Protocol (IP) addresses serve as the fundamental roadmap of
the Internet. IP addresses allow data to reach the correct computer on
the Internet. In this capacity, these numbers are critically
important. However, as an identity mechanism for Internet users, they
are severely lacking. First, IP addresses link only to a computer and
do not help in any way to identify the person who is using the
computer. Second, many computers are now connected to the Internet
with dynamic IP addresses (vs. static addresses), meaning that each
time the computer connects to the Internet it uses a new and temporary
IP number. This makes tracking identity-even of computers-based on IP
numbers impossible.
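An illustrative sketch (using only the Python standard library; the
output is machine-dependent) underscores the point: an IP address
designates a machine's current network attachment point, not a
person, and under dynamic assignment even that designation may change
from one session to the next.

    # Illustrative sketch: an IP address identifies (at most) a machine's
    # current attachment point on the network, not the person using it.
    import socket

    hostname = socket.gethostname()
    current_address = socket.gethostbyname(hostname)   # may differ between sessions
    print(f"{hostname} is currently reachable at {current_address}")
    # Nothing here indicates who is sitting at the keyboard.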
The current identity architecture of cyberspace is thus in great need
of an overhaul. In order to facilitate flexible but secure
verification of digital identity, a new cyberspace identity
infrastructure is needed.
Control with respect to revelation of identity (i.e., the ability to
choose which and how many elements of identity to reveal) consequently
facilitates apparent anonymity, full disclosure, and selective
revelation of identity. At one extreme is apparent anonymity: no
elements of identity are revealed. At the other extreme is complete
identification: all verifiable elements of identity are
revealed. These extremes mark the endpoints of a spectrum of choice
representing varying degrees of privacy. As we progress along the
spectrum from apparent anonymity to complete identification, we
selectively may reveal more and more elements of our identity (see figure). Such selective revelation of identity may
occur on an ad hoc basis, or may be based on defined principles.
There are several principles in accordance with which one selectively
might reveal elements of identity. One such principle is the
"least revealing means" principle. In accordance with this
principle, one would choose the least revealing means of
identification necessary to serve a purpose (e.g., the least revealing
means necessary to complete a particular transaction). Under an
alternative principle, the "most convenient means"
principle, one might choose to reveal more information. For example,
in order to enable a software agent to find a product to match one's
preferences, one might be willing to reveal more information than
dictated by the "least revealing means" analysis. Under the
"most convenient means" principle, one selectively would
reveal the combination of identifying information that would provide
the most convenience provided that there is an upper bound beyond
which one would not be willing to reveal information for
convenience.
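As an illustrative sketch of the "least revealing means" principle
(the trait names and the notion of a "certificate" here are our own
hypothetical simplifications), a user agent might disclose only the
certified traits that a given transaction actually requires:

    # Illustrative sketch of the "least revealing means" principle: reveal
    # only the certified traits a transaction requires, and nothing else.

    # Traits for which the user holds verifiable certificates (hypothetical).
    held_certificates = {
        "over_21": True,
        "name": "Jane Doe",
        "home_address": "(not disclosed unless required)",
        "credit_worthy": True,
    }

    def least_revealing_disclosure(required_traits: set) -> dict:
        """Disclose only the required traits."""
        missing = required_traits - held_certificates.keys()
        if missing:
            raise ValueError(f"cannot complete transaction, missing: {missing}")
        return {trait: held_certificates[trait] for trait in required_traits}

    # A liquor purchase needs only proof of age -- not a name or address.
    print(least_revealing_disclosure({"over_21"}))        # {'over_21': True}
    # A credit purchase might need more, but still not everything.
    print(least_revealing_disclosure({"credit_worthy"}))  # {'credit_worthy': True}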
By enabling us credibly to assert traits about ourselves, digital
certificates facilitate the Type I unbundling that provides a degree
of control over privacy. Indeed, we have seen how Type I unbundling
facilitates apparent anonymity, complete identification, and selective
revelation of identifying information because it enables us to treat
identity as a set of individual traits rather than one integrated
bundle of traits. The general concept of unbundling identity also
includes Type II unbundling, to which we now turn.
Further, once a set of traits has an existence independent from a real
world person, there is no bar to multiple identities. Not only might
we have different identities in the real world and in cyberspace; we
also might have multiple cyber-identities with different, and even
conflicting, traits.
As we develop a model within which to think about the design and
development of an architecture for digital identity, we must keep in
mind the flexibility of the technology with which we are working,
particularly because seemingly innocuous design features may mask
social choices of profound significance. As Langdon Winner wrote:
In the United States, the First and Fourth Amendments attempt to
provide this protection. The First Amendment limits the extent to
which the government can regulate speech, and the Fourth Amendment
protects citizens from unreasonable search and seizure. This creates a
tension within society: society has both a need to protect itself from
its individual members who may transgress against its rules, and
simultaneously a need to protect itself from the very same law
enforcement body which it created to solve the initial problem. This
tension effectively sets the landscape for any discussion of the
rights of law enforcement and citizens within society: law enforcement
will push to gain as many privileges as possible so as to guarantee
their effectiveness in fighting crime, whereas society will fight to
limit law enforcement's power to only those things which it deems
critical to performing its duties. This line between
"critical" and "ideal" will vary with time and
with circumstance, and must therefore be reexamined when the needs of
society change.
It is important to note that in the real world, anonymity and
accountability are interrelated in such a way that the liberty to
speak with perfect anonymity may come at the cost of accountability,
or order. In cyberspace, however, we will see that we need not provide
anonymity at the expense of accountability. As total unbundling
becomes a technical possibility in cyberspace (i.e., in the absence of
friction that prevents unbundling in the real world), we must, as a
society, strike an acceptable balance between anonymity and
accountability; between liberty and order. As we attempt to strike
this balance, it is important to resist the temptation simply to mimic
the real world in cyberspace; instead we must consider the underlying
values of privacy, anonymity, and accountability as we construct a
digital identity system.
The introduction of the Internet as a widely accessible medium for
commercial, social, and government interactions has created the
necessity to reexamine the needs of society. With its introduction
comes an unprecedented ability to transfer information extremely
quickly to a large number of recipients at extremely low
cost. Unfortunately, this ability is a double-edged sword: although it
enhances the ability of the law-abiding public to participate in
commerce and communication, it likewise enhances the ability of
criminals to do the same. However, there are some ways in which the
architecture of cyberspace is fundamentally different from the
architecture of the physical world. These differences force a
reevaluation of the rights and responsibilities of law enforcement
within this new world.
Clearly, all of these cases cause harms that should be prevented or
punished by law enforcement. The real world contains a certain amount
of "friction" which can be used to identify the perpetrator
of a crime and expose the extent of the harm. A police officer can
question a suspicious person regarding their identity or intentions,
and ultimately detain or arrest that person if sufficient cause is
present. People leave physical evidence or traces of crimes:
fingerprints, hair, carelessly forgotten articles of clothing, or a
description of their physical appearance. With cyberspace comes the
potential to eliminate the real world friction, which creates
traceability. Information transfer, whether legal or illegal, could be
rendered completely anonymous and untraceable.
Current Internet technology facilitates some of this friction. While
it may be cumbersome, it generally is possible to determine the real
life identity of someone who commits a crime on the Internet. However,
as in the physical world, this friction is an artifact of the
architecture of the Internet. More importantly, it is an artifact that
could be eliminated: truly untraceable communication could be
facilitated for the first time in history. However, as evidenced
above, law enforcement has strong interests in preserving
traceability. A complete response to the question of whether or not
to allow completely untraceable information transfer will require the
involvement of the affected society; making an informed decision means
understanding the implicit choices that are concomitant with the
larger choice. Once the implicit choices are understood, the choice
to include link within the architecture can be made based upon the
societal values embodied within each choice.
Is this really an architectural choice? Perhaps legal, social, or
market constraints will provide a more suitable method than
technological constraints for regulating the presence of link on the
Internet. They would certainly provide a more flexible system than one
that technologically required a real world link to be maintained for
every data transfer. The Internet requires an "open"
architecture, one that different entities can customize to a large
degree in order to obtain the effect they want for their users. This
is one of the reasons the Internet is already so vast and will
continue to be viable: to a large extent people have the ability to
define how users will interact with their site. Regulations of a
legal, social, or market nature will be subordinate to this general
feature of Internet architecture, and that subordination will place a
strong restriction on the effectiveness of regulation as a universally
applicable tool.
Architecture can thus be seen as enabling communication, with content
providers placing restrictions on user interaction. For example, a
matchmaking service may require that all users provide a real world
name and phone number for contact information. In this way the
architecture enables any such site to place restrictions on the ways
in which users may interact with the site. Users who do not wish to
conform to the restrictions presented by a particular site can decide
to utilize a site that more closely matches their preferences.
Sites sharing many common identification requirements can be collected
together and identified as a "domain." Domains may be as
broad or as specific as necessary, but three major domains are likely
to dominate interactions on the Internet: business, social, and
government domains. The business and social domains both have
interesting conceptual questions associated with them, but for the
most part they are simply a translation of business in the real world
to business in the cyber world. Not much is fundamentally different:
contracts are still contracts, markets are still driven by supply and
demand, and advertising is still fundamental to success. The
situation is similar for the social domain. While novel methods of
interaction may be created and offered for idea exchange, the
fundamentals of social interaction remain the same in
cyberspace. Since the business and social domains largely translate,
their cyberspace identification requirements are likely to be similar
to their real space identification requirements.
This leaves the government domain, which breaks down into two major
areas of operation. First, the government domain includes service
providing sections, such as the Social Security Administration and the
IRS, which are responsible for providing government services to the
public. In this capacity, the government functions very much like a
business, imposing its own set of restrictions upon user interaction;
the main difference is that in some situations users will not be
able simply to choose not to participate. Discussions of universal
access are beyond the scope of this paper, but could potentially have
a large impact when dealing with government interactions in
cyberspace. The other major area of government operation consists of
regulatory powers -- specifically, the duty of governments, from
municipal to federal, to provide law enforcement.
If we adopt a no-link system, then government would need to form a
domain of its own that provides traceability in order to facilitate law
enforcement. However, this is impossible: in order for law enforcement
to adequately perform its duties, its domain must impact all other
domains. For example, a method of determining a specific user's real
life identity would be effectively useless if it were not available in
all domains in which a crime could potentially be committed. There is
a fundamental difference between all other domains and the domain of
government regulatory powers. It is a domain that cannot be adequately
provided for within the open architecture framework, because law
enforcement capabilities must not be dictated by the domain in which
the user is interacting.
Ultimately, then, this link/no-link decision must be made at an
architectural level. While conceivably laws, market, or social norms
could be designed to create a link situation, each would only be
effective against those who participate within the legal, market, or
social system: it would be ineffective against criminals, who would
have little regard for such influences and feel little responsibility
or pressure to obey their restrictions.
Both link and no-link architecture have benefits and drawbacks
associated with them. With a link architecture, access to the link
information can be limited, presumably, only to an appropriately
regulated law enforcement agency with specific regulatory processes in
place for obtaining the information. However, the immediate point is
that not everyone will have access to the information contained in the
architectural link; to those without access, a link architecture is
identical to a no-link architecture. The benefit of identification is
still present, but the ability to gain knowledge of the person's real
world identity from the architecture of the system is limited to those
specific bodies with access. Thus, once again, the interesting area of
discussion is that pertaining to law enforcement: when can a link
system effectively be used as a no-link system, and are there benefits
to being able to determine link which outweigh any corresponding
drawbacks?
At all points along the continuum, except for the extreme of
one-to-one identity, there is a need to distinguish between
"transient anonymity" and "persistent anonymity."
With transient anonymity, no persistent link remains to the sender of
the information; this is analogous to anonymous leafleting. Persistent
anonymity is perhaps more useful: it allows continuity of cyber
identity generally without disclosing real world identity; disclosure
of the real world identity is possible only within a link system. In
a no-link system, continuity is preserved, but without facilitating
link. Both types of anonymity are useful in some circumstances, but
persistent anonymity is likely to be more generally useful.
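Persistent anonymity can be sketched concretely: a stable pseudonym
can be derived from a public key, so that postings signed with the
corresponding private key are attributable to the same cyber-identity
over time without revealing a real world identity. The sketch below
is illustrative only and assumes the third-party Python
"cryptography" package; whether a link back to the real person exists
is the separate, architectural link/no-link choice.

    # Illustrative sketch of persistent anonymity: a pseudonym derived from a
    # public key is stable over time (continuity of cyber-identity) yet carries
    # no real-world identity by itself.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives import serialization

    signing_key = ed25519.Ed25519PrivateKey.generate()
    public_bytes = signing_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

    # The pseudonym is just a fingerprint of the public key.
    pseudonym = hashlib.sha256(public_bytes).hexdigest()[:16]
    print("posting as:", pseudonym)

    # Later postings signed with the same key are attributable to the same
    # pseudonym -- continuity without disclosure of real world identity.
    posting = b"An unpopular but lawful opinion."
    signature = signing_key.sign(posting)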
To provide certain services, the government may require knowledge of
certain aspects of real world identity. However, in other
circumstances the government has no need to know identity. For
example, in the voting context, government can utilize the unbundling
potential of identity in cyberspace to determine a person's right to
vote and their proper district, without knowing the person's real
identity.
The benefits to be derived from absence of real world identity in
social and commercial interactions are realizable in the context of
political dissident and otherwise unpopular speech and
purchases. Ultimately this end of the discussion becomes a question of
how much individuals will trust their governments to only reveal
identity when absolutely necessary for authorized law enforcement
purposes. "If only popular ideas were protected, we wouldn't need
a First Amendment." If there is no fear of unreasonable
retribution from the government, viewpoints which would have
previously been considered unpopular and socially unacceptable will
become far more visible; J.S. Mill argues that this sort of
"marketplace of ideas" is absolutely necessary to the
ability of society to make well-reasoned judgments.
In a no-link architecture, individuals do not have to trust (or fear)
the government at all. In the United States, this issue might be less
of a factor than in other countries; however, due to the global nature
of the Internet, as well as the lapses of judgment that can occur in
any governing body, the importance of this issue must not be
underestimated. Should Internet users in other countries have the same
rights of free speech that we in the United States have deemed
valuable? This raises additional logistical and sovereignty issues,
and may even threaten the global nature of the Internet, if countries
decide not to participate in a system that facilitates complete
untraceability.
As mentioned earlier, the computer can be involved in crime in several
different ways. As an anonymous method of communication, the potential
for the Internet itself to become involved in crime becomes very
high. Private communication can be utilized as a method of planning
crimes, and public communication groups can become breeding grounds
for criminal activity. Charney and Alexander mention that "it
might be possible to allow individuals to congregate in certain places
where anonymity is assured, with each individual participant on notice
as to the benefits and risks associated with anonymous
communication." However, this seems foolhardy: each individual
may fully understand the implications of participating in an
unrestricted, anonymous area, and yet the potential hazards to society
in terms of the planning of criminal activity would continue
unchecked. Additionally, crimes that can and should be prosecuted in
the real world involving restricted speech, such as libel and child
pornography, can occur unrestricted in an area with no-link.
Encryption represents the single largest barrier to law enforcement
obtaining content from a computer. This issue is largely unique to
cyberspace, as encryption of handwritten and telephone communications
is relatively rare. One choice must be made with respect to encryption:
allow it, without regulation, or disallow it. Disallowing encryption
altogether is pragmatically different from allowing only key escrowed
encryption, but for the purposes of this discussion they are
effectively the same. The overwhelming response of government has been
that encryption controls are in fact necessary, and several
initiatives have been proposed to this effect; however, both the
public and legal reactions to these initiatives have been negative: many
organizations are resisting the degree of control which law
enforcement would be given, and the Communications Decency Act was
recently ruled too general to be constitutional. In this situation,
law enforcement's claims of what it needs to be effective are strongly
disputed by the public: the equilibrium between the two is harder to
strike in cyberspace.
While most encryption can no longer be broken by brute force methods,
the alternative procedural approach to gaining access to content is to
subpoena the key necessary to decrypt the content. This is
satisfactory because it allows traditional warrant restrictions to
apply in the area of gaining access to content. At the same time,
however, it is hopelessly flawed because a clever (or even computer
illiterate) criminal could easily "misplace" the key, or
intentionally delete it upon receipt of the subpoena.
This would tend to argue for the necessity of government controlled
encryption. The main drawback to this argument is a practical one
mirroring the gun control argument: when encryption is outlawed, only
outlaws will have encryption. While the argument is slightly more
complicated than this, the basic point remains that encryption is too
ubiquitous and inexpensive to expect sophisticated criminals to use a
"government approved" version. Likewise the social and
market opposition to this sort of key escrow system is large and
entrenched, rendering encryption control a pragmatically infeasible
solution.
Given the choice between unregulated encryption with link, and
unregulated encryption without link, law enforcement will almost
certainly choose to have the possibility of determining link. Without
it, the case is fairly strong that it will be extremely difficult to
prevent criminal activity. The constitutional issues surrounding such
a requirement on all speech will be examined in Section V. Regardless
of the legal status, however, if a link architecture is decided upon,
there will be chilling effects on free speech, and all the benefits of
a public forum for interaction may be lost.
A no-link architecture has more tangible drawbacks. Crimes can be
easily planned and carried out on a system with no accountability, and
there is no reason to think that they wouldn't be. However, practical
concerns such as sovereignty and providing unrestricted speech to
political dissidents regardless of their government's policy on free
speech may outweigh the potential societal costs. It may also be that
suitable mechanisms for regulating identity can be created in a legal
or market based way; it is hard to see how these methods would be
enforceable in a cost-effective manner, but the number of criminal
deviants might be small enough that identification by law enforcement
could be reasonably achieved.
The ultimate question is, to what extent is law enforcement empowered
to track down information, and to what extent does that empowerment
place law abiding citizens at risk? This question must be combined
with the immediate concern that a lack of empowerment will cause
criminal activity to propagate unchecked. Perhaps the current system
of cooperative tracing provides enough of this "friction" to
allow the real world methods to be comfortably adopted in cyberspace;
however, it seems that the ongoing march of technology will dictate
other solutions, ones which rely on intent and not on friction to
bring about the desired results. In order to determine adequately what
the intent should be, it will be necessary to examine carefully the
implications of constructing a system with no friction so that the
intent is accurately reflected in the results.
Alternatively, traceability could be implemented at the infrastructure
level. That is, a digital identity architecture could be inserted into
the existing network infrastructure so that users cannot access a
networked information system until they identify themselves. Thus,
properly authorized law enforcement officials will be able to trace a
user's activities to his identity.
A more detailed discussion of the law and technology on which our
proposed implementation relies can be found in Sections VI and IX of the paper;
however, the basic idea behind any system of mandatory traceability is
that speakers entering cyberspace would be required to deposit (e.g.,
with the ISP), or attach to their communications, a means of tracing
their identities. One can conceptualize mandatory traceability by
positing a regime in which an encrypted fingerprint automatically
would be attached to every transaction in cyberspace. In such a
regime, the fingerprint could be encrypted with the government's
public key such that properly authorized law enforcement officials
could access the private key necessary for decryption while
participants in the cyber-transaction would not be able to strip away
the speaker's anonymity.
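As a purely illustrative sketch of this regime (the key handling, the
fingerprint format, and the choice of the third-party Python
"cryptography" package are our own assumptions, not a specification),
the identity token could be sealed under the escrow authority's
public key so that only the holder of the corresponding private key
can recover it:

    # Illustrative sketch only: an identity "fingerprint" attached to a message
    # but encrypted under a public key whose private half is held by (properly
    # authorized) law enforcement. Other participants see only ciphertext.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # In a real regime this key pair would be generated and guarded by the
    # escrow authority; it is generated here only to make the sketch runnable.
    escrow_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow_public_key = escrow_private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    identity_fingerprint = b"certificate-id:1234"   # hypothetical identifier
    sealed_fingerprint = escrow_public_key.encrypt(identity_fingerprint, oaep)

    message = {
        "content": "an apparently anonymous posting",
        "sealed_identity": sealed_fingerprint,       # opaque to recipients
    }

    # Only a party holding the escrow private key -- after whatever legal
    # showing the regime requires -- can recover the identity.
    recovered = escrow_private_key.decrypt(message["sealed_identity"], oaep)
    assert recovered == identity_fingerprint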
What is the primary purpose of a driver's license? Most people likely
would say that a driver's license verifies that the holder of the
license is qualified to drive. However, a license whose purpose was
simply to indicate that the holder is qualified to drive could consist
of much less information than is currently included on the
license. In particular, a license seeking to serve this purpose could
consist of a trait (a statement that the holder of the license is
qualified to drive), a link from the trait to the individual with the
trait (a picture of the qualified driver), and a means of verifying
the validity of the link between the trait and the individual with the
trait (the DMV's signature, or other means of verifying that the
person pictured has proven that he is qualified to drive).
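To illustrate, a minimal digital analogue of such a license needs
only those three elements. The sketch below is hypothetical (the
field names and signature scheme are our own, and it assumes the
third-party Python "cryptography" package); notably, no name appears
anywhere in the credential.

    # Illustrative sketch: the minimal license described above -- a trait, a
    # link from trait to holder (a photograph), and the issuer's verification.
    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric import ed25519

    dmv_signing_key = ed25519.Ed25519PrivateKey.generate()   # the issuer ("DMV")
    dmv_public_key = dmv_signing_key.public_key()

    photo = b"<raw bytes of the holder's photograph>"
    license_body = {
        "trait": "holder is qualified to drive",
        "photo_digest": hashlib.sha256(photo).hexdigest(),   # links trait to holder
        # note: no name field is needed for the license's stated purpose
    }
    signed_bytes = json.dumps(license_body, sort_keys=True).encode()
    signature = dmv_signing_key.sign(signed_bytes)

    # Anyone can verify the issuer's vouching for the trait/holder link:
    dmv_public_key.verify(signature, signed_bytes)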
What purpose does each piece of information serve? The trait
("the holder of this license is qualified to drive") tells
us the purpose of the license. The picture ties the trait to the
holder of the license. It ensures that the ability to drive, noted on
the license, is attributable to the actual holder of the license, and
therefore prevents individuals from transferring their credentials to
others. Finally, the verification vouches for the validity of the link
between the trait and the person pictured on the license. Note that as
this example demonstrates, a license need not include the driver's
name in order for the license to serve its designated purpose. Yet,
our licenses carry our names.
What purpose does it serve to have our name on a license when a
picture clearly is sufficient to demonstrate that the person
presenting the license is qualified to drive? Why does the state
require that our names be on our license? Further, why are we required
to carry our license when we drive-why is it a violation to drive
without a license? Why have I committed a violation when, as a
qualified driver, I forget my license at home? If one is qualified to
drive, why should one be required to carry his license? That is, why
must the law-abiding citizen carry his driver's license? I submit that
the requirement that I carry my license is designed to ensure that
drivers can be held accountable.
Indeed, the name on our licenses, in combination with the requirement
that we carry a license, ensures accountability. These requirements
ensure that we can be held accountable for traffic violations as well
as traffic accidents. I am required to carry my name with me when I
drive so that my identity can be ascertained by law enforcement
officials in the event that I commit a traffic violation, or am
involved in an accident. Thus, we introduce the example of the
driver's license to raise a point-the driver's license is a perfect
example of an identification requirement that is imposed, before one
has caused harm, in order to preserve accountability in the event that
one causes harm. Requiring that drivers carry a license with their
name on it ensures that they can be held accountable for their
actions.
With this in mind, we now turn to our doctrinal analysis of mandatory
traceability under the Fourth Amendment. Those familiar with the
Fourth Amendment may wish to go directly to the analysis of the
constitutionality of mandatory traceability.
However, in its well-known decision in Katz v. United States,
389 U.S. 347 (1967), the Supreme Court rejected Olmstead's
"trespass" doctrine, articulating, in its place, a Fourth
Amendment jurisprudence based on the protection of individual
privacy. In Katz, the Court held that the Fourth Amendment
protects people, not places: "What a person knowingly exposes to
the public, even in his own home or office, is not a subject of Fourth
Amendment protection . . . But what he seeks to preserve as private, even
in an area accessible to the public, may be constitutionally
protected." Thus, the Court held that physical penetration of a
constitutionally protected area is not necessary before a search and
seizure can be held to violate the Fourth Amendment. According to the
Court in Katz, "once it is recognized that the Fourth
Amendment protects people-and not simply "areas"-against
unreasonable searches and seizures it becomes clear that the reach of
that Amendment cannot turn upon the presence or absence of a physical
intrusion into any given enclosure." Thus, although the
Government's activities in Katz involved no physical intrusion,
they were found to have violated the privacy on which the petitioner
justifiably relied and thus constituted "search and seizure"
within the meaning of the Fourth Amendment. Changing technology
precipitated the shift from protection of property to protection of
privacy, and in 1968, just one year after Katz, Congress passed
Title III of the Omnibus Crime Control and Safe Streets Act
authorizing microphone surveillance or wiretapping for law enforcement
purposes, and requiring a warrant, based on probable cause, prior to
such surveillance or wiretapping.
There are a variety of circumstances in which the legitimate law
enforcement interest in searching may outweigh the invasion that the
search entails. First, where there is great, imminent public danger, a
less demanding standard than probable cause might be
appropriate. Second, where there is rapidly disappearing evidence or
where there are rapidly disappearing suspects, we might want to accept
less than probable cause. Finally, we might require less than probable
cause where the intrusion occasioned by the search or seizure is
limited. Thus, there are a variety of circumstances under which the
legitimate law enforcement interest in searching may outweigh the
invasion that the search entails, thereby occasioning the application
of a less demanding standard than probable cause.
The Court first departed from its rigid application of the probable
cause requirement in Camara v. Municipal Court, a case
involving the inspection of dwellings for housing code
violations. Noting that even one undetected safety code violation
could cause "fires and epidemics [that] ravage large urban
areas," the Court applied a balancing test that involved
"balancing the need to search against the invasion which the
search entails." Thus, Camara marked the first time that
the Court recognized that some Fourth Amendment activity should be
judged under a balancing test.
One year after Camara, building on the balancing approach
described in that case, the Court handed down its famous decision in
Terry v. Ohio. In Terry, the Court upheld the power of
police to "stop and frisk" suspicious persons without
meeting the demanding standard of probable cause. More precisely, the
court held that the constitutionality of a "stop and
frisk"-the law enforcement practice of briefly detaining
suspicious persons on the street for purposes of investigation-is
governed not by the warrant requirement (i.e., probable cause), but
"by the Fourth Amendment's general proscription against
unreasonable searches and seizures." In Terry, the Court tailored
the level of suspicion required to the intrusiveness of the search,
"opening the way for a sliding scale in which the less intrusive
the search, the less demanding the procedural requirements for the
search to be 'reasonable.'"
Even in the absence of a warrant, such a search potentially could pass
constitutional muster. When authorized law enforcement officials
obtain a decryption key so that they can decrypt the identity of an
individual, and even when they obtain the person's identity, the
intrusion is minimal-there is little or no collateral damage of the
sort with which the Court has shown concern. The government does not
need to enter an individual's private space in order to access the
identity information, and the government will have access only to the
identity information and to nothing else. Indeed, given the tiny
burden on any particular individual's life, traceability might be
constitutional under the reasonableness test even with a lesser
showing than probable cause.
Of course, this entire analysis presumes that there are sufficient
assurances that the key and any identity information obtained would
not be misused. Toward this end, the government might impose
limitations on the circumstances under which law enforcement may
obtain and use identity information, or other procedural limitations
that would prevent misuse of revealed identity information.
First, Adler suggests that we adopt an autonomy-based rule-a rule that
focuses on an individual's desire for control over his or her personal
expression. In particular, he suggests a bright-line rule in which
searches of the home or office need to be based on
"individualized suspicion [that] would require the government to
assemble first a reasonable belief based on information already
outside the control of the individual. In other words, not until the
individual has acted with the understanding that there could be
telltale traces outside of her zone of control, thereby knowingly
risking public attention, does she become vulnerable to government
intrusion." Arguing in a similar vein, Scott Sundby suggests that
"[w]hen factual probable cause is the core regulating device of
government behavior, the [Fourth] Amendment is basically
self-regulating because control over the government's ability to
intrude rests primarily with the individual. So long as a person does
not engage in behavior arising to probable cause . . . individual
privacy cannot be invaded." Under this rationale, mandatory
traceability would survive constitutional scrutiny because the
government's ability to obtain one's identity is limited to those
situations in which the government has demonstrated an appropriate
degree of individualized suspicion based on the individual's
activity.
A second modification of the Fourth Amendment's
"reasonableness" test relies on a substantive interpretation
of the Fourth Amendment. According to a substantive interpretation,
the Fourth Amendment was used to delineate the scope of government's
substantive power. In a 1995 article, The Substantive Origins of
Criminal Procedure, William Stuntz suggests that historically, the
privacy protections afforded by the Fourth Amendment were really
"a proxy for something else, a tool with which courts or juries
could limit the government's substantive power." In particular,
Stuntz argues that the Fourth Amendment's focus "seems to have
been to make it harder to prosecute objectionable crimes-heresy,
sedition, or unpopular trade offenses in the seventeenth and
eighteenth centuries, regulatory offenses in the late nineteenth
century." Building on this interpretation of the Fourth
Amendment, Michael Adler suggests that in the past, "the
protection from arbitrary searches provided an unacknowledged but
potentially quite important pocket of privacy in which individuals
might be free to resist the state's demands." Indeed, Adler
argues that we ought to preserve spaces in which the government's
power to enforce the law is limited because such spaces allow for a
degree of criminal activity that is necessary in order for society to
flourish. The underlying notion is that perfect government enforcement
of criminal laws would prevent the civil disobedience which often
"provides society an impetus to reevaluate the law." Adler's
proposed modification of the "reasonableness" test would
"preserv[e] the possibility of a low level of criminal activity
and of allowing individuals some freedom from the punitive power of
the state" by preserving spaces in which the government's
enforcement power is limited; spaces that serve as a source of power,
independent of the government, that can be used to meet the collective
need for a potential for disobedience. However, a system of mandatory
traceability likely would pass the modified Fourth Amendment test
because mandatory traceability does not enable law enforcement
officials to enter spaces that they previously had no power to enter; it
simply enables law enforcement officials who have witnessed a crime
(or potential crime) to identify the perpetrators when the officials
have made a proper showing of cause. Thus, mandatory traceability
likely will survive scrutiny under the Fourth Amendment's
reasonableness requirement and the potential modifications thereof. We
turn now to a First Amendment analysis of a system implementing
mandatory traceability.
Doctrinally, the right not to speak can be thought of as part of the
compelled speech doctrine of First Amendment law. One scholar suggests
that the compelled speech doctrine comprises three distinct
prongs: confidentiality, autonomy, and preventing conscription. The
confidentiality prong of the compelled speech doctrine, which protects
the right not to speak, will be our focal point.
The proposed traceability requirement attempts to preserve the ability
to speak anonymously on the Internet. Although traceability would be
required for Internet access, one's speech on the Internet would
appear to be anonymous. Thus, while law enforcement would benefit from
the ability, under carefully circumscribed conditions, to access the
identity of speakers, the speaker would remain anonymous with respect
to the population to whom he or she addressed thoughts. Since the
traceability requirement allows speakers to retain their anonymity at
all times with respect to their audience, and to retain their
anonymity with respect to the government at all times except in the
narrow circumstances in which government officials are authorized to
learn their identity, such a requirement might be limited enough to
survive McIntyre. Indeed, the traceability requirement under
consideration is a much more limited identification requirement than
that in McIntyre, which essentially would have sacrificed the anonymity
of political speakers.
The Supreme Court in NAACP v. Alabama ex rel. Patterson, 357
U.S. 449 (1958) held unconstitutional Alabama's demand that the
NAACP reveal the names and addresses of all of its Alabama
members and agents. The NAACP court is said to have recognized
that "[s]erious First Amendment questions arise . . . when there
is such a nexus between anonymity and speech that a bar on the first
is tantamount to a prohibition on the second." As one district
court explained: "[t]he Court in NAACP v. Alabama was of
the opinion that the injury to a right subsequent to disclosure of
identity precludes the right to identification." In NAACP,
the Court recognized freedom of association and held that forcing the
NAACP to divulge its membership lists was "likely to affect
adversely the ability of [the NAACP] to pursue their collective effort
to foster beliefs which they admittedly have the right to
advocate." Thus, anonymity was deemed necessary to the exercise
of freedom of association.
Unlike the public disclosure of NAACP membership lists, the disclosure
of identity at issue here would not result in public dissemination of
an individual's identity, but would only consist of disclosure to
authorized law enforcement officials. If properly implemented, a
traceability requirement need not hinder one's ability to exercise
one's freedoms, including the freedom of association. Indeed, when
properly implemented, a system of mandatory traceability will not
trigger the NAACP concern because the identification requirement will
not subsequently restrain an individual in the exercise of an
independent right.
Underlying NAACP was a concern that an identity requirement would
raise a "fear of reprisal [that] might deter perfectly peaceful
discussions of public matters of importance." In the past,
registration laws have been overturned for infringing on the freedom
of association only "when a history of harassment and social
hostility is proven." In NAACP, for example, the Court had before it
evidence that past disclosures of members' identities had exposed them
to economic reprisal, loss of employment, and threats of physical
coercion.
Of course, one might argue that a system implementing traceability
would awaken a fear of reprisal and deter an individual from the
exercise of his constitutional rights. However, such a claim
constitutes a mere assertion, and NAACP is "inapposite where
. . . any serious infringement on First Amendment rights . . . is highly
speculative." Further, where proper procedures are adopted to
ensure that the government will not misuse its power to trace, it will
not be realistic to argue that one's speech will be deterred by a
requirement that one's cyberactivities be traceable to one's identity
in those limited circumstances in which law enforcement officials have
shown proper cause.
Thus, even under the strict scrutiny applied to cases in which the
government restricts political speech, it is likely that the
traceability requirement will pass constitutional muster because it is
narrowly tailored. In particular, under a system implementing
traceability, one maintains one's anonymity with respect to everyone
except for properly authorized government officials, and anonymity
need not be sacrificed to these officials save in limited
circumstances. Further, properly authorized government officials only
will be able to access one's identity, and not other information about
one's activities. Thus, mandatory traceability is narrowly tailored to
meet a substantial state interest - the state interest both in deterring
crime in cyberspace and in holding accountable those who have
committed crime in cyberspace.
The Internet is a large, open, public network. Unencrypted messages
are sent in clear text, and are easily intercepted. Open Internet
standards for e-mail, like SMTP (Simple Mail Transfer Protocol), have
no authentication and forging an e-mail from another person is a
trivial task. Web exchanges, through HTTP (Hypertext Transfer
Protocol), also transmit data openly and can be easily
intercepted. Internet standards were created to exchange information,
not to protect privacy or commercial transactions. To guarantee that the
information one's software receives comes from the entity that claims to
have sent it, digital authentication is needed.
There are two basic paradigms for establishing digital
authentication. One is a trusted system, in which information is
presumed to be correct, the computers involved are trusted, and security
measures are designed to keep the trusted system secure. The other is
to build a system that can function over an insecure network by
implementing a secure method for transmitting information. Because the
Internet is inherently insecure, a trusted system is very difficult to
create and even more difficult to get adopted. A system that
establishes authentication without resorting to being a trusted system
is therefore preferred.
Digital certificates are evolving as a method for accomplishing both
parts of the identity scheme. Digital certificates are the electronic
equivalents of a driver's license and a notary seal. Digital
certificates are based upon public key technology, and X.509 has
established standards for exchanging digital certificates. This allows
different certificate authorities (CAs) to all issue compatible
certificates, making adoption much easier.
Public key technology allows for authentication, provided that the
user has the public key of the sender. Digital certificates are pieces
of data signed by a CA that puts its reputation behind the data's
authenticity. The CA's digital signature guarantees that no one
tampered with the information, and the CA's public key should be widely
distributed. As new CAs come into existence, their digital signatures
can be added to a digital certificate issued by an already established
CA. This allows a hierarchy of CAs to develop as well as a plethora of
CAs, guaranteeing competition in this market.
Instead of creating giant repositories of public keys, trusted
certificate authorities distribute their public keys, and sign the
public key of users. Using the well-publicized Certificate Authority's
public key, Alice can verify that Bob's public key is his, because
Alice trusts Bob's CA. After using the CA's public key to verify Bob's
public key, Alice uses Bob's public key to verify the message.
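To make this two-step check concrete, the following sketch (in Python,
using the third-party cryptography package; all keys and messages are
hypothetical and generated on the spot rather than loaded from real
certificates) first verifies the CA's signature over Bob's public key
and then uses Bob's key to verify his message.

    # Sketch: verify a CA-signed public key, then verify a signed message.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # The CA signs the serialized form of Bob's public key (a toy "certificate").
    bob_pub_bytes = bob_key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
    ca_signature = ca_key.sign(bob_pub_bytes, padding.PKCS1v15(), hashes.SHA256())

    # Bob signs a message with his private key.
    message = b"Hello Alice"
    bob_signature = bob_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Alice trusts only the CA's public key and checks both signatures.
    try:
        ca_key.public_key().verify(ca_signature, bob_pub_bytes,
                                   padding.PKCS1v15(), hashes.SHA256())
        bob_pub = serialization.load_pem_public_key(bob_pub_bytes)
        bob_pub.verify(bob_signature, message, padding.PKCS1v15(), hashes.SHA256())
        print("Bob's key is CA-certified and the message is authentic.")
    except InvalidSignature:
        print("Verification failed.")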
Public key technology also lets a sender encrypt a message with the
recipient's publicly available key. This way, in addition to confirming
that the message was sent by the correct party and that no one has
tampered with the message's content, a sender can guarantee that nobody
but the intended recipient can read the message. This is very important
for e-commerce and sensitive private
documents.
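A minimal sketch of this confidentiality step, again using the
cryptography package with hypothetical keys and data, might look as
follows; only the holder of the matching private key can recover the
plaintext.

    # Sketch: encrypt a message to a recipient's public key (hypothetical data).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone may encrypt with the public key; only the recipient can decrypt.
    ciphertext = recipient_key.public_key().encrypt(b"order details", oaep)
    plaintext = recipient_key.decrypt(ciphertext, oaep)
    assert plaintext == b"order details"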
Although current certificate technology provides these benefits, the
technology does suffer from some serious limitations. Use of digital
certificates is not widespread, which limits their effectiveness. With
certain products, like computer technology, the utility of a good
increases as more people use it. Because demand for such a technology
grows as more people adopt it, establishing the technology with early
adopters is important.
The usefulness of digital certificates is limited until a large portion
of users adopts them. Because most people do not worry about e-mail
being insecure, there is limited benefit to digitally signing one's
e-mail. Additionally, encryption requires that both parties support the
technology and know each other's public keys. If the parties that you
converse with do not adopt this technology, then you cannot send them
encrypted messages. Authentication likewise requires that the other
party use software that can verify the signature and be interested in
verifying your identity. If your message's recipient is not interested,
there is no advantage to providing authentication capability.
Individuals use bundled certificates, like VeriSign Digital IDs, to
sign e-mail and authenticate themselves to web sites that support
VeriSign certificates. These certificates store all information about
the user in one certificate, with different levels of certificates
based upon the amount of information volunteered and the amount of
real world verification that was done on that information. These
certificates provide a one-to-one link between the person's real world
identity and their online identity, as well as providing all their
personal information to any server that they provide that certificate
to. Companies also use certificates to sign the software packages they
distribute and to let their servers provide encrypted connections.
Every time a person purchases something with an age restriction,
alcohol, tobacco, or pornography, they are asked to present proof of
age. This proof can be visual verification; the individual is clearly
not under 18 or 21. Alternatively, the age can be verified through a
driver's license or other recognized form of identification. When
someone provides a driver's license, however, to prove his age, he is
providing more information than is necessary for the exchange. In the
real world, we accept this because the person verifying your age is
viewing the necessary information only for a few seconds, and carrying
around multiple forms of ID to verify individual characteristics would
be bulky and inconvenient.
Although bundling these traits in the real world often occurs as a
matter of convenience, such bundling carries serious privacy concerns
in cyberspace. In the digital world, all information that is made
available can be and often is recorded. Additionally, the
inconvenience of holding multiple forms of identity is trivial, with
each digital certificate being a small file stored in the computer. If
desired, picking between multiple forms of ID can be made trivial by
automating most of the decision process in the client software. To
accomplish this, a standardized language must be developed for
identifying characteristics. A method must be developed so that two
computers can communicate and request information, such as first name,
age, birth date, age 18+, age 21+, citizenship, residency, etc., in a
manner that allows the responding machine to locate and provide the
appropriate certificate. This probably will require a technological
standards board like the W3C to add new fields to the standardized
language with each revision as certificate usage expands and new needs
arise.
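One hypothetical shape such an exchange could take is sketched below in
Python: the server names the traits it needs, and the client software
selects the narrowest certificates that satisfy the request. The field
names and the select_certificates helper are illustrative assumptions,
not an existing standard.

    # Sketch of a hypothetical attribute-request exchange (no real standard implied).
    REQUEST = {"required": ["age_21_plus"], "optional": ["residency"]}

    # The client's local store of unbundled certificates, keyed by certified trait.
    certificate_store = {
        "age_18_plus": "cert-from-DMV-18plus",
        "age_21_plus": "cert-from-DMV-21plus",
        "residency":   "cert-from-DMV-residency",
        "full_name":   "cert-from-DMV-name",
    }

    def select_certificates(request, store):
        """Return only the certificates needed to satisfy the server's request."""
        chosen = {}
        for field in request["required"]:
            if field not in store:
                raise LookupError("no certificate for required trait " + field)
            chosen[field] = store[field]
        for field in request.get("optional", []):
            if field in store:
                chosen[field] = store[field]   # user policy could decline these
        return chosen

    print(select_certificates(REQUEST, certificate_store))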
Certificate Authorities must begin providing multiple certificates
when a form of digital identification is needed. Unlike a digital
driver's license, it is important that when a user proves that he is
18, he does not provide the server with his name, address, social
security number, e-mail address, telephone number, and birth
date. Like the public key in traditional digital certificates, the
CA's digital signature authenticates that no one has tampered with the
data. As certificates become commonplace, groups can begin issuing
digital identities just as they issue real space identities. The DMV
can provide a digital certificate form of the driver's license. To
facilitate unbundling, the DMV could provide multiple certificates
that include information like your name, address, birthday, age
(including numerical age, plus 18+ and 21+ flags), and other relevant
information. This information can be presented as proof in cyberspace
like your driver's license in real space.
Currently, each server specifies which certificate authorities'
certificates it will accept. Because a digital certificate is a piece
of data signed by an issuer's certificate authority, we can create
hierarchies of certificate authorities. Certificate authorities could
sign each other's certificates, creating a hierarchy that would allow
a server to trust any certificate so long as some trusted CA in the
chain has signed it. This should result in a system where a few large,
trusted certificate authorities could serve mostly to accredit other
certificate authorities, which would deal with end users. This
approach would allow end users to face a competitive landscape, while
still guaranteeing the credibility of certificate authorities.
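In effect, accepting a certificate means walking a chain of signatures
until some trusted CA is reached. The schematic sketch below uses
hypothetical data structures and a placeholder signature check in place
of real X.509 parsing.

    # Sketch: accept a certificate if some ancestor in its chain is a trusted CA.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Certificate:
        subject: str
        issuer: str
        parent: Optional["Certificate"]   # the issuing CA's own certificate, if any

    TRUSTED_ROOTS = {"BigRootCA"}

    def signature_is_valid(cert: Certificate) -> bool:
        return True   # placeholder for a real check of the issuer's signature

    def chain_is_trusted(cert: Optional[Certificate]) -> bool:
        while cert is not None:
            if not signature_is_valid(cert):
                return False
            if cert.issuer in TRUSTED_ROOTS:
                return True
            cert = cert.parent
        return False

    small_ca = Certificate("SmallCA", "BigRootCA", None)
    user_cert = Certificate("ghost", "SmallCA", small_ca)
    print(chain_is_trusted(user_cert))   # True: SmallCA is vouched for by BigRootCA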
The problem with this security regime is that it depends upon the end
user to secure his certificates. There exists a risk that someone
might steal the private key or that the holder of the key might
disclose it to someone. There appear to be two types of technologies
that, if adopted, would help minimize these risks. One is the
implementation of a trusted system, and the other is an external
verification system.
A trusted system that prevents the theft of the private keys could be
built. A trusted system, however, has real world limitations.
Requiring end-users' machines to exist within this trusted paradigm
seems problematic, as a sophisticated computer criminal could reverse
engineer the certificate code, rendering the trusted system moot.
Building a trusted system on client machines is an extraordinary task,
and convincing the public that the system is actually secure is an even
more difficult feat. Further, a trusted system would not prevent a user
from intentionally giving another access to his private key. Although
trusted systems do have a role in digital encryption, they have a
limited role in client-side adoption.
The other system is to secure the private key with an external
verification scheme. Instead of using a password, something not easily
duplicated can be used. Smart Cards can store private keys and prevent
tampering. Although a Smart Card is the size of a credit card, the
technology is very different. Smart Cards store information in a
tamper-resistant chip rather than on a magnetic stripe, and their
interfaces are designed with specific methods of data transfer in
mind. The Smart Card could also be used as a token holding the private
key. Because it could not be easily duplicated, this would prevent
users from giving their private keys to other users. The Smart Cards
could be designed, through the card's operating system, not to allow
duplication. Extracting the chip to duplicate it would destroy the
original, and the copy would fail. Smart Cards provide a reasonable
level of security, reducing the likelihood of the private key's theft
or intentional misuse.
Another option is biometrics. Except in the most extreme cases, this
option would eliminate the possibility of private key theft or
intentional misuse. A fingerprint, retina scan, or another physical
characteristic of the real world can secure the private key and serve
to authenticate that the user is the expected user. The person's scan
can be hashed to a value of the desired bit length, allowing it to be
used as a private key across different systems. Alternatively, the scan
could be used to provide a security shield for the private key, similar
to a password but more secure.
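A minimal sketch of the hashing idea, using only the Python standard
library and made-up scan bytes: the raw scan is stretched into a
fixed-length secret that could encrypt, or gate access to, the stored
private key. (Real scans vary between readings, so a production system
would also need error-tolerant matching before this step.)

    # Sketch: derive a fixed-length secret from a biometric scan (hypothetical data).
    import hashlib

    scan_bytes = b"raw-fingerprint-minutiae-data"   # stands in for a real scan
    salt = b"per-device-salt"

    # PBKDF2 stretches the scan into a 32-byte (256-bit) value.
    derived_key = hashlib.pbkdf2_hmac("sha256", scan_bytes, salt, 100_000, dklen=32)
    print(derived_key.hex())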
However, any sort of identity architecture will require that people
have access to a single system which they trust to be secure and not
reveal their private key. To some extent this reduces the amount of
security of any architecture and increases the burden on the
user. Just as we expect an automated teller machine not to store our
card numbers and personal identification numbers, an external
verification scheme must not store the data it reads for private key
access.
Several mechanisms can be created to attempt to guarantee that a
system operates in this secure manner. Manufacturers could sign the
transmissions that are confirmed via scan, adding a level of
credibility to their transmissions in addition to possessing the
private key and certificate. A governing board that verifies the
accuracy of these devices could also sign the manufacturer's
signature, so that the devices could be universally accepted. If this
system, and the method for verifying manufacturers' signatures, were
standardized, it could be used without resorting to a trusted
system. This requires, however, that each device be issued (or be able
to generate) a private key and public key, a digital certificate, and
the technology to digitally sign its transmissions. While this does not
require significant amounts of processing power from a computer, it
might currently be beyond the capabilities of an inexpensive
fingerprint scanner. This requirement could be added once such advanced
scanners are common, at which point it would not be an unreasonable
level of verification.
This allows us to decide the level of verification necessary within
our system, while still maintaining a standardized system. We may
decide that requiring manufacturer-signed fingerprint scanners is too
burdensome for selling pornography or tobacco products, but is
reasonable for on-line voting. The server can set its
requirements. This system can be adopted in any environment that
requires authentication by applying additional levels of
verification. The digital certificate system, thus, is extremely
flexible in terms of additional security that can be used to protect
private keys within the initial system.
Anonymous, secure, authenticated e-mail is another benefit of a
certificate-based scheme. Users can use anonymous e-mail servers or
re-mailers to create accounts with no link to their real world
identity. Digital certificates, authenticating the user's e-mail
address and public key, like the VeriSign Digital IDs can work with an
anonymous re-mailer. This allows user anonymity, protecting their
identity, while preventing e-mail forgeries. Bob can create an anonymous
e-mail address, and correspond with Alice. Although Alice will not
know who Bob is, she knows that the same person is sending each of the
e-mail messages, and she also knows that no one tampered with the
messages or forged them.
Your digital certificates can include as many characteristics as
desired, and they can be maintained separately. By unbundling them,
information can be exchanged without revealing your real world
identity. In this manner, users can remain anonymous, but
authenticated.
Essentially, the law enforcement trace is a by-product of the
transaction. A customer must reveal his identity to the certificate
authority for the CA to be able to certify any information about
him. If the customer wants to purchase a product anonymously from a
web site that requires him to prove his age, he would request his CA
to issue him a new certificate certifying that a pseudonym is of the
proper age. As a result, the CA always knows to whom pseudonyms
belong. Similarly, the web site store always knows which pseudonym
made a given purchase. To trace the identity, therefore, the law
enforcement agent would have to contact the web site and demand to
know which pseudonym is related to a known transaction, followed by a
similar request to the certificate authority about the real identity
associated with the pseudonym. This architecture would require the CAs
and web site stores to keep logs that associate these items.
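Given such logs, the trace itself reduces to two lookups, one at the
web site and one at the CA. The sketch below uses hypothetical log
formats and names purely for illustration.

    # Sketch: tracing a transaction to a real identity via two logs (hypothetical data).
    site_log = {"order-4471": "ghost"}     # web site: transaction id -> pseudonym
    ca_log = {"ghost": "Bob Smith"}        # CA: pseudonym -> real-world identity

    def trace(transaction_id):
        """Run only after law enforcement shows proper cause to each party."""
        pseudonym = site_log[transaction_id]   # demanded from the web site
        return ca_log[pseudonym]               # demanded from the certificate authority

    print(trace("order-4471"))   # -> "Bob Smith"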
To implement this tracing, all Internet transmissions must at least
require the use of a valid "null" certificate. Even in
situations where a web site may not need or want the individual to
present any certified information, a certificate still must be logged
for traceability purposes. Essentially, a null certificate guarantees
that the certificate authority is aware of the identity of the owner
of the pseudonym, and that the Internet host is keeping track of
pseudonyms, even when no identifying information is strictly necessary
to the transaction.
The architecture can facilitate this with changes to the method by
which web servers and other Internet servers receive their data. These
changes will inconvenience users, but the system can be designed to
minimize the differences the end user notices. The
difficulty lies with coercing all sites to replace their server
software. This initiative would require a rewrite of every Internet
server, as well as complicating the server's interaction with the
operating system.
Most web servers store their data in HTML files on their hard
drives. The directory-like structure of web sites allows users and
webmasters to change HTML files with ease. The HTTP server that
facilitates access to the HTML files (or any other protocol server
providing access to data) can be easily modified to request additional
identification before completing transactions requiring logging
information. Only the web server need be modified in order to
guarantee that this transaction information is stored; legal burdens
placed on the provider of the web site would act as additional
guarantees that such information would be stored and would motivate
providers to make their servers compliant.
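The server-side change could be as small as a check-and-log step
performed before content is served. The toy handler below is only a
sketch; the request format and field names are assumptions, not part of
any real server's API.

    # Sketch: a server that refuses requests lacking a certificate (even a "null"
    # one) and logs the presented pseudonym for traceability.
    import time

    transaction_log = []   # in practice, durable storage the provider must keep

    def handle_request(request):
        cert = request.get("certificate")            # hypothetical request format
        if cert is None:
            return {"status": 403, "body": "certificate required"}
        transaction_log.append({"pseudonym": cert["subject"],
                                "path": request["path"],
                                "time": time.time()})
        return {"status": 200, "body": "contents of " + request["path"]}

    print(handle_request({"path": "/index.html",
                          "certificate": {"subject": "ghost"}}))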
Certificate based security also supports the concept of a certificate
hierarchy. If Bob wants to send Alice his public key, he sends his
certificate. If his certificate was issued by his company, Alice may
not trust the certificate, because she only trusts a few
services. Bob's company may have negotiated a deal with VeriSign to
issue certificates, and VeriSign could digitally sign all certificates
issued by Bob's company. Alice will now accept Bob's certificate
because VeriSign lent the certificate its credibility, even though
Alice would not have trusted the company signature without VeriSign's
signature. Alice now accepts the company's public key, and uses it to
confirm Bob's identity.
With hierarchies, a few large CAs will have their public keys widely
distributed with all certificate aware clients. Additional CAs can
issue certificates on behalf of these larger CAs on a contract
basis.
Because digital certificates include more than just public keys,
different CAs will serve different roles. VeriSign's level 1
certificate includes a name or alias, e-mail address, and public
key. This certificate can be used for authenticating that the e-mail
came from the same person, but it fails to prove the sender's
identity. VeriSign's level 2 certificate also includes the holder's real
name and mailing address. Digital certificate technology clearly allows a more
flexible usage of digital certificates.
While the original usage of a digital certificate was to transmit
public keys, there is no reason that other information couldn't be
transmitted, as explained above. The certificate-based paradigm works
on the assumption that large numbers of certificate authorities will
exist to verify specific characteristics. The certificate, signed by
the issuer, gives credibility to the information contained within
it. These certificates can be transmitted over a secure connection to
prevent other parties from intercepting the certificate and linking
those facts to the transmitter.
Government CAs, or at least government contracted CAs, will play a
large role in a working digital certificate regime. Information like
age flags to confirm that the user is of legal age requires a legally
binding certificate. This requires a licensing system for the issuing
of legally binding certificates. Under X.509, the technical capability
for transmitting interoperable digital certificates is available, so
the process is now underway. A standardized language for the exchange
of characteristics is the next step. Without that language, unbundling
is made more difficult. The final technological hurdles will be
reached once there is a strong consumer demand for these
certificates. Without legal incentive to confirm identities, there
will be no demand for the technology, and the necessary standards for
these characteristic based identities will not be created.
Combining these ideas conceptually, we have developed a model for a
hypothetical architecture. In our scenario, Bob wishes to purchase a
product from a web site that requires him to be 18. Previously, Bob
has obtained a "Bob is 18+" certificate from the
CA. Bob enters negotiation with the web site, which wishes to know
that he is 18. Bob creates a pseudonym, in this case
"ghost," to shield his identity. The pseudonym is then
signed by his CA indicating its authenticity. The CA now knows that
Bob is attached to the pseudonym "ghost". Bob can then
request from the CA a "ghost is 18+" certificate. Bob then
engages with the web site store, sending them the signed public key
for ghost (alternatively, Bob can provide a public key server with
ghost's signed public key and the web site can obtain it from the
public key server) and the signed "ghost is 18+"
certificate. Using the signed public key, the web site can verify that
the user presenting ghost's public key is the holder of ghost's
private key. Successful verification indicates that the transaction
can be completed. The key features of this architecture are that the
web site can verify the needed information without knowing the real
world identity of ghost, and the certificate authority knows the real
world identity of ghost, but not the transactions in which ghost
engages.
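The sketch below walks through this scenario with the same toy signing
primitives used earlier; every name, message, and "certificate" format
is hypothetical. The point is only that the web site never learns that
ghost is Bob, and the CA never learns what ghost bought.

    # Sketch of the Bob/"ghost" flow described above (a toy "certificate" here is
    # just CA-signed bytes, not real X.509).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    pkcs, sha = padding.PKCS1v15(), hashes.SHA256()
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)     # the CA
    ghost_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # Bob's pseudonym

    # 1. The CA, which knows Bob's real identity, signs ghost's public key and an age flag.
    ghost_pub = ghost_key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
    key_cert_sig = ca_key.sign(ghost_pub, pkcs, sha)
    age_cert = b"ghost is 18+"
    age_cert_sig = ca_key.sign(age_cert, pkcs, sha)

    # 2. Bob proves to the web site that he holds ghost's private key by signing a challenge.
    challenge = b"site-nonce"
    proof = ghost_key.sign(challenge, pkcs, sha)

    # 3. The web site, trusting only the CA, checks all three signatures.
    try:
        ca_pub = ca_key.public_key()
        ca_pub.verify(key_cert_sig, ghost_pub, pkcs, sha)    # ghost's key is CA-certified
        ca_pub.verify(age_cert_sig, age_cert, pkcs, sha)     # ghost is certified 18+
        serialization.load_pem_public_key(ghost_pub).verify(proof, challenge, pkcs, sha)
        print("Sale can proceed; the site never learns Bob's real identity.")
    except InvalidSignature:
        print("Refuse the transaction.")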
Aside from the caveat presented above, business-domain interests in
the use of identity do not require developers of the architecture to
make any fundamental architectural choices for the system. Instead,
most of the concerns regarding the business arena are related to how
businesses and consumers will behave in an environment using the
digital authentication mechanism proposed in this paper. The
remainder of this section shall consider these issues as they appear
in the business-consumer and the business-business relationships.
Consumers might demand that a corporate web site authenticate other
items of interest before proceeding with a commercial transaction.
Consumer data privacy is a topic of great debate among those thinking
about the development of the Internet. Businesses collect vast
amounts of information concerning their customers, including their
purchasing habits, in an effort to develop consumer profiles. These
profiles may be used by the corporation to retool its marketing
approaches or product lines, or may be sold or rented to other
companies. With its ability to be programmed so that every hyperlink
contains a digital "tag," the Internet stands to facilitate
the development of richer and more complete consumer profiles than is
possible with current techniques. Similarly, scripts may
be programmed so that this data collection and compilation is done
automatically and without further human input. Concerned consumers
could request that the corporation provide a certificate, signed by a
trusted authority, that describes the corporation's data privacy
policy and consumer remedies for violation of the policy. Specific
consumer interest groups might use this feature of the architecture to
demand certificates authenticating other aspects of the company's
practices. Animal rights activists, for example, might wish to demand
from web site retailers of cosmetic supplies a certificate
authenticating that they do not test their products on animals.
Although the architecture proposed in this paper would facilitate the
construction and exchange of such certificates, it is an altogether
different question whether businesses will agree to create them.
Certificates designed to authenticate that the web site is the
corporation's legitimate Internet presence are likely to be adopted by
companies with little opposition should the idea be presented to their
executives or the need made apparent by consumer demands or publicized
incidents of fraudulent commercial web sites. These certificates
serve to protect the reputation of the company's brand name, one of
its most valuable intellectual property assets. One might see
voluntary participation in programs certifying other corporate
practices if the social and market pressure is viewed to be large
enough to warrant the expense of finding a trusted third-party to
verify that the corporation in fact complies with its stated practice.
For example, Ben & Jerry's advertises that it does not use milk
containing Bovine Growth Hormone in the manufacture of its ice cream.
It may be cost prohibitive, however, for Ben & Jerry's to pay a
third party to vouch for this fact. The third party may agree to
authenticate this statement only if Ben & Jerry's permits auditors
to test the milk on a periodic basis. The costs of such an
arrangement would be passed along to Ben & Jerry's as part of the
cost of the authentication. Where certifying certain business
practices seems crucial and has become an issue of regional or
national importance, legislatures might pass statutes mandating the
use of digital certificates. Consumer data privacy, mentioned above,
is one area that might receive such consideration in the future.
From the corporation's perspective, the ability to receive
authenticated information about its customers is very useful. Many
industries operate under state and federal regulations which dictate
to whom they are permitted to sell their products. Vendors of
alcoholic beverages, for example, are prohibited from selling their
products to individuals under the age of twenty-one. The inability to
verify this information inhibits the development of electronic
storefronts for these industries. Similarly, under pressure from
state governments, companies might demand that consumers present
digital certificates certifying their place of residence so that the
purchases could be taxed at the applicable jurisdiction's rate.
Currently, businesses often request that consumers provide information
such as their occupation or income level when making a purchase.
Typically, this information serves no purpose in the transaction
other than the company's interest in knowing as much as
possible about its customers. Businesses might begin requesting
authenticated versions of consumer information as this would make
their profile databases a more valuable commodity than those without
the degree of authenticity a verified digital certificate provides.
Whether consumers will comply with the demands of business and produce
these certificates largely depends on the balance of power between the
company and the individual in the consumer negotiation. Where the
government proscribes the sale of a product to a certain class of
individuals, consumers wanting to purchase the product will have to
present the appropriate digital certificate as the business cannot
sell the items without identifying that the purchaser is eligible
under the law. Concerning other demands for authenticated
information, the outcome will turn on factors such as the individual's
personality, whether the average consumer provides the information, and
the importance or necessity of the product to the consumer. Unless the
government steps in, it is impossible to predict whether the business
will lower its demands or the consumer will have to forgo purchasing
the product should he not wish to deliver the demanded information to
the company.
The next section explores this issue in more detail, although from a
slightly different angle.
It should be noted that underlying this entire discussion is an
assumption that consumers care about privacy. As presented earlier in
this paper, the degree to which an architecture facilitates unbundling
determines the amount of privacy that the system permits. A person's
privacy is best protected by never requiring them to reveal
information unnecessarily. Procedures that require those who collect
information from individuals to safeguard it are only second best.
The digital authentication mechanism proposed in this paper allows a
consumer to separate the various elements of his identity, giving him
the capability to reveal only what is necessary for each business
transaction into which he enters. It is not easy to predict the
extent to which consumers will take advantage of this feature of the
system that protects the privacy of their identity. In part, this
stems from the fact that most individuals have no past experience in
which they had a similar level of control over what constitutes their
identity. Which pieces of information are included in a driver's
license, for example, is set by the government. A certain amount of
acculturation to the architecture inevitably will be necessary.
Further, the level of privacy each person requires varies. Erring on
the side of perfect unbundling at the architectural level seems
appropriate given these circumstances.
The level of unbundling that occurs within such an architecture,
however, depends on whether people choose to unbundle their
identities. People are subject to the social pressures of their peers
and the demands of the marketplace. These pressures may lead
individuals to bundle portions of their identities that they otherwise
would unbundle absent these outside influences. Theoretically, the
pressures of the market could be so great that there might be more
bundling of identity information under this system than exists today,
despite the potential of the architecture.
Figure one, shown above, helps clarify the possibilities of the
architecture in light of the pressures of the marketplace. There
exists today a certain level of unbundling, represented in the figure
by a bar on the spectrum. The digital authentication mechanism's
architecture facilitates the possibility for the bar to be shifted to
the right as far as each individual desires. However, businesses
might demand that certain bits of information remain bundled and be
revealed in order to complete a business transaction. Such market
pressures push the bar back to the left towards perfect bundling.
Where should the bar end up resting on this spectrum? Although
it is possible to make various arguments for different positions
within the area of the spectrum labeled as B, it might be best left to
the marketplace to work out the exact position. One can argue,
however, that market pressures should not be allowed to push the bar
into the section labeled as A. The points of the spectrum labeled as
A were options available, but not chosen, under the status quo's
architecture. As bundling is an option under the status quo, one can
surmise that consumers prefer at a minimum the level of unbundling
provided by the status quo. Should the bar be pushed into the section
of the spectrum labeled as A, policymakers should consider legislative
remedies to reposition the bar in favor of unbundling.
There is reason to be concerned that in the future businesses might
pressure consumers to rebundle identity information so that it is
bundled to a greater degree than today. There exists a valuable
market for consumer profiles. Businesses desire two types of
information from consumers: direct feedback on the company's products
and information identifying the consumers' preferences or traits. As
it is difficult to survey or measure consumer preferences, businesses
substitute consumer profiles, or identity information, correlated with
purchases to develop a picture of what motivates an individual to buy
a product or which types of products an individual is likely to
purchase. A company concerned with avoiding unnecessary expenses
would prefer to purchase or lease a consumer profile database if such
a move would cost less than it would to compile its own database. As
it costs next to nothing for the business creating the profile to sell
or rent a copy to another corporation, there exists a healthy market
for consumer profiles. The more complex and detailed the profile, the
more valuable it is. So long as this market exists, all other things
being equal, companies shall be motivated to demand as much
information from the consumer as possible.
Businesses are likely to design their request for identity information
so that it appeals to the financial considerations of the consumer.
Were a business to refuse to sell products unless the consumer provided
identity information to the company, the business would run the risk
of selling no products, or too few to stay in business. Thus, the
selling of its consumer profiles is
inherently a secondary source of revenue for the company. Instead,
companies might offer discounts to customers who provide a digital
certificate certifying various bits of identity information. This
occurs regularly today. Among other things, an application for a
store discount card often asks the customer to provide his birth date,
social security number, and household income level.
Some are sure to argue that this commoditization of identity
information is appropriate and desirable. By providing a price for
privacy, consumers will evaluate their preferences for the
"good" and purchase the desired amount of privacy. Besides
arguments that criticize this purely economic view of identity, there
exist reasons to doubt whether a consumer fully considers the
ramifications of providing a company with information about his
identity. Does he take into account when calculating his preferred
choice the potential third parties that could purchase the information
and use it in a way that affects him negatively? Does he take into
account that the bits of information that he might choose to reveal to
company A might be combined without his knowledge with those bits of
information that he revealed to company B? A consumer who is
comfortable with company A knowing ten aspects of his identity, but no
more, must understand that the only way to ensure company A never
acquires more than those ten traits is to never reveal to any other
corporation a different set of traits.
Ultimately, it is up to the judgment of legislators to determine when
the government should step in and regulate the type of information a
business is permitted to collect. All that can be said for now is
that legislators probably will take the approach of proscribing the
collection of particular bits of information rather than the more
intrusive and difficult measure of identifying the categories of
identity information which may be collected by commercial
enterprises.
It should be noted that the ability to authenticate information is the
aspect of the proposed architecture that aids the development of
business-business relationships on the Internet. Whether the digital
certificate certifies that the supplier is "authorized to access
file X" or that the supplier is "company Y" makes no
practical difference. Both parties to the agreement already know the
other's identity. It is difficult to think of what interest a company
might have in not revealing its full identity, as its reputation and
brand name are among its most valuable assets.
Although a community is a group of people who interact with each
other, at the basic level it comprises a group of people who exist
with each other in a common plane. Cyberspace can be treated as a
conduit touching portions of real space at key points. Ideas are
passed through the conduit, and business is transacted through this
conduit. The cyberspace community comprises members of the global
community interacting on a different plane than in real space. These members
rarely interact in real space, but they communicate through multimedia
means in cyberspace - whether it be by text, image, sound, or a
combination of the three. It is not possible to use the Internet
without becoming part of this community, even if one is using the
Internet only as a conduit: by e-mailing people, reading web pages,
reading newsgroups, or doing commerce online, one has joined the
cyberspace community.
For example, communication evolved into allowing people to meet up in
multi-user dungeons (MUDs) to interact with other characters that
people create to represent themselves. This concept of the MUD is the
basis of the community in cyberspace. In the community created in a
MUD, every member puts on a "mask" to pretend they are of a
certain type of person (depending on the setting of the MUD, it can be
anything from a serial killer to an ogre), and then they run free in
this virtual world. However, the everyday community in
cyberspace has not fully evolved to this state yet. One does not yet
see avatars of people wandering, meeting, and interacting with other
avatars through cyberspace. On the other hand, cyberspace is more than
a collection of people who merely exist together.
Examples of such communities are endless: there are newsgroups which
specialize in maintaining certain operating systems on computers,
those talking about northeastern birds, and those which contain erotic
fiction. People cluster and interact on these newsgroups; they are
free to debate politics or philosophy on these newsgroups while others
either listen or cheer them on. Currently, anybody can post on a
newsgroup and can post either with his real name or an assumed one.
Most post under their true e-mail address, but on reflection there is
no reason to post a message with a name connected to it unless one
wants credit for the post or expects a response. Nobody uses this
"anonymity" feature, however, unless they are posting something
questionable and do not wish other people to track them.
The creation of a web page is also a symbol of community. The web page
is an invitation to interaction. Personal web pages usually contain
the hyperlink saying "e-mail me!", thereby spurring on
communication. It is similar to a sign on the door of a house saying
"Welcome to my home." A web page is that front door sign -
it is inviting the person in and introducing the author to the
visitor.
Doing commerce is another sign of community. In this community there
are those who create the goods that others want to buy. The Internet
simply provides that conduit between the buyer and the seller - all that
is needed is for the peddler to put up his web page inviting the buyer
in for the interaction, and then the good can be sold online.
Cyberspace provides all these means (social and commerce) for people
to get together and interact on all these different levels with each
other, not only to exist with each other. And in this current
cyberspace community, there is the option to work anonymously, or
there is the option to connect some identity to yourself.
Anonymous transactions on the Internet today are not always possible,
and when they are, they are not trivial to perform. There are many
anonymous re-mailers around the Internet - anonymous re-mailers
provide a way for people to send mail or to post to a newsgroup
without having their identity (their e-mail address) attached to
that post. This provides the service of sending an e-mail or a post
"one-way." It is very similar to the idea of anonymous
pamphlet distribution: one can leave an information pamphlet at another's
door that contains information but no point of contact back to
the distributor. Facilitating two-way communication is a little more
difficult. It is possible to create a server that
will re-mail one's e-mail anonymously, but provide a handle on the
e-mail so that when another attempts to reply to that e-mail, instead of
disappearing, the server actually resends the reply back to the
originator of the first e-mail. This of course provides the
re-mailing server with all the information on who is who in this
system.
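A toy sketch of such a reply-capable re-mailer, with hypothetical
addresses, makes the information flow concrete: the server holds the
only table linking handles to real addresses, which is exactly why it
ends up knowing who is who.

    # Sketch: a reply-capable anonymous re-mailer as a handle table (hypothetical data).
    import secrets

    handle_to_address = {}   # the sensitive mapping held only by the re-mailer

    def deliver(to, sender, body):
        print("to=" + to + " from=" + sender + ": " + body)

    def send_anonymously(real_address, recipient, body):
        handle = "anon-" + secrets.token_hex(4)
        handle_to_address[handle] = real_address
        deliver(recipient, handle + "@remailer.example", body)
        return handle

    def reply(handle, body):
        deliver(handle_to_address[handle], "remailer.example", body)

    h = send_anonymously("bob@real.example", "alice@real.example", "hello")
    reply(h, "hello back")   # routed to Bob without Alice learning his address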
However, there is no way to do anonymous commerce on the web currently
- all commerce requires you to provide some form of payment which is
either in the form of electronic money (credit cards), or a real world
equivalent which needs to be transferred between the two parties.
On the other end of the spectrum is the absolute identification of
oneself. In order to facilitate this, one needs to use some form of
digital authentication mechanism such as a VeriSign ID, or a PGP key -
this ID can be verified to be the real thing either by VeriSign, or
from a PGP key server. For an equivalent of certified mail, one would
like to know whether or not the sender is actually who he claimed to
be. With an e-mail or a news post signed with either the VeriSign ID
or the PGP key, one can be absolutely sure that the sender is who he
or she said he or she was.
Both of these are optional when interacting on the Internet
today. However, if full unlinking of identity, or absolute linking, is
enabled in cyberspace interactions, then there are ramifications in
the social arena that must be considered.
Unlike in cyberspace, in real space it is impossible to facilitate
full unlinking. In real space, it is impossible to interact with other
people and not leave a portion of one's identity behind. In contrast,
the consequence of digital communication in cyberspace is that nothing
remains connected to the original source.
In a cyberspace with no mandatory traceability, no one need fear that
posts intended to be anonymous could be traced back to them. This
allows a full range of speech, including anti-government speech,
without fear of repercussion from an oppressive government. As long
as one can express what he wishes to express, one does not have to
worry that he exists in an Orwellian state. For example, one could post
an anonymous message to a bulletin board stating that the
U.S. President should be removed from office, and not have to worry
that somebody may be able to track his name back from that posting and
be able to find him and cause him personal damage. One can say
whatever he wishes to whomever he wishes whenever he wishes.
Unlinking also protects the "whistle-blower" who leaks
useful information that may get him in trouble had he posted it
directly with a connection back to his identity. There may be many
instances where information has not been released because the
potential whistle-blower feared reprisal. In the absence of this fear,
an individual will be more likely to release this information to
benefit the greater good.
Unfortunately, full unlinking also provides an environment for a large
amount of "junk" posting on the Internet. It allows people
to post false and fictitious ideas without fear that they will be told
to stop, since there is no way to track them down.
Problems of this type have occurred on bulletin boards - people have
posted false information concerning certain stocks, which caused
readers to react in a way that affected the entire market. And due to
the unlinked nature of the conversation, there is no way to trace the
source of the false information and prevent further incidents. It
is posters like this who give Internet bulletin boards a reputation as
untrustworthy places to obtain information. The consequences extend
far beyond simply damaging the reputation of the Internet; postings
such as this will affect the real world.
The "junk" being posted is due to the elimination of social
norms. If nobody is able to trace one's actions back to their real
world identity, then there is no incentive for one to control his or
her activity; no longer are norms regulating what people do in
cyberspace. One has the opportunity to do whatever he wants and not
have anybody "slap his wrist" to restrain him.
This ability to fully unlink also will make the Internet a prime
candidate for crime. It is possible for sections of cyberspace to
appear, whether it be an Internet newsgroup or a web site ring, for
the specific purpose of trading criminal information.
Criminals will have no fear of being caught and prosecuted for these
postings since there is no way for the message to be tied back to the
real identity of the sender. Nobody will necessarily know any aliases
being used since nothing has an absolute link back to the real world
identity of a person. An alias can simply be set up for use on a single
bulletin board system. Although other posters may know to whom this
alias refers, to the uninitiated the alias will mean
nothing. Therefore, law enforcement will have no way to
track down those who may have posted these messages. Nothing stops
criminals from coordinating over the Internet, or even posting
criminal messages on public bulletin boards.
Law enforcement has lost a key tool for tracking down
criminals. Simply having a criminal newsgroup post is not enough;
there is absolutely no way to trace the real world origin of this
message. The only resort is to attempt to analyze the content of the
message. This can be easily circumvented with careful measures, which
may not be that intricate. As a result, a criminal can communicate
with other criminals and have no fear that anybody may be able to
track him down simply for posting to a newsgroup. Already, strong
encryption sometimes prevents law enforcement from determining the
contents of a message. Unlinking exacerbates the situation by making
it impossible for law enforcement to determine even the sender or
recipient of the message. In the real world, law enforcement can use
telephone records to investigate criminal activity; however, mandatory
unlinking works completely against this. This means current law
enforcement measures become less effective.
Should an Internet, which easily facilitates, if not promotes, crime
be allowed to exist? The crucial question becomes which choice is more
important - anonymous access for everybody everywhere with the side
effect of promotion of crime, or a mandatory real-world link
associated with information for the sake of safety.
This again is not a true analog to the real space domain. The few
mappings include phone calls and credit card transactions. Each phone
call is logged for billing purposes, and the log can be obtained by
third parties with the proper credentials.
In cyberspace, e-mail and all digital transactions can be hidden and
secured with encryption, leaving third parties no more information
than who sent the message, when, and to whom. All e-mail and
newsgroup posts are marked exactly with the time they were sent, and
with the mandatory linking of digital transactions, the information of
the parties involved in the transaction will then be included;
however, the content of the message can be hidden.
This situation will eliminate the ability of anonymous re-mailers to
provide their service fully. This is not to say that anonymous
re-mailers will not still be used - all that can be hidden is the
given e-mail address of the originating party. This may be enough for
the digitally unoriented, but law enforcement could still access the
identity link of the transaction, and the true identity of the sender
could still be determined. If anonymous re-mailers are allowed to
exist in this situation, then they must be designed to
account for the "filtering" effect of the e-mail. E-mail
may be passed on from entity to entity, but the identity of
the originator must be included somewhere in the transaction.
Whether or not this will cause a major change in the philosophy of
usage in cyberspace is an interesting question. The item of interest
is who holds the ability to check the identity of the transactions in
cyberspace. In this model, only the government has the ability to
absolutely identify transactions. This will probably mean nothing to
most transactions on the Internet - standard social transactions or
commerce will be unaffected unless they are potentially criminal
transactions.
This traceability means nothing to the end users of cyberspace, except
if they are engaging in illegal activities. The law enforcement
agencies will have the ability to absolutely identify people who are
communicating with each other. This eliminates the fear that
cyberspace may not be patrolable. Law enforcement has gained the
ability that they have in real space with phone records. The officers
still do not have the ability to necessarily tap into the content of
the transaction or the interaction, but they do have the ability to
see who is talking to whom.
Now, law enforcement's reach has stretched out into commerce. When one
purchases something online, the record of the transaction is available
for a third party to view. This will cause social
repercussions. This implementation is a mandatory enforcement of what
social norms should be taking care of. No longer are people relying on
their own guilt to govern their lives; no longer will people have the
security when they buy something that "nobody else" will
know that they have spent money on a certain good or service (of
course this includes not only illegal items, but legal yet socially
unaccepted goods). Somebody somewhere may have access to the
transaction record and may view it.
This may prove to be damaging to cyberspace commerce. If a precedent
is set to create a mandatory link between identity and cyberspace
transactions, then other commerce centers may decide not to have this
mandatory link, causing some consumers to use services in different
commerce communities outside the United States. This will begin a
downturn in cyberspace activities in the United States.
Unfortunately this also provides an incentive for criminal networks to
start their own private internets for their own use. These networks
will have their own security, their own group of people who are
allowed to use them - and, most importantly, no identity linking. This
allows criminals to evade law enforcement and all the effort being
placed into setting up the mandatory link, making efforts to establish
identity linking seem like a waste of time. However, if this digital
identity system is not set up, then cyberspace will be used for
crime.
As one easily can see, this evolving situation seems to be leaning
toward a "big brother" type of scenario. That is why the
statement "standard social transactions, or commerce will mean
nothing [to anybody] unless it is questionable" is so
interesting. Who or what defines "is questionable"? This
argument seems similar to the random drug testing argument - if one
does not take drugs, why should one mind being randomly tested for
drugs? The parallel to this argument is if someone is not sending
something illegal over the Internet, why should he mind that his
absolute identity is being tagged on every transaction in cyberspace?
The same applies to commerce - if one does not purchase or trade
anything illegal in cyberspace, why should one be concerned that
somebody may have a record of all the transactions performed?
This type of reasoning is incorrect - one's right to privacy should
not be jeopardized for the convenience of law enforcement. In general,
people should be able to act without worrying about somebody looking
over their shoulder at every step. A mandatory link might stifle
social activity on the Internet as individuals may view the link as a
substantial invasion of their privacy in cyberspace.
Granted, this burdens the law enforcement community, which must
attempt to keep order in a digitally oriented culture without a
precious tool: the ability to precisely identify a
community member's real life identity in order to prosecute the source
of any disorder in cyberspace. This may be the best that law
enforcement can hope to do in cyberspace.
More advanced uses of the Internet, however, are suffering from the
lack of an identification architecture. There is a wide realm of
future cyberspace applications that cannot be facilitated without
secure identity verification. For example, "registered"
e-mail that provides proof of sending, proof of receipt, and a chain
of custody, requires a secure identification mechanism. Public
Internet terminals that, regardless of their location, can configure
to a user's preferences and load a user's personal data require secure
user identification. Likewise, important applications such as on-line
voting in government elections or filing on-line tax returns with the
IRS will demand an effective authentication methodology. The general
public, however, does not yet comprehend the need for an
identification architecture to enable these advanced
applications. Most cyberspace users are content with - even amazed by -
what they presently can do online.
The lack of public understanding about the identity problem is
compounded by the potential to unbundle digital identity. Identity in
cyberspace need not be - and should not be - merely a direct
translation of real world identity. However, changes such as
unbundling identity into discrete traits and providing multiple
identities to the same person require the public to comprehend their
own identity in new and complex ways. This is a slow and difficult
process.
The key solution to this social barrier is to educate the public about
the need for a cyberspace identification architecture. Until people
understand that existing cyberspace identity mechanisms are
insufficient, people will not begin using more secure identity
mechanisms and will not begin lobbying the government to pass
legislation encouraging an Internet identity system. This burden of
education falls on companies that are trying to develop identification
mechanisms, such as VeriSign, Entrust Technologies, and Zero-Knowledge
Systems. For instance, according to VeriSign CEO Stratton Sclavos,
VeriSign is giving away most of its consumer identification
certificates for free in an effort both to train consumers about the
need for cyberspace authentication and to develop the VeriSign
brand. However, much of the rest of the business community developing
Internet services and software currently is promoting the message that
"the Internet is safe," in an effort to encourage electronic
commerce and the other presently available Internet services on which
these companies rely for their revenues. Such conflicting messages
delay consumers' learning process about cyberspace identity and
reveal the need for advocacy groups to promote the importance of a
secure cyberspace identification system. The educational barrier may
be the most significant barrier for consumer identification
technology. According to Mr. Sclavos, "consumer markets are just
not ready - the technology is there, but consumer behavior is
not."
On the revenue side, revenues from identification services are likely
to be very low. People are not used to paying for their own identity,
beyond a nominal amount. For example, even a U.S. passport, perhaps
the most secure and broadly accepted of all forms of identification,
costs just $60 for a 10-year term. At $6 of revenue per user per year,
it would be difficult for a private company to become profitable. The
only way that the U.S. government is able to offer passports for this
low fee is that taxes and other government revenue subsidize the
costs, and the government is not seeking a profit. In addition,
unbundling, which is one of the key benefits we have described for an
advanced cyberspace identification architecture, may potentially drive
revenues from identity services even lower. It is possible that the
more that specific traits of identity become unbundled from each
other, the less a consumer is likely to pay for verification of each
particular trait. However, this is still very new ground for both
consumers and businesses, so it is also possible that unbundled
identification traits could actually be more valuable and generate
higher revenue than bundled traits, as consumers' privacy remains
protected and the disclosed trait could serve a focused and desirable
function.
On the cost side, building the infrastructure for an identity solution
is very expensive. The system has to be both highly secure and
continuously operational, with fast response times, 24 hours a day,
365 days a year. This requires a significant amount of capital to be
invested at the beginning, in order to set up the service. In effect,
the entire system must be built - at least in a small-size version
capable of being scaled up rapidly - before there are any clients
using it. This pattern of cash flows makes the identification system
business particularly unattractive.
Even if a profitable business model can be developed, it is unclear
which company, or even which type of company, is most appropriate for
building an identity infrastructure. Which companies should issue
certificates? Which company is trusted for this highly secure and
important function? How many certificate authorities (CAs) should there be? CAs could be either
new companies specifically started for this purpose, such as VeriSign,
or could be trusted existing companies, such as AT&T or the
U.S. Postal Service.
To understand who is best suited to develop the infrastructure, it is
first essential to understand that the identity architecture would be,
in economic terms, a public good. This is true because the
architecture would be both non-excludable and non-rivalrous. First,
the ideal identity authentication infrastructure would be an open
standard that allows particular users to customize identity
applications for use in a wide variety of contexts. As an open
standard, the system would be non-excludable, or in other words there
would be no way to exclude someone who does not pay for the open
standard from enjoying it. This type of identity system would be an
improvement over closed, proprietary systems, because the benefits of
the architecture rise as the number of compatible users increases. In
contrast, currently available identity mechanisms tend not to
interoperate. For example, the various Internet telephony applications
that are available all allow voice communication over the network, but
only among users of the same brand of telephony
application. Similarly, an identity mechanism provided by one company
might be incompatible with a mechanism provided by another
company. Second, our proposed identity architecture is non-rivalrous,
meaning that one person's use of the system does not decrease another
person's enjoyment of it. The system, in effect, is not used up by any
particular person's use. In fact, the opposite is true: each additional
user raises the marginal value of the identity architecture. As
the number of people using the same identity architecture increases,
the value of that architecture increases, because each identity
application based on this architecture would be compatible with other
applications also based on the architecture.
The fact that an identity architecture is a public good both reveals
some of the current difficulties with developing this type of system
in an unregulated market and highlights what is necessary to implement
this system successfully. First, as a public good, there is a
disincentive for any particular private company to develop a truly
open standard identity architecture, and there is a corresponding
incentive for each company to build proprietary identity
architectures. This is because the company that develops an open
system does not necessarily get to capture all of the value that the
system creates. Instead, a significant portion of the value accrues to
other companies and other parties that develop applications for their
own use based on the open standard that has been created. This pattern
helps to explain why almost all of the presently available identity
architectures are proprietary standards that are not
interoperable. Each company is hoping both to develop the standard and
also to capture all of the resulting value. No company is willing to
create the excess societal value that emerges (but that does not
accrue to the developer) from an open standard for identity
verification that allows global interoperability. The TCP/IP standard
provides a good example of this economic incentive pattern. With
separate, incompatible network protocols, society suffers due to the
costs of maintaining separate networks, the burdens of network
interchanges, and the loss of shared information. When the TCP/IP
standard emerged to help produce a single global network, society
benefited from this public good. Enormous wealth was created, but no
single company alone captured the value from deploying TCP/IP or
charged for the use of this networking standard.
Due to this disincentive for any company to produce a system that will
be open and interoperable, it appears necessary for the government to
regulate or at least to influence this marketplace. An open standard
identity infrastructure - a public good - would produce benefits for
society beyond just for those who create it, and thus private parties
are likely to invest below the socially optimal level. While the
potential for industry to coordinate investment efforts and develop an
identity system exists, such coordination might not take place. If the
public wishes to ensure that an identity system is developed, then the
government is economically the best party - and probably the only
party - that could guarantee the building of an open identity
architecture. In addition, as we have argued above with respect to the
government's role as law enforcer, it may be necessary for the
government to influence the identity architecture now - before the
identity mechanisms are determined - rather than to wait until later,
if the government's legitimate need for forced revelation of identity
in certain limited contexts is to be enabled on the Internet. Finally,
a secure identity verification mechanism could be extremely valuable
to the government for purposes of national defense and national
security. Whether the government is motivated by economics, law
enforcement, national defense, or a combination of all three, there
are several ways in which it could trigger the development of the
identity infrastructure.
Initiating an identity infrastructure first requires the development
of the architecture specification. This involves writing the
certificate specification and protocol for exchange, and ensuring that
both are robust. Next comes the implementation of software that uses the
specification, and the process concludes with the deployment of these
applications to the consumer market. It is likely that any company
which invests the time and money to implement applications for a
specific architecture will make these products available to the
consumer market.
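As a concrete illustration of what the specification stage involves, the
following Python sketch shows one minimal, hypothetical shape a trait
certificate could take. The field names and the canonical encoding are our
own illustrative assumptions, not part of any existing standard; the point
is that the specification must pin down exactly such fields and encodings
so that independently written software can interoperate.

    from dataclasses import dataclass

    @dataclass
    class TraitCertificate:
        # Hypothetical minimal certificate: one verified trait bound to a pseudonym.
        issuer: str       # certificate authority that verified the trait
        subject: str      # holder's cyberspace pseudonym, not a real world name
        trait: str        # e.g. "age over 18"
        expires: str      # expiry date, e.g. "1999-12-31"
        signature: bytes  # issuer's signature over the canonical payload below

        def signed_payload(self) -> bytes:
            # Canonical byte string that the issuer signs and any verifier re-checks.
            return f"{self.issuer}|{self.subject}|{self.trait}|{self.expires}".encode()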
The U.S. government has four basic options if it wants to initiate
this procedure. First, the government could decide to build the system
itself. A new or existing government agency could have responsibility
for developing the specification, and the government, via legislation
or market power, could then influence Internet companies to develop
identity applications using this specification. While this scenario
might seem simple, it is unlikely. The government is not particularly
well suited for this type of technology development. In addition, this
is much more of a "command and control" method of governance than the
U.S. has normally adopted. Particularly with respect to the emergence
of Internet technologies, the U.S. government has so far taken a
free-market approach to technology standards or at least deferred to
industry associations. Indeed, given today's political environment,
the first option may be both undesirable and unrealistic.
Second, the government can contract with a private sector organization
to perform the specification development. This makes more sense than
internal government development because the system ultimately has to
be implemented and maintained by the private sector and must be
compatible with private sector products. Even if the government funded
the identity architecture development, the issue of implementation
would remain. Implementation could be mandated by legislation or
promoted through financial rewards or liability avoidance.
Third, the government could encourage a single company to develop the
specification for the identity architecture by granting to that
company a monopoly right such as the right to license the open
standard specification to software developers. Note that this would
have to be limited to a form of monopoly that did not discourage
people from adopting the open standard. For example, the company might
charge a nominal licensing fee for developers; too high a fee would
result in industry backlash and the ultimate demise of the
standard. Moreover, the chosen company might not be the best in the
market. If the government wants to guarantee adoption of the standard,
it will also need to regulate the implementation stage, as in the
second approach.
Fourth, the government could pass laws assigning tort liability for
identity misuse, while avoiding the role of technology developer or
technology financier. In this context, the government is allocating
liability in order to encourage entry into the market; allocating
liability as a method of encouraging use of the system will be
discussed below. This approach fails to address the problem of
interoperability: companies likely will select the cheapest system,
even at the expense of interoperability. Unless the private sector has
a specific incentive to develop interoperable systems - an incentive
which liability rules are unlikely to provide - private sector
identity solutions will tend to be proprietary and will fail to
capture the benefits of an open standard identity architecture as a
public good. Thus, this approach fails to solve one of the problems
which initially motivated government intervention.
Our discussion has focused on the U.S. government's role in developing
the global network. Because so much of Internet use is still
concentrated in the U.S., the U.S. government perhaps has the leverage
to dictate a standard. However, the Internet is a global network and
thus it may suffer from a "race to the bottom" in which
countries offering the greatest freedom and the least government
regulation lure Internet companies and Internet servers. It is
imperative that the U.S. government set a reasonable standard, if it
decides to influence this marketplace, so that U.S.-based Internet
companies do not have incentive to operate outside the country.
The appropriate liability rules must reconcile two competing
principles. First, because the market for digital identity mechanisms
is in its infancy, the selected liability rules must help create
incentives that will drive towards the widespread adoption of a secure
identity infrastructure. According to this goal, the liability for
identity misuse should be placed on whichever party can best induce
the introduction and implementation of identity architecture. Second,
in order to have an efficiently operating marketplace for identity
mechanisms, it is desirable for the selected liability rules to place
liability on the party who is the "least cost avoider" of
harm. Adopting this goal, liability for identity misuse should be
placed on whoever is best able to avoid misuse of digital identity. If
these two goals point towards the same party, both goals can be
accomplished together. However, if these two goals suggest that
different parties should bear liability, then one goal or another must
be made paramount, or the goals must be balanced.
In developing liability rules, the first situation we must consider is
who bears the cost if digital identity is misused, stolen or forged in
a cyberspace interaction. For example, suppose that person A is a
valid, registered user of a Web site providing content, discussion
groups, chat rooms and e-commerce. Suppose B, impersonating A,
successfully logs on to a Web site as A, posts obnoxious messages, and
orders merchandise using A's pre-paid account on the Web site. Who
should be liable for the harm caused by this fraudulent use of digital
identity? In this context, there would be high transaction costs for
all the parties facilitating a cyberspace interaction to negotiate
ahead of time over liability, so the law should set a clear default
rule.
We consider seven candidates for parties who could bear liability for
digital identity misuse in cyberspace.
Victims: The person or entity whose identity is stolen or forged could
be held liable. This would place on each person the burden of
protecting his own identity. However, since users cannot directly
dictate the security measures in use by their Web vendors, users would
be forced, in effect, to "vote with their feet." Web services with
better identity verification measures would flourish. Nonetheless, it
would be difficult for many end-users to understand, let alone to
judge, the effectiveness of identity mechanisms. Since the only sure
way of protecting one's identity in cyberspace would be to avoid
cyberspace, users would curtail their cyberspace use. The chilling
effect of this rule on the use of the Internet would be too
significant.
Identity thieves: The person who stole or misused the victim's
identity could be held liable for the harms caused by his or her
actions. This person, as the tortfeasor, clearly should be held at
least jointly liable for the identity misuse. However, a rule placing
exclusive liability on the identity thief will hold harmless the other
parties who could and should have taken steps to protect the victim's
identity. This would lead to carelessness or inadequate measures to
protect digital identity by the Web vendors. In addition, in the
context of cyberspace, it is often very difficult to figure out
exactly who the identity thief was, as by definition the thief was
impersonating someone else. Further, it may be difficult to find the
individual even if his identity is known. If the identity thief alone
were liable, then in most cases of identity misuse, no claims would
ever be brought and no damages would ever be collected by the victims.
Application vendors: Software developers, such as those producing Web
browsers or e-mail applications, could be held liable for any identity
misuse that took place through their products. This type of rule would
rapidly lead to the widespread implementation of identity mechanisms
in all Internet software products. However, software products are
general purpose tools. This rule would raise the cost of Internet
software products for all users, as software vendors could not
distinguish in advance between customers who would use the products
for high-risk purposes and those who would use the products for
low-risk purposes. Liability costs would be shifted to end-users, but
in an unfair and inefficient manner under this rule.
Internet Service Providers: ISPs could be held liable for any identity
misuse that is done to or by their clients. The concern with
allocating liability in this way is that ISPs cannot know that an
identity has been stolen unless the certificate authority or
certificate holder notifies them (e.g., via a certificate revocation
list or other authorized information). In addition, prices for
Internet access for all users would rise because ISPs, like
application vendors, cannot easily distinguish between low and
high-risk Internet customers. In the situation where the ISP is
notified of the revocation, it might be reasonable to assign some
amount of liability if it negligently continues to permit exchanges
involving the revoked certificate. Under this scenario, rather than raising the price of
Internet access to all users, the ISP may contract with users in
advance to include a penalty fee for sending a revoked certificate
that exposes the ISP to liability.
Hardware vendors: In a non-trusted system, the hardware layer is quite
distinct from the higher software layers, and misuse of one layer
should not be the responsibility of those producing and implementing a
different layer. Holding a hardware vendor liable for misuse of
identity would place an undue burden on the manufacturer and would
create a strong disincentive to develop hardware systems. In contrast,
manufacturers of trusted systems might appropriately be held liable to
some extent because the manufacturer can control the degree of
security provided. Liability might be limited in cases where the
public had been adequately notified of flaws and the manufacturer took
appropriate steps to mitigate the impact of the problem.
Identity verifiers: Sources of trust, such as certificate authorities
(CAs), could be held liable for the misuse of digital identity, if
they were the least cost avoider. However, it is unlikely that CAs
will be the least cost avoiders, as there is no way a CA could
identify users whose identities are likely to be misused, or prevent
their misuse. Nevertheless, CAs do have a responsibility to verify the
traits which they are certifying, and could reasonably be held liable
in cases where certification is negligent.
CAs also should shoulder some liability for the adoption of digital
identification systems that are inherently unreliable. Since liability
cannot be placed directly on the developers of an open standard
technology, we instead place liability on those who can pressure the
developers to build a reliable system. The logical choice for the
placement of liability is on the CA because CAs are in the business of
providing trust, whereas other participants in the system simply
utilize the infrastructure to achieve other ends. The drawback to this
approach is that assigning liability to the CAs in this nascent market
for secure identity mechanisms would require more reliable, hence more
expensive systems. This might prevent the development of this market,
prolong the implementation of an identity infrastructure, and
dramatically raise the cost of entry for identity solutions
developers. In order to mitigate the impact on CAs and increase the
likelihood of market development, some other party involved in
cyberspace interactions should also bear liability for identity
misuse. This other party should be chosen to ensure that the CA and
the other party can exert mutual pressure toward creating an identity
system that has the fewest liability concerns.
Internet host servers: Internet hosts, such as on-line vendors or Web
site or chat room providers, could be held liable for any identity
misuse that takes place through their servers. For example, if person
B successfully impersonates person A on Amazon.com, then Amazon.com
would be liable for the damages. This rule places the liability on the
party most able to develop or purchase identity mechanisms to prevent
identity fraud because the owner of the Internet host is best able to
install an identity verification system on its servers. This rule will
create strong incentives leading to the deployment of identity
verification mechanisms. However, this rule may promote systems that
are highly specialized for each particular Internet host, rather than
an open standard identity infrastructure. There would be an incentive
to build a strong system for the lowest cost, but there would be no
incentive to make the identity infrastructure interoperable if
interoperability added any cost whatsoever to the host's security
bill.
Holding host servers strictly liable for any harm from identity misuse
that takes place on their servers places the absolute burden of
accurate identity verification on the party actually doing the
verification. This rule does not require each Web host actually to
build the identity mechanism or to bear the ultimate liability for
identity misuse. Rather, this rule gives each Web host the appropriate
incentive to make sure that all clients who access their servers are
utilizing a reliable system of identification. Each Web host is in the
best position to negotiate with other parties and to contract around
this default liability rule to achieve for each particular situation a
more efficient rule, if there is one. For example, Amazon.com might
decide that VeriSign's certificate procedures meet its needs for
reliable verification; the two companies might then form a contract indicating that
in return for exclusively using VeriSign certificates, VeriSign would
accept liability for abuse or fraud that results from use of their
certificates. Alternatively, VeriSign might lure Amazon into such an
exclusive contract by convincing Amazon of the integrity of VeriSign
certificates and accepting some or all of the liability for
misuse. This process need not be limited to a single certificate
authority; indeed, one of the advantages of an open system is that
Amazon could contract with multiple certificate authorities that it
decided were trustworthy.
An additional element to the default liability rule may be necessary
to promote the implementation of an open standard identity
architecture. If one CA or one identity system were the market
standard, then most Web hosts would probably contract with this CA.
However, until and unless there is one predominant identity
verification mechanism, each Web host would have the incentive to
contract with the cheapest available CA who would agree to indemnify
the Web host for any liability for misused identity. This is close to
what is happening presently in the Internet marketplace: each Web host
tends to use a separate identification scheme for its own
purposes. Interoperability is basically nonexistent.
Thus, the strict liability rule could provide a form of "safe
harbor" to promote the implementation of a standardized identity
verification mechanism. There are several forms this safe harbor could
take. At its most extreme, the law could disallow contracting and
prohibit indemnification of liability unless the Web host contracts
with a CA providing an interoperable identity architecture that
conforms with standards set by the government or by an industry
association. However, this type of rule is not feasible until there is
an acceptable industry standard to mandate. A more flexible safe
harbor provision might require that a Web host retain liability if it
is negligent in selecting the identity infrastructure to adopt. This
at least would require that Web hosts choose the identity
architectures carefully and might weed out insecure mechanisms from
the marketplace. Ultimately, the problem of developing an
interoperable identity system may not be solvable merely by the
creation of the correct liability rules. This problem might require
market leadership by one company after a competitive period or might
require regulatory intervention of some form from the government.
Assuming that a strict liability rule for Web hosts is adopted, and
assuming further that the Web host has a contract with all trusted
certificate providers, another liability issue remains: should the
"identity issuer" (the CA who issues the identity
certificate) or the "identity holder" (the user who receives
and uses the identity certificate) be liable if the certificate is
compromised? Until a rule for liability in this situation is
determined, the marketplace for digital identity systems cannot
operate efficiently, even if an identity infrastructure is
implemented. Some method of assigning liability must therefore be
determined.
There is certain to be a contractual relationship between the CA and
the CA's users - they are voluntarily in a relationship and they need
to interact when the certificate is issued and the user's identity is
verified. No matter what default liability rule is set by law for this
context, the contract between the two parties can and will reallocate
this liability, unless this reallocation is made illegal. This
contract will allocate liability in whole or in part to achieve the
optimal allocation, unless for some reason the CA has inappropriate
bargaining power over the user.
Whether it is optimal for the CA or the user to bear liability for a
compromised certificate depends in part on which party is the
"least cost avoider" of compromised certificates. Whichever
party can most easily avoid the loss of the identity certificate
should bear the liability and thus have incentive to take appropriate
care in protecting the certificate. Who this party is depends on the
technology that is used, on what physical manifestation the
identification certificate involves, and on how difficult it is for
the user to share the certificate inappropriately with others. If the
certificate is something that can be shared easily by the user, then
the user is the least cost avoider. For example, passwords can rapidly
and simply be distributed to other people, so it would be inefficient
to make providers of password-based identification mechanisms liable
for compromised passwords. The users of the passwords must bear the
liability for loss so that they have incentive not to share the
passwords with others. For digital certificates such as under our
proposal, the efficient allocation of liability between CA and user
depends on how and where the certificate can be stored. If it is
stored in a file on the user's computer and its use is not linked in
any way to the user's computer, then it is effectively very similar to
a password and the liability for loss or compromise should remain with
the user. However, for more difficult-to-distribute certificates -
such as those that are stored on smartcards or held in an encrypted
fashion - liability for loss or compromise should fall on the
CA. Biometric verification methods, where the certificate is in some
way the user's own body, are no doubt the hardest for the user to
share inappropriately, so liability for compromise of these should
also remain with the CA. A biometric-based certificate should only be
compromised if the issuer's technology fails.
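To illustrate the "held in an encrypted fashion" case, the following sketch
(written with a modern Python cryptography library purely for illustration)
shows one way the private key backing a holder's certificate might be stored
encrypted under a passphrase, so that copying the file alone is not enough to
impersonate the holder. The key type and passphrase handling here are
illustrative assumptions, not a proposal for the actual architecture.

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Generate the key material that backs the holder's certificate.
    key = ed25519.Ed25519PrivateKey.generate()

    # Store it encrypted at rest; without the passphrase, a copied file is useless.
    encrypted_pem = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"a strong passphrase"),
    )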
Even in situations where the user is liable for a lost or compromised
password or certificate, the user's liability should be based on a
standard of negligence. If the user takes reasonable care of his or
her certificate and it is compromised nonetheless, then the user
should not be liable. Defining exactly what is "reasonable
care" of a digital certificate may be a difficult
process. Consumer protection laws may be required if CAs try to
allocate liability by contract even in cases where the user was not
negligent or where the CA is the least cost avoider. For example, if
the CA's identification system is at fault for the misuse of identity
(e.g., the system erroneously verifies an out-of-date password), then
the CA should be strictly liable, regardless of the user's possible
negligence.
The global nature of the Internet adds a further level of complexity
to any identity liability regime. What we have been considering so far
is liability rules that would apply within one nation. If identity is
misused across international borders, as is easily the case in many
cyberspace interactions that involve users and servers and companies
in several different countries, the liability rules become far more
complex and are therefore beyond the scope of this paper. Which
nations' laws should apply? What if one nation has much more
protective liability rules than another? What if the different
national rules make different parties responsible? Which country
should have legal jurisdiction? While, theoretically, the simplest
solution would be to develop global legal rules that govern the
Internet, this is not a realistic possibility, as many countries have
very different concerns for privacy and freedom and different schemes
for handling regulation.
The digital identity laws passed or under consideration even in the
United States so far are controversial. It is difficult for lawmakers
to understand and respond to the rapidly changing issues in
cyberspace. Currently, many conflicting legislative efforts are
underway, from those placing strict liability on the certificate
authority to those that rely very heavily upon existing contract law
and the courts to determine liability. As we consider the construction
of a digital identity system, we need to consider the appropriate
allocation of liability among these various parties.
Even without legislatures enacting new laws for digital identity
liability, there are less formal methods by which to encourage the
adoption and implementation of cyberspace identity verification
mechanisms. Faster moving groups than state legislatures could pass
rules or guidelines that encourage the use of digital identity
mechanisms in cyberspace. For example, state bar associations that
propose guidelines for lawyers' conduct could be a point of
significant leverage. Bar associations could try to mandate that
lawyer-client e-mail correspondence is not protected by the
lawyer-client privilege unless the e-mail is encrypted. In the real
world, the lawyer-client privilege only applies to information that is
kept in confidence by the lawyer and the client, not to information
knowingly revealed to third parties; in cyberspace this type of rule
would make sense, since non-encrypted e-mail is basically open to any
third party's observation and thus, arguably, "knowingly
revealed." Setting a rule like this would force all lawyers to
use encryption mechanisms in cyberspace, and all clients would
probably have to follow suit, potentially spurring the need for a more
standardized digital identity architecture. Only if a group leads the
way and demonstrates the need for and use of a digital identification
architecture will such a system become widely adopted.
In broad terms, there are just three types of identification
mechanisms. Authentication can be based on: a person's shared
knowledge (such as a password); a person's possession of a unique piece of
information or device (such as a digital certificate); or a person's
inherent unique characteristics (such as a fingerprint or other
biometric). While our system fits mostly in the second category, it
would be perfectly compatible with the other two approaches. If, in a
certain context, a very strong link to real world persona is needed,
then a biometric system could be the "front-end" for our
architecture. Likewise, a password or PIN could be required for access
to a certificate to prevent simple copying of identity
certificates.
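A minimal sketch of how the first two categories can be layered, assuming
certificate access is gated behind a PIN; the function names and parameters
below are hypothetical and chosen only to illustrate the combination of
"something you know" with "something you have."

    import hashlib
    import hmac
    import os

    def register_pin(pin: str):
        # Store only a salted hash of the PIN ("something you know"), never the PIN itself.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)
        return salt, digest

    def unlock_certificate(pin: str, salt: bytes, stored_digest: bytes) -> bool:
        # The certificate ("something you have") is presented only after the PIN check passes.
        candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, stored_digest)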
While there is flexibility about exactly how our identity architecture
should be implemented, the technology now exists to make our proposal
work. The remaining choices are exactly how, where, and by whom it
should be built. Cyberspace will be a better, safer, and more useful
place when people can know for sure whether or not they are
communicating with a dog.
It is important to understand the exact type of information to which
law enforcement will have access under the trace feature. Consider
the example transaction discussed in an earlier section of this
paper. In that transaction, Bob, using the alias "Ghost," purchased a
product that required him to prove that he was over the age of
eighteen. Under the trace, law enforcement would contact the web site
store and ask for the alias of the person who made the transaction,
and would then contact the certificate authority and ask for the
identity of "Ghost." In legal circles, this identity information is
referred to as transactional information, to distinguish it from
content information. Law enforcement would be requesting content
information if they asked the web site store what "Ghost" told the web
site about himself - that he is over eighteen, in our
example. Typically, content information is afforded higher procedural
protections than transactional information.
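The distinction can be made concrete with a small Python sketch of the
example trace. The record layouts below are hypothetical; the point is only
that the trace touches transactional fields, while the content of the
assertion is never requested.

    # Hypothetical records held by two separate parties in the "Ghost" example.
    store_transactions = {
        "TXN-1042": {"alias": "Ghost", "asserted": "age over 18"},  # "asserted" is content information
    }
    ca_registry = {
        "Ghost": "Bob",  # alias -> real world identity, held only by the certificate authority
    }

    def trace(transaction_id: str) -> str:
        # Transactional trace: the store reveals the alias, the CA reveals who holds it.
        alias = store_transactions[transaction_id]["alias"]
        return ca_registry[alias]

    print(trace("TXN-1042"))  # -> "Bob"; the "asserted" content field was never read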
Congress could rely on a number of current statutes as a model for new
legislation that would govern law enforcement access to the trace;
however, none of the current statutes provides a perfect model. This
section will explore some of the features Congress may wish to borrow
from two of these statutes, the Wiretap Statute (Title III) and the
Stored Electronic Communications Privacy Act (ECPA).
Although Title III is designed to regulate government interceptions of
communications (content information), the statute does contain some
features that Congress may wish to apply to the access provisions for
the law enforcement trace. First, Title III requires law enforcement
to obtain a court order. Although current laws sometimes allow law
enforcement to access transactional information without first
obtaining a court order or warrant, Congress may wish to implement a
court order requirement because of the current political sensitivity
regarding privacy on the Internet. Second, Title III requires law
enforcement to show that traditional means have failed or would fail
before a wiretap authorization can be granted. Congress may wish to
consider whether there might be some other method by which law
enforcement could obtain this information and whether there should be
a statutory preference for using that method over the law enforcement
trace. Finally, Title III requires law enforcement to minimize the
interception so that communications not covered by the order authorizing
the wiretap are not intercepted accidentally. Following this
principle, Congress should draft legislation that prohibits law
enforcement from accessing anything other than transactional logs,
particularly archived copies of exchanged digital certificates.
Another statute that Congress should review when drafting legislation
to regulate access to the law enforcement trace is ECPA. ECPA's
provisions establish procedures for governmental access to both
content and transactional information stored with electronic
communication providers. As ECPA does for electronic communication
providers, Congress will want to prohibit certificate authorities and
web site stores from disclosing identity information and related
transactional information unless consent from the appropriate
individual is obtained or the request comes from an authorized law
enforcement officer. Although ECPA allows for access to certain types
of information with an administrative subpoena, as mentioned above,
for political reasons, Congress may find it necessary to require a
warrant for access to the trace. One interesting twist of ECPA is that
it places liability on the electronic service provider for disclosing
information to a government agent who has not followed the statute's
procedures, but does not place any liability on the government for
requesting information in a manner that does not conform to the access
procedures established in ECPA. Regardless of whether this was an
intentional decision by Congress or an oversight, Congress should
require the government to follow process in any new statute governing
a law enforcement trace because such a law would offer an individual
the possibility that improperly obtained information could be deemed
inadmissible and excluded from any criminal proceedings.
In the United States, individual states have led the way in the
passage of legislation regulating the use of digital and/or electronic
signatures. State digital and/or electronic signature legislation
generally can be categorized as one of three types of legislation:
prescriptive, criteria-based, or signature-enabling. Legislation
following the prescriptive model provides a specific regulatory and
statutory framework for the recognition of digital
signatures. Criteria-based legislation requires signatures to satisfy
certain criteria of reliability and security in order to be legally
binding. Finally, signature-enabling legislation, such as that passed
in Florida and Massachusetts, permits any electronic mark that is
intended to authenticate a writing to satisfy a signature requirement.
Utah's digital signature law, passed in 1995 and amended in 1996, was
the first of its kind in the United States. Influenced by the ABA's
efforts and enacted just before the ABA's Guidelines were released,
Utah's law followed the prescriptive model by providing a specific
regulatory and statutory framework for the recognition of digital
signatures. Indeed, Utah's digital signature statute epitomizes
prescriptive, PKI-based legislation: it "establishes a detailed
PKI licensing scheme, allocates duties between contracting parties,
prescribes liability standards, and creates evidentiary presumptions
and standards for signature or document authentication." Under
the Utah Act, a digitally signed document satisfies the writing
requirements if the signature is verified by a valid licensed public
key; however, only attorneys, financial institutions, title insurance
companies, and the State of Utah may act as licensed certification
authorities, and licensed Certification Authorities are required to
post a guaranty in the form of a bond or letter of credit. The Utah
Act does not prohibit the operation of unlicensed certification
authorities in Utah; however, unlicensed CAs lose evidentiary
presumptions of authenticity and do not enjoy the benefit of limited
liability under the Act. Note the potential effect of such laws on CAs
wishing to certify "low-value" certificates. In order to
enjoy evidentiary presumptions of authenticity and limited liability,
these CAs need to be licensed; however, the costs associated with such
licensing could be prohibitive for CAs distributing
"low-value" certificates.
Following the lead of the ABA Information Security Committee and the
Utah legislature, some states have adopted prescriptive legislation;
however, other states, mindful of the drawbacks of the prescriptive
model, have eschewed this technology-specific approach in favor of
more flexible alternatives. The prescriptive approach limits liability
and establishes evidentiary presumptions when users rely on digital
signature technology used in conjunction with state-licensed
certification authorities. Simultaneously, such legislation
discourages reliance on alternatives, even if they offer superior security,
by denying them the benefit of these presumptions. Thus, some states
have opted for more flexible legislation that is technology-neutral
and avoids "market-distorting effects" by choosing not to
define a particular liability regime.
Similarly, the NCCUSL draft codifies the fundamental premise of the
Act that "the form in which a signature is generated, presented,
communicated or stored may not be the only reason to deny the
signature legal recognition." Indeed, a provision of the UETA
ensures that "[a] signature may not be denied legal effect,
validity, or enforceability solely because it is an electronic
signature." Unlike certain existing state laws, the UETA does not
set forth substantive requirements for the creation of valid
electronic signatures. For purposes of the act, however, "an
electronic record will be deemed signed by an electronic signature if
the signature is 'verified in conformity with a commercially
reasonable security procedure.'" While this technology-neutral
approach will foster competition by allowing a variety of security
measures to blossom, there is likely to be uncertainty, at least
initially, in determining what meets the "commercially
reasonable" standard (i.e., what constitutes "commercially
reasonable" security).
The draft law also creates presumptions regarding the identity and
integrity of electronic records and signatures where heightened
security procedures are followed.
Executive Summary
Currently there is no generic system for identification in
cyberspace. It is not possible to absolutely identify an entity or to
accurately tell whether an object has a specific
characteristic. Digital environments have inherent differences from
real space which cause this discrepancy, and when implementing an
identity system for cyberspace one needs to consider more than just
the architectural nature of the system - any system chosen will have
social repercussions which also need to be taken into account.
I. Overview: What is Digital Identity?
This paper explores in detail the legal, political, social and
technical issues surrounding the development and adoption of an
internet architecture that permits individuals to exchange
authenticated information. Before proceeding with that discussion,
however, it is important to examine the concept of identity
itself. This section develops a working definition of identity,
considers the ways in which people use their identities, and
articulates the reasons why it is important to protect our identities,
especially in the digital context.
Working Definition of Identity
It is difficult to craft a formal definition of identity. Basically,
the essential and unique characteristics of an entity are what
identify it. These characteristics might include, among other things,
the unchanging physical traits of the person, his preferences, or
other people's perceptions of the individual's personality. The skills
that a person possesses can also become part of one's identity. For
example, a person's identity could include the fact that he "has
the ability to drive" or that he "has brown hair." Some
characteristics, such as height, have one correct setting. Those
traits of an individual that reflect someone else's perceptions do not
have to have an absolute setting. Bob may set Alice's "is
friendly" flag to true, whereas Charles may set the same flag to
false. Even if Bob and Charles agree on what should be the flag's
setting for Alice, Alice's own view may differ from theirs. Thus, in
practice there is a degree of fuzziness to the definition of an
entity's identity, and most certainly to how it is perceived by
others.
Identity as a Commodity
In today's economy, identity information often is viewed as a valuable
commodity. This view of identity is worth a closer examination.
Verifying Versus Revealing An Identity
Cyberspace creates opportunities for identity theft. One inherent
property of digital media is that it can be duplicated perfectly and
easily. Exact copies of everything sent over a digital communications
channel can be recorded.
Consider the act of sending a signed letter to
someone. In real space, I reveal to the recipient the exact form of my
signature, but the difficulty of mastering the art of forgery protects
me from the possibility that the recipient would begin signing letters
with my signature. However, if I send a digital letter that contains
the digital representation of my signature, the recipient could easily
duplicate and use my signature to assume my identity when signing
documents. The seriousness of this problem is highlighted when you
consider that future technologies will allow extremely important
identifiers, such as a retinal scan or a fingerprint, to be
represented digitally. These biometric characteristics are protected
in real space because they are embedded in the physical body of the
person. This is lost in cyberspace.
Thus, cyberspace needs a system
that allows individuals to verify their identities to others without
revealing to them the digital representation of their identities. A
verification system would let Bob, for example, know the identity of
Alice or that she possesses a particular trait, but would not give him
the ability to impersonate Alice or use the trait identifier as if it
was his own. In our digital letter example, Bob would be able to
verify that the letter contains Alice's signature but would not let
him sign documents as Alice. Similarly, a verification that someone is
of the proper age to purchase alcohol would not give the person
verifying this identifier anything that would allow him to represent
himself as being of the proper age to purchase alcohol. Such a system
helps both parties obtain what they want out of exchanging identity
information without the risk of identity theft.
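A minimal sketch of how verification without revelation works in practice,
using a public-key signature (shown here with a modern Python cryptography
library purely for illustration): Bob can check Alice's signature with her
public key, but nothing he receives lets him produce signatures of his own.

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Alice generates a key pair; the private key never leaves her possession.
    alice_private_key = ed25519.Ed25519PrivateKey.generate()
    alice_public_key = alice_private_key.public_key()

    letter = b"Dear Bob, ..."
    signature = alice_private_key.sign(letter)

    # Bob verifies with the public key alone; verify() raises InvalidSignature on forgery.
    alice_public_key.verify(signature, letter)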
Cyberspace is "the total interconnectedness of human beings
through computers and telecommunication without regard to physical
geography." Cyberspace is "a term coined by science fiction
author William Gibson to describe the whole range of information
resources available through computer networks." For our purposes,
cyberspace is a realm in which communication and interaction between
two individuals or between an individual and a computer is facilitated
by digital data exchanged over computer networks. This interaction or
communication can be used for a host of different purposes.
III. Unbundling
The Promise
One major premise of our project is that new technology facilitates
unbundling of identity information and therefore has the potential to
provide a degree of control over privacy and anonymity in cyberspace
that is difficult, if possible at all, to achieve in the real world.
This section briefly explains this premise.
The Promise of Unbundling
The concept of unbundling captures two distinct notions, each of which
facilitates the exercise of choice. First, to unbundle identity is to
treat identity as a set of individual traits, rather than one
integrated bundle of traits. Second, when identity is unbundled, a set
of traits need not be bound to a single "real world"
person. For clarity, we refer to these types of unbundling as Type I
and Type II unbundling, respectively, discussing each type of
unbundling in turn.
Privacy: Type I Unbundling
The ability to achieve Type I unbundling facilitates control over
one's degree of privacy. When identity is unbundled, it can be treated
as a set of individual traits rather than one integrated bundle of
traits. Identity is bundled perfectly when all of one's traits are
grouped together. Identity is unbundled partially when small groups of
individual traits are packaged together. Identity is unbundled
completely when it is divided into a set of individual traits treated
separately. When identity is partially or completely unbundled,
entities may be identified by one or a combination of traits, rather
than by a complete set of traits. Thus, Type I unbundling facilitates
control over the degree to which one's identity is revealed. Indeed,
it facilitates choice regarding which and how many elements of
identity to reveal.
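A minimal sketch of Type I unbundling, with a hypothetical trait set and a
disclosure function chosen purely for illustration:

    # Fully bundled identity: every trait grouped together (hypothetical data).
    identity = {
        "name": "Alice",
        "age_over_18": True,
        "licensed_driver": True,
        "hair_color": "brown",
    }

    def disclose(identity: dict, requested: list) -> dict:
        # Reveal only the traits a particular transaction actually needs.
        return {trait: identity[trait] for trait in requested if trait in identity}

    # A merchant selling age-restricted goods needs exactly one trait, not the bundle.
    print(disclose(identity, ["age_over_18"]))  # {'age_over_18': True}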
Anonymity: Type II Unbundling
Type II unbundling of identity refers to the ability to control the
strength of the link between our cyberspace and real world identities
and to the corollary that a set of traits need not be bound to a
single "real world" person. The ability to achieve Type II
unbundling facilitates control over one's degree of anonymity. Indeed,
with Type II unbundling, we can have a strong link, weak link, or no
link at all between our cyberspace and our real world
identities. Where there is a strong link between these identities, we
are the person that we represent ourselves to be in
cyberspace. However, as the link between our cyberspace and real world
identities becomes more tenuous, the malleability of our cyberspace
identity increases. Taken to its extreme, unbundling enables us to cut
the link between real world and cyberspace identity, leaving us with
two completely different identities: one in the real world and one in
cyberspace (see figure). Thus, true anonymity
is possible in cyberspace.
The Technology
The key point is that new technology facilitates unbundling of
identity information. The new technology to which I refer is digital
communication, and, more precisely, the digital certificate. For
purposes of this section of the paper, it is sufficient to note that
digital certificates enable us to make credible assertions about
ourselves in cyberspace. Certificates can be used to verify a trait or
to verify actual identity. For example, I can present my "I am
over 18" certificate to verify that I am eligible to vote, and I
can present my "I am Melanie" certificate to verify my
identity. The interesting point is that digital certificates enable us
to make credible assertions both about our traits and about our
identities, and thereby facilitate the Type I and Type II unbundling
that provide a degree of control over privacy and anonymity.
The Promise of Unbundling Revisited
In the real world, there are a number of obstacles to unbundling. In
cyberspace, however, it is possible to unbundle identity to a degree
heretofore quite difficult, if possible at all, to achieve
in the real world. With the technical ability to unbundle, we
theoretically are able to choose any degree of privacy or anonymity
represented along the two spectra, including perfect privacy and
perfect anonymity.
In our times, people are often willing to make drastic
changes in the way they live to accord with technological innovation;
at the same time, they would resist similar kinds of changes justified
on political grounds. If for no other reason than that, it is
important for us to achieve a clearer view of these matters than has
been our habit so far.
IV. Anonymity vs. Accountability
Crime
Political theorists have long been in the business of trying to form
the perfect society: one in which the people are content and
prosperous and there is no crime. Approaches to developing this
society have varied, from Hobbes' tyrannical world to Marx's
non-governmental communist scheme. However, in all cases a central
issue is order: keeping people within the society from harming one
another, or preventing crime. In order to prevent crime, society
creates a regulatory body designed to enforce laws. Law enforcement
functions by providing disincentives to breaking the law and a system
of procedures to deal with those who do. Law enforcement protects
people from criminals and criminal activity; however, almost every
political philosopher (excepting Hobbes) has recognized the interest
society has in protecting itself from law enforcement. That is, law
enforcement must be given a certain amount of power in order to
prevent people from committing crimes and to punish those who do
commit crimes; however, because law enforcement itself must be
composed of members of society, an attempt must be made to avoid the
abuse of power. This is the classic conflict between liberty and
order.
The Internet
Scott Charney and Kent Alexander outline the several ways in which
computers can be involved in crime. In all cases a harm is committed
which would be considered a crime in the real world, except that a
computer is involved in some way. The scenarios involve computers A
and B. In the first case, data on computer A is stolen, erased, or
damaged. In the second case, computer B is used to commit a crime:
this can be either a traditional crime, or a crime involving a victim
computer, "A," as in the first case. The third case is one
in which a computer is not a target or source of a crime, but contains
evidence that a crime was committed, or was used in planning the
crime.
Link and No-Link: An Architectural Choice
As identified earlier, any digital identification system must
determine where to lie upon the continuum of anonymity and
accountability; that is, a system must adopt an appropriate degree of
Type II unbundling. However, within the context of law enforcement it
becomes clear that not all points along this continuum are equal. One
point is very different from all the others: the point at the far end
of the spectrum where there is absolutely no traceability. For the
sake of clarity in future discussion, this point will be called
"no-link." At the no-link point, there exists within the
digital identification architecture no mechanism for determining the
link between data in cyberspace and the real world recipient or
sender. The no-link point implies only that there is no mandatory
link between cyberspace and the real world; this does not preclude an
additional, non-mandatory method of determining identity that could be
layered on top of the no-link architecture. All other points along the
spectrum will be designated as "link" points. This indicates
that there is some mandatory architectural mechanism for determining
the real world identity of the sender and receiver of data in
cyberspace.
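The architectural difference can be sketched as a single optional field in
the data that accompanies each transmission; the record below is a
hypothetical illustration in Python, not a proposed wire format.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Transmission:
        payload: bytes
        sender_pseudonym: str
        # Under a "link" architecture this handle is mandatory and can be resolved,
        # under appropriate legal process, to a real world identity. Under a
        # "no-link" architecture the field simply does not exist (it is always None).
        trace_handle: Optional[str] = None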
No Link
The benefits of a no-link system are, as mentioned above, those
pertaining mostly to issues of freedom of speech and freedom of
action. In the commercial domain, the wheels of capitalism are greased
by the no-link architecture. People who have no fear of ever being
personally associated with what they buy are far less likely to be
concerned about the social norms which might have previously
restricted them from purchasing a product. Unbundling facilitates the
necessary degree of identification that commerce will require without
necessitating the revelation of the entire real world identity. Free
speech is likewise assisted by the absence of traceability: where
potential oppressors are unable to determine the sender's real world
identity, there is no danger of oppression.
Link Architecture
No-link architecture provides protection from McCarthyism. But in so
doing it removes all accountability from speech. It is an architecture
that completely eliminates the power of social norms, market
regulation, and legal regulation to govern interaction on the
Internet. Society should not overlook the more general consequences
that may result from the ability to avoid accountability in all
speech, especially speech which would not be considered criminal:
people may routinely and without concern spout inaccurate and
misleading information, and responsibility may disappear even further
from the moral landscape. However, the aspects which can be most
clearly identified and discussed are those which result in criminal
behavior.
Preventing Crimes
The issue then becomes one of preventing crime, while simultaneously
attempting to mitigate this potential "chilling effect" on
free speech. At the heart of this discussion lies the distinction
between transactional information and content information.
Transactional information concerns the sender, the recipient, and
other data associated with the transmission, but not the content of
the message itself. Thus far the argument has centered around transactional
information; however, the value of content to law enforcement must be
considered: if it is absolutely necessary to have content as well as
transactional information, then it will do no good to consider
offering the latter without the former. If, on the other hand,
transactional information without content is a tool that can be
utilized, it may represent an effective compromise between the needs
of law enforcement and the desires of society.
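To make this distinction concrete, here is a minimal sketch - not drawn
from the paper, and using illustrative field names - of a transmission
record that keeps transactional information separable from content:

from dataclasses import dataclass

@dataclass
class Transmission:
    sender: str         # transactional: who sent the message
    recipient: str      # transactional: who received it
    timestamp: str      # transactional: when it was transmitted
    content: bytes      # content: the message itself, possibly encrypted

    def transactional_record(self) -> dict:
        # Everything except the content can be produced as a separate unit,
        # e.g., in response to an authorized law enforcement request.
        return {"sender": self.sender,
                "recipient": self.recipient,
                "timestamp": self.timestamp}

Under such a scheme, disclosing the transactional record reveals who
communicated with whom and when, but not what was said.
Implications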
The negative implications of choosing the link system are clear: it
may place an unreasonable burden on free speech. Even if it is not
unconstitutional in this manner, it may simply deter people from
speaking out in situations where their voices would be most useful. In
order to convince society that its interests in avoiding unreasonable
persecution are maintained, the architectural decision to include link
must be combined with legal regulations regarding who is given
sanction to disclose the link, and under what circumstances such
disclosure is acceptable. While the negative impacts of providing a
link with all transmitted data can never be fully accounted for, the
goal of a system which provides an architectural link must be to
mitigate the impact of the architecture as fully as possible.
V. Mandatory Authentication Mechanism - Constitutionality
Introduction
Traceability refers to the existence of a link between a cyberspace
identity and a real world identity. Mandatory traceability refers to
the requirement that such a link be created and maintained. Mandatory
traceability generally takes one of two forms. First, to achieve
mandatory traceability, one might require identification as a
precondition of speech in cyberspace. Alternatively, one might require
identification as a precondition of access to cyberspace. We briefly
consider and reject the former and then consider the constitutionality
of the latter form of traceability in detail.
Traceability as a precondition for speech
One way to accomplish traceability is to require identification as a
precondition of speech (e.g., by outlawing the transmission of
anonymous messages from a particular access provider, from a class of
providers, or in a network). This facial form of traceability would
require a user openly to attach his identity to his message, and, as
one commentator points out, would withstand strict scrutiny only in
cases in which the identity requirement implicated "other
overriding constitutional rights, such as voting rights or property
rights." When this type of identification requirement is designed
simply to combat the risks of fraud and corruption on the electoral
process, it will not survive constitutional scrutiny. Further, such a
broad ban on anonymous speech is unlikely to survive; indeed, an
identification requirement in this form would bar the use of services
such as anonymous re-mailers, and likely would be struck down. We do
not propose a system of digital identification in which traceability
is a precondition for speech because the technology available permits
us to craft a narrower identification requirement which would better
serve the state's legitimate law enforcement ends.
Traceability as a precondition for access
Identification could be required as a precondition for access to
cyberspace using law, technology, or a combination thereof. Using law,
the government potentially could regulate access to cyberspace whether
access is obtained through government-subsidized or private ISPs. In
the case of private ISPs, the government could require the ISPs not
only to require identification as a precondition to access, but also
to keep logs of cyberspace users linking their cyber-aliases to their
real world identities. Further, the government could provide ways in
which legal process could be used to compel private ISPs to respond to
authorized law enforcement requests for identity information.
Constitutionality of Traceability as a Precondition for Access
We now consider the constitutionality of requiring traceability as a
precondition for access to cyberspace. In order to allow an individual
to maintain full control over privacy and anonymity in the
cyber-environment, yet ensure the availability of real world
accountability in limited circumstances, we imagine the implementation
of a flexible digital identity system conjoined with a system of
mandatory traceability. This section will grapple with the
constitutionality of such a digital identity system, focusing on the
First Amendment right to anonymity and the Fourth Amendment right to
privacy and protection against unreasonable searches and seizures. For
the remainder of this section, we will use the designator "system
of mandatory traceability" to refer to a fully flexible digital
identity system that simply creates and maintains a link between one's
real and cyber-identities.
Fourth Amendment Analysis
Introduction: The Driver's License Analogy
Before entering into our doctrinal analysis of mandatory traceability
under the Fourth Amendment, we would like to identify an analogy that
is useful in thinking about the constitutionality of a traceability
requirement. Consider the driver's license: the small, plastic card
that people carry around in real space in order to signify that they
are qualified to operate a motor vehicle. One's driver's license
contains information that identifies the holder of the license: name;
birthdate; sex; height; weight; identification number; and picture. In
addition, the card has verification features that demonstrate that the
information on the card has been certified by an authorized, trusted
authority, the DMV. For example, in order to verify the validity of
the license, it is laminated, it may have holograms that are difficult
to forge, and it may include signatures or other indicia that the
state attests to the truth of the information on the card.
Introduction to Fourth Amendment Doctrine
The Fourth Amendment to the United States Constitution provides:
"The right of the people to be secure in their persons, houses,
papers, and effects, against unreasonable searches and seizures, shall
not be violated, and no Warrants shall issue, but upon probable cause,
supported by Oath or affirmation, and particularly describing the
place to be searched, and the persons or things to be seized."
The plain words of the Amendment indicate that searches and seizures
are subject to a "reasonableness" requirement, and that
probable cause is required for a warrant to search or seize. In
interpreting the amendment, courts constantly have struggled both to
determine what constitutes a search subject to the warrant
requirement, and to interpret "reasonableness."
What constitutes a search
As the Court struggled to define the scope of the Fourth Amendment, it
needed to determine just what constituted a search. In the early
twentieth century, the Supreme Court's Fourth Amendment jurisprudence
was geared toward the protection of property. The Court's inclination
to protect property quite clearly is reflected in its 1928 decision in
Olmstead v. United States (277 U.S. 438 (1928)). In
Olmstead, the Supreme Court held that use of a wiretap to
intercept a private telephone conversation was not a
"search" for purposes of the Fourth Amendment. One of the
grounds on which the Court justified its result was that there had
been no physical intrusion into the person's home. Under
Olmstead's narrow view of the Fourth Amendment, the amendment
was not applicable in the absence of physical intrusion. Thus, without
trespass or seizure of any material object, surveillance was beyond
the scope of the Fourth Amendment as interpreted by the Olmstead
Court. The Court later abandoned this trespass-based approach in Katz
v. United States (389 U.S. 347 (1967)), holding that the Fourth
Amendment protects people, not places, and extends to communications
in which a person has a reasonable expectation of privacy.
Reasonableness Test
The Court has struggled not only to define the scope of Fourth
Amendment searches and seizures, but also to apply the "probable
cause" requirement. Originally, it was assumed that
"probable cause" was required for every law enforcement
activity that constituted search or seizure. Subsequently, on
suspicion short of probable cause, the Court permitted certain
searches and seizures on the basis that they were reasonable - that law
enforcement interests warranted a limited intrusion on the personal
security of the suspect.
Constitutionality of Mandatory Traceability Under Current Fourth Amendment Jurisprudence
Applying Katz
In a system with mandatory traceability, the government will be able,
under certain limited circumstances, to obtain an individual's real
world identity. If the user's identification information is encrypted
for protection, the government will require the decryption key.
Eliciting the decryption key, which an individual seeks to preserve as
private, likely would be considered a seizure in accordance with the
privacy-based rationale articulated in Katz. Similarly, even if the
user's identification information is not encrypted, but simply is
stored by a certificate authority or other trusted authority, the
government may be conducting a search/seizure subject to the
strictures of the Fourth Amendment when it attempts to elicit the
identity information. Therefore, we must determine whether or not such
searches are constitutional under the reasonableness test.
Applying the Reasonableness Test
One way to ensure that a search passes the reasonableness test is to
condition such a search on the presence of a warrant based on a
showing of probable cause. Where a decryption key (if required) and a
person's identity will be revealed only in cases of individualized
suspicion (i.e., cases in which there is a valid warrant), the search
clearly would be deemed reasonable under our current
jurisprudence. Indeed, one's identity would be revealed only in those
cases in which law enforcement has the right to access that identity
and the person has no right to conceal his identity.
Constitutionality of Traceability Under Potential Modifications to the Reasonableness Test
While the traceability requirement likely would pass the Fourth
Amendment's "reasonableness" test, this test has come under
criticism from some scholars. For example, Michael Adler argues that
the current "reasonableness" test does not protect important
interests that were protected under the old "property-based"
standard, and therefore suggests that we modify the test to include
these interests. We now consider two proposed modifications to the
"reasonableness" test based on two distinct policy
interests.
First Amendment: The right to anonymity
Although the Supreme Court is best known for its protection of free speech,
it also protects rights ancillary to free speech including the right
not to speak. The constitutionality of a system of mandatory
traceability depends upon the scope of the limited right to anonymity
that the Supreme Court has carved out for First Amendment
protection.
Mandatory Traceability Under the Compelled Speech Doctrine
When properly implemented, a digital authentication system that
facilitates traceability will permit the government to access identity
information only in those circumstances in which the government has
shown proper cause. Under the proposed system of mandatory
traceability, the public will not be able to link a person's real and
cyberspace identities; only authorized law enforcement officials will
be able to do this, and only in accordance with proper procedures. As
the Supreme Court explained in NAACP v. Alabama ex rel. Patterson (357
U.S. 449 (1958)):
On past occasions revelation of the identity of [the
NAACP's] rank-and-file members has exposed these members to economic
reprisal, loss of employment, threat of physical coercion, and other
manifestations of public hostility. Under these circumstances, we
think it apparent that compelled disclosure of [NAACP's] Alabama
membership is likely to affect adversely the ability of petitioner and
its members to pursue their collective effort to foster beliefs which
they admittedly have the right to advocate, in that it may induce
members to withdraw from the Association and dissuade others from
joining it because of fear of exposure of their beliefs shown through
their associations and of the consequences of this
exposure.
Conclusion
Under the current interpretation of the First and Fourth Amendments to
the Constitution, a traceability requirement likely would be deemed
constitutional. In particular, we propose a flexible system of digital
identity in conjunction with a mechanism that implements traceability
in which a warrant is required for access to identity information. A
warrant requirement would ensure that our system will meet the
standards of the Fourth Amendment, and even in the absence of a
warrant requirement, the trace likely would be held
"reasonable." In addition, the ability to maintain the full
spectrum of privacy and anonymity with respect to all but properly
authorized law enforcement officials should enable a mechanism for
traceability to survive scrutiny under the First Amendment.
VI. Technology
Introduction
This section provides an overview of the technical issues concerning
the development of the digital identity architecture proposed in this
paper. The section also explores a possible method for implementing
the traceability feature that law enforcement might desire in a
digital identity architecture.
Digital Certificates
The cornerstone of any digital identity scheme is a method for
authenticating people and messages. It is important to be able to
authenticate people, confirming characteristics about them such as
their name, age, citizenship, or other relevant details. It is also
important to verify messages, confirming that the person or server
from which you are supposed to be receiving information is actually
the entity sending the information.
Using Digital Certificates
Let us consider a typical interaction using a digital
certificate. Suppose Bob sends Alice a digitally signed message. To
authenticate Bob's transmission, Alice must have Bob's public key. Bob
needs a method to send Alice his public key that would guarantee to
Alice that it is his public key. Certificate authorities can solve
this problem. Bob can have his certificate authority sign his public
key. When a public key is combined with a trait and then the
combination is signed by a CA, the item becomes a
"certificate." Because his CA is well known, Alice's
computer knows the CA's public key and can verify the CA's
signature. When Alice's computer receives the digital certificate and
checks the CA's signature, she can trust the information within it and
extract Bob's public key. Alice can now verify that Bob sent her the
message, and she can send encrypted information back to Bob.
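As a minimal sketch of this exchange - assuming the third-party
pyca/cryptography package, and simplifying the certificate to a signed
pairing of a trait with a public key rather than any standard format -
the interaction could look like the following:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def verify(public_key, signature, data):
    try:
        public_key.verify(signature, data, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Keys for the CA and for Bob (parameters are illustrative).
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The CA combines a trait with Bob's public key and signs the combination:
# this signed pairing plays the role of the "certificate."
cert_body = b"name=Bob|" + bob_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
ca_signature = ca_key.sign(cert_body, PSS, hashes.SHA256())

# Bob signs a message and sends the message, his signature, and the
# certificate to Alice.
message = b"Hello Alice"
bob_signature = bob_key.sign(message, PSS, hashes.SHA256())

# Alice already knows the CA's public key, so she checks the CA's signature
# on the certificate, extracts Bob's public key from it, and then checks
# Bob's signature on the message.
assert verify(ca_key.public_key(), ca_signature, cert_body)
bob_public_key = serialization.load_pem_public_key(cert_body.split(b"|", 1)[1])
assert verify(bob_public_key, bob_signature, message)

With Bob's verified public key in hand, Alice could also encrypt a reply
that only Bob can read.
Digital Certificates and the Unbundling Paradigm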
The digital certificates of the status quo do not facilitate
unbundling. Today, digital certificates are issued with all
information in one certificate, and users are relied upon to keep
their certificates from being stolen or used by
others. Additionally, most services only accept certificates from one
certificate authority. Both of these limitations prevent current
digital certificates from being used as an extensive digital
identification scheme. Digital certificates can reap the benefits of
unbundling, however, if they are designed with this capability in
mind.
Securing Digital Certificates
To protect digital certificates, browsers or software applications
could support a password feature. This allows users to keep other
users of their computer from stealing their identity. This password
protects the private key, and must be kept secret. If the password is
compromised, the private key is compromised, and the digital identity
is compromised.
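A minimal sketch of this password feature - assuming the
pyca/cryptography package and a hypothetical password - would store the
private key only in encrypted form:

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The application writes the key to disk encrypted under the password, so a
# copied key file alone does not expose the digital identity.
stored = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"user-chosen password"),
)

# Only someone who supplies the password can recover the key for signing.
recovered = serialization.load_pem_private_key(stored,
                                               password=b"user-chosen password")

Anonymous Certificates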
The digital certificate regime described above facilitates unbundling,
which is essential to maintain privacy in cyberspace. Cyberspace
currently allows relatively anonymous usage, but once identity must be
confirmed, all privacy is removed. The current scheme for confirmation
is a credit card. A credit card releases information such as your name
and billing address. When having Amazon.com ship a book to your home,
this is adequate. However, when the desire is to prove your name to
somebody in a chat room, or prove your age to purchase alcohol
products, this system clearly releases more information than
necessary. The digital certificate regime described in this paper can
also be used to create unbundled certificates that link to
pseudonyms.
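A minimal sketch of such an unbundled, pseudonymous certificate - again
assuming the pyca/cryptography package, with an illustrative JSON body -
binds a single trait to a pseudonym's public key and nothing more:

import json
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pseudonym_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The certificate body names the trait being attested and the pseudonym's
# public key; no real world name appears anywhere in it.
body = json.dumps({
    "trait": "age_over_21",
    "pseudonym_public_key": pseudonym_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo).decode(),
}).encode()
signature = ca_key.sign(body, PSS, hashes.SHA256())

# A merchant who trusts the CA can confirm the trait without ever learning
# the buyer's name; this call raises InvalidSignature if the body was altered.
ca_key.public_key().verify(signature, body, PSS, hashes.SHA256())

Using Digital Certificates for Traceable Anonymity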
It is very difficult to develop a guaranteed traceability feature for
digital identity architectures. Systems that would store your real
world identity in a certificate encrypted with a law enforcement
public key have in the past been considered politically unviable. Each
of the players in a transaction - the CA, the user, and the web site -
could collude to avoid technologies or procedures that enable the
government's trace, unless a trusted system with the trace built in
is used. However, a viable option exists that could meet the
government's needs in a large percentage of transactions.
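For concreteness, here is a minimal sketch of the kind of
escrowed-identity design the paragraph describes - assuming the
pyca/cryptography package and a hypothetical escrow key held subject to
the legal process discussed earlier:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The escrow key pair; only its private half can reveal the link.
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# At issuance, the CA encrypts the real identity under the escrow public key
# and attaches the resulting blob to an otherwise pseudonymous certificate.
escrowed_identity = escrow_key.public_key().encrypt(
    b"Bob Smith, 123 Main St", OAEP)

# Neither the merchant nor the CA can read the blob; only the escrow key
# holder, after satisfying the required legal process, can decrypt it.
revealed = escrow_key.decrypt(escrowed_identity, OAEP)
assert revealed == b"Bob Smith, 123 Main St"

Because every party would have to cooperate in attaching such a blob, the
players could simply agree to leave it out - the collusion problem noted
above.
Certificate Authorities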
One advantage of a digital certificate based approach is the
decentralized nature of certificates. With a trusted system, a single
system must be established, and it is proprietary in nature. Digital
certificates, like public key encryption, rely on a private key that
is kept secret and on a public key and certificate that can be
distributed publicly.
Completing the Picture
By utilizing existing technology and the principles of unbundling,
privacy, and accountability, we see the opportunity for a real,
working digital identity scheme. A collection of protocols can govern
these certificate exchanges, successfully protecting the private
identities of citizens, while allowing other entities to confirm
necessary traits on-line. A possible architecture would involve many
servers communicating and protecting the privacy of their users.
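As one illustration of what such a protocol could look like - a sketch
assuming the pyca/cryptography package, with a deliberately simplified
certificate - a verifying server can issue a fresh challenge, and the
user can answer with the certificate plus a signature over that challenge:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def verify(public_key, signature, data):
    try:
        public_key.verify(signature, data, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Setup: the CA has already signed a certificate body for the user.  (In a
# full design the body would also carry the user's public key.)
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
cert_body = b"trait=age_over_21"
cert_signature = ca_key.sign(cert_body, PSS, hashes.SHA256())

# 1. The server issues a fresh nonce so that old responses cannot be replayed.
nonce = os.urandom(16)

# 2. The user returns the certificate together with a signature over the
#    nonce, proving possession of the key the certificate speaks for.
nonce_signature = user_key.sign(nonce, PSS, hashes.SHA256())

# 3. The server accepts the claimed trait only if both signatures check out.
accepted = (verify(ca_key.public_key(), cert_signature, cert_body) and
            verify(user_key.public_key(), nonce_signature, nonce))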
VII. Business Aspects of Digital Identity
Introduction
The business world is also very interested in the development of a
digital identity authentication system. Electronic commerce, with its
ability to market and sell products directly to the consumer at lower
costs, promises to become a key channel for retail in the twenty-first
century. The success of initiatives such as Amazon.com has caused a
flurry of development of World Wide Web storefronts for traditional
retailers of products ranging from books to computers. Taking a cue
from the work of direct marketers, corporations have begun to
recognize the Internet's potential to facilitate the tailoring of the
online storefront to each individual customer. Purchase a book from
Amazon.com, for example, and on subsequent visits to the web site you
will be presented with the titles of other books that you might like
on subjects similar to the one that you purchased or other works by
the same author. Such tailoring depends on the ability to connect
bits of information to the identity of the visitor. This section
examines how the proposed digital authentication mechanism might be
put to use for commercial activities by businesses and consumers.
A Note on Architectural Choice
Previous sections of this paper examined whether the architecture of a
digital authentication mechanism should be designed to permit
traceability. Although the discussion focused on how traceability on
the Internet would meet the needs of the government in carrying out
its law enforcement function, it should be noted that businesses also
have an interest in the development of an architecture with such a
feature. Many corporations have established intranets to facilitate
communication between the various divisions of their companies.
Traceability in the architecture would help the leadership of a
business monitor the activities of its employees. Monitoring of this
sort might be motivated by a desire to track the productivity of
individual workers or a need to ensure that procedures designed to govern
access to the company's sensitive information are followed. The
development of an architecture for the Internet that included
traceability would provide a standard that could be adopted for
corporate internal networks, without the associated research and
development costs.
The Business-Consumer Relationship
Both consumers and businesses stand to benefit from the implementation
of a digital authentication mechanism similar to the one proposed in
this paper, despite the sometimes conflicting interests of these
parties. This subsection examines how consumers and businesses
might choose to use a digital authentication mechanism and what they
might layer on top of this base architecture in order to better meet
their interests. The discussion is separated into two parts,
examining first those issues stemming from the ability to authenticate
information and then those issues that proceed from the unbundling of
identity information that the digital medium allows.
The Ability to Authenticate Information
The ability to send authenticated information, and the other party's
ability to verify this authentication, is valuable to both consumers
and businesses. Consumers might wish to demand that a corporate web
site provide them with a digital certificate in order to prove that the
web site truly belongs to the corporation. Although many consumers do
not question the authenticity of the web sites they visit, there have
been reported incidents of individuals using web sites to commit
fraud. These web sites may be designed to mimic the web site of a
legitimate corporation or may purport to represent the web storefront
of a business that in fact does not exist. The practice of confirming
that the web site is indeed the web site of the business it seems to
represent will grow in importance as consumers purchase a larger
percentage of their goods and services via the Internet. Under the
architecture proposed in this paper, a business could make available
on its web site a digital certificate, for example, signed by the
Better Business Bureau Online or by the CEO of the corporation. The
consumer would download this certificate and verify the signature
before interacting further with the web site.
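A minimal sketch of that check - assuming the pyca/cryptography package,
an RSA-signed certificate, and hypothetical PEM files for the storefront
and for the trusted signer - could look like this:

from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def storefront_is_vouched_for(site_cert_pem: bytes, signer_cert_pem: bytes) -> bool:
    site_cert = x509.load_pem_x509_certificate(site_cert_pem)
    signer_cert = x509.load_pem_x509_certificate(signer_cert_pem)
    try:
        # Check the trusted signer's signature over the site certificate.
        signer_cert.public_key().verify(
            site_cert.signature,
            site_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            site_cert.signature_hash_algorithm,
        )
        return True
    except InvalidSignature:
        return False

Only if this check succeeds would the consumer's software go on to submit
an order or any personal information.
The Ability to Unbundle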
One of the advantages that the digital medium provides is the ability
to unbundle portions of your identity that otherwise would be
connected. In real space, a customer purchasing a book in a store
reveals more than just his desire to own that particular novel and his
ability to pay the store's price. His physical presence in the
store reveals his gender, his height, and his taste in clothing.
As a digital medium, cyberspace promises to allow each of these
aspects of identity to be represented separately and displayed to
others only when needed. This subsection examines whether market
pressures might prevent users from taking advantage of this benefit of
unbundling.
The Business-Business Relationship
The Internet is far more than just a place where businesses sell
their products to consumers. As a communications technology, the
Internet also facilitates communication between businesses. Today,
most corporations contract with other companies to supply them with
necessary components or raw materials for their products. For
example, the major automobile manufacturers outsource the construction
of many of the parts necessary for their vehicles. In such
situations, it is important for the corporation to be able to
communicate the specifications for the component to its supplier.
Often, this may require the company to permit the supplier to access
documents containing corporate trade secrets. Two companies
coordinating research and development efforts would need to exchange
such information frequently. The Internet provides a low-cost method
of delivering this information to the supplier. Because of the
sensitive nature of the information, however, the ability to
authenticate the identity of the individual attempting to access the
documents as the approved supplier is critical.
VIII. Social Aspects
Community in cyberspace is based on the interaction between people
Cyberspace has an important social aspect to it that must not be
overlooked. Ever since the ARPANET was created, its primary use has
been to communicate with other people. With the advent of a faster
backbone, different types of communication media became possible -
namely, interactive communications. Community in cyberspace is based
on the interaction between people.
Examples of Community
The Internet newsgroup is a prime example of community. A newsgroup is
a place where people with the same interests can post messages that
others can read and respond to.
Anonymity Today
It is not usually possible to conduct commerce anonymously online -
not for technical reasons, but for practical ones. The only good
currently available online that can be purchased anonymously is
software that one can download after purchase. Any other commerce
requires a real space address for the shipment of the goods.
Implications of Full Unlinking
Full unlinking provides what many consider to be the ideal
intellectual society - freedom of ideas is possible. Unlinking allows
thoughts to have a "life" of their own; thoughts gain
separation from the person. Ideas and concepts flow from person to
person without necessarily tying themselves to any one person.
Implications of Mandatory Linking
The other side of the spectrum must also be fully considered - the
mandatory linking of identity with each cyberspace transaction. In
other words, each e-mail sent, each newsgroup post, and each commerce
transaction will carry a digital feature that can be
connected back to the transactor. The government can identify the
originator of any newsgroup post - not simply the alias which posted
it, but the real world identity of the creator.
IX. Road to Implementation
From Here to There
The current state of cyberspace identification mechanisms is far from
the flexible, broad potential of the identity architecture we have
proposed. There is still a long way to go from the 'here' of the
Internet as it exists in 1998 to the 'there' of the ubiquitous, secure
identity architecture we have described. In order for the Internet to
reach its full potential, a secure mechanism for managing and
verifying digital identity is necessary. There remain a range of
hurdles to overcome before a cyberspace identity mechanism will be
deployed and ubiquitous. These hurdles can best be analyzed in four
categories: social norms, market, legal, and architectural barriers.
Social Norms Barriers
The main social obstacle to implementation of a cyberspace
identification mechanism is that the general public does not recognize
that there is a problem with the existing identification
architecture. The general public does not understand the need for an
improved, secure cyberspace identification system. Even without any
effective identification mechanism, the use of the Internet - for both
casual and secure applications - has soared, with double-digit growth
rates measured month-to-month rather than year-to-year. While more
sophisticated Internet users may recognize the need for a digital
identity mechanism, these advanced users represent a shrinking
percentage of the overall Internet "community." Many people using
popular Internet applications seem to be satisfied with the existing
levels of security and identification. E-mail, for instance, is often
self-identifying through the content of the message. Forged e-mail,
while easy to create in the current architecture, is not perceived to
be a major problem. E-mail eavesdropping, also a relatively simple
technical task, has not slowed the flood of e-mail
communications. On-line commerce is booming even based on systems
requiring credit card numbers and the overly revealing identification
that credit card numbers enable.
Market Barriers
The market barriers to the implementation of a secure Internet
identification system stem from the difficult business economics
inherent in solving this type of problem. One of the key problems is
that there is significant business model risk for companies providing
identity verification solutions. In other words, it is unclear exactly
how these companies can make money. In addition, economic incentives
do not encourage the development of an open-standard identity
infrastructure. Ultimately, success of an open-standard identification
architecture, such as our proposed system, may require government
intervention in the marketplace.
Legal Barriers
The most critical legal obstacle to the development and adoption of
any effective digital identity mechanism is the current confusion over
legal liability rules. In other words, who is responsible if someone's
digital identity is misused or stolen? Who bears the cost if a digital
identification mechanism is compromised? The lack of a clear legal
liability regime for these two issues discourages the cyberspace
identity market from emerging in the first place and from operating
efficiently once it does become widespread. Legislatures may need to
enact liability laws that cover digital identity before the identity
infrastructure can be effectively implemented.
Architectural Barriers
We have outlined the functionality and basic operation of a secure,
open standard for a digital identity verification architecture. Our
system would resolve many of the problems with existing identity
mechanisms and could be implemented to produce a much more secure
cyberspace environment. Our system is flexible enough that it could be
used to meet the requirements in a wide variety of settings, from
social to business to government. Exactly how the certificates and
methods that we have described should best be implemented in the
marketplace depends on the needs of the particular identity
application and the choice of technology.
Appendix 1: Legal Process For Law Enforcement Access To The Trace
Congress will have to develop statutes to govern the conditions under
which law enforcement agencies can access the trace feature, should
such a feature be included in the digital identity architecture.
Appendix 2: Digital Signature Legislation
State Digital and/or Electronic Signature Legislation
For purposes of this discussion, it is important to distinguish
between digital and electronic signatures. A digital signature is
"an electronic identifier that utilizes an information security
measure, most commonly cryptography, to ensure the integrity,
authenticity, and nonrepudiation of the information to which it
corresponds." In contrast, an electronic signature refers to
"any identifiers such as letter, characters, or symbols,
manifested by electronic or similar means, executed or adopted by a
party to a transaction with an intent to authenticate a
writing."Prescriptive Model
In 1991, the American Bar Association's Information Security Committee
began to draft a model law for digital signatures. Four years later,
there was still significant disagreement within the Committee over key
components of the model legislation, so, in the summer of 1995, the
Committee released its work in the form of guidelines. The ABA's
Digital Signature Guidelines are prescriptive - they attempt "to
delineate a comprehensive scheme for the recognition of digital
signatures in a PKI environment utilizing state-licensed certification
authorities." These prescriptive guidelines became the basis for
much state digital signature legislation.
Criteria-based model
Other states have adopted criteria-based legislation that requires
signatures to satisfy certain criteria of reliability and security in
order to be legally binding. For example, under California's
criteria-based law, an electronic signature is legally effective if it
satisfies the following criteria: it is unique to the person using it;
it is capable of verification; it is under the sole control of the
person using it; it is linked to the data in such a manner that if the
data is changed the signature is invalidated; and it is in conformity
with regulations adopted by the appropriate state agency (the
Secretary of State).
Signature-enabling legislation
Finally, signature-enabling legislation, such as that passed in
Florida and Massachusetts, permits any electronic mark that is
intended to authenticate a writing to satisfy a signature
requirement. For example, the Florida Electronic Signature Act of 1996
provides that, unless otherwise provided by law, an electronic
signature may be used to sign a writing and shall have the same force
and effect as a written signature. Under the statute, "electronic
signature means any letters, characters, or symbols, manifested by
electronic or similar means, executed or adopted by a party with an
intent to authenticate a writing," and a writing "is
electronically signed if an electronic signature is logically
associated with such writing."
Trends
States are now experimenting with "hybrid" digital signature
legislation that combines aspects of two or more of the above
approaches. The recent trend in state digital signature legislation
has been toward legislation that not only removes barriers to
electronic commerce, but affirmatively enables electronic commerce and
establishes evidentiary presumptions in favor of the electronic
signature user based on security and trustworthiness standards.
National Conference of Commissioners on Uniform State Law (NCCUSL) Draft Laws
The NCCUSL is working to develop uniform state laws of electronic
commerce. Two major NCCUSL projects of interest include the draft
revision of Article 2B of the Uniform Commercial Code (expected to be
approved in July of 1999 and enacted by the states in 2000), and the
March 23, 1998 draft Uniform Electronic Transactions Act (UETA)
(scheduled for final approval in the summer of 1999).
NCCUSL Efforts: The Uniform Electronic Transactions Act (UETA)
The Clinton Administration's "Framework for Global Electronic
Commerce," released July 1, 1997, emphasized that electronic
economic activity currently is conducted in an atmosphere of legal
uncertainty. In particular, the report highlighted existing
uncertainties regarding the validity of electronic records and
documents used to evidence commercial transactions and
relationships. To address these legal uncertainties, the
administration's Framework called for the development of a Uniform
Commercial Code for Electronic Commerce. In response, the NCCUSL began
drafting the Uniform Electronic Transactions Act (UETA) which is
intended to create a basic legal structure recognizing and
effectuating electronically generated records and signatures. The UETA
is scheduled for final NCCUSL approval in the summer of 1999.
Scope of the UETA
The goal of the committee working on the UETA is to "create a law
that will be adopted by the 50 states to govern several key aspects of
electronic commerce," however, there recently has been
considerable debate over the appropriate scope of the UETA. In
particular, the UETA Drafting Committee was struggling to decide
whether the Act merely should remove existing barriers to electronic
commerce or should go further and actively promote electronic
commerce. There was agreement that the UETA should remove existing
barriers to electronic commerce, including uncertainty over whether or
not electronic messages are "legal" [i.e., satisfy statutes
and regulations requiring transactions to be "in writing"
and "signed"] and whether or not electronic records are
admissible. However, there was disagreement over whether or not the
UETA should establish legal presumptions (e.g., the law would presume
the authenticity and integrity of a message if certain types of
security procedures were implemented) in order to facilitate reliance
on electronic messages. The Drafting Committee recently voted to
remove all references to presumptions in the UETA, reflecting a focus
on the removal of existing barriers to electronic commerce, rather
than on affirmative facilitation of electronic commerce.
Provisions of the UETA
Several provisions of the draft law reflect the idea that electronic
media should be treated on a par with written media. One such
provision holds that "[a] record may not be denied legal effect,
validity, or enforceability solely because it is an electronic
record." Indeed, where "existing law requires a record be in
writing in order to be enforceable, the UETA simply provides that the
existence of an electronic record satisfies that requirement."
[Table omitted: state digital signature legislation, listing title,
citation, sponsor, and status of legislation; provisions; and
certification authority.]