To demonstrate this, we first explore the history of encryption challenges and the use of distributed computing to crack them, and combine that history with trends in computing to estimate the future potential of distributed techniques. We then explore scenarios in which distributed computing methods could be misused to illegitimately crack encryption keys, demonstrating that the threat of distributed cracks is not an empty one. Finally, we present the key vulnerabilities in modern encryption systems which are most susceptible to attack by distributed cracking methods, and which allow even the small number of cracks possible with such methods to be put to devastating use. The threats presented by distributed computing are not impossible to avoid, and we also present ways to reduce or eliminate them, along with the difficulties inherent in doing so. The matter thus illuminated deserves serious thought as computing networks develop, and must be satisfactorily resolved if those networks are to take their prominent place in the daily business of society.
Though these cracks had their desired effect of demonstrating shortcomings in available encryption standards (whether they will lead to a solution remains to be seen), they also demonstrated the potential of distributed techniques. What is notable is that they did so not on the huge scale of the later RSA challenges, but on a far smaller one. Both efforts used slightly more than 100 workstations, a number quite feasibly available to a user or sysadmin on an academic or corporate network, to crack an encryption system which even today is used to protect credit card numbers and other sensitive data. When later that year the crack results (including time) were matched on a single $100,000 parallel rendering workstation, it was a further demonstration of the potential of small-scale efforts, as well as of the rapid increase in power that is a constant of computing technology.
The efforts which have grown in response to the RSA Secret-Key challenges were more sophisticated than the thrown-together responses to the SSL challenge, and they continue to become more so. The DES Challenge was the first to fall, cracked by Rocke Verser's DESCHALL effort, one of several distributed efforts to make the attempt. There were also several distinct efforts to crack the RC5 challenges at key lengths up to 56 bits, but with the move to 64 bits the focus has been on Distributed.Net, a nonprofit organization which has grown out of the challenges. With its numerous key servers and thousands of participants (many acting as members of organized teams), Distributed.Net achieved a peak key search rate of approximately 7.2 billion keys per second during its work on the 56-bit RC5 challenge, which it won. The client software for the Distributed.Net efforts is freely available in versions optimized for most PC and workstation platforms, though source code and detailed design information are not. Its membership continues to grow, and its efforts continue, currently at a key rate of 17.2 billion keys per second directed toward the 64-bit RC5 challenge and the DES Challenge II mentioned below.
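The scale of these search rates can be put in perspective with a back-of-envelope calculation, sketched below using the rates quoted above (the figures in the code are taken from the text; the timing is a simple upper bound, not a measurement of any actual effort):

```python
# Back-of-envelope timing for exhaustive key search at the rates quoted
# above: 7.2 billion keys/s on the 56-bit RC5 challenge, 17.2 billion
# keys/s on the 64-bit effort. Worst case means sweeping every key.
def worst_case_days(key_bits, keys_per_second):
    """Days needed to sweep the entire keyspace at a fixed search rate."""
    keyspace = 2 ** key_bits
    return keyspace / keys_per_second / 86400  # 86400 seconds per day

rc5_56 = worst_case_days(56, 7.2e9)   # roughly 116 days
rc5_64 = worst_case_days(64, 17.2e9)  # roughly 12,400 days, about 34 years
print(f"56-bit at 7.2e9 keys/s: {rc5_56:.0f} days worst case")
print(f"64-bit at 17.2e9 keys/s: {rc5_64:.0f} days worst case")
```

The eight extra bits of the 64-bit challenge multiply the keyspace by 256, which is why even a substantially faster effort faces a search measured in decades rather than months.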
Another RC5 effort deserving of mention is the RC5 Java Breaking Effort, located at MIT. Though it is far smaller than the Distributed.Net effort, it is notable for the method by which it allows users to participate. Rather than distributing specialized clients for each possible workstation type, the effort uses a Java applet to perform the key search. Though this method suffers from the slowdowns and overhead inherent in Java, it is also completely platform independent, allowing a single piece of contest code to run on any client machine. Just as importantly, it makes it trivially easy for users to join the effort: they need only click on a link to access the applet page, and the applet will run the challenge code until it is terminated. This type of universality and ease of participation is sure to be a boon to distributed efforts in future, but it can also present its own risks, as will be discussed later.
Very recently (on January 13, 1998) RSA issued a second DES challenge, which only gives prizes for cracking efforts which beat the previous best time by at least 25%, with the prize amount increasing based on the improvement. Distributed.Net has created their first dual-purpose clients to allow simultaneous work on the DES Challenge II and the 64-bit RC5 challenge. Users choose which challenge to work toward (though Distributed.Net has stated DES as its current priority), and the client will automatically switch to the other when the first one ends.
Distributed.Net's plans look toward what may well be a very significant future for distributed computing. The increasing popularity of the Internet, and of computers in general, has been unmistakable in recent years. The number of personal computers is growing at a phenomenal rate, as is the number of those computers which are connected to some sort of local network, and usually at least indirectly to the Internet. An additional factor is the growth of the computational ability of modern computer technology, subject to the rule of thumb of a doubling of power approximately every eighteen months. With the combination of these factors it is clear that the sheer amount of computation power available in the world is rapidly increasing. As it has been since the decline in the popularity of mainframe systems, this computation power is largely scattered among individual workstations, but more and more of those workstations are now being connected together. The amount of computation power which goes unused in such a situation, taking into account the many PCs left idle on office desks every night, is truly phenomenal, and there is thus clear potential for putting those lost cycles to use in distributed computations.
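The eighteen-month doubling rule of thumb gives a rough sense of how quickly today's infeasible searches become feasible. As a toy projection (an illustration of the rule, not a prediction): each doubling of hardware speed offsets one bit of key length, so the eight-bit gap between the 56-bit and 64-bit challenges closes in eight doublings.

```python
# A rough projection from the eighteen-month doubling rule of thumb
# mentioned above: the 64-bit keyspace is 2^8 = 256 times larger than
# the 56-bit one, so at constant cost, hardware needs eight speed
# doublings before a 64-bit sweep is as quick as a 56-bit sweep today.
extra_bits = 64 - 56
doublings_needed = extra_bits     # one speed doubling offsets one key bit
years = doublings_needed * 1.5    # eighteen months per doubling
print(f"{doublings_needed} doublings, about {years:.0f} years at current trends")
```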
It is quite feasible, therefore, that the future could see widespread use of distributed computing to reclaim unused time on workstations. In one of the most likely scenarios, corporations which have need of large computations could harness the desktop computers of their employees rather than spending more on specialized parallel machines or supercomputers. Specialized "farms" of rendering workstations are already used in the production of high-quality graphics, and as the graphics capabilities of the average workstation increase it becomes more and more feasible to replace or augment that function with distributed use of idle cycles. On a larger scale, it is quite possible that more public efforts, such as government-sponsored research projects, could be both public enough and popular enough to gather thousands of participants on the Internet for distributed efforts, as Distributed.Net has done for the RSA challenges. The use of contests and prizes as incentive may well continue, or there could be a move to a more general method of rewarding the donation of processing power, perhaps organized by Internet service providers in terms of discounted usage fees (a sort of Internet equivalent of frequent-flyer miles). In the most general case, it is quite possible that at some point in the future all, or at least many, of the world's personal workstations will be part of one or several massive general-purpose "distributed computers" which could then be used to solve computationally intensive problems which would now be considered completely infeasible. The possibilities, if not limitless, are at least vast and impressive.
Of course, all of this is only speculation. All of these misuses are possible, but there is no reason to suggest that they are actually occurring. For the most part, the priorities and interests (not to mention morals) of those organizing the distributed efforts keep such things from happening, as such a plan could only work for as long as all of those involved kept it secret. The other guard against such misuses is the very public nature of distributed cracking efforts. All that is required to put an end to misuse of such a public effort is for a single user to take the time to analyze the code or the network traffic of a client program. In situations like that at Distributed.Net where the source code and network protocols are not published such a task is somewhat more difficult, but there are enough resourceful, curious, dedicated, or bored people in the world of computers that someone would be likely to discover a misuse. If such a thing were to happen, then it would be sure to garner great public attention. The distributed effort in question would be likely to be shut down very quickly, and users would become more hesitant to join such distributed efforts in future. For the sake of the good that distributed efforts have done for the encryption field, then, it is best to hope that such a misuse never takes place. However, the potential does exist, and it may well just be a matter of time before someone is either resourceful or lucky enough to attempt it and succeed, at least for a short time or on a smaller scale.
Such techniques have quite foreseeable legitimate uses, such as dedicating a corporate network to some business-related computation in a way which is nonintrusive to users, but the potential and incentive for misuse is far greater. In addition, the safeguards which help to prevent misuse of public distributed efforts do not exist in this situation. Given a few hacking skills, it is relatively easy for a user to set up such theft of cycles in a way which would be difficult if not impossible to trace back to him. This is especially true on many academic networks, and even some corporate networks, where internal security is minimal. If even MIT, where security may not be at its tightest but is at least well designed and implemented by knowledgeable people, is susceptible to such things, then there can be little hope of completely avoiding such misuse of computing at institutions with fewer resources in money, manpower, and knowledge to dedicate to the security of their computing facilities.
Though the security holes used by the Internet worm have largely been patched, new holes are constantly discovered in today's electronic security systems, and patches for those holes always lag behind their discovery, especially at less maintained or lower-security sites which are slow to respond. Thus it is still quite possible that a program like the worm could again be released on the Internet and be as difficult to stop and trace as the original. Indeed, it could be far more difficult. While the bug in the original worm was the cause of its massive havoc, it was also the reason that the worm was discovered so quickly and that so much time and effort was dedicated to its eradication. Assuming a programmer who valued subtlety over destruction, a neo-worm could easily spread unchecked, and undiscovered, for a long while. With the growth of the Internet in the past ten years, the number of machines possibly infected could be phenomenal, especially if the neo-worm could infect non-UNIX operating systems such as Windows 95.
Once that access is gained, the uses to which it could be put range from the harmless technical demonstration which was the purpose of the original worm, to massive destruction of data. Somewhere in between, though, lies a quite solid possibility that such a neo-worm could be put to use not to steal or destroy data, but to steal cycles. Once infected, a computer could use its idle cycles to contribute to a distributed computation, while the worm remains safely hidden from view, carefully rationing its CPU usage so as to remain unnoticed, and modifying system software to foil detection efforts. The only major limitation of such a use would be the need for a central server to coordinate the computation, certainly a major technical problem in the general case, if the server is not to be revealed as soon as a single copy of the worm is discovered. In the case of such an easily parallelizable problem as brute-force cracking, though, it is likely to be a solvable one. In the simplest case, the server can itself exist on a hacked machine, and all the hacker need do is occasionally check that machine for found keys. If the server is found and shut down, the hacker can simply use periodic data on which keys the server had searched to set up another server to search the rest of the key space, with minimal loss of effort. More elegant solutions could hide the server behind multiple levels of redirection, or perform purely through communication between infected machines, contacting a server only in the event that the key is found. In fact, the infected machines could just as easily be instructed to search the key space in a random order, thus completely eliminating the need for coordination, while still (probabilistically) allowing the key to be found without too much delay. The RC5 Java Breaking Effort, in fact, uses this randomized search approach to save in coordination hassles, so there is no reason that a hacker could not do the same.
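The randomized, coordination-free search described above can be sketched in a few lines. This is a minimal illustration of the idea, not code from the RC5 Java Breaking Effort or any actual worm; the toy 16-bit keyspace and the `matches` predicate (standing in for a trial decryption) are assumptions for the example:

```python
import random

# Coordination-free brute force: each client simply draws random
# candidate keys, so no server needs to hand out search ranges, and
# discovering one client reveals nothing about the others' progress.
def random_search(matches, key_bits, rng):
    """Draw random keys until `matches` accepts one; return (tries, key)."""
    keyspace = 2 ** key_bits
    tries = 0
    while True:
        tries += 1
        candidate = rng.randrange(keyspace)
        if matches(candidate):
            return tries, candidate

# Toy 16-bit keyspace; `matches` stands in for a trial decryption
# against known plaintext.
rng = random.Random(42)
target = rng.randrange(2 ** 16)
tries, found = random_search(lambda k: k == target, 16, rng)
print(f"key found after {tries} random tries (keyspace = {2 ** 16})")
```

The price of skipping coordination is redundancy: because clients sample with replacement, the expected total number of draws equals the full keyspace size, rather than the half-keyspace average of a partitioned sweep. For an effort that gains more clients by being uncoordinated, that factor of two can be a worthwhile trade.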
The extent to which such techniques can be put to use for distributed cracking is variable, but the potential certainly exists. In either system an applet or control can easily be hidden in a web page if it makes no display (such as an animation) to the screen, and only a look at the page's HTML source is a surefire method of finding it. Such a program can also, in both systems, be made to continue running after the user leaves the web page in question, for at least as long as the user runs their browser, which is often quite a long time, considering the number of people who simply leave a browser window open at all times. In Java, that is the extent of the theft of cycles which is possible: when the browser closes, so does the Java interpreter and any programs it was running at closing time. ActiveX, on the other hand, allows far more. Once an ActiveX control is running it can gain access to the user's entire system, perhaps spawning off a new background process which will run long after the browser is closed, or even modifying the user's system files so that this process is restarted at boot time.
The difficulties in preventing these misuses are also variable. In Java, a dedicated and knowledgeable user could easily run a monitor on the CPU usage of his Java system, and terminate the offending program if it is noticed to be taking up a large percentage of computing cycles. Unfortunately, CPU usage is the only useful statistic which can be monitored in this case. Actually having the Java watchdog detect what computation an applet is performing is impossible (literally an uncomputable problem, equivalent to the halting problem), and with only CPU usage as a benchmark it is hard to distinguish between a distributed cracker and, for instance, a CPU-intensive MPEG renderer. In ActiveX a similar precaution would be possible, but since it would have to use the native operating system's monitoring facilities, and since the offending ActiveX control could have direct access to those facilities, it would be easier for the control to "cloak" itself to avoid detection, using the same methods that would be used to hide a process spread by direct machine access or worm-like hacking techniques. The other major difficulty in using Java or ActiveX to steal computation power is in distributing the code in the first place, since the user must first access a web page containing the code in question, and in Java must access that page each time the browser is run in order for the code to execute. Given the web-surfing tendencies of today's Internet users, putting a piece of code somewhere that it will be accessed is not particularly difficult. Having it accessed often by the same user so as to work within Java's restrictions might be harder, but the power that could be wielded through illegitimate use of a popular web site, such as a search engine, is great.
Of course, the web page, and its hosting server, act as a vulnerable point here even more than in the worm-like hacking case. Once a single user notices the offending code, be it through Java monitoring, CPU slowdowns, or taking a look at web page source-code, they can arrange to have the site closed down, especially if it is not under the direct control of the cracker in question. Of course, the effectiveness of such methods in stopping a dedicated cracker may be limited. Similar methods are already used to attempt to stop the posters of unwanted junk-email, by contacting the sysadmins of their service providers, and often either the sysadmin doesn't care or, more often, the spammer simply finds a new account from which to ply his trade. It is quite possible that the theft of computation could follow a similar pattern. Presumably, since encryption cracking is closer to actual crime than unwanted email the effort taken to stop it would be greater, but the potential still exists for a cracker to elude detection and continue his efforts indefinitely.
Even if no analysis as simple as a single dollar value can be performed, it is clear that the effort and risk involved in using distributed techniques to crack encryption are large, while the sheer number of keys which can be cracked is relatively small. There are, of course, factors which affect even this qualitative estimate. For instance, the brute-force cracking used by all public cracking efforts is a highly variable process, with the possibility of finding a key in the first few seconds of the search, at the very end of the search space, or anywhere in between. In addition, there exist cryptanalytic methods of cracking, especially in situations where single keys are used for multiple messages or sessions, which improve on the search time of brute-force techniques. Bugs in encryption algorithms or their use (such as the infamous, now fixed, Netscape SSL random-number flaw which reduced the key search space to about 1000 keys) can make an intelligent cracker's job even easier. Even with only this qualitative knowledge of the level of the threat, it is possible to pinpoint the greatest vulnerabilities to that threat, as well as who is most likely to benefit from taking advantage of those vulnerabilities.
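The "anywhere in between" point can be made concrete with a quick Monte Carlo sketch (an illustration of the uniform-key assumption, not data from any real effort): for a key placed uniformly at random in the keyspace, a linear sweep examines half the keyspace on average, but any individual crack may finish almost immediately or only at the very end.

```python
import random

# Simulate the fraction of the keyspace a linear sweep examines before
# finding a uniformly placed key. Each draw models one crack attempt.
def sweep_fractions(trials, rng):
    """Simulated fractions of the keyspace swept before the key turns up."""
    return [rng.random() for _ in range(trials)]

fractions = sweep_fractions(100_000, random.Random(0))
mean = sum(fractions) / len(fractions)
print(f"mean fraction searched: {mean:.3f} (theory: 0.5)")
print(f"extremes this run: {min(fractions):.5f} and {max(fractions):.5f}")
```

The wide spread between the extremes is why a cracker cannot promise a deadline for any single key, even though the average cost is predictable.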
Even if it cannot rival such resources, distributed cracking can quite feasibly put significant capabilities into the hands of far smaller groups of people with far fewer limitations. Foreign intelligence operations are probably not the most likely use of distributed cracking, though they are a possibility. Where it is likely to be seen is in organized crime, terrorism, white-collar crime, small-time criminal activity, and individual actions by hackers. On its largest scale, what makes distributed cracking a threat is its ability to bring smaller, less legitimate organizations into the same realm as government agencies, at least for a small number of cracks. There are probably more than enough criminal or terrorist organizations, or simply resourceful individuals, who would be more than willing to reap some benefits, turn some profit, or do some damage by using distributed cracking techniques, and who can certainly afford to pay the few computer-savvy individuals it would take to do so. On the smallest scale, even on the level of the use of workstations on a corporate or academic network to crack a few keys, distributed cracking can still present a threat to corporate security or personal privacy. An individual reading another individual's email or data is still an invasion of privacy and possibly a crime, even if the FBI and NSA are unlikely to waste resources on stopping it. Those agencies are also far more likely to become interested if the data in question contains corporate trade secrets.
The most tempting targets are not the one-time session keys, but keys which are reused. Such a key, once cracked, could be exploited indefinitely, until its legitimate user changes it or discovers the compromise. Many authentication systems rely on encryption keys which change only rarely. For instance, if a public key must be registered with a certificate authority, and distributed to all of those who need to communicate with the holder of the private key, then the user has good incentive to avoid repeating that hassle by changing keys too often. If encryption is to be built into automated systems, especially those which are hardware rather than software based, then there is an even greater likelihood of keys remaining unchanged. In the original Clipper proposal, for instance, a Clipper phone had a single dedicated key hard-coded into its Clipper chip. While the chip itself was supposed to be tamper resistant to avoid recovery of the key, a brute-force crack of the key would allow a cracker to listen in on telephone conversations indefinitely, until the phone in question, and possibly many others, had been replaced.
Compounding this situation is the ability of some passwords or keys to be used to access others. A sysadmin's password could be used to gain access to the accounts of any users on that machine. Those accounts might contain copies of encryption keys on disk. Sysadmin access could also allow a cracker to listen in on the sessions of those users, gaining the passwords for other systems they log in on, or the keys they use in other encrypted communications. The worst possible example of this comes in the phenomenon of key escrow. The escrow system in question could be one of the possible systems proposed by the government for the accessing of encrypted communications by law enforcement, or it could be a corporate escrow system used to protect the corporation from data loss. In either case, a cracker gaining access to the escrow server or master keys by whatever means could then gain access to the keys of all users registered with that escrow server.
Any other major centralized encryption system would have similar weaknesses, since their logistics necessitate central and powerful keys which cannot be changed often. For instance, some of the proposed key escrow systems include a "government key" which would be used to encrypt a copy of the session key with each encrypted session. While allowing easy law-enforcement access to encrypted communications, that single key, if cracked, could also allow anyone else that same access. Outside of the field of key escrow, it is generally agreed that a widely usable encryption system will require some sort of key certification authorities to ensure the genuine nature of public keys by signing them. This allows users who trust the authority to then be able to trust the identities of any certified parties they communicate with, thus allowing business to be carried out over the network. If the signature keys of the authority could be cracked, however, then the cracker could impersonate any individual who uses that certificate authority, and the possibilities of such a capability, at least in an environment where online transactions and business are commonplace and trusted, are limitless. These are tempting vulnerabilities indeed, and difficult ones to secure, both technically and logistically.
Of course, such security when taken to extremes can become as much of a detriment to the network and its users as the threats of crackers would be. In order to be useful, computer networks must not only be trusted, but they must also be easy to use and effective. If users and systems become bogged down in security measures, or if security unnecessarily limits what a user can access or do on the network, then use of that network will not grow as it should, and the usefulness of that network will be limited. Even distributed computing itself has great potential for usefulness, which should not be stunted by the paranoia of security experts worrying about too many threats. Finding the happy medium in this situation is no easy matter, especially as the size and complexity of the systems and infrastructures under consideration grows. Great effort will need to be dedicated to ensuring that those systems maintain an acceptable level of security, while not sacrificing their usefulness.
There are two limitations currently placed on key length. The first is technical: algorithms that use larger keys, while not necessarily more complicated conceptually, can become more computationally intensive, or in the case of hardware implementations can require more physical resources. Thus, for the purposes of performance and affordability, encryption suppliers and users have tended to choose the smallest key which provides a reasonable level of security. The actual difficulty of increasing key length varies with the encryption method and its implementation. In hardware, obviously, it is impossible to avoid the fact that a larger key will require more physical wires in at least some parts of the system. In software, some algorithms (such as RSA, where larger primes mean more computation) increase in complexity significantly with key size, while others (including many symmetric block ciphers) increase only slightly. At worst, since the difficulties associated with larger keys tend to grow linearly, or at least far less than exponentially, with key size, it is relatively easy to choose a key size which puts cracking well outside the range of feasibility. As computing power, as well as cryptographic and cryptanalytic techniques, grow, of course, it will be necessary for crypto systems and key sizes to grow with them in order to maintain their level of security.
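The asymmetry at the heart of this argument can be sketched numerically. The linear cost model below is an assumption for illustration (real implementation costs depend on the algorithm), but the exponential growth of the brute-force search space is exact:

```python
# Illustrative sketch of the defender/attacker asymmetry described
# above: the cost of implementing a wider key is modeled as linear
# (an assumption, not a measurement), while the brute-force search
# space grows exactly exponentially in key length.
def relative_costs(key_bits, base_bits=56):
    """Implementation and cracking costs relative to a 56-bit baseline."""
    impl_cost = key_bits / base_bits             # assumed linear growth
    crack_cost = 2.0 ** (key_bits - base_bits)   # exact exponential growth
    return impl_cost, crack_cost

for bits in (56, 64, 80, 128):
    impl, crack = relative_costs(bits)
    print(f"{bits:3d}-bit key: {impl:4.2f}x implementation cost, "
          f"{crack:.1e}x cracking cost")
```

Even a modest jump from 56 to 80 bits multiplies the attacker's work by roughly sixteen million while less than doubling the defender's, which is why modest key-length increases are such an effective response to growing computing power.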
The other limitation on key length is societal, in the form of the limitations placed on encryption by governments and law-enforcement agencies who wish to ensure their continued ability to bypass that encryption. Those limitations, and the various proposed methods for eliminating them, have been and will continue to be the subject of voluminous debate by people from all sides and with all different priorities and points of view. If encryption is to be a tool for security and trust on the computer networks of the future, then those debates must eventually reach conclusions which allow encryption to be secure, and which do not introduce new vulnerabilities or technical complications that cripple the encryption systems in question.
 RSA Laboratories: The RSA Data Security Secret-Key Challenge,
 RSA Laboratories: DES Challenge II,
 Rocke Verser: The DESCHALL II Homepage,
 Distributed.Net Homepage,
 Mersenne.Org (mathematical problems),
 Ben Adida & Oliver Roup, RC5 Java Breaking Effort,
 Email exchanges with Ben Adida (email@example.com), January 1998.
 Multiple Authors, _The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption_, May 1997, http://www-swiss.ai.mit.edu/6805/articles/crypto/key-study-report.html.