
Why now?

The new impetus for amorphous computing is inspired by the recent astonishing developments in fundamental biology and in microfabrication. Each of these is the basis of a kernel technology that makes it possible to build or grow huge numbers of almost-identical information-processing units (with actuators and sensors) at almost no cost. To exploit the potential of these technologies, however, will require new insights into programming and system organization.

Microelectronic components are so inexpensive that we can imagine mixing them into materials that are produced in bulk, such as paints, gels, and concrete. Engineers can exploit the resulting ``smart materials'' to reduce the need for strength and precision in mechanical and electrical apparatus, through the application of computational intelligence. For example, we can imagine coating a building or a bridge with ``smart paint'' that reports on traffic and wind loads, monitors the integrity of the structure, or even heals small cracks by shifting material around. A clean room with an ``active skin'' lined with cilia could push dirt and dust into a corner for removal. A wall that could sense vibration (and move slightly on its own) could monitor a building for intrusion or actively cancel noise.

There are existing technologies (such as flexible printed circuits) that permit arranging microcomponents in pre-specified patterns. However, to fully exploit the potential of intelligent materials, it will be essential to obtain the desired behavior without precisely fabricating the interconnect among the microelectronic components, and without expecting that all the components are operational or that they are arranged as planned.

Biological organisms, of course, accomplish just the kind of organized behavior in amorphous systems that we wish to engineer. But only now are biologists learning the precise structure of complete organisms: the July 1995 issue of Science printed the complete genome of a bacterium, and hundreds more complete organisms will be sequenced over the next few years. We will thus be in the position of knowing the complete ``microcode'' for organisms and the effects of most of the genes, and we will even have at hand the technology to assemble such ``programs''. However, we do not yet have programming paradigms and methodologies that help us exploit biological mechanisms as an engineering technology for fabricating intelligent materials.

We can expect to find some clues in developmental biology. Recent advances in understanding development in complex biological systems, notably Drosophila (see, e.g., [4]), are uncovering the principles by which organs and organ groups are differentiated and arranged within such systems. We can learn much from the ability of living things to dynamically organize masses of initially identical cells into highly ordered arrays of differentiated organs, and to interconnect these systems in organized ways.
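As a purely illustrative aside (not part of the original argument), one of the simplest developmental mechanisms to mimic in software is the morphogen gradient: a distinguished cell emits a signal, its neighbors estimate their distance from it, and cells differentiate by thresholding that estimate. The following Python sketch simulates this with randomly scattered, initially identical cells; every name and parameter here (the cell count, the communication radius, the threshold) is an assumption made for the example.

    import random

    # Minimal illustrative simulation: identical "cells" scattered at random
    # form a hop-count gradient from a single anchor cell, then differentiate
    # by thresholding the gradient value, a crude analogue of morphogen-based
    # patterning.  Every parameter here is an assumption made for the example.

    random.seed(0)
    N, RADIUS = 300, 0.12              # number of cells; local communication range
    cells = [(random.random(), random.random()) for _ in range(N)]

    def neighbors(i):
        xi, yi = cells[i]
        return [j for j, (xj, yj) in enumerate(cells)
                if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RADIUS ** 2]

    INF = float("inf")
    hops = [INF] * N
    hops[0] = 0                        # cell 0 is the anchor emitting the "morphogen"

    # Relax until quiescent: each cell's distance estimate is one more than
    # the smallest estimate among its neighbors.
    changed = True
    while changed:
        changed = False
        for i in range(1, N):
            best = min((hops[j] for j in neighbors(i)), default=INF)
            if best + 1 < hops[i]:
                hops[i] = best + 1
                changed = True

    # Differentiate: cells within 3 hops of the anchor adopt one fate, the
    # rest another, yielding a rough disk of "head" cells around the anchor.
    fates = ["head" if h <= 3 else "body" for h in hops]
    print("head cells:", fates.count("head"), "of", N)

Nothing in the sketch depends on the cells being placed precisely: rerunning with a different random layout yields a similar disk of ``head'' cells around the anchor, which is exactly the kind of robustness the gradient mechanism buys.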

There are even longer-term visions, such as the nanotechnologies described by Drexler and others [2]. These involve the use of novel chemistry and exotic materials, and we can expect that the invention of nanotechnology will be bootstrapped by the application of advanced biological and computational tools.

All these visions of intelligent materials present the same fundamental challenge:

To obtain desired, coherent behavior from large numbers of unreliable parts that are interconnected in unknown, irregular, and time-varying ways, by deliberately orchestrating their individual behavior and their cooperation.
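To make this challenge concrete, here is a minimal sketch (our illustration, with all parameters assumed) of one classic local technique, pairwise gossip averaging: each unit repeatedly averages its value with a randomly reachable peer. Even though the connectivity changes every round and a fraction of the units have failed silently, the surviving units converge toward the global mean of their sensor readings.

    import random

    # Minimal illustrative simulation of the challenge above: units hold a
    # local sensor reading and repeatedly average it with a randomly reachable
    # peer.  The connectivity changes every round and some units have failed
    # silently, yet the survivors converge toward the mean of the surviving
    # readings.  Every parameter here is an assumption made for the example.

    random.seed(1)
    N = 200
    values = [random.uniform(0.0, 100.0) for _ in range(N)]  # local sensor readings
    alive = [random.random() > 0.10 for _ in range(N)]       # roughly 10% dead units

    for _ in range(500):
        # Time-varying topology: each round a live unit can reach a fresh
        # random handful of peers (standing in for drifting radio links).
        for i in range(N):
            if not alive[i]:
                continue
            contacts = [j for j in random.sample(range(N), 5)
                        if j != i and alive[j]]
            if contacts:
                j = random.choice(contacts)
                values[i] = values[j] = (values[i] + values[j]) / 2.0

    live = [v for v, a in zip(values, alive) if a]
    print(f"global mean {sum(live) / len(live):.2f}, "
          f"residual spread {max(live) - min(live):.6f}")

The point of the sketch is that no unit knows the topology and no unit is indispensable; the coherent global answer emerges solely from orchestrated local exchanges.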

Beyond its immediate applications to programmable materials, the study of amorphous computing has fundamental implications for software design in general. Our ability to program complex software systems is not keeping up with our desire to solve complex problems, nor has it matched the growth in the computational resources available to help in the solution. We have, in too many cases, made Faustian bargains in software system design: attaining efficiency by reducing the number of computing operations required, but at the sacrifice of simplicity and understandability. It once made sense to eke out performance from computers by any technique that could reduce operation counts. As systems become more capable, however, the dominant costs are measured not in operation counts but in the conceptual simplicity of code and the communication required between concurrent tasks. One effect of studying amorphous computing is that it forces us to reassess the now perhaps poorly justified practice of making algorithms non-local and inherently complex merely to reduce operation counts.





