
5. Digression: Models of computation and physical models

The class of models presented here corresponds to a new parallel computing architecture known as an amorphous computer [1]. This architecture is intended to take advantage of the possibility of manufacturing small processors in such large numbers that the precise interconnections and regular structure of conventional parallel computers are not feasible. An amorphous computer consists of a large number of processing elements embedded in a surface or volume. The processors have limited resources, are unreliable, and are connected in imprecise ways. They communicate with each other locally, for example by wireless broadcast. They have no detailed knowledge of their precise geometrical arrangement, and no global clock.
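As an illustration (a minimal sketch only; the function names and parameters below are hypothetical, not part of the architecture description), an amorphous computer can be modeled as processors scattered at random over a surface, each communicating only with whatever neighbors happen to fall within its broadcast radius:

# Hypothetical sketch: processors at random positions in the unit square;
# a processor's neighborhood is whatever lies within its broadcast radius.
# There is no regular grid, no global addressing, and no global clock.
import math
import random

def make_amorphous_computer(n_processors=200, radius=0.1, seed=0):
    random.seed(seed)
    positions = [(random.random(), random.random()) for _ in range(n_processors)]
    neighbors = {
        i: [j for j in range(n_processors)
            if j != i and math.dist(positions[i], positions[j]) <= radius]
        for i in range(n_processors)
    }
    return positions, neighbors

positions, neighbors = make_amorphous_computer()
print("average number of neighbors:",
      sum(len(v) for v in neighbors.values()) / len(neighbors))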

The amorphous model of computation is informed by physical constraints; it is based on the fact that processors are embedded in space and are therefore subject to the locality constraints of physics (local interconnections and no global clock). Microscopic conservation is another constraint from physics that is useful in amorphous computing. Here it was used to simulate physical systems with conservation laws, but it may find use in non-simulation applications as well, since it is very similar to the idea of ``tokens'' in distributed systems. Tokens are used when a distributed system must contain only a fixed number of some entity, such as processes that have access to a certain resource. They can be passed between processors, but not created or destroyed.
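A minimal sketch of the token idea (illustrative only; the names below are not from any particular system): tokens move between neighboring processors but are never created or destroyed, so their total is exactly conserved, just like a microscopically conserved physical quantity.

# Hypothetical sketch of conserved token exchange among processors.
import random

def exchange_step(tokens, neighbors):
    """One asynchronous step: a random processor passes one token to a random neighbor."""
    i = random.randrange(len(tokens))
    if tokens[i] > 0 and neighbors[i]:
        j = random.choice(neighbors[i])
        tokens[i] -= 1
        tokens[j] += 1

# Example: 5 processors in a ring, each starting with 3 tokens.
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
tokens = [3] * 5
total_before = sum(tokens)
for _ in range(1000):
    exchange_step(tokens, neighbors)
assert sum(tokens) == total_before   # the conservation law holds exactly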

Other models of computation informed by physical constraints have close correspondences with physical models. A physical model corresponds to a model of computation, at a minimum, when the range of possible states in the two is the same, and the same dynamics or evolution are possible in each. In addition, there is a two-way correspondence between them: one could conceivably implement the computer in the physics described by the model, and conversely, straightforwardly simulate the physical model on the computer. This is the case, for example, with quantum computation and quantum physics. Quantum computation was first conceived [15] as a way of efficiently simulating the evolution of a quantum system; more recently, quantum computers have been proposed as a general-purpose computing architecture.

Being inherently parallel, discrete models correspond to certain models of parallel computation. In addition to amorphous computing and the amorphous physical models presented here, another pair of corresponding models is Cellular Automata and 3-D mesh architectures. A 3-D mesh architecture [16] is a parallel architecture with local interconnections and regular geometry.
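For concreteness, here is a minimal sketch (an illustration, not drawn from [16]) of the kind of computation a cellular automaton performs: a regular lattice of cells, each updated from only its immediate neighbors, which maps directly onto a mesh architecture with one cell (or block of cells) per processor.

# Hypothetical sketch: one-dimensional cellular automaton (rule 110) on a ring.
# Each cell's next state depends only on itself and its two lattice neighbors.
RULE = 110

def ca_step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in cells))
    cells = ca_step(cells)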

5.1 Classifying models of computation based on physical constraints

The Church-Turing Thesis expresses the belief among computer scientists that all these models are equivalent in the class of computations they can theoretically achieve, given unlimited resources. However, taking physics into account imposes constraints, and provides a more realistic view of what computations are achievable as the computer becomes large and comes up against physical limits [10].

There is often more than one relevant physical constraint. These constraints can conflict with each other, making tradeoffs necessary. We might thus distinguish models of computation from one another by the degree of the various tradeoffs they make in physical resources, classifying them on an axis that represents different amounts of two mutually conflicting resources (where having more of one implies that less of the other is available).

One axis represents granularity or distributedness. On one side of the axis are architectures that have the ability to access all parts of the system in constant time. Here are found traditional models of computation such as Turing machines. A model that more closely resembles current sequential machines is the Random Access Machine [9]. In this model, there is a memory in which every location is accessible in constant time. Models with constant-time access, however, do not take space into account. The result is that as the system becomes large, this capability becomes unphysical, since information takes time to propagate from one part of a system to another.
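A rough estimate (an illustration, not worked out in the text) makes this concrete: if N memory locations are packed at bounded density in three-dimensional space, the farthest location lies at a distance proportional to N^{1/3}, so with a finite signal speed c the worst-case access time grows at least as

\[ t_{\mathrm{access}} \;\gtrsim\; \frac{N^{1/3}}{c} \]

and cannot remain constant as the memory grows.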

This can be dealt with by distributing the computation over space. The cost of this distribution is that it takes time to move information from one place to another. Distribution entails breaking the computer into a number of elements each of which has high internal bandwidth, connected together by links with limited bandwidth. Thus we give up the ability to reach all parts of the system in constant time. However, distributedness is a resource because when the computation is highly distributed, one can enlarge the computation simply by using more space (processors), without having to go through the ``von Neumann bottleneck''.

One step along the axis from the Random Access Machine class are conventional parallel computers, which contain a relatively small number of processing elements fully connected to one another. Each processing element is a smaller version of the Random Access Machine. As we move along the axis, the number of elements increases, and we must give something up: it becomes impractical to connect every element to every other. The reason is that as the number of elements scales up, not only the total bandwidth but also the bandwidth per processor would need to increase to maintain full connectivity [10]. Connecting each processor to every other becomes incompatible with the constraints of physical space. Thus, architectures with local interconnections become necessary when the number of processors is large.
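A back-of-the-envelope version of this argument (an illustration, not taken from [10]): full connectivity among N processors requires

\[ \frac{N(N-1)}{2} \]

links in total, of which each processor must terminate N-1. The wiring attached to each processor thus grows linearly with N, while the physical space immediately around a fixed-size processor does not grow; beyond some scale the links can no longer be accommodated, and only local interconnections remain physically realizable.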

Amorphous computing represents a step further along the axis, being even more distributed than conventional parallel architectures. Amorphous processors are assumed to be unreliable, and can fail. This requires redundancy: the computation must be distributed to such an extent that the failure of some elements still allows the computation to proceed successfully.

Granularity or distributedness is only one of several axes that represent physically realizable architectures. Another example is the energy/time axis: a computation can be made to use an arbitrarily small amount of energy, at the cost of being slow [17]. An interesting topic for future work is to identify a small number of relevant axes that represent the constraints of physics, each one a tradeoff between conflicting resources. This would produce a space of potentially physically realizable models, within which models of computation could be classified. (New models might also come to light by identifying regions of the space that have not yet been investigated.)


Erik Rauch
1999-06-26