ERIK RAUCH: What I'm going to do is try to relax some of the basic assumptions that go into the discrete models that we use normally and show that you can still model some important physical phenomena. Here's an outline: I'm going to list some assumptions that are built into all of the discrete models that we normally use. Two in particular are regular lattice geometry, and synchronous updating -- global synchronization among the sites. Then I'm going to make a new model that doesn't have these two things and show that you can still do some interesting things with these kinds of models.

I want to talk about discrete models in the spirit of models in their own right as opposed to things like finite element analysis, which are really just taking continuous models and discretizing them so we can simulate them on digital computers. So in other words, an alternative to rather than an approximation of continuous models. I think that was Tom Toffoli's phrase.

You've all seen examples of them, and they've been very successful at some things. The important thing is that they agree with their corresponding continuous models if you average them over space and time. But they've been viewed as first-class models; in other words, they're not approximating another model, they're models in their own right.

Why would I want to make models that lack some of the properties the usual discrete models have? We've heard about Occam's razor, and it's right to examine the assumptions that go into our discrete models. I think it's useful to know more about the nature of the assumptions we make and why we make them.

So here are some of what I think are the major characteristics that are shared by discrete models. You have discrete space -- discrete sites -- and each site has a state. You have discrete time so change happens in discrete steps. And sites are arranged in a regular lattice. You have global synchronization so all of the sites update at one instant, and you have locality so sites are connected to nearest neighbors and the new state of any site can only be a function of the states of the sites that are connected to it.

And the other thing is global synchronization. What that really means is that each site actually waits for all of the other sites in the universe to update before it commits its change. So I want to see how far I can go in getting a model that gets by without these and is still physical in some sense. In other words, there are qualitative physics-like phenomena you can see in it, and you can also measure it and it comes out like its continuous counterparts.

And finally, one attraction of discrete models is that they're really convenient for simulating on discrete machines. The ones that we use have a very nice correspondence with the structure of the parallel computers that we use to simulate them. Now there's been some work at MIT on a new kind of parallel programming called amorphous computing, and I'll just talk very briefly about it. It's a kind of massively parallel programming that takes advantage of fabricating massive numbers of cheap processors in bulk. So if you want to do that, you'd have to have algorithms that don't rely on very precise interconnects between them. In other words, you want to be able to throw together a huge number of cheap processors and just deal with the fact that some of them don't work and the fact that you haven't precisely positioned each one of them so that it has a regular number of neighbors, for example. You just want it to work if you just have each one of these communicate with whatever other processors are within communication range of it. So, two things: you have local connectivity -- it's impractical to have bandwidth that goes from every processor to every other processor -- and you have limited or no global communication. If this actually ever becomes practical, this kind of model might actually be a natural kind of model to run on these machines.

So we want a model that works whether or not the lattice is regular. That actually lets me do it on pretty much any graph I want. But if I want something I can compare to our three-dimensional universe, I need the graph to have some properties that resemble real space. So I'm just going to put in some statistical constraints that allow me to actually use these graphs as simulations of physics.

This list is definitely not complete; it's just some things that seem like the right things. I have to put in some sort of locality, and I don't have a lattice where it comes naturally. So each site has a small number of neighbors, and I suppose that those neighbors have some decent probability of being neighbors of each other. You want the number of sites reachable in n hops to be roughly n to the D, where D is the dimension. You also want the neighborhoods not to be too different from each other (not lots of sites with no neighbors and lots with a zillion neighbors), which is why there's low variance in neighborhood sizes. But I think you can probably replace some of these with a statistical version of the triangle inequality: if you know the number of hops from one point to two separate points, you want the number of hops between those two points to be not much bigger than that.

A regular lattice is actually a special case of this.
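The constraints above can be illustrated with a small sketch. This assumes a random geometric graph construction for the amorphous lattice (scatter sites, connect everything within communication range); the function names, sizes, and thresholds are illustrative, not from the talk:

```python
import math
import random
from collections import deque

def amorphous_lattice(n_sites, radius, seed=0):
    """Scatter sites uniformly in the unit square and connect every
    pair within communication range: a random geometric graph."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n_sites)]
    nbrs = [[] for _ in range(n_sites)]
    for i in range(n_sites):
        for j in range(i + 1, n_sites):
            if math.dist(pts[i], pts[j]) < radius:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return pts, nbrs

def reachable_in_hops(nbrs, start, hops):
    """Count sites within `hops` hops of `start` by breadth-first
    search; for a spacelike graph of dimension D this should grow
    roughly like hops**D."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        site, d = frontier.popleft()
        if d == hops:
            continue
        for nb in nbrs[site]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return len(seen)
```

Because all sites use the same connection radius, the neighborhood sizes cluster around a common mean, which is one of the low-variance conditions listed above.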

Now, to illustrate what is and isn't spacelike in this sense: take a Bethe lattice, where each site has four neighbors and those sites have four neighbors in turn, but it's basically just a branching tree structure. That's definitely not spacelike at all:

I also want to see if I can get away without global synchronization. So I want something that still works when the sites don't all update at once. The models we usually use have rules that assume synchrony -- they assume that all the updates are happening in such a way that they're a function of the states in the previous time step, but they're not committed until everything else has updated.

So again, if I want to have any kind of comparison with the three-dimensional world, I need some sort of statistical constraint. In this case I'm just going to pick something: I'm going to say that each site updates independently of the other sites, but if you view them globally, they're all updating at about the same rate.
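One common way to realize "independent sites, roughly equal rates" is to give each site its own random clock, for example with exponentially distributed waiting times. This is an assumed concrete choice for illustration, not necessarily the scheme used in the talk:

```python
import heapq
import random

def jittered_schedule(n_sites, n_events, mean_interval=1.0, seed=0):
    """Each site carries its own clock: waiting times are drawn
    independently, so no two sites update at the same instant, yet
    viewed globally every site fires at about the same rate."""
    rng = random.Random(seed)
    # Prime each site's first firing time.
    queue = [(rng.expovariate(1.0 / mean_interval), i)
             for i in range(n_sites)]
    heapq.heapify(queue)
    order = []
    for _ in range(n_events):
        t, site = heapq.heappop(queue)   # next site to fire, in time order
        order.append((t, site))
        heapq.heappush(queue, (t + rng.expovariate(1.0 / mean_interval), site))
    return order
```

The resulting event stream looks like everyone updating in parallel with some jitter, which is exactly the picture discussed in the exchange with Toffoli below.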

So now I can actually test whether, if I make small changes in the lattice geometry and small changes in the updating, I get small changes in the results of my simulations rather than catastrophic changes.

Now I'm going to say more about the kinds of models that might work without regular lattices and synchronization. I want to get away with as little synchronization as possible. I mentioned already global synchronization, but actually we usually use local synchronization as well. In other words, the new state of the site depends on the state of all of its neighbors at that particular instant:

The simplest thing I could think of was to have the only interaction possible be an interaction of pairs. So two sites which are neighbors of each other can get together, and only they synchronize with each other. They do their updates based only on the states of those two sites. In other words, there's no structure in the local neighborhood: you can't say, for example, that two of your neighbors are on opposite sides of each other, or things like that.

I can show you some examples of this running. I'll start with something in one dimension. I'm showing the amplitude as height. So these things aren't actually moving. It's just a simple Hooke's Law interaction between the neighboring sites:

[click for animation]
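A minimal sketch of such a one-dimensional pairwise model might look like this. The exact update rule, the constants, and the ring topology are assumptions for illustration, not Rauch's actual code:

```python
import random

def step_pair(u, v, i, j, k=0.5, dt=0.1):
    """One atomic pairwise update: only sites i and j synchronize.
    The Hooke's-law force is equal and opposite, so total momentum
    is conserved exactly."""
    f = k * (u[j] - u[i])        # spring force on i from j
    v[i] += f * dt
    v[j] -= f * dt
    u[i] += v[i] * dt
    u[j] += v[j] * dt

def run(n=64, n_updates=5000, seed=0):
    """Sites on a ring, initial perturbation in the middle; bonded
    pairs are chosen at random, standing in for jittered clocks."""
    rng = random.Random(seed)
    u = [0.0] * n                # amplitudes (shown as height)
    v = [0.0] * n                # velocities
    u[n // 2] = 1.0              # initial perturbation
    for _ in range(n_updates):
        i = rng.randrange(n)
        j = (i + 1) % n
        step_pair(u, v, i, j)
    return u, v
```

Since the force enters the two velocities with opposite signs, total momentum is conserved exactly, although, as the talk notes later, this amplitude-based version does not conserve energy.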

TOMMASO TOFFOLI: You just choose a cell at random?

ERIK RAUCH: It's not that you just pick a random site and update it; rather, it's as if they're all updating in parallel, but sometimes one updates a little bit faster and sometimes a little bit slower. So viewed globally --

TOMMASO TOFFOLI: You get some jitter?

ERIK RAUCH: Right. So let me show you in two dimensions. Here's an example.

[click for animation]

So you basically have a two-dimensional array of sites. I'm starting with a perturbation. So you see the positive amplitudes as black and negative amplitudes as blue here. Starting with the perturbation in the middle, it's acting like ripples on a pond. What you're seeing is a wave spreading out, wrapping around the boundaries and interfering with itself.

And this is the same thing, except this is going to just bounce off the edges. It's kind of like hitting a drum actually. You get these interference patterns:

And you can see different phenomena that you see in waves.

So here I've started with a line wave, a perturbation that's linear, and I'm just going to let it go. And I've fixed the edges, so I'm going to make a mirror. All you need to do to make a mirror is fix the amplitudes at the edges, and I made the edge curved so it's parabolic. When the wave gets to the end, it's going to bounce off and focus to a point, which you see there as that sort of depression. It's a parabolic mirror.

[click for animation]

And finally, I can do refraction as well. I start with the same line wave, and now I've made a region in the center, a circular region where the sites are more dense. That's really sort of like a different medium with a different index of refraction, but the average number of neighbors is still the same in there:

Okay. So now I've got at least qualitative phenomena from waves, but I want to test whether this is actually acting like the other models of waves that we have, like partial differential equations. So what I'm going to do is take this amorphous lattice and embed it in a Euclidean space. We assign each site a coordinate based on where it's embedded in the space, start with the same initial conditions, and compare the evolution of the two models.

In doing so, I'm sort of imposing a global time on it. The amorphous model doesn't have any notion of global time, but I'm going to look at it globally, at what's happening at all the sites at one particular instant. I need to do that in order to compare the two models. So I just compute the average difference per site. Obviously there are going to be differences, because there are some irregularities that aren't averaged out at the scale I'm looking at, so you get things that look sort of like scattering if you view them in the Euclidean space. But the differences between the two models grow linearly and they grow slowly, which is also the case for conventional discrete models. What's being plotted here, for different numbers of sites and degrees of irregularity in the lattice, is that average difference per site over time -- the number of updates:
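The comparison metric described here, the average difference per site between the two models at a common instant, is straightforward to write down (a sketch; the variable names are illustrative):

```python
def avg_site_difference(u_amorphous, u_reference):
    """Average absolute per-site difference between the amorphous
    model's amplitudes and a reference solution (e.g. a discretized
    PDE) sampled at the embedded coordinates of the same sites."""
    assert len(u_amorphous) == len(u_reference)
    return sum(abs(a - b) for a, b in zip(u_amorphous, u_reference)) \
        / len(u_amorphous)
```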

The model that I made conserves momentum, but one thing about it is that it actually doesn't conserve energy. You can write down the energy in the usual way, just, say, kinetic plus potential energy:
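Written out, the usual energy referred to here would look something like the following. This is a sketch: the symbols (u_i for site amplitudes, p_i for site momenta, m for mass, k for the spring constant, with the second sum over bonded pairs) are assumed labels, not notation from the talk:

```latex
E \;=\; \sum_i \frac{p_i^2}{2m}
\;+\; \sum_{\langle i,j\rangle} \frac{k}{2}\,\bigl(u_i - u_j\bigr)^2
```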

The fact that I'm trying to preserve this minimalism in the update -- I only want pairs to be involved in an update -- makes it sort of difficult, because each site is a member of lots of pairs, so when I change its amplitude, I'm changing the energy associated with all of those pairs. So can I still get away without synchronization? There is a way to do it, but instead of using amplitudes, I have to use bond variables. I'm going to associate a number with every bond, which you can think of as potential energy, instead of an amplitude for each site, but I still have a momentum for each site.

What I do then, briefly, is I use a leapfrog update method. That means that the momentum updates are half-step offset from the position updates:
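A leapfrog-style atomic update on a single bond might look like this, with a bond variable standing in for the stretch between its two sites and momenta stored at the sites. The exact scheme here is an assumption modeled on the standard kick-drift-kick form of leapfrog, not necessarily the one used in the talk:

```python
def leapfrog_bond_update(b, p, i, j, bond, k=0.5, dt=0.1):
    """One atomic update touching a single bond and its two endpoint
    momenta, leapfrog style: half-step momentum kick, full-step
    update of the bond variable, second half-step kick.  b[bond]
    plays the role of the stretch u_j - u_i; no site stores an
    amplitude, so no site-centered synchronization is needed."""
    p[i] += 0.5 * dt * k * b[bond]   # half kick (force on i is +k*b)
    p[j] -= 0.5 * dt * k * b[bond]   # equal and opposite on j
    b[bond] += dt * (p[j] - p[i])    # drift: db/dt = p_j - p_i
    p[i] += 0.5 * dt * k * b[bond]   # second half kick with new b
    p[j] -= 0.5 * dt * k * b[bond]
```

Because the two kicks are equal and opposite, momentum is conserved exactly, and the symplectic character of leapfrog keeps the energy error bounded rather than letting it drift, which is the point of moving to bond variables.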

You can see that that version of the model also exhibits these phenomena that I showed you. It looks pretty much the same:

And the difference with the continuous model also grows linearly and slowly here.

So to conclude, I've shown that you can at least do without crystalline geometry and global synchronization among sites and still have a model that at least models one important physical phenomenon which is pretty ubiquitous in physics, which is waves.

Now one thing that this model is lacking as a discrete model is that the states of the sites are still continuous. I haven't found a way to do the same thing when the states come from very small sets -- zeros and ones at the sites. I think it might be possible, but that would be a way to go further with this.

And these models aren't really practical for the architecture that we use right now, but if we ever have highly parallel computers that have the kinds of constraints that this kind of model uses, then it actually might become something that we want to use.

Thanks.

(Applause.)

QUESTIONS

CHAIRMAN FREDKIN: Questions?

BRIAN HAYES: Could you explain again the local synchronization condition? This pair-wise synchronization?

ERIK RAUCH: Whatever lattice you have, a natural way to update the state of a site is to look at the states of all its neighbors and then update it based on that. That makes things pretty easy to calculate. But I wanted to see if I could do without it, because it involves a degree of synchronization: I have to sort of isolate that cluster of sites, like you see here:

I have to isolate that cluster from the rest of the universe for an instant while the sites change their states. In other words, I have to keep them from engaging in any other interaction while they're doing that. I just wanted to see how minimal I could get. Instead of doing that, it's actually possible to have only two sites ever needing to be isolated from the universe like that. The least degree of synchronization I could come up with is that only two of them have to participate in these atomic updates.

BRIAN HAYES: But every time you have a triangle, each member of that pair is also connected to one other.

ERIK RAUCH: That's right. So each member of this pair is a member of lots of other pairs as well, but it still works.