Tuesday, December 29, 2009

The place for force

In the last post we discussed a method of representing finite geometries as terms in a computational calculus. The background aim of this endeavor is to re-situate Einstein's physical intuitions about geometry, gravity, mass and energy in a new formalism; and the background aim of that endeavor is to construct, out of a computational formalism, a physical theory that includes both quantum mechanics and general relativity. In short, the aim is to get "it from bit" in a way that unifies gravitation with quantum mechanics.

With that in mind, the purpose of a framework to study evolving discrete geometry should be clear, but where is the place for force in all of this? That's where the choice of the particular variant of the π-calculus becomes relevant. In a reflective higher-order calculus there is a rich structure on names, which provides a place to host a rich "synchronization algebra", i.e. the algebraic structure determining when two names trigger a process synchronization. In the ordinary π-calculus, name equality does this job. The simplest form of synchronization comes from the comm rule; for a monadic, asynchronous calculus it looks like this

x?(y).P | x!(v) -> P{v/y}
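
Here is a minimal sketch of this rule in Haskell, assuming a toy representation in which names are plain strings and substitution is naive (no alpha-conversion); the constructor and function names are only illustrative, chosen to make the shape of the rule concrete.

-- Toy terms for a monadic, asynchronous calculus.
module Comm where

type Name = String

data Process
  = Nil                       -- 0
  | Input Name Name Process   -- x?(y).P
  | Output Name Name          -- x!(v)   (asynchronous: no continuation)
  | Par Process Process       -- P | Q
  deriving (Eq, Show)

-- P{v/y}: substitute the name v for the free name y in P.
subst :: Name -> Name -> Process -> Process
subst _ _ Nil = Nil
subst v y (Output x w) = Output (rn x) (rn w)
  where rn n = if n == y then v else n
subst v y (Input x z p)
  | z == y    = Input (rn x) z p                -- y is rebound by z; stop here
  | otherwise = Input (rn x) z (subst v y p)
  where rn n = if n == y then v else n
subst v y (Par p q) = Par (subst v y p) (subst v y q)

-- One top-level application of the comm rule:  x?(y).P | x!(v) -> P{v/y}
step :: Process -> Maybe Process
step (Par (Input x y p) (Output x' v)) | x == x' = Just (subst v y p)
step (Par (Output x' v) (Input x y p)) | x == x' = Just (subst v y p)
step _ = Nothing

For instance, step (Par (Input "x" "y" Nil) (Output "x" "v")) yields Just Nil, the toy rendering of the reduction displayed above.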

In a reflective higher-order calculus names have internal structure deriving from the structure of process terms. This gives a very rich setting to express synchronization constraints. In a previous post we described one we call annihilation.

s = @S, t = @T, ∀ R. S|T →* R => R →* 0
------------------------------------------
s?(y)P | t!(Q) → P{@Q/y}

This comm rule says that if (this is all the stuff above the horizontal line) s is the transcription (aka code) of some process S, t is the transcription of some process T, and whenever S and T, run in parallel composition, (eventually) reduce to a state R, that state (eventually) reduces to 0,

then (this is all the stuff below the horizontal line) s and t can be used to synchronize for an interaction that results in a substitution. This basic idea can be used to provide a template for different kinds of force. Before we discuss that thought, however, the attentive reader might be wondering about something. If the "comm" rule is the most basic of the rules governing reduction, yet it mentions reduction in its hypothesis, is this rule well-founded? Will it ever allow reduction to take place?

The answer is 'yes'! Note that we require 0 to be the unit of the parallel composition monoid, so 0|0 = 0; similarly, after no steps at all we get from termination to termination, or in symbols 0 →* 0. From these two facts we have that @0 can be used as a channel to synchronize with itself. Taking one step further we see that @(@0?(y)0) can be used to synchronize with @(@0!(0)). More generally, the reader may verify that processes which cancel each other out, matching every output with an input (alternatively said, imposing a kind of linear discipline on i/o), provide the codes for channels that can synchronize. Hence the name: annihilation.
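
To make the shape of that check concrete, here is a bounded-depth sketch in Haskell, staying with the toy first-order representation from the sketch above (assumed to live in a module named Comm) rather than a full reflective calculus. It approximates the premise of the rule -- every state reachable from S|T can itself reach 0 -- only up to a small search depth, so it illustrates the check rather than deciding it.

module Annihilate where

import Comm (Process (..), subst)
import Data.List (nub)

-- Flatten parallel composition and drop inert 0s: a crude stand-in for
-- structural congruence, so that "has reduced to 0" is easy to detect.
flatten :: Process -> [Process]
flatten Nil       = []
flatten (Par p q) = flatten p ++ flatten q
flatten p         = [p]

isZero :: Process -> Bool
isZero = null . flatten

-- All single comm-rule steps available anywhere in the parallel soup.
steps :: Process -> [Process]
steps p =
  [ foldr Par Nil (subst v y body : rest)
  | (Input x y body, others) <- picks (flatten p)
  , (Output x' v, rest)      <- picks others
  , x == x'
  ]
  where
    picks xs = [ (xs !! i, take i xs ++ drop (i + 1) xs)
               | i <- [0 .. length xs - 1] ]

-- Can this process reach 0 within n steps?
canDie :: Int -> Process -> Bool
canDie n p
  | isZero p  = True
  | n <= 0    = False
  | otherwise = any (canDie (n - 1)) (steps p)

-- Bounded annihilation: every state reachable from s|t within the bound
-- can still reach 0 within the bound.
annihilates :: Int -> Process -> Process -> Bool
annihilates n s t = all (canDie n) (reachable n (Par s t))
  where
    reachable :: Int -> Process -> [Process]
    reachable 0 q = [q]
    reachable k q = nub (q : concatMap (reachable (k - 1)) (steps q))

For instance, annihilates 4 (Input "x" "y" Nil) (Output "x" "v") is True, a first-order analogue of the @(@0?(y)0) / @(@0!(0)) example, while replacing the output's subject with a fresh name makes it False.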

Returning to the topic at hand, the thought is that running the check for annihilation -- which amounts to exhaustive execution of the processes whose codes are the subjects of the guards -- provides a natural vehicle in which to express force. The intuition is that the less complex the annihilation calculation, the stronger the force. An example may prove illustrative. Consider the following race condition

x_l!( Q_l ) | x?( y )P | x_r!( Q_r )

Suppose that the check that x_l annihilates x is simply more complex than the check that x_r annihilates x. Intuition suggests that the race is more often won by the less complex check. The effect, therefore, is a certain tendency to evolve in one way rather than another. To an outside observer this looks like a force between x and x_r. Well, if it looks like a force and feels like a force and quacks like a force... why not investigate whether this might be a sufficient framework to model force?
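
As a sanity check on that intuition, here is a toy simulation in Haskell. The work estimates (12 units for the x_l check, 4 for the x_r check) and the coin-flip model of progress per tick are pure assumptions, not anything derived from the calculus, and the standard random package is assumed to be available.

import Control.Monad (replicateM)
import System.Random (randomRIO)

-- One race: True iff the cheaper (right) check finishes first.
-- Ties go to the cheaper check, an arbitrary modeling choice.
race :: Int -> Int -> IO Bool
race costL costR = go costL costR
  where
    go workL workR
      | workR <= 0 = pure True
      | workL <= 0 = pure False
      | otherwise  = do
          dl <- randomRIO (0, 1 :: Int)   -- left check's progress this tick
          dr <- randomRIO (0, 1 :: Int)   -- right check's progress this tick
          go (workL - dl) (workR - dr)

main :: IO ()
main = do
  let trials = 10000 :: Int
  wins <- replicateM trials (race 12 4)   -- x_l check assumed 3x the work of x_r's
  let p = fromIntegral (length (filter id wins)) / fromIntegral trials :: Double
  putStrLn ("cheaper (x_r) branch wins with frequency ~ " ++ show p)

With these particular numbers the cheaper branch wins nearly every trial; the qualitative point is only that lower annihilation complexity shows up as a statistical bias in how the system evolves, which is exactly the tendency described above.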

One important aspect of the construction of this model is that the choice of synchronization algebra is nearly independent of the method of modeling the geometry. However, once we have a geometrical structure, how the geometry is likely to evolve is determined by the synchronization algebra. In other words we are building up our model of physics in a compositional manner. Another important aspect of this particular modeling choice is that it has enough structure to construct many apparently different types of forces -- roughly aligned to complexity classes of the annihilation calculation -- but there is only one notion of force: the complexity of annihilation.

Again, this is starting to feel like it's moving in an interesting direction, but it raises a host of very challenging questions. In this model, because of the way the structure-behavior relationship is repackaged, we do not need a "differential structure" to host an expression of dynamics. The gold standards of physical theories, however, are general relativity and quantum mechanics. The first of these (and arguably the second) certainly depends on a notion of derivative to express dynamics. If we are to judge the quality of this alternate approach we have to build a means of comparison. Ideally, we would like to find an interpretation mapping the standard expression of these physical theories into the new model. A question that stands out when contemplating such a mapping is how to interpret the derivative -- and in particular its relationship to dynamics -- in this alternate model.
