One view of the various techniques employed in managing variable bindings and environments for a functional language is exactly this: these techniques give a concrete way to interpret name-based addressing in terms of structure-based addressing. What I have been arguing is that quotation provides a primitive way to understand this relationship. The principle says that the structure of names reflects the structure of terms, and that name-based addressing exploits this structural correspondence to reduce to structural addressing. I have also discovered other bases for the relationship, based on biological metaphors; but the quotation method provides an information-theoretic minimum in a certain Scott domain.
This way of looking at name-based versus structure-based addressing leads to another observation. When we think of composition as a form of structure creation, we usually have a means of getting the components back out of the resultant structure (that's one of the important consequences of structure-based addressing: retrieval). These are essentially factorization theorems: along with a composition comes some fundamental and unique decomposition procedure. Sometimes the composition/decomposition procedures result in an entirely flat collection, as we see with multiplication. Multiplication, and in fact any associative notion of composition, forgets the history of term formation. In symbols
- "First we form a*b, then we form (a*b)*c" is not distinguishable from "first we form b*c, then we form a*(b*c)"; more succinctly: (a*b)*c = a*(b*c)
- Do we have a notion of composition that does not have a factorization theorem?
- Addition, multiplication, in fact any group, ring, field... all have various factorization theorems
- Physical notions of composition as mixing also yield decomposition/factorization theorems -- but (nuclear) chemical reactions can result in irreversible structural changes.
- Could it be that we need structural change of the components before we have such a notion? Could it be that composition without a factorization theorem requires a change in the information content of the components after composition?
- Could this be the import of interaction?
- Hypothesis: interaction is composition extended in time.
- Hypothesis: interaction is composition that results in structural change of the components.
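The claim that associative composition forgets formation history can be made concrete. Here is a minimal Python sketch; the tree encoding and the names `history1`/`history2` are illustrative assumptions, not anything from the text:

```python
# A minimal sketch, assuming nothing beyond the standard library:
# two different formation histories for a*b*c are distinct as trees,
# but any associative product only ever sees the flat list of factors.

def flatten(tree):
    """Flatten a formation tree into the sequence of its leaves."""
    if isinstance(tree, tuple):
        left, right = tree
        return flatten(left) + flatten(right)
    return [tree]

history1 = (("a", "b"), "c")   # first form a*b, then (a*b)*c
history2 = ("a", ("b", "c"))   # first form b*c, then a*(b*c)

print(history1 == history2)                    # False: distinct histories
print(flatten(history1) == flatten(history2))  # True: same flat factors
```

Flattening plays the role of the factorization theorem here: it recovers the components, but the order in which they were composed is gone.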
Here's an idealized setting where we can investigate this idea. When Milner proposed the π-calculus he provided a partitioning of ideas as follows. Suppose we have two reagents, say P and Q; then we can place them in a mixture: P|Q. This is a flat, well-mixed container. That is, the mixing operation is -- like multiplication -- not just associative but also commutative. So, because P|(Q|R) = (P|Q)|R = (Q|P)|R = Q|(P|R), any two reagents can come into contact with each other. He went on to describe how a composite agent of the form P|Q can evolve in terms of an interaction between the two components. He suggested that when a reagent seeking some information comes into contact with a reagent providing that information, they can exchange it. More suggestively, suppose that reagent P posts its request for information at a location, x; then a reagent, Q, that posts information to location x can potentially give P that information. He wrote this as
- x?(y)M | x!(z)N -> M{z/y}|N
- P = x?(y)M is the reagent posting at x the request for information
- Q = x!(z)N is the reagent posting the information z to the location x
- M{z/y} is what P becomes after receiving the information z; y was its way of referring to the information before knowing what it actually was
- N is what Q becomes after providing the information z at the location x
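The exchange rule above can be sketched executably. The following is one possible Python encoding; the constructors `In` and `Out`, the substitution, and `react` are all ad hoc choices for illustration, not a standard π-calculus implementation:

```python
# A minimal sketch, in an ad hoc Python encoding, of Milner's
# communication rule: x?(y)M | x!(z)N -> M{z/y} | N.

from dataclasses import dataclass

@dataclass(frozen=True)
class In:          # x?(y)M : request information at location chan, binding bind in body
    chan: str
    bind: str
    body: object   # None stands for the inert process 0

@dataclass(frozen=True)
class Out:         # x!(z)N : post datum at location chan, then continue as body
    chan: str
    datum: str
    body: object

def subst(term, y, z):
    """M{z/y}: replace the placeholder y with the received name z."""
    if term is None:
        return None
    if isinstance(term, In):
        chan = z if term.chan == y else term.chan
        if term.bind == y:   # y rebound here: do not substitute under the binder
            return In(chan, term.bind, term.body)
        return In(chan, term.bind, subst(term.body, y, z))
    # term is an Out
    return Out(z if term.chan == y else term.chan,
               z if term.datum == y else term.datum,
               subst(term.body, y, z))

def react(p, q):
    """One reduction step: fires only when both reagents post at the same location."""
    if isinstance(p, In) and isinstance(q, Out) and p.chan == q.chan:
        return subst(p.body, p.bind, q.datum), q.body
    return None    # no interaction possible

# P = x?(y).y!(v).0 receives a location and then posts v there;
# Q = x!(z).0 sends the location z.
P = In("x", "y", Out("y", "v", None))
Q = Out("x", "z", None)
print(react(P, Q))   # (Out(chan='z', datum='v', body=None), None)
```

Notice that the received name z becomes the next site of interaction in P's continuation; this is the mobility characteristic of the π-calculus.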
Milner's intuitions are well supported by the physical sciences. Chemistry, too, formalizes chemical behavior in rules of the form
- kX + mY -> nZ
- instance: 2H2 + O2 -> 2H2O
Note that Milner's calculus and chemistry differ in one important trait. At the level of chemical equations there is no structural alteration -- no loss of information: all the H on the left side of the arrow shows up somewhere on the right side. After interaction in Milner's calculus, structure can undergo radical change. In some sense Milner's calculus is closer to the intuitions of nuclear chemistry, where such structural change can take place. The point is that chemical equations support a post-interaction decomposition mechanism that Milner's calculus does not.
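The conservation property that distinguishes chemical equations can itself be checked mechanically. Here is a toy Python sketch; the balance checker and its deliberately naive formula parsing (simple formulas like "H2O" only) are assumptions for illustration:

```python
# A toy balance checker: chemical equations carry a conserved multiset
# of atoms, which is exactly what licenses a post-interaction decomposition.

import re
from collections import Counter

def atoms(coeff, formula):
    """Atom multiset of coeff copies of a simple formula such as 'H2O'."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += coeff * (int(num) if num else 1)
    return counts

def balanced(lhs, rhs):
    """Each side is a list of (coefficient, formula) pairs."""
    total = lambda side: sum((atoms(c, f) for c, f in side), Counter())
    return total(lhs) == total(rhs)

# 2H2 + O2 -> 2H2O: every H and O on the left reappears on the right.
print(balanced([(2, "H2"), (1, "O2")], [(2, "H2O")]))   # True
```

No analogous invariant survives a Milner-style interaction step, which is the asymmetry the paragraph above points out.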
As an aside -- to see the close connection between structure and addressing -- notice how structural change after interaction poses deep questions about identity. If A and B change into C and D after interaction, how do we know whether C is A's continuation, or D is? Has A ceased to exist? Does it have a unique relation to one or the other? Or has it been lost in the mix of C and D? To see how this might have ramifications for understanding physical systems, think about the genetic contributions of mother and father in sexual reproduction. Because of feature interaction it is very hard to tease the mother's contribution out of the phenotype.
Now, the question we posed is this: can we find a notion of composition as interaction that breaks a decomposition procedure? Milner's does this if we hide where the interaction occurred. Intriguingly, this hiding is reminiscent of the sort of hiding we find in the categorical notion of composition. To compose two morphisms (= reagents in this setting), p : A -> B and q : B -> C, we must hide the point of interaction. That is, p;q : A -> C. What we can't see in the composite morphism/reagent is the site (B in category theory, x in Milner's calculus) of interaction. Abramsky proposed that this should be the basis of an interpretation of composition as interaction. This idea has worked out well in game semantics, but less well in a more axiomatic formulation called interaction categories. We say less well because the idea of mobility -- of information exchange affecting the next site of interaction -- is not well captured within the framework of interaction categories.
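The hiding of the interaction site under categorical composition can be sketched concretely. In the following Python sketch the dictionary encoding of a typed morphism is purely illustrative:

```python
# Sketch: categorical composition hides the interaction site.  Two typed
# morphisms p : A -> B and q : B -> C compose to p;q : A -> C; the
# composite records only its endpoints, never the middle object B.

def compose(p, q):
    """p;q in diagrammatic order.  Requires cod(p) == dom(q)."""
    (a, b1), (b2, c) = p["type"], q["type"]
    assert b1 == b2, "morphisms not composable"
    return {
        "type": (a, c),                          # B no longer appears
        "map": lambda x: q["map"](p["map"](x)),
    }

p = {"type": ("A", "B"), "map": lambda x: x + 1}
q = {"type": ("B", "C"), "map": lambda x: x * 2}

pq = compose(p, q)
print(pq["type"])    # ('A', 'C') -- the site B of interaction is hidden
print(pq["map"](3))  # 8
```

From the outside, only the endpoints A and C of the composite are visible, just as the location x of a Milner-style interaction disappears once it is hidden.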