Wednesday, March 19, 2008

Replication, reproduction and domain specific languages

In the posting "a biological view of replication and reproduction" we talked about a concurrent version of the paradoxical combinator. Then, in the post "naming as infection", we took an apparent non sequitur of a leap off in the direction of introducing naming into concrete data structures and how that relates to the development of languages. The last post, "2-Level type analysis of adding names to games", showed how a lot of that English could get boiled down into some very concrete Haskell code.

One of the ways to look at the code in the previous posting is as a monadic toolkit for creating DSLs. The proposal is that a Domain Specific Language arises when we get a certain angle or edge on the world via a certain concrete data structure and then inject (infect) that concrete data structure with just enough additional capacity to do computation over that data type. In fact, i rang a lot of changes on this idea in the code here to test it. If i say so myself, the idea is so good that i can solve the "equation" (λ-calculus : Haskell :: π-calculus : X) -- which i submit is the most conservative and robust way to go about designing a production-quality concurrent programming language. i still need to refactor the code to use the monadic abstractions explicitly, but that is going to have to wait because it's not the direction i want to take this post.

In this post i want to begin the process of tying together a biological view of replication, replication-qua-reproduction, with computation-as-infection -- showing that -- in fact -- the two ideas are closely connected.

The bridging idea, the connective tissue, if you will, is the notion that strategies of persistence -- as i dubbed them previously -- can be arranged in a lattice, a partial order. The partial order relates the strategies in terms of information content. Roughly speaking, perfect copy is the strategy with the least information necessary to guarantee persistence. It is the least fixpoint of a certain recursive equation. We can write down a schematic for this recursive equation.

PersistentBehavior = FieldAPoke; PersistentBehavior

Persistent behavior has to, at a minimum, handle a probe into its nature and be essentially unperturbed by this probe -- i.e. go back to doing what it does after -- or while -- handling the probe. These are apparently the necessary and sufficient conditions for qualifying as persistent behavior. To see this, imagine we've some sort of gadget in front of us. What happens if our gadget stops or becomes some other kind of gadget after we poke it? We are unlikely to want to think of this gadget as particularly persistent. Moreover, the gadget has to handle at least some kind of probe, or we are not very willing to say that it behaves. In fact, as we mentioned before, just being visible means the gadget is handling a probe: the probes of gazillions of photons. In the limit our minimal persistent behavior is an infinite capacity to handle probes.
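The schematic above can be sketched directly in Haskell. This is a minimal rendering, not the code from the earlier posts -- names like Probe and fieldAPoke are assumptions of the sketch. Perfect copy is the corecursive value that answers a poke and resumes as itself:

```haskell
-- A probe into the behavior's nature (payload assumed, for illustration).
newtype Probe = Probe String

-- To persist is, at a minimum, to field a poke and come back as yourself.
newtype PersistentBehavior = PersistentBehavior
  { fieldAPoke :: Probe -> (String, PersistentBehavior) }

-- Perfect copy: handle the probe and resume, essentially unperturbed.
perfectCopy :: PersistentBehavior
perfectCopy =
  PersistentBehavior (\(Probe s) -> ("handled: " ++ s, perfectCopy))

main :: IO ()
main = do
  let (r1, k1) = fieldAPoke perfectCopy (Probe "photon")
      (r2, _)  = fieldAPoke k1 (Probe "photon")
  putStrLn r1  -- handled: photon
  putStrLn r2  -- handled: photon
```

Note that the behavior after any number of pokes is again perfectCopy -- an infinite capacity to handle probes, obtained from a finite definition.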


If in our schematic view of persistent behavior we allow polymorphic interpretations of the operator connecting "FieldAPoke" and "PersistentBehavior" -- what we wrote as a semi-colon -- then we can start to see a family resemblance between this behavior and our concurrent version of the paradoxical combinator. For recall that the limit of our process Copy(FieldAPoke,x) is
FieldAPoke | FieldAPoke | FieldAPoke | ...

In the shift from a sequential view of persistence to a concurrent one we've added the possibility of concurrently handling the probes. When the time scale of the probe approaches instantaneous, the limits of these two behaviors are indistinguishable. When probes become more sophisticated or demanding, the shift from sequential to concurrent has the effect of morphing our "handlers" from (re-)actions to (re-)agents. The unfolding of the concurrent strategy is the production of a population of these agents. So, the proposal is manifold:
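The concurrent reading can be sketched with Haskell threads standing in for π-calculus processes (a sketch, with assumed names -- copyProc here plays the role of Copy(FieldAPoke, x)): each probe forks a fresh agent while the copy itself persists, so the unfolding literally produces a population of handlers.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Data.List (sort)

type Probe = String

-- Read a probe on channel x, fork a fresh (re-)agent to handle it,
-- and persist, ready for the next probe.
copyProc :: Chan Probe -> (Probe -> IO ()) -> IO ()
copyProc x handler = do
  p <- readChan x
  _ <- forkIO (handler p)  -- a new agent joins the population
  copyProc x handler       -- the copy itself persists

main :: IO ()
main = do
  probes  <- newChan
  results <- newChan
  _ <- forkIO (copyProc probes (\p -> writeChan results ("handled: " ++ p)))
  mapM_ (writeChan probes) ["a", "b", "c"]
  rs <- mapM (const (readChan results)) [1 .. 3 :: Int]
  mapM_ putStrLn (sort rs)  -- sorted, since the agents race:
                            -- handled: a / handled: b / handled: c
```

The handlers run concurrently, so their completion order is nondeterministic -- which is precisely the sense in which they have graduated from (re-)actions to (re-)agents.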
  • the "phenotype" of the persistent behavior is the part that handles probes
  • different persistence strategies and phenotypic structures are introduced by "injecting" additional structure prior to the knot-tying step; that is, following our 2-Level type recipe:

        data PersistentBehavior = PersistentBehavior FieldAPoke PersistentBehavior
        => data PolyPersistentBehavior a f = PolyPersistentBehavior f a
        data RPolyPersistentBehavior f = RPolyPersistentBehavior f (PolyPersistentBehavior (RPolyPersistentBehavior f) f)
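The recipe in the bullets can be spelled out compactly. In this sketch PB and Fix are assumed names (the declarations above put the recursive slot first; here it comes last so the generic knot-tying step applies directly):

```haskell
-- Base functor: the recursion abstracted into the last type parameter.
data PB f a = PB f a

-- The knot-tying step: tie any base functor back on itself.
newtype Fix g = Fix { unFix :: g (Fix g) }

type PersistentBehavior f = Fix (PB f)

-- Perfect copy: tie the knot with no additional structure injected.
perfect :: f -> PersistentBehavior f
perfect poke = Fix (PB poke (perfect poke))

-- Injecting structure *before* tying the knot, e.g. a name:
data Named g a = Named String (g a)

type NamedBehavior f = Fix (Named (PB f))

-- Observe the first n pokes of a behavior.
pokes :: Int -> PersistentBehavior f -> [f]
pokes 0 _               = []
pokes n (Fix (PB p k))  = p : pokes (n - 1) k

main :: IO ()
main = print (pokes 3 (perfect "poke"))  -- ["poke","poke","poke"]
```

Every layer injected between the base functor and Fix enriches the phenotype without disturbing the knot-tying step itself -- which is what lets the strategies form an ordered family rather than a grab-bag.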
Note that this viewpoint actually lines up with the DSL "toolkit" viewpoint adumbrated above. The DSL toolkit starts with a proposal about what a DSL is: it's a theory about how to get a handle on the world -- a theory that has been infected with some minimal computational machinery, e.g. λ- or π-calculi. The theory is embodied -- made manifest -- in the concrete data structure. The suggestion is that this concrete-data-structure-as-advantage-in-the-world is exactly what phenotype is. A phenotype is a hypothesis about how to get ahead in the world. ;-) More generally, we are turning the "Language: It's a virus" notion on its head; we are saying that viruses (and other beasties) are (domain specific) languages.

To see that reproductive strategies are arranged in an information-theoretic ordering, note that in our recipe above perfect copy shows up as the least fixpoint. Other strategies, like sexual reproduction, introduce additional computation -- which can itself be measured by information content -- into the knot-tying step, yielding a fixpoint somewhere above the least one in the information ordering. (There's a whole series of posts yet to be done on different ideas of information and how they relate to each other and to computation.)
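A toy illustration of the ordering (all names here are assumptions for illustration, not code from the earlier posts): perfect copy consumes nothing beyond the current behavior, while a recombination step also consumes a partner and a crossover mask -- those extra inputs per generation are exactly the additional information content that places the strategy above perfect copy.

```haskell
type Genome = [Bool]

-- Perfect copy: the next generation is fully determined by the current one;
-- zero additional information is injected at the knot-tying step.
copyStep :: Genome -> Genome
copyStep g = g

-- Recombination: the next generation also depends on a partner genome and
-- a mask choosing, locus by locus, which parent contributes -- strictly
-- more input information per generation than perfect copy.
recombineStep :: Genome -> Genome -> [Bool] -> Genome
recombineStep mum dad mask =
  zipWith3 (\m a b -> if m then a else b) mask mum dad

main :: IO ()
main = do
  let g = [True, False, True]
  print (copyStep g)                                        -- [True,False,True]
  print (recombineStep g [False, False, False]
                         [True, True, False])               -- [True,False,False]
```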

In the interest of generating a little narrative tension i'm going to leave it there. In the next post i'm going to talk about a "smooth" transformation from the concurrent version of perfect copy to sexual reproduction where smoothness is measured in terms of the partial order on the domain of strategies of persistence. i would like to close this discussion with a question. Note that the type level description of this phenomenon is decidedly functional. We are considering functional means of finding the type -- as the solution to a domain equation. What would the corresponding concurrency-theoretic account of the type look like? What is a concurrency-theoretic view of types?
