Wednesday, January 28, 2009

Sociology of 'toys'

We are all aware of the many ways in which ideas and our ideas about ideas influence the propagation, acceptance and adoption of ways of thinking -- especially those ways of thinking that impact ways of doing. Personally, i never cease to marvel at the scope of this. For example, it was very clear that the imaginations of those thinkers in the '20s and '30s who were crafting the agenda for physics for the next hundred years were captivated and compelled by images startlingly removed from the human scale. The large and the small, best measured in distance from the human scale by counting orders of magnitude, held sway in the imaginations of those who were framing the physical theories most of us think of as intellectual edifice. What is the human-scale frame for general relativity? What is the human-scale framing of quantum mechanics? It's only in the past couple of decades that things like GPS (note the 'global' in that moniker) or quantum cryptographic protocols have put these theories into strands that weave into the fabric of daily life for most folks.

At the opposite end of the spectrum we have notions of computation whose verisimilitude, their very life-likeness, is a stumbling block to adoption. There are essentially two successful compositional models of computation: the lambda calculus and the pi-calculus. These are really representatives of classes of models, so forgive me for employing a little metonymy. At their root, both make a common ontological commitment: both rest on the idea that being is only witnessed through doing. We can only classify programs in terms of how they behave. Beyond this fundamental commitment, however, there is a marked difference in world-view and outlook. The functional paradigm does not support the idea of autonomous composition. In the functional world there is always a 'head' term in a composition, and the head enjoys an essentially governing role in the computation.
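To make the 'head' idea concrete, here is a minimal sketch in Scala (purely illustrative; the object HeadGoverned and the method head are invented names, not anything standard): the head term alone decides whether, and how often, its argument is ever used, while the argument has no independent say.

```scala
// A minimal, purely illustrative sketch of head-governed composition.
object HeadGoverned {
  // the "head": a by-name parameter lets it decide whether, and how often,
  // the argument term gets evaluated at all
  def head(arg: => Int): Int = arg + arg

  def main(args: Array[String]): Unit = {
    // the argument prints once per use the head makes of it; control flows
    // entirely from the head, never from the argument
    println(head { println("argument evaluated"); 21 })
  }
}
```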

By contrast, the pi-calculus supports a notion of autonomous composition. Terms in parallel composition are essentially peers. They freely mix and may interact/compute/evaluate/reduce in a non-deterministic (ungoverned) manner. i submit that this is much, much more life-like. This is a model of computation that much more closely aligns with my personal understanding of the physical world. Moreover, it is a model much more closely aligned with the western ideal of human society. It's proto-market, in the sense that it provides the sort of conditions necessary for markets to emerge -- from a western viewpoint, at least.
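As a hedged sketch of this kind of composition (plain Scala threads, with a SynchronousQueue standing in very loosely for a pi-calculus name; Peers and spawn are invented names): three peers run in parallel, roughly x!&lt;hello&gt; | x?(y).A | x?(y).B, and which receiver synchronises with the sender is decided at run time, not by any head term.

```scala
import java.util.concurrent.{SynchronousQueue, TimeUnit}

// Three peer "processes" in parallel composition over a shared rendezvous
// channel. No term governs the interaction: which receiver meets the sender
// is non-deterministic, and the other comes away empty-handed.
object Peers {
  def spawn(body: => Unit): Thread = {
    val t = new Thread(new Runnable { def run(): Unit = body })
    t.start()
    t
  }

  def main(args: Array[String]): Unit = {
    val x = new SynchronousQueue[String] // loosely, the channel (name) x

    val peers = List(
      spawn { x.offer("hello", 1, TimeUnit.SECONDS) },                          // x!<hello>, with a timeout so the sketch always terminates
      spawn { println(s"A received: ${Option(x.poll(1, TimeUnit.SECONDS))}") }, // x?(y).A
      spawn { println(s"B received: ${Option(x.poll(1, TimeUnit.SECONDS))}") }  // x?(y).B
    )
    peers.foreach(_.join())
  }
}
```

Run it a few times: sometimes A reports Some(hello) and B reports None, sometimes the other way around. That ungoverned outcome is exactly the point.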

In my experience, however, this model hits a real stumbling block in its adoption in various human communities. Part of this -- i will concede -- is the commitment to doing as the only witness for being. Many, many people keep asking "where's the result? where's the data?" and it takes them a long time to get to a bridging notion like, "ah, the result is a channel to a process that behaves like the data i was looking for." But part of this is a deep-seated inability to understand a notion of composition that supports autonomous execution with the possibility of mutual engagement. i need only point out that this idea escaped every model of computation until the '70s. That's 3K recorded years of computing (yes, Euclid and Pythagoras were computing) and thinking about computing before we stumbled on the obvious idea that computation is a form of interaction. With the exception of Chemistry, none of the dominant mathematical apparatuses supporting physical theories supports this kind of composition.
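That bridging notion can be made concrete with another hedged sketch (again plain Scala with SynchronousQueue as the channel; ReplyChannel and the little squaring server are invented for illustration). The caller never receives "the data" directly; it receives whatever a running process eventually puts on a reply channel that it handed over with the request.

```scala
import java.util.concurrent.SynchronousQueue

// A sketch of "the result is a channel to a process that behaves like the
// data": the caller sends a request together with a fresh reply channel,
// and the answer is whatever later arrives on that channel.
object ReplyChannel {
  type Chan[A] = SynchronousQueue[A]

  def main(args: Array[String]): Unit = {
    // each request carries the argument and the channel to answer on
    val requests = new Chan[(Int, Chan[Int])]

    // a small server process that behaves like "the data" (here, a square)
    val server = new Thread(new Runnable {
      def run(): Unit = {
        val (n, reply) = requests.take()
        reply.put(n * n)
      }
    })
    server.start()

    // the caller: instead of a returned value, it holds a reply channel
    val reply = new Chan[Int]
    requests.put((7, reply))
    println(s"received on the reply channel: ${reply.take()}")
    server.join()
  }
}
```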

And then we wonder why a society of peers is so far from our grasp. As a parent i have learned a lot from watching my children engage with toys. Many of my cherished ideas about nature and nurture were trashed merely from careful observation of how my 5 children engaged their playthings. There's a way in which mathematical theories are just that: toys. They're reduced, idealized versions of situations and phenomena. Observing our models of computation can be like a parent watching children play. The kinds of 'toys' we make and the kinds of games we play say a lot about who we are as a society and a species. There is very little mystery about the struggle to reach a society of peers. Our proclivities are writ large -- like brand names -- on the packaging and very form of our toys. Actually, in a funny way, the situation is so profoundly skewed in one direction that the fact that we ever stumbled on such a radical model as computation-as-interaction is cause for hope. If phylogeny recapitulates ontogeny, maybe the picture of children learning even from playing with the same old toys is a picture of a deeper process that holds true for all of us.

4 comments:

Raoul Duke said...

i'm under the impression that -- presumably because pi-calculus type approaches never caught on enough -- there aren't any good programming languages that implement the pi-c. so now it is sort of a chicken-and-egg situation, and maybe people who would otherwise be interested in pi-c end up using something else that is more pragmatic, e.g. the actor model in Erlang or Scala (or, sorta along those lines, Clojure) or what have you.

(i know of occam-pi http://occam-pi.org/ but again it doesn't seem anywhere near mainstream enough?)

leithaus said...

Thanks for your note!

It depends on what you mean by caught on. What decade did Church discover the lambda calculus? What decade did McCarthy propose Lisp? What decade was it considered reasonable to write production code in a functional language?

If you plot a similar timeline for pi-calculus you will see that we're not due for a serious commercial uptake of pi-calculus-based approaches for general purpose programming languages for quite some time. That said, BizTalk and BPEL and WS-Choreography certainly "caught on" and they were based on pi-calculus-like calculi.

Pi-calculus is also arguably the only reasonable framework in which to give semantics for languages like Erlang and the actor sublanguage of Scala. Certainly, the semantic accounts put forward by the actor community are considerably less compelling.

leithaus said...

As an addendum to the previous comment, we should note that there is a considerable acceleration toward a pi-calculus-based general purpose programming language (by comparison to the timeline for the lambda calculus). Part of this is fueled by the commercialization of the Internet and part of this is fueled by the shift to multicore hardware architectures. Together these are demanding an investigation of programming models that will help with the very serious difficulties of concurrent and distributed programming.

Looking into the future, software-transactional-memory-like schemes will only get you so far. As Pat Helland used to say, 2PC is the anti-availability protocol. If you look out to where we are heading it looks remarkably like Internet on a chip. Then the message-passing model starts to look remarkably scale-invariant, that is, applicable at the global Internet scale and applicable at the chip level. The transactional model hits the sweet spot where you've got tens of resources sharing a common store. It doesn't scale out. To scale out you have to share less. This is why db scale-out schemes involve partitioning.
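As a toy illustration of "to scale out you have to share less" (a hypothetical sketch in Scala; the shard count, keys and the Partitioning object are made up), a key space can be hash-partitioned into disjoint shards so that no single shared store has to mediate every access:

```scala
import scala.collection.mutable

// A toy illustration of partitioning for scale-out: keys are hashed to
// disjoint shards, so writers share nothing beyond the partitioning function.
object Partitioning {
  val numShards = 4
  def shardOf(key: String): Int = java.lang.Math.floorMod(key.hashCode, numShards)

  def main(args: Array[String]): Unit = {
    val shards = Array.fill(numShards)(mutable.Map.empty[String, String])
    List("alpha" -> "1", "beta" -> "2", "gamma" -> "3").foreach {
      case (k, v) => shards(shardOf(k)).update(k, v)
    }
    shards.zipWithIndex.foreach { case (shard, i) => println(s"shard $i: $shard") }
  }
}
```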

Finally, there's another aspect to "caught on". If you follow conferences like CONCUR you will see that the trend has been totally clear. At the outset of CONCUR there were papers on many more, and much more diverse, models of concurrent and distributed computation. Now the process calculi dominate. If you look at commercially critical -- but very specialized -- activities like specifying and verifying mission-critical protocols, the process calculi are essentially the only game in town.

With these kinds of tipping point situations it's very hard to see when you have reached the tipping point. The current situation w.r.t. programming languages is mitigated by sociological processes. Most programmers have at least some exposure to functional models of computation, but very, very little to the process calculi. So, the fact that the functional languages have reached a point where they can perform as well as procedural languages, together with the need to keep deltas to existing programming paradigms as small as possible in order to capitalize on the existing talent pool, is driving much of what we see in terms of the growth of the functional languages.

Over time i don't know if the functional languages can maintain ascendancy. Scala has already stipulated this point with its actor sublanguage. Erlang went through a similar transition a generation earlier. i'm guessing we will see something like this in Haskell sooner rather than later. Then, whether you call it functional programming or gardening, a rose by any other name will still grow its petals concurrently with all the other roses.

Paul Steckler said...

With the impending 200th birthday of Darwin, I'd like to add evolution to the list of sciences where interaction among entities is fundamental.