Over the years i have become conscious of certain natural progressions. One i have talked about to colleagues and friends is the movement from "god's eye" descriptions of the world to compositional ones. i'm not going to talk about that one here, but rather about one that has to do with the introduction of names. It's a very natural thing to do, to introduce names for things over the course of a discussion; and, if we watch this process carefully, i think that certain aspects of communication -- including human-to-human, machine-to-machine and human-to-machine -- become more easily managed. i submit that because we have some erroneous assumptions about the use and development of language, including naming, we muff communications on a regular basis.
As an aside, i should come clean that i believe it is a natural, ubiquitous and on-going human activity to develop languages. We all experience this when we come in contact with some new subculture -- whether it's a new corporate culture or a new "youth" culture -- and one of its more noticeable or recognizable aspects is its "language", its vocabulary and grammar. Words -- common words like 'boy' or 'toy' -- that have certain meanings in the ambient culture's language are given specialized or even wholly different meanings in the subculture. New words are coined. Ways of stringing words together are also specialized or freshly minted. Examples are so ubiquitous that it's overwhelming to list them -- so i won't. Instead i just want to state that i take it as a starting point that the child is not acquiring language; it works first with mother, then family, then family and friends, then ... to co-create language. Further, this process continues throughout life. Language co-creation is a fundamental human activity. This perspective -- by the way -- is what prevents me from having a lot of faith in the current "ontology" efforts of web 3.0. Any given ontology will be out of date before it has enough mass to be useful -- and the abstractions that facilitate the natural activity of co-creating (a linguistic representation of) an ontology are almost entirely missing.
Given this point of view it is natural to ask what are some of the basic components of the process of the co-creation of language. i submit that naming is one of them. When looked at in this light it is natural to understand that names come with scope: the current conversation, a larger on-going dialog, a social or professional context, etc. One of the things we are most likely to forget, however, is the established scope. Names can escape scope when conversations, dialogs and/or cultures collide. Thus, having a device for delimiting and circumscribing scope is incredibly natural and useful. Moreover, if this device allows one to manipulate scope -- as might happen when cultures collide -- all the better.
i want to approach this idea via a very simple example. One of the most brilliant mathematicians of the last and current century is John Horton Conway. In the early 1970s he formalized a notion of game in terms of a certain recursive data structure. He went on to show that every notion of number that has made it into the canon of numerical notions can be represented in terms of this data structure. These ideas are documented in his delightful On Numbers and Games. Knuth popularized some of them in his writing on surreal numbers. There's a lot already written about Conway games and how they can be used to represent numbers, and i'm not going to go over much of that here. Instead, i'm just going to treat the games as a data structure that we might very naturally be motivated to include in a programming environment -- language, library or other beastie. Here's just the barest little bit that we need in order to get to the central point i want to make about naming.
Conway's representation is really a representation of positions. His view is that we can represent a great deal about a game by thinking of it in terms of positions from which (and to which) plays can be made. Just to be clear, his representation is more intuitively thought of in terms of two-player games in which the players take alternating turns. (Unlike, say, rock-paper-scissors, in which the players all play at once in any given round.) The two players are conventionally called 'Player' and 'Opponent' (see what i mean: we're already introducing names and scopes!). A position is then just a list of positions Player can move to and a list of positions Opponent can move to. If you've spotted what appears to be a circularity (a position is a ... pair of lists -- sets actually -- of positions), you're spot on! The data structure, as mentioned above, is recursive. What allows this structure to be grounded -- to make contact with the Ground (in Christopher Alexander's sense) -- is that these lists can be empty. The first game (position) is the one in which Player has no positions to which she can move and Opponent is likewise limited.
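For the programmers in the audience, here is a minimal sketch of that data structure in Haskell. (The names Position, playerMoves, opponentMoves and emptyGame are mine, for illustration only; they are not Conway's and not part of any code generated later in this post.)

-- A position is a pair of lists: the positions Player can move to
-- and the positions Opponent can move to.
data Position = Position
  { playerMoves   :: [Position]
  , opponentMoves :: [Position]
  }

-- The recursion is grounded because both lists may be empty:
-- the first game is the one in which neither side has a move.
emptyGame :: Position
emptyGame = Position [] []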
To support intuition we can introduce Conway's notation, but we're going to do this in an idiosyncratic way. Since this is a post about a dialectic in the co-creation of language (see... we're doing it again, we've already started to use terms like 'dialectic' and phrases like 'co-creation of language' in ways that are special to this specific communication) we're going to describe Conway's data structure using a grammar. The following grammar is machine readable and can be consumed by the tool BNFC.
-- Games
Base . Game ::= "{" [Game] "|" [Game] "}" ;
-- Lists
[] . [Game] ::= ;
(: []) . [Game] ::= Game ;
(:) . [Game] ::= Game "," [Game] ;
The grammar is expressed in a labeled variant of EBNF in which all alternatives of a production involving a choice are given separate, labeled productions. The first production (labeled 'Base') says that a game (represented by the token Game) is given by a curly-brace ("{", "}") delimited pair of lists of games that are separated by a vertical bar ("|"). The next three productions say what a list of games (represented by the token [Game]) is. The first of these says that an empty list is just that, represented by emptiness. The third of these says how to add a game to the front of a list of games (i.e., put a comma between the game and the list). The middle production says that a one-element list is just written as a single game.
Before moving to some (hopefully illuminating) examples, i need to point out a little technicality. Conway's formalism is expressed in the language of mathematics which uses sets -- instead of lists -- that are potentially infinitely 'wide', i.e. contain infinitely many elements. It is common practice in computer science-y contexts to ignore the difference between sets and lists because everybody knows the drill (see... we're doing it again, scope and specialized use are at play in this our co-creation of language to communicate); but, the representation above will only provide for finitely 'wide' lists and thus is only an effective approximation of the structure Conway devised. It will serve for our purposes, but you, gentle reader, should be aware that there are differences.
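A quick, hand-worked illustration of that difference (not tool output): in the set-based formalism the expressions

{ {|}, {|} | }
{ {|} | }

describe the same game, because sets collapse duplicates; the grammar above, however, parses them to different lists -- a two-element list versus a one-element list in the left component. For everything we do below the distinction won't bite, but it is there.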
Now, for that grounding example i mentioned earlier...
{|}
There... that's the empty game and is accepted by the grammar, as you may verify by placing the grammar above in a file named Conway0.cf and invoking
> bnfc Conway0.cf
...
> alex LexConway0.x
> happy -i ParConway0.y
...
>
at the prompt of a unix box, and then starting Haskell (ghci) and typing
Prelude> :l TestConway0.hs
This is what you should see
[1 of 7] Compiling LexConway0 ( LexConway0.hs, interpreted )
[2 of 7] Compiling AbsConway0 ( AbsConway0.hs, interpreted )
[3 of 7] Compiling PrintConway0 ( PrintConway0.hs, interpreted )
[4 of 7] Compiling ErrM ( ErrM.hs, interpreted )
[5 of 7] Compiling SkelConway0 ( SkelConway0.hs, interpreted )
[6 of 7] Compiling ParConway0 ( ParConway0.hs, interpreted )
[7 of 7] Compiling Main ( TestConway0.hs, interpreted )
Ok, modules loaded: ErrM, Main, AbsConway0, PrintConway0, SkelConway0, ParConway0, LexConway0.
Then if you type
*Main> run 2 pGame "{|}"
This is what you should see
Loading package haskell98 ... linking ... done.
Parse Successful!
[Abstract Syntax] Base [] []
[Linearized tree] {| }
*Main>
Now that we've done all that work for the empty game we can actually get to more interesting games. The empty game is commonly used to represent the number 0. The game for the number 1 puts the empty game in as a position available in the left component, that is {0|}. Likewise, the game representing 2 is {1|}, and more generally, the game representing n+1 is {n|}. Similarly, {|0} represents -1. And, more interestingly, {0|1} represents 1/2. But, now we've done it! Moreover, we've done it formally and out in the open. i'm guessing that even the mathematically sophisticated amongst the readers will not recognize what we've done -- because the phenomenon is so ubiquitous (especially in maths) that we're like fish to this water. What have we done? We've implicitly said "let's use the name '0' for the game {|}". i can prove that we've done this. Look at the grammar above. It will not accept the expression {0|}. Try typing it into Haskell. It will accept the expression {{|}|}. But, this is a lot harder to read. That's one of the tricks mathematicians (and other players of the language game) have learned to use so well that they've forgotten how much, and in what contexts, it's in use.
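To make that concrete, here is a sketch of these numbers as values of the abstract syntax BNFC derives from the grammar above. (The names zero, one, two, minusOne and half are mine; Base is the constructor BNFC generates in AbsConway0.hs from the production labeled 'Base', roughly data Game = Base [Game] [Game].)

import AbsConway0 (Game (..))

zero, one, two, minusOne, half :: Game
zero     = Base [] []         -- {|}
one      = Base [zero] []     -- "{0|}", written out in full: {{|}|}
two      = Base [one] []      -- "{1|}", written out in full: {{{|}|}|}
minusOne = Base [] [zero]     -- "{|0}", written out in full: {|{|}}
half     = Base [zero] [one]  -- "{0|1}"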
We can be explicit about this act, however, and recognize formally what we do without thinking. We can expand our grammar to explicitly include a simple naming utterance. (Note, in the grammar below i've also added expressions for adding and multiplying and negating games. Look to Conway or to the survey paper on arXiv to see what these mean. i'm not going to talk about them here.)
-- Expressions
EDef . Exp ::= "let" Var "be" Exp "in" Exp1 ;
ERef . Exp ::= Var ;
EAdd . Exp ::= Exp "+" Exp1 ;
EMul . Exp1 ::= Exp1 "*" Exp2 ;
EGame . Exp2 ::= Game ;
_ . Exp ::= Exp1 ;
_ . Exp1 ::= Exp2 ;
_ . Exp2 ::= "(" Exp ")" ;
Variable . Var ::= Ident ;
-- Games
Base . Game ::= "{" [Exp] "|" [Exp] "}" ;
Negation . Game ::= "-" Game ;
-- Lists
[] . [Game] ::= ;
(: []) . [Game] ::= Game ;
(:) . [Game] ::= Game "," [Game] ;
[] . [Exp] ::= ;
(: []) . [Exp] ::= Exp ;
(:) . [Exp] ::= Exp "," [Exp] ;
The important thing this new grammar adds is the pair of productions for definitions (EDef) and references (ERef). This allows us to express what we said above about how to think about the representation of the number 1 in terms of the representation of the number 0.
*Main> run 2 pExp "let Z be {|} in {Z|}"
Parse Successful!
[Abstract Syntax]
EDef (Variable (Ident "Z")) (EGame (Base [] [])) (EGame (Base [ERef (Variable (Ident "Z"))] []))
[Linearized tree]
let Z be {
}
in {
Z
}
*Main>
Of course, because we used Ident as the terminal grounding the non-terminal Var, we can't literally write "let 0 be {|} in {0|}"; but we could make that work, too, if we were so inclined. The main point is that we have an explicit acknowledgment in our syntax of our acts of naming -- capturing an essential aspect of what we must do in communication acts involving data structures of even moderate complexity.
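Being so inclined would be a one-line change, sketched here for the curious (the label Numeral is mine and is not part of the grammars in this post): BNFC has a built-in Integer token category, so adding

Numeral . Var ::= Integer ;

alongside Variable . Var ::= Ident ; should let utterances like "let 0 be {|} in {0|}" parse -- with the (deliberately unsettling) consequence that numerals become ordinary, rebindable names.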
What this representation will not do for us, though, is allow us to manipulate scopes very effectively. Specifically, it will not allow us to build new scopes by "gluing" them together. We can do that if we allow ourselves to change the code inside let-expressions; but if we want to leave certain units of code unchanged in order to combine scopes into larger and larger units, then -- ironically -- we have to split the atom of definition. It turns out that the definition-reference construct is not basic. Rather, the introduction of scope as an abstraction (literally!), together with a notion of applying this scope to create a binding for the term, are the more basic constructs. For those who are somewhat familiar with it, of course, i am describing the λ-calculus. The point here is that something like the λ-calculus is at play at the inception of virtually any communication structure we devise to give us some handle on the world. We inject something like this naming device into our communication devices.
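For readers who want the one-line version of that decomposition, in standard λ-calculus notation:

let x be e in b   ≡   (λx. b) e

That is, the let/ref construct splits into the two more basic moves of abstracting over a name and applying the abstraction to the thing being named.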
And notice that it is an injection. Look at the grammar above. The production for games has been modified. Now, instead of curly-brace-delimited lists of games, it takes curly-brace-delimited lists of expressions -- so that we can refer to names we have introduced via our scoping mechanism. This trend remains when we introduce the full-blown version.
-- Core lambda calculus
Abs . Term ::= "lambda" [Var] "->" Term ;
App . Term ::= "(" Term [Term] ")" ;
Expr . Term ::= Exp ;
-- Expressions
EDef . Exp ::= "let" Var "be" Exp "in" Exp1 ;
ERef . Exp ::= Var ;
EAdd . Exp ::= Exp "+" Exp1 ;
EMul . Exp1 ::= Exp1 "*" Exp2 ;
EGame . Exp2 ::= Game ;
Lambda . Exp2 ::= "(" Term ")" ;
_ . Exp ::= Exp1 ;
_ . Exp1 ::= Exp2 ;
Variable . Var ::= Ident ;
-- Games
Base . Game ::= "{" [Exp] "|" [Exp] "}" ;
Negation . Game ::= "-" Game ;
-- Lists
[] . [Term] ::= ;
(:) . [Term] ::= Term [Term] ;
[] . [Game] ::= ;
(: []) . [Game] ::= Game ;
(:) . [Game] ::= Game "," [Game] ;
[] . [Exp] ::= ;
(: []) . [Exp] ::= Exp ;
(:) . [Exp] ::= Exp "," [Exp] ;
[] . [Var] ::= ;
(:) . [Var] ::= Var [Var] ;
This formulation allows us to write "game patterns": expressions in which certain game structure has been deferred, to be filled in later -- a common communication act, essential in the co-creation of language, and one of the basic patterns allowing us to talk about the act of creating language itself. It also allows us to compose game patterns, to build up larger patterns out of smaller ones. And it allows us to point to the limits or completions of infinite processes without needing infinite resources to do it. These advantages are why we ultimately proceed along the dialectic from structure, to structure with references, to structure intertwined with λ-calculus. We do this without knowing what it is we are doing.
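As a rough illustration in the concrete syntax above (hand-written, not checked against the generated parser, and glossing over grouping details): the term

lambda X -> {X|}

is a reusable "successor" pattern. Applied to {|} it stands for {{|}|} (the game for 1); applied to that result it stands for the game for 2; and so on. The pattern names the step without committing in advance to the game it will be applied to.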
If you look at the specification above, it is easy to construe it as (the grammar for) the λ-calculus with an embedded data type, namely Conway games. The position i am taking here is that the natural progression runs the other way. The λ-calculus is "genetic" material we inject or infect the Conway construction with. Again, look at the basic production for games. It now references λ-terms. We know the old saw "Language: it's a virus". The proposal i'm making here is that there's more than a kernel of truth to this idea, hence the title of this blog.
But because this is so organic, so natural, we don't know what we're doing, and the process often goes awry. This process of introducing and managing naming can get really bungled up. A really good example can be seen in the newly emerging class of biological databases, like KEGG or Reactome or BioCyc. These databases record -- among other things -- pathways (effectively, chains of biochemical reactions). The pathways need to mention the reagents, and they do so by name. One apparent way out of the issue of scoping these names is to use URIs. That's what many of these databases do. This fails because the dereferencing mechanism of those URIs is decidedly not universal: by the very spec for the URI, no agent is required to treat a scheme the same way as everybody else. The notion of scheme in a URI is not given a specific semantics -- because it can't be. As a result the pathways -- which could be the source of interesting information about pathway patterns -- are much less useful, because they cannot be composed -- or decomposed. i will give examples of this in a subsequent post. This one has already gotten much longer than i had anticipated.
4 comments:
Note: the words 'inject' and 'injection' are being used here in a more biological sense. They are not referring to a co-product-like widget.
If you write out these grammars as 2-level types you will see that some of the productions are not applied in the same way when the recursive knot is tied. They are applied to the enveloping name-management structure -- be it let/ref or lambda abstraction.
Great post! I love the perspective you're taking (even though I apparently need to brush up on my lambda-calculus).
I'm really interested in the semantics behind programming languages and your take on the problem sounds like the same thing I've been struggling with.
I'm really curious about your take on the URI naming scheme problem you mentioned in the end, so keep up the good work!
Hey mtrimpe,
Thanks for the feedback. i'd be interested to hear more about your programming language semantics efforts. Do you have some particular writings i could/should read?
Best wishes,
--greg
I haven't put all that much in writing yet as I only started exploring this direction a few months ago.
You can find the two posts that got me going in this direction at my blog though to see _what_ exactly I'm trying to do:
http://blog.mondiality.nl/2007/08/19/speck-programming-by-specification/
http://blog.mondiality.nl/2007/08/29/graph-oriented-programming/