This post is part of a series of posts attempting to make explicit some folklore and wisdom of the commons that computer scientists enjoy, but that neither mainstream programmers nor mainstream mathematicians have exposure to. Both of these cultures would benefit from this common wisdom, and so it feels like telling this story is of a certain utility. The caveat is that, unlike the other posts, the pay-off is cumulative, not immediate. i'm not going to present some eye-catching bauble as with the post on Conway Games. Instead, you're going to have to steep in this stuff a bit before the goodness kicks in.

It’s worth summarizing what we’ve said so far. A very broad class of algebras, corresponding to what are commonly known as algebraic data types, have purely syntactic presentations in terms of a freely generated structure and a collection of relations. Once we have this general idea in mind, whether we are programmers, computer scientists, mathematicians or logicians, we are probably pretty comfortable with the idea of polymorphism, and so are probably pretty comfortable with the idea that ‘freely generated structure’ admits many polymorphic interpretations, ranging from simple concatenation of symbols to languages generated by (Chomsky) grammars; a similar intuition applies to relations. Here’s a little table summarizing some of this content with examples.


| | Mainstream mathematics | Computer science |
| --- | --- | --- |
| Generators | Symbols | Grammars |
| Relations | Equations | Structural equivalence |
| Examples | Monoids, Groups, Rings, Fields, Vector spaces, ... | Lambda calculus, π-calculus, ambient calculus, ... |
| Example specifications | Example monoid specification. Generators: ISO character set. Relations: `s ++ "" == s == "" ++ s`, `s1 ++ ( s2 ++ s3 ) == ( s1 ++ s2 ) ++ s3` | Example language specification. Generators given by the grammar `E ::= Var \| \ x -> E \| E E`. Relations (alpha-equivalence): `\ x -> E ≈ \ y -> ( E{y/x} )` |

### A smattering of Universal Algebra

There is one thing the mainstream mathematician is likely to have that the modern programmer does not: some passing familiarity with the core notions of Universal Algebra. Universal algebra presents a general theory of what an algebra (read: algebraic data type) is, and spells out, for a given algebraic theory, what it means to have a presentation and a model, and what constitutes a purely syntactic model.

Actually, armed with what we already have we can present the core notions via a simple grammar! Let’s begin with the idea of an algebraic expression.

```
Operation . AlgebraicExpr ::= Symbol "(" AlgebraicExpr* ")"

Name . Symbol ::= Identifier
```

This would correspond to the Scala code

```scala
trait AlgebraicExpr
case class Operation(
  symbol  : Symbol,
  actuals : List[AlgebraicExpr]
) extends AlgebraicExpr

trait Symbol
case class Name( identifier : [| Identifier |] ) extends Symbol
```

#### A short detour through 2-level type decomposition

Notice that we have deferred providing a semantic interpretation of the type Identifier. We did this because there was no production for it in the grammar, and our interpretation scheme allows us to be lazy about this. Further notice that Scala gives us a way to be permanently lazy about missing productions. Because our scheme interprets grammatical categories as types, both parametric polymorphism (that’s generics to you!) and type members offer reasonable semantic targets for lazily interpreting missing productions. We parametrize in terms of the type corresponding to the category that is missing a production. In the case above we have

```scala
trait Symbol[Identifier]
case class Name[Identifier](
  identifier : Identifier
) extends Symbol[Identifier]
```
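To see what this buys us, here's a quick sketch (the example values are mine): with the parameter unconstrained, any type at all can stand in for Identifier.

```scala
trait Symbol[Identifier]
case class Name[Identifier](
  identifier : Identifier
) extends Symbol[Identifier]

// the unconstrained parameter accepts any type as the Identifier
val s1 : Symbol[String] = Name( "x" )
val s2 : Symbol[Int]    = Name( 42 )
```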

Now, if a production (or several!) for Identifier ever gets codified we can inject it into our semantic interpretation. So, suppose we have two types of identifiers, Strings and UUIDs.

```
LocallyUnique . Identifier ::= String

GloballyUnique . Identifier ::= UUID
```

These productions correspond to the type specifications below.

```scala
import java.util.UUID

trait Identifier
case class LocallyUnique( s : String ) extends Identifier
case class GloballyUnique( u : UUID ) extends Identifier
```

Then we are free to provide the concrete interpretations of Symbol via instantiation of the parametric type. Notice that in Scala we could improve on our interpretation of Symbol, and remain lazy, with a type bound on the parameter. We don’t want any old type (such as Int) supplied in the type parameter, but only those types that really do act as Identifiers. In other words,

```scala
trait Identifier

trait Symbol[ID <: Identifier]
case class Name[ID <: Identifier](
  identifier : ID
) extends Symbol[ID]
```
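A minimal sketch of the bound in action (reusing LocallyUnique from above): a Name over an Identifier subtype compiles, while a Name over an arbitrary type is rejected.

```scala
trait Identifier
case class LocallyUnique( s : String ) extends Identifier

trait Symbol[ID <: Identifier]
case class Name[ID <: Identifier](
  identifier : ID
) extends Symbol[ID]

val ok = Name( LocallyUnique( "x" ) )   // Name[LocallyUnique]: compiles
// val bad = Name( 42 )                 // rejected: Int is not <: Identifier
```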

Now, only subtypes of Identifier, such as the *Unique case classes, are available for substitution in the type parameter. Moreover, if we decide to add path-based identifiers later

```
Pathbased . Identifier ::= Identifier ( "." Identifier )*
```

we can add its semantic interpretation in a similarly lazy, compositional fashion, without perturbing our existing code.

```scala
case class Pathbased(
  id : Identifier,
  p  : List[Identifier]
) extends Identifier
```
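For instance, a dotted path like a.b.c comes out as a head identifier followed by the remaining segments; a minimal sketch reusing the types above (the example identifiers are made up):

```scala
trait Identifier
case class LocallyUnique( s : String ) extends Identifier
case class Pathbased(
  id : Identifier,
  p  : List[Identifier]
) extends Identifier

// the path a.b.c
val abc = Pathbased(
  LocallyUnique( "a" ),
  List( LocallyUnique( "b" ), LocallyUnique( "c" ) )
)
```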

Of course, this benefit isn’t entirely for free. If we do use this method, then our AlgebraicExpr type will also have to be parametric in Identifier, because it has a dependency on Symbol, which is so parametrized. It’s a small price to pay: it just means that the interpretation needed to be cognizant of this strategy from the outset. Now that we know about the strategy we can incorporate it from the beginning.

```scala
trait AlgebraicExpr[ID <: Identifier]
case class Operation[ID <: Identifier](
  symbol  : Symbol[ID],
  actuals : List[AlgebraicExpr[ID]]
) extends AlgebraicExpr[ID]

trait Symbol[ID <: Identifier]
case class Name[ID <: Identifier](
  identifier : ID
) extends Symbol[ID]
```

And now you, gentle reader, have a basic understanding of 2-level type decomposition. One last point before returning to the main topic: the use of type parameters as a means of lazily handling missing productions opens another possibility: recursion. In terms of our example, consider the following class of Symbols

```scala
case class Quotation[ID <: Identifier](
  expr : AlgebraicExpr[ID]
) extends Symbol[ID]
```

This allows for a wholly new class of Symbols which are quotations of AlgebraicExprs. The general strategy of using recursion in 2-level type decomposition is well known. This specific application to the creation of symbols is less well known, and has some significant implications that are beyond the scope of this discussion. Suffice it to say that this technique -- parametrizing syntactic categories in syntactic categories -- can be projected back from our semantic domain (where we introduced it) to the syntactic domain of specifying free(ly generated) structures. This opens up a whole new level of expressiveness at the syntactic level, corresponding to the expressiveness we are accustomed to with the introduction of parametric polymorphism. In some very real sense, the type definition language of Haskell can be seen as a model or blueprint for a much richer syntax specification mechanism. Nota bene: this is bleeding edge thinking! Apart from Haskell itself, there does not exist a tool for specifying abstract syntax in this fashion. (Ok, CS community, i know you can take a hint.) (Scala coders beware! Scala generics will not allow you to tie the recursive knot this way without some trouble. Instead, use type members and you will be happier.)
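For the curious, here is one way the type-member route might look. This is a sketch of my own, not a standard recipe: the category of symbols becomes an abstract type member of a module, and a concrete module ties the recursive knot.

```scala
// module-style sketch: Sym is an abstract type member, so a concrete
// module can close the loop between symbols and expressions
trait Algebra {
  type Sym
  sealed trait AlgebraicExpr
  case class Operation( symbol : Sym, actuals : List[AlgebraicExpr] ) extends AlgebraicExpr
}

object QuotingAlgebra extends Algebra {
  sealed trait QSym
  case class Name( id : String ) extends QSym
  case class Quotation( expr : AlgebraicExpr ) extends QSym  // recursion back into expressions
  type Sym = QSym
}

import QuotingAlgebra._
// a symbol that quotes an expression, used as an operation head
val quoted = Operation( Quotation( Operation( Name( "s" ), Nil ) ), Nil )
```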

### A smattering of Universal Algebra (continued)

With that discussion under our belt we can now reconstruct our grammar for AlgebraicExprs.

```
Operation . AlgebraicExpr ::= Symbol "(" AlgebraicExpr* ")"
```

Using the same trick as above, we can simply defer any interpretation of Symbol. [Note: from now on, when there’s little risk of confusion, i’m going to drop the use of quotation marks around elements of concrete syntax, like the parens in the production above, and use a different font instead. Consistent with this, i’ll use fonts to indicate the various categories present in our specification. So, the above production becomes

```
Operation . AlgebraicExpr ::= Symbol ( AlgebraicExpr* )
```

Thus endeth the notational convention sidebar.]

```scala
trait AlgebraicExpr[Symbol]
case class Operation[Symbol](
  symbol  : Symbol,
  actuals : List[AlgebraicExpr[Symbol]]
) extends AlgebraicExpr[Symbol]
```

So, if s, s1, s2, … are Symbols (whatever those are), then expressions like

```
s(), s1( s() ), s2( s1( s() ), s() ), …
```

would be recognized by this grammar as AlgebraicExprs. Moving toward an eventual semantic interpretation of these purely syntactic gadgets, we’re going to identify expressions of the form s() as constants. That is, they will eventually be interpreted in much the same way we normally interpret constants in a functional language: as pure functions of zero arity. Likewise, we’d like to interpret expressions of the form s1( s() ) as applications of a unary function, which we’ll write [| s1 |], to the constant [| s |].

By now, you’ve probably spotted the general convention in this posting (which is a convention in programming language semantics at large) of using [| - |] (pronounced ‘meaning of’) to denote the compositional semantic interpretation of some piece of syntax into its target semantics. Thus, we have a general specification of the semantic interpretation of our AlgebraicExprs.

```
[| s( e1, …, eN ) |] = [| s |]( [| e1 |], …, [| eN |] )
```

In terms of our Scala code we can write this down, too.

```scala
import scala.language.reflectiveCalls

trait Semantics[
  Symbol,
  Value,
  // an Operator is anything that can be applied to a list of actuals
  Operator <: { def apply[X]( actuals : List[X] ) : X }
] {
  def meaning( s : Symbol ) : Operator

  // the meaning of an expression: apply the meaning of the symbol
  // to the meanings of the actuals
  def meaning( expr : AlgebraicExpr[Symbol] ) : Value =
    meaning( expr.symbol )( expr.actuals.map( ( e : AlgebraicExpr[Symbol] ) => meaning( e ) ) )
}
```
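To make this concrete, here is a small, self-contained instance of the idea. i've simplified Operator to a plain function type List[Value] => Value, to sidestep Scala's reflective structural calls, and the arithmetic interpretation is a made-up example:

```scala
trait AlgebraicExpr[Symbol]
case class Operation[Symbol](
  symbol  : Symbol,
  actuals : List[AlgebraicExpr[Symbol]]
) extends AlgebraicExpr[Symbol]

// simplified Semantics: an Operator is just List[Value] => Value
trait SimpleSemantics[Symbol, Value] {
  def meaning( s : Symbol ) : List[Value] => Value
  def meaning( expr : AlgebraicExpr[Symbol] ) : Value =
    meaning( expr.symbol )( expr.actuals.map( ( e : AlgebraicExpr[Symbol] ) => meaning( e ) ) )
}

// a hypothetical arithmetic interpretation: symbols are strings
object Arith extends SimpleSemantics[String, Int] {
  def meaning( s : String ) : List[Int] => Int = s match {
    case "plus"  => _.sum
    case "times" => _.product
    case n       => ( _ : List[Int] ) => n.toInt  // numerals as zero-ary constants
  }
}

// [| plus( 2, times( 2, 3 ) ) |] = 8
val expr = Operation( "plus", List(
  Operation( "2", Nil ),
  Operation( "times", List( Operation( "2", Nil ), Operation( "3", Nil ) ) )
) )
```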

Those familiar with Universal Algebra will notice that we have skirted the syntactic treatment of arity. That is, our grammar allows for both

- the expression s1( s() ) and
- the expression s1( s(), s() ).

If s1 denotes a single function, there’s a potential for a problem. In an era where polymorphism is commonplace, we recognize that s1 probably stands for a whole family of functions each with different arity. We probably ought to give a nod to this (and some related issues) by adjusting the meaning function of our semantics as

```scala
def meaning( s : Symbol, arity : Int ) : Operator
```

to allow us to pick out which Operator in the family a given symbol denotes. As an example, consider our friends the Boolean algebras. It is perfectly reasonable and consistent to interpret true as the zero-ary conjunction and -- because conjunction is associative -- we have a version of it for every possible arity. In other words, we have a reasonably complete presentation of a Boolean algebra with

```
Conjunction . BooleanExpr ::= And ( BooleanExpr* )

Negation . BooleanExpr ::= ~ BooleanExpr
```

together with De Morgan’s law, which we could write

```
Or( b1, b2 ) := ~ And( ~ b1, ~ b2 )
```
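As a sanity check, here is a minimal sketch of this presentation with an evaluator (the names are mine): true is the zero-ary conjunction, and Or is pure sugar defined by De Morgan's law.

```scala
sealed trait BooleanExpr
case class Conjunction( conjuncts : List[BooleanExpr] ) extends BooleanExpr
case class Negation( negand : BooleanExpr ) extends BooleanExpr

// true as the zero-ary conjunction; false as its negation
val tt : BooleanExpr = Conjunction( Nil )
val ff : BooleanExpr = Negation( tt )

// Or as syntactic sugar, via De Morgan's law
def or( b1 : BooleanExpr, b2 : BooleanExpr ) : BooleanExpr =
  Negation( Conjunction( List( Negation( b1 ), Negation( b2 ) ) ) )

def eval( b : BooleanExpr ) : Boolean = b match {
  case Conjunction( bs ) => bs.forall( eval )  // empty conjunction evaluates to true
  case Negation( b0 )    => !eval( b0 )
}
```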

This raises interesting points about syntactic sugar and compositional semantics -- but let’s not get ahead of ourselves; instead, let’s pop up a level, closer to our aim. Even with this much we can sketch out what we mean by an algebraic theory and a model. An algebraic theory is going to be given by a collection of Symbols, an assignment of an arity (resp., set of arities) to each Symbol, and a collection of equations. What are those? In the interest of narrative tension, they will have to wait until the next post! ;-) For now, let me close with this: if you’ve spotted a connection between our AlgebraicExprs and case class instances, this is no accident! It’s all part of the underlying structure i’m hoping to expose.
