One of the most persistent questions in our history is whether we are 'alone'. To misuse Heidegger, is humankind the only being concerned with the question of Being? In this vein, artificial intelligence (AI) can be thought of as a Pygmalion-like play to make a playmate if we can't find one. At the center of the AI whorl of ideas is Turing's clever formulation of the old adage, 'if it looks like a duck and walks like a duck and quacks like a duck and ...' His key insight (or at least his central proposal) was that the primary observational test we need to employ to check for this particular flavor of duckiness (aka intelligence) is the use of language. In modern parlance, if after IM'ing with MssrDuck 50% of the population of MssrDuck's buddy list would conclude that the agent behind the moniker was human, then whatever agency is responding to messages @MssrDuck, it's intelligent.
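To make the criterion concrete, here is a minimal sketch of the decision rule as stated above: poll MssrDuck's buddy list and declare intelligence if at least half of the judges conclude they were chatting with a human. The function name and the 0.5 threshold are illustrative choices for this post's framing, not a canonical protocol.

```python
def passes_turing_test(verdicts, threshold=0.5):
    """verdicts: one boolean per judge, True if that judge concluded
    the agent behind the moniker was human."""
    if not verdicts:
        return False
    return sum(verdicts) / len(verdicts) >= threshold

# For example, 50 judges on the buddy list, 27 of whom were fooled:
buddy_list_verdicts = [True] * 27 + [False] * 23
print(passes_turing_test(buddy_list_verdicts))  # True
```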
This proposal says that language is the interface across which we make probes for intelligence. This may or may not be the whole story, but at least it is a story. So, let's run with it for a bit. Specifically, let's pick up where we left off in the last post with the proposal for a language-based interface between the collective and the individual. The immediate observation is that with such an interface we could employ Alan's test. The individual (many of them, individually) could probe the collective with language-based assays. Conversely, the collective could probe the individual with similar assays and invite its other collective friends to do the same. Each side is naturally looking for signs of intelligent life.
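As a toy illustration of what such a peer-to-peer, language-based interface might look like, consider the sketch below. Both sides expose the same conversational surface, so exactly the same assay can be run in either direction. The class names and canned replies are purely hypothetical, standing in for whatever the collective and the individual actually say to each other.

```python
class Interlocutor:
    """Anything that can be probed across the language interface."""
    def converse(self, message: str) -> str:
        raise NotImplementedError

class Individual(Interlocutor):
    def converse(self, message: str) -> str:
        return f"I hear you: {message}"

class Collective(Interlocutor):
    def converse(self, message: str) -> str:
        return f"We hear you: {message}"

def assay(subject: Interlocutor, probes):
    """Send each probe across the interface; the replies are what the
    prober must then judge for signs of intelligent life."""
    return [subject.converse(p) for p in probes]

# The same assay runs in both directions:
print(assay(Collective(), ["are you there?"]))  # the individual probing the collective
print(assay(Individual(), ["are you there?"]))  # the collective probing the individual
```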
But, now, let's throw in the ideas coming from previous posts regarding complexity and scale. It appears that scaling along spatio-temporal lines has little to do with complexity. Smaller does not mean simpler. i've made this point elsewhere, but i'll make it again here. Whether one considers string-theoretic or loop-quantum-gravity-based proposals, from a computational perspective the basic building blocks of these unified physical theories end up having the same computational expressiveness as the universe one builds out of them. After building computational models with this property, i became convinced that this is not necessarily a bad thing or a sign of a lack of progress, but it is a sign that we have to divorce ourselves from the sense that spatio-temporal scale is related to complexity.
Once we've made that step, we might naturally ask whether entities at smaller (resp., larger) scales than ourselves have not only similar levels of complexity but similar kinds of capacity and capability. Could our very cells (and their proteins, and their proteins' atoms, ...) be intelligent in the way we understand intelligence? That question leads us to Mr. Turing's test. And an attempt to employ Mr. Turing's test across these scale boundaries leads us to need a language by which the collective and the individual can speak to each other as peers. This is a means, in principle, to check -- without venturing out of our own backyard -- whether we might not be so alone after all.
Thursday, April 17, 2008