QUANTA

Tuesday, March 22, 2011


I Took the Turing Test

By DAVID LEAVITT

In his landmark 1950 paper “Computing Machinery and Intelligence,” the mathematician, philosopher and code breaker Alan Turing proposed a method for answering the question “Can machines think?”: an “imitation game” in which an “interrogator,” C, interviews two players, A and B, via teleprinter, then decides on the basis of the exchange which is human and which is a computer.

Turing’s radical premise was that the question “Can a machine win the imitation game?” could replace the question “Can machines think?” — an upsetting idea at the time, as the neurosurgeon Sir Geoffrey Jefferson asserted in 1949: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain — that is, not only write it but know that it had written it.” Turing demurred: if the only way to be certain that a machine is thinking “is to be the machine and to feel oneself thinking,” wouldn’t it follow that “the only way to know that a man thinks is to be that particular man”? Nor was the imitation game, for Turing, a mere thought experiment. On the contrary, he predicted that in 50 years, “it will be possible to program computers . . . to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”

Well, he was almost right, as Brian Christian explains in “The Most Human Human,” his illuminating book about the Turing test. In 2008, a computer program called Elbot came just one vote shy of breaking Turing’s 30 percent silicon ceiling. The occasion was the annual Loebner Prize Competition, at which programs called “chatterbots” or “chatbots” face off against human “confederates” in scrupulous enactments of the imitation game. The winning chatbot is awarded the title “Most Human Computer,” while the confederate who elicits “the greatest number of votes and greatest confidence from the judges” is awarded the title “Most Human Human.”

It was this title that Christian — a poet with degrees in computer science and philosophy — set out, in 2009, to win. And he was not about to go “head-to-head (head-to-motherboard?) against the top A.I. programs,” he writes, without first getting, as it were, in peak condition. After all, for Elbot to have fooled the judges almost 30 percent of the time into believing that it was human, its human rivals — the confederates — had to have failed almost 30 percent of the time to persuade the judges that they were human. To earn the “Most Human Human” title, Christian realized, he would have to figure out not just why Elbot won, but why humanity lost.
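The arithmetic behind “one vote shy” is easy to check. Taking the commonly reported 2008 figures (12 judges, 3 of whom Elbot convinced) as an illustrative assumption, a few lines of Python show why a single additional vote would have cleared Turing’s 30 percent mark:

```python
def deception_rate(votes_fooled: int, judges: int) -> float:
    """Fraction of judges who took the chatbot for the human."""
    return votes_fooled / judges

# Assumed figures for the 2008 Loebner Prize: 12 judges, 3 fooled by Elbot.
JUDGES = 12
print(deception_rate(3, JUDGES))  # 0.25, just under Turing's 0.30 threshold
print(deception_rate(4, JUDGES))  # one more vote clears it (about 0.333)
```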

His quest is, more or less, the subject of “The Most Human Human,” an irreverent picaresque that follows its hero from the recondite arena of the “Nicomachean Ethics” to the even more recondite arena of legal deposition to perhaps the most recondite arena of all, that of speed dating — and on beyond zebra. What Christian learns along the way is that if machines win the imitation game as often as they do, it’s not because they’re getting better at acting human; it’s because we’re getting worse.

Take, for example, the loathsome infinite regress of telephone customer service. You pummel your way through a blockade of menu options only to find that the live operator, once you reach her, talks exactly like the automated voice you’re trying to escape. And why is this? Because, Christian discovers, that’s how operators are trained to talk. Nor is this emulation of the electronic limited to the commercial realm. In chess, he notes, the “victory” of the computer program Deep Blue over Garry Kasparov had the paradoxical effect of convincing a whole generation of young chess players that the route to a grandmaster title was through rote memorization of famous matches. Whereas in the past these chess players might have dreamed of growing up to be Kasparov, master of strategy, now they dream of growing up to be Deep Blue, master of memory.

So how do you win the imitation game? “Just be yourself,” a past confederate advises Christian. But what does it mean to “be yourself”? In pursuing the question, Christian finds his way to Nietzsche, who “held the startling opinion that the most important part of ‘being oneself’ was — in the Brown University philosopher Bernard Reginster’s words — ‘being one self, any self.’ ” Which, as it turns out, is immensely challenging for computers. To circumvent the difficulty, for instance, the program known as Cleverbot “crowdsources” selfhood, borrowing intelligence from the humans who visit its Web site; it’s from this “conversational purée” that it draws its remarks and retorts, thereby generating the illusion of what Christian calls “coherence of identity.” But while Cleverbot can speak persuasively about “the things to which there is a right answer independent of the speaker,” if you ask it where it lives, “you get a pastiche of thousands of people talking about thousands of places.” What you realize, in other words, isn’t that you aren’t talking with a *human*, but that you aren’t talking with *a* human.
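Christian’s point can be made concrete with a toy retrieval chatbot (an invented conversation log and matching rule, not Cleverbot’s actual mechanism). Each reply is borrowed from whichever past human utterance best matches the prompt: questions with a speaker-independent answer come out consistently, while questions about identity return whichever visitor happened to match, a pastiche rather than a self.

```python
from difflib import SequenceMatcher

# Hypothetical log of (prompt, response) pairs harvested from past visitors.
LOG = [
    ("what is the capital of france", "Paris."),
    ("where do you live", "In Sydney."),
    ("where do you live", "A small town in Ohio."),
    ("where are you from", "I'm from Mumbai."),
]

def reply(prompt: str) -> str:
    """Borrow the response whose recorded prompt best matches the new one."""
    best = max(
        LOG,
        key=lambda pair: SequenceMatcher(None, prompt.lower(), pair[0]).ratio(),
    )
    return best[1]

print(reply("What is the capital of France?"))  # speaker-independent: "Paris."
# Identity questions return some visitor's answer, with no coherent self behind it.
print(reply("Where do you live?"))
```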

And that’s precisely the difficulty. In a wiki-age that privileges the collective over the personal, Christian suggests, we have become tone deaf to the difference between the human voice and the chatbot voice. Nor is the effect limited to the Loebner Prize. From smartphones whose predictive-text algorithms auto-correct the originality out of our language (“the more helpful our phones get, the harder it is to be ourselves”) to “super-automatic” espresso machines that sidestep the nuanced maneuvers of the human barista, technology militates against Ford Madox Ford’s “personal note,” Nietzsche’s “single taste”: against selfhood itself.

Christian is at his best when he is at his most hortatory. “Cobbled-together bits of human interaction do not a human relationship make,” he inveighs early on. “Not 50 one-night stands, not 50 speed dates, not 50 transfers through the bureaucratic pachinko. No more than sapling tied to sapling, oak though they may be, makes an oak. Fragmentary humanity isn’t humanity.” And later: “For everyone out there fighting to write idiosyncratic, high-entropy, unpredictable, unruly text, swimming upstream of spell-check and predictive auto-completion: Don’t let them banalize you.”

As “The Most Human Human” demonstrates, Christian has taken his own words to heart. An authentic son of Roethke, he learns by going where he has to go, and in doing so proves that both he and his book deserve their title.

David Leavitt’s books include “The Man Who Knew Too Much: Alan Turing and the Invention of the Computer” and “The Indian Clerk.”


Source and/or read more: http://goo.gl/DXhlm
