Where Does Innovation Come From?

October 2, 2014 • Walter Isaacson

We live in the age of computers, but few of us know who invented them. Because most of the pioneers were part of collaborative teams working in wartime secrecy, they aren’t as famous as an Edison, Bell or Morse. But one genius, the English mathematician Alan Turing, stands out as a heroic-tragic figure, and he’s about to get his due in a new movie, “The Imitation Game,” starring Benedict Cumberbatch, which won the top award at the Toronto Film Festival in September and will open in theaters in November.

The title refers to a test that Turing thought would someday show that machines could think in ways indistinguishable from humans. His belief in the potential of artificial intelligence stands in contrast to the school of thought that holds that the combined talents of humans and computers, working together as partners, will indefinitely be more creative than computers working alone. Today, as disappointing as the quest for pure artificial intelligence has been, astonishing innovations have come from finding ways to connect humans and machines more intimately. As the movie about him shows, Turing’s own deeply human personal life served as a powerful counter to the idea that there is no fundamental distinction between the human mind and artificial intelligence.

Turing, who had the cold upbringing of a child born on the fraying fringe of the British gentry, displayed a trait common among innovators: in the words of his biographer Andrew Hodges, he was “slow to learn that indistinct line that separated initiative from disobedience.”

He taught himself early on to keep secrets. At boarding school, he realized he was homosexual, and he became infatuated with a classmate who died of tuberculosis before they graduated. During World War II, he became a leader of the teams at Bletchley Park, England, that built machines to break the German military codes. Feeling the need to hide both his sexuality and his code-breaking work, he often found himself playing an imitation game by pretending to be things he wasn’t. He also wrestled with the issue of free will: Are our personal preferences and impulses all predetermined and programmed, like those of a machine?

These questions came together in a paper, “Computing Machinery and Intelligence,” that Turing published in 1950. With a schoolboy’s sense of fun, he invented a game, one that is still being played and debated, to give meaning to the question, “Can machines think?” He proposed a purely empirical definition of artificial intelligence: if the output of a machine is indistinguishable from that of a human brain, then we have no meaningful reason to insist that the machine isn’t “thinking.” His test, now usually called the Turing Test, was a simple imitation game: an interrogator sends written questions to a human and a machine in another room and tries to determine which is which.
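
To make the mechanics of that protocol concrete, here is a minimal sketch in Python. It is purely illustrative, not anything Turing specified: the canned-reply “machine,” the stand-in “human” and the random-guessing interrogator are all hypothetical. The point is only the structure of the game, in which identities are hidden, written answers are exchanged and the interrogator must guess which respondent is which.

```python
import random

def human_answer(question: str) -> str:
    """Stand-in for the human respondent (hypothetical)."""
    return "I'd have to think about that, but probably not."

def machine_answer(question: str) -> str:
    """Toy bot offering the kind of parlor-trick riposte the essay mentions (hypothetical)."""
    canned = [
        "That is an interesting question.",
        "Could you rephrase that?",
        "Yes, in a manner of speaking.",
    ]
    return random.choice(canned)

def imitation_game(questions, interrogator_guess):
    """Run one round: hide identities, collect written answers, and return True
    if the machine fooled the interrogator (i.e., the guess was wrong)."""
    respondents = {"A": human_answer, "B": machine_answer}
    if random.random() < 0.5:  # shuffle the hidden labels
        respondents = {"A": machine_answer, "B": human_answer}

    transcript = {label: [fn(q) for q in questions]
                  for label, fn in respondents.items()}
    guess = interrogator_guess(transcript)  # the label the interrogator thinks is human
    truly_human = next(label for label, fn in respondents.items() if fn is human_answer)
    return guess != truly_human

# A naive interrogator who guesses at random is fooled about half the time,
# which is one way to see why Turing's 30%-in-five-minutes threshold is a low bar.
fooled = sum(
    imitation_game(["Can a crocodile play basketball?"],
                   lambda transcript: random.choice(["A", "B"]))
    for _ in range(1000)
)
print(f"Machine fooled the random-guessing interrogator in {fooled} of 1000 rounds")
```

The sketch captures only the protocol; whether any answer reflects genuine thinking is exactly the question the test leaves open.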

Turing predicted that in 50 years there would be machines that, for five minutes, could fool a human questioner 30% of the time. Even though that is an exceedingly low bar, after more than 60 years the only machines that can make feeble claims of passing the Turing Test are those programmed to give clever parlor-trick ripostes, and no one would believe they are actually engaged in serious thinking. More to the point, philosophers led by Berkeley professor John Searle contend that it would be wrong to ascribe intentions and consciousness and “thinking” to a machine, even if it could fool 100% of questioners indefinitely.

Turing’s ideas hark back to the work done a century earlier by Ada Lovelace, the daughter of Lord Byron. In an attempt to assure that she didn’t turn out to be a romantic poet like her father, Lady Byron had Ada tutored mainly in mathematics, as if that were an antidote to artistic thinking. The result was that, like Steve Jobs and other great innovators of the digital age, she took joy in connecting the arts and sciences. She embraced what she called “poetical science,” which linked her rebellious imagination to her enchantment with numbers.

Her father was a Luddite, literally; his only speech in the House of Lords was a defense of the followers of Ned Ludd, who were smashing the new mechanical looms that were putting weavers out of work. But Ada admired how punch cards could instruct those machines to weave beautiful patterns, and she connected this to her friend Charles Babbage’s plan to use punch cards in a numerical calculator.

In the notes she published about Babbage’s Analytical Engine, Ada described the concept of a general-purpose machine, one that could process not just numbers but anything that could be noted in symbols, such as music, designs, words or even logic. In other words, what we would call a computer.

But no matter how many logical tasks such machines could perform, there was one thing they would never be able to do, Ada insisted. They would have no ability to actually think and “no pretensions whatever to originate anything.” Humans would supply the creativity; the machine could only do what it was told. In his paper on the “imitation game,” Turing dubbed this “Lady Lovelace’s Objection” and tried to refute it.

Decade after decade, new waves of experts have claimed the imminent arrival of artificial intelligence, perhaps even a “singularity” when computers are not only smarter than humans but can also design themselves to be even smarter, and will thus no longer need us mere mortals. Ever since breathless newspaper reports appeared in 1958 about a “Perceptron” that would mimic the neural networks of the human brain and be “capable of what amounts to original thought,” enthusiasts have declared that brainlike computers are on the visible horizon, perhaps only 20 years away. Yet true artificial intelligence has so far remained a mirage, always about 20 years away.

Computers can do some of the toughest tasks in the world (assessing billions of possible chess positions, finding correlations in hundreds of Wikipedia-size information repositories), but they cannot perform some of the tasks that seem most simple to us mere humans. Ask Google a hard question like “What is the depth of the Red Sea?” and it will instantly respond “7,254 feet,” something even your smartest friends don’t know. Ask it an easy one like “Can a crocodile play basketball?” and it will have no clue, even though a toddler could tell you, after a bit of giggling.

At Applied Minds near Los Angeles, you can get an exciting look at how a robot is being programmed to maneuver, but it soon becomes apparent that it has trouble navigating an unfamiliar room, picking up a crayon, and writing its name. A visit to Nuance Communications near Boston shows the wondrous advances in speech-recognition technologies that underpin Siri and other systems, but it’s also apparent to anyone using Siri that you still can’t have a truly meaningful conversation with a computer, except in a fantasy movie. A visit to the New York City police command system in Manhattan reveals how computers scan thousands of feeds from surveillance cameras as part of a Domain Awareness System, but the system still cannot reliably identify your mother’s face in a crowd.

All of these tasks have one thing in common: even a 4-year-old can do them.

Perhaps the latest round of reports about neural network breakthroughs does in fact mean that, in 20 years, there will be machines that think like humans. But there is another possibility, the one that Ada Lovelace envisioned: that the combined talents of humans and computers, when working together in partnership and symbiosis, will indefinitely be more creative than any computer working alone.

This was the approach taken by the most important unsung pioneers of the digital age, such as Vannevar Bush, J.C.R. Licklider and Doug Engelbart. “Human brains and computing machines will be coupled together very tightly, and the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today,” Licklider wrote in 1960.

IBM is now pursuing this symbiosis with Watson, the question-answering machine that beat human “Jeopardy!” champions. It is being configured to work in partnership with doctors to diagnose and treat cancers. Chief executive Ginni Rometty was so impressed that she created a new Watson Division of the company. “I watched Watson interact in a collegial way with the doctors,” she said. “It was the clearest testament of how machines can truly be partners with humans rather than try to replace them.” Even Google, the ultimate question-answering system, combines the power of computer algorithms with the millions of human judgments made each day by people who create links on the sites they curate.

Despite his belief in the viability of artificial intelligence, Turing was a testament to the power of linking human creativity to computer processing power. In addition, the complex emotional components of his life were a reminder of how machines are still fundamentally different from us mysterious mortals, even if they can occasionally fool us in an imitation game.

People seeking to debunk Turing’s imitation game often cited the role that sexual and emotional desires play in humans, distinguishing them from machines. That topic dominated a BBC radio debate, broadcast in January 1952, between Turing and a famous brain surgeon, Sir Geoffrey Jefferson. When the moderators asked about the role played by “appetites, desires, drives, instincts” that might set humans apart from machines, Jefferson repeatedly invoked sexual desires. Man is prey to “sexual urges,” he said, and “may make a fool of himself,” adding that he wouldn’t believe a machine could think until he saw it touch the leg of a female machine.

Turing fell quiet during this part of the discussion. During the weeks leading up to the broadcast, he was engaged in a series of actions that were so very human that a machine would have found them incomprehensible.

He had picked up a 19-year-old working-class drifter named Arnold Murray on the street and begun a relationship. When he returned from the BBC recording, he invited Murray to move in. One night Turing described to Murray his fantasy of playing chess against a nefarious computer that he was able to beat by causing it to show anger, then pleasure, then smugness. A few days later, Turing’s house was burglarized by a friend of Murray’s. When Turing reported the incident to the police, he disclosed his sexual relationship with Murray and was arrested for “gross indecency.”

At the trial, Turing pleaded guilty, though he made clear he felt no remorse. (In 2013, he was posthumously pardoned by Queen Elizabeth.) He was offered a choice: imprisonment or probation contingent on receiving hormone treatments to curb his sexual desires, as if he were a chemically controlled machine. He chose the latter, which he endured for a year.

Turing at first seemed to take it all in stride, but on June 7, 1954, he committed suicide by biting into an apple he had laced with cyanide. He had always been fascinated by the scene in “Snow White” in which the Wicked Queen dips an apple into a poisonous brew. He was found in bed with froth around his mouth, cyanide in his system, and a half-eaten apple by his side.

Was that something a machine would have done?