From Zurich: “It’s not a human move” – AlphaGo, DeepMind, artificial intelligence and the future

[Image: AlphaGo]
14 March 2016 – Last week I was in Zurich for several days working with my #1 client on a variety of matters, as well as attending one of the required “on campus” sessions for my artificial intelligence program at the Swiss Federal Institute of Technology (ETH Zurich). It is a program run in tandem with my neuroscience program at the University of Cambridge. Yes, an opsimath at heart.

On Thursday we all arose (very) early to watch live from Korea the second game between Lee Se-dol, a South Korean professional Go player, and AlphaGo, a computer program developed by Google DeepMind in London, to play the board game Go.

Go is a brain-taxing board game, a little like an Eastern version of chess except many times more complex, with millions of devotees in China, Korea and Japan. Two players take turns putting black or white stones on a 19-by-19-line grid, with the goal of putting more territory under one’s control. The black player moves first, and the white player gets extra points (called komi) to compensate. AlphaGo’s algorithm uses a combination of machine learning and tree search techniques, combined with extensive training on both human and computer play. To understand how the game works, here’s a primer.
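As a minimal illustration of those rules, here is a bare-bones sketch in Python. The 7.5-point komi is, as I understand it, the figure used in the AlphaGo–Lee match under Chinese rules; captures and scoring are deliberately left out.

```python
# A bare-bones Go board: a 19x19 grid of intersections.
# "." = empty, "B" = black stone, "W" = white stone.
SIZE = 19
KOMI = 7.5  # compensation points awarded to White for moving second

board = [["."] * SIZE for _ in range(SIZE)]

def place(row, col, color):
    """Place a stone on an empty intersection (captures ignored here)."""
    assert board[row][col] == ".", "intersection already occupied"
    board[row][col] = color

place(3, 3, "B")     # Black opens; a corner point is a typical first move
place(15, 15, "W")
```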

Lee lost three straight times before scoring his first win yesterday. His victory is a reminder that Google’s Go-playing program has room for improvement, despite winning the first three games of the best-of-five series. Lee said:

“I was unable to beat AlphaGo because I could not find any weaknesses in the software’s strategy. But then I found two weaknesses in the artificial intelligence program”.

Lee said that when he made an unexpected move, AlphaGo responded with a move as if the program had a bug, indicating that the machine lacked the ability to deal with surprises. AlphaGo also had more difficulty when it played with a black stone, according to Lee.

Lee played with a white stone on Sunday. For the final match of the series, scheduled for tomorrow, Lee has offered to play with a black stone, saying it would make a victory more meaningful.

What do we learn from all of this?

Because there are more possible board positions in Go than atoms in the observable universe, and top players rely heavily on intuition, the popular Asian board game remained a holy grail for the artificial intelligence community for about two decades after chess was conquered by computers.

And Lee’s comment after the first game and AlphaGo’s winning move said it all:

“It’s not a human move.”

Because the story was not that the computer won, but how it won. A pivotal move by AlphaGo was so unexpected, so at odds with 2,500 years of Go history and wisdom, that some thought it must be a glitch. What will be remembered is that moment of bewilderment. As Lee said, “it is intuition not of the human kind”.

A classic fear about AI is that the machines we build to serve us will destroy us instead, not because they become sentient and malicious, but because they devise unforeseen and catastrophic ways to reach the goals we set them. Worse, if they do become sentient and malicious, then—like Ava, the android in the movie Ex Machina—we may not even realize until it’s too late, because the way they think will be unrecognizable to us. What we call common sense and logic will be revealed as small-minded prejudices, baked in by aeons of biological and social evolution, which trap us in a tiny corner of the possible intellectual universe.

But we have been here before.

When IBM’s Deep Blue beat chess Grandmaster Garry Kasparov in 1997 in a six-game chess match, Kasparov came to believe he was facing a machine that could experience human intuition. “The machine refused to move to a position that had a decisive short-term advantage,” Kasparov wrote after the match. It was “showing a very human sense of danger.” To Kasparov, Deep Blue seemed to be experiencing the game rather than just crunching numbers. 1

1 The best book to read on Deep Blue/Kasparov is Clive Thompson’s Smarter Than You Think. There is a link below in my “Sources” addendum.

Just a few years earlier, Kasparov had declared, “No computer will ever beat me.” When one finally did, his reaction was not just to conclude that the computer was smarter than him, but that it had also become more human. For Kasparov, there was a uniquely human component to chess playing that could not be simulated by a computer. 2

2 It should be noted that Kasparov also accused IBM of cheating, claiming that Grandmasters on the IBM team made at least one move for Deep Blue. By the end of the match he also thought IBM was spying on him.

Kasparov was not sensing real human intuition in Deep Blue; there was no place in its code, constantly observed and managed by a team of IBM engineers, for anything that resembled human thought processes. But if not that, then what? The answer started with another group of games with unlikely names: Go, Hex, Havannah, and Twixt. All of these have a similar design: two players take turns placing pieces on any remaining free space on a fairly large board (19-by-19 in Go’s case, up to 24-by-24 for Twixt). The goal is to reach some sort of winning configuration: by surrounding the most territory in the case of Go, by connecting two opposite sides of the board in Hex, and so on.

The usual way a computer plays chess is to consider various move possibilities individually, evaluate the resulting boards, and rank moves as being more or less advantageous. For games like Go and Twixt, this approach breaks down. Whereas at any point in chess there are at most a couple dozen possible moves, these games offer hundreds of possible moves (thousands in the case of Arimaa, which was designed to be a chess-like game that computers could not beat). Evaluating all or most possible board positions more than a couple of moves ahead quickly becomes impossible in any reasonable amount of time: a “combinatoric explosion”. 3 In addition, even the very concept of evaluation is more difficult than in chess, as there is less agreement on how to judge the value of a particular board configuration.

3 A favorite phrase of the physicist Lee Smolin.
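To make the explosion concrete, here is a back-of-the-envelope sketch in Python, using common ballpark branching factors (roughly 35 legal moves per chess position, roughly 250 per Go position; the exact figures vary by position):

```python
# Rough game-tree size: branching_factor ** depth.
# ~35 moves per chess position, ~250 per Go position (ballpark averages).
for name, branching in [("chess", 35), ("Go", 250)]:
    for depth in (2, 4, 6):
        print(f"{name}, {depth} plies ahead: ~{branching ** depth:.1e} positions")
```

Just six plies ahead, Go already has roughly five orders of magnitude more positions than chess, and a full Go game runs to hundreds of moves.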

Computer scientist and advanced Go player Martin Müller explains it:

Patterns recognized by humans are much more than just chunks of stones and empty spaces: Players can perceive complex relations between groups of stones, and easily grasp fuzzy concepts such as ‘light’ and ‘heavy’ stones. This visual nature of the game fits human perception but is hard to model in a program.

In other words, Go strategy lies not in a strictly formal representation of the game but in a variety of different kinds of visual pattern recognition and similarity analysis: sorting the pieces into different shapes and clumps, comparing them to identical or visually similar patterns immediately available to one’s mind, and quickly trimming the space of investigation to a manageable level. So might Kasparov have actually detected a hint of analogical thinking in Deep Blue’s play and mistaken it for human intervention?

Where are we going?

There are several foundational texts in our AI course at ETH Zurich,4 plus pretty much everything written by Marvin Minsky. In 1974 Minsky wrote:

4 The key texts are Philip Jackson’s Introduction to Artificial Intelligence and the Stuart Russell/Peter Norvig text Artificial Intelligence: A Modern Approach. There is a full source list at the end of this post.

“There is room in the anatomy and genetics of the brain for much more mechanism than anyone today is prepared to propose.”

Today we know through evolutionary genetics and domain-specific psychology that we do, indeed, have that “richness of mechanism” that Minsky called for 40+ years ago. I certainly have learned that the mind is organized into cognitive systems specialized for reasoning about objects, space, numbers, living things, and other minds; that we are equipped with emotions triggered by other people (sympathy, guilt, anger, gratitude) and by the physical world (fear, disgust, awe); that we have different ways of thinking and feeling about people in different kinds of relationships to us (parents, siblings, other kin, friends, spouses, lovers, allies, rivals, enemies); and that we have several peripheral drivers for communicating with others (language, gesture, facial expression).

And, no, we cannot yet explain or prove everything, either empirically or theoretically, in the way that molecular biologists demonstrate their claims. But at the levels of neuroanatomy and neurophysiology, we have shown that the brain is a system for information processing.

Demis Hassabis, the co-founder and chief executive of DeepMind, has noted that the study of human information processing, and the power of randomness, is amply visible in the new approaches that have finally enabled computers to play games like Go, Hex, Havannah, and Twixt at a professional level. At the heart of these approaches is the Monte Carlo method which, true to its name, relies on randomized statistical sampling rather than evaluating possible future board configurations for each possible move. For a given move, a Monte Carlo tree search will play out a number of random or heuristically chosen future games (“playouts”) from that move on, with little strategy behind either player’s moves. Most possibilities are never played out, which constrains the massive branching factor. If a move tends to lead to more winning games regardless of the strategy then employed, it is considered a stronger move. The idea is that such sampling will often be sufficient to estimate the general strength or weakness of a move.
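A minimal sketch of the playout idea in Python, assuming a hypothetical Game class with legal_moves(), copy_and_play(move), is_over(), and winner() methods. This is only the pure Monte Carlo core; real engines layer a full tree search (and, in AlphaGo, neural networks) on top of it:

```python
import random

def playout(game, player):
    """Play random moves to the end of the game; return 1 if `player` wins."""
    while not game.is_over():
        game = game.copy_and_play(random.choice(game.legal_moves()))
    return 1 if game.winner() == player else 0

def monte_carlo_move(game, player, n_playouts=200):
    """Score each legal move by the fraction of random playouts it wins."""
    best_move, best_score = None, -1.0
    for move in game.legal_moves():
        after = game.copy_and_play(move)
        wins = sum(playout(after, player) for _ in range(n_playouts))
        if wins / n_playouts > best_score:
            best_move, best_score = move, wins / n_playouts
    return best_move
```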

At DeepMind, his engineers have created programs based on neural networks, modelled on the human brain. These systems make mistakes, but learn and improve over time. They can be set to play other games and solve other tasks, so the intelligence is general, not specific. This AI “thinks” like humans do.

Therein lies the difference between AlphaGo and Deep Blue. Deep Blue was a hand-crafted program whose programmers distilled the knowledge of chess grandmasters into specific rules and heuristics, whereas DeepMind imbued AlphaGo with the ability to learn, and AlphaGo then learned the game through practice and study, which is much more human-like.
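A toy contrast of the two philosophies, with entirely made-up features and weights (real AlphaGo uses deep neural networks; a linear model keeps the sketch readable):

```python
import random

FEATURES = ["material", "mobility", "king_safety"]

# Deep Blue style: experts hand-pick both the features and the weights.
HAND_WEIGHTS = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}

def handcrafted_eval(position):
    return sum(HAND_WEIGHTS[f] * position[f] for f in FEATURES)

# AlphaGo style, in miniature: start from random weights and adjust them
# to predict game outcomes (+1 win, -1 loss) from played positions.
def learn_eval(games, lr=0.01, epochs=50):
    weights = {f: random.uniform(-0.1, 0.1) for f in FEATURES}
    for _ in range(epochs):
        for position, outcome in games:
            prediction = sum(weights[f] * position[f] for f in FEATURES)
            error = outcome - prediction
            for f in FEATURES:
                weights[f] += lr * error * position[f]  # one gradient step
    return weights
```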

Side note: having recently returned from the Mobile World Congress in Barcelona, where I met with both Google and IBM, I must also note this. Google is making its AI system beat the pants off a human Go player, and Google continues to generate massive amounts of money from its advertising business. The company, in general, seems to be doing the “science and math AI club projects” without making headlines with massive layoffs, and without giving analysts a flow of pure puffery PR which comedy writers could have converted to entertainment gold … if they didn’t have Donald Trump. And that’s IBM’s problem: a stream of press releases on Watson, but no revenue. And reading the IBM business “press” and trying to figure out what’s fact, what’s opinion, and what’s content marketing. Or content spam.

There is also one other notable difference between games like Go and Othello on the one hand and chess on the other. In Go you add stones, much as you add discs in Othello/Reversi: you are consuming the remaining board space. In chess, the unused board space usually only grows, as pieces are removed. This affects the “look-ahead”, and I will explore it in a subsequent post.
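A crude way to see the consequence for look-ahead, ignoring captures in Go and using a ballpark constant for chess:

```python
# In Go, every stone placed removes an empty point, so the number of legal
# moves shrinks roughly linearly as the game goes on (captures aside).
# In chess, the move count hovers around ~35 for much of the game.
GO_POINTS = 19 * 19  # 361 intersections

for move_number in (0, 100, 200, 300):
    print(f"Go, after {move_number} moves: "
          f"~{GO_POINTS - move_number} legal moves; chess: ~35")
```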

The future(?)

The more I study AI and neuroscience, the more I see the incontrovertible evidence that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos. As I noted in a previous post, it is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists.

Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

But still … no brain on Earth is yet even close to knowing what brains do in order to achieve any of that functionality. That is the wonder. I also believe that since not all of the properties of nature are mathematically expressible—why should they be? It takes a very special sort of property to be so expressible—there are aspects of our nature that we will never get to by way of our science. An outright materialist could argue that all my acts, from the day of my birth, have been a determined result of genetics and environment. No. Not all of my acts are determined at the molecular and submolecular level.

Paul Kay, a biologist at Cambridge, posits that intuition comes to be learned much as we learn about right and wrong, good and evil: in the same way that we learn about geometry and mathematics. It is cultural learning, and it does not arise from innate principles that have evolved through natural selection. It is not like the development of language, sexual preference, or taste in food.

I like that. We know that gut-feelings, such as reactions of empathy or disgust, have a major influence on how children and adults reason about morality. I like it because it allows for moral realism. It allows for the existence of moral truths that people come to discover, just as we come to discover truths of mathematics. That “intuition” is not a mere accident of biology or culture.

So when people talk about the future of intelligent machines, and whether intelligent machines are going to take over and decide for themselves what to do, whether they will “intuit”, I suggest a little more study of the basic science of computation and human intelligence.

You can teach a machine to follow an algorithm and to perform a sequence of operations which follow logically from each other. It can do so faster and more accurately than any human. Given well-defined basic postulates or axioms, pure logic is the strong suit of the thinking machine. But exercising common sense in making decisions, and being able to ask meaningful questions, are, so far, the prerogative of humans. Merging intuition, emotion, empathy, experience and cultural background, and using all of these to ask a relevant question and to draw conclusions by combining seemingly unrelated facts and principles, are trademarks of human thinking not yet shared by machines. Even as we prepare machine learning algorithms and try to mimic the brain with deep neural networks across the domain sciences, we remain puzzled by the connected knowledge, the intuition, and the imaginative and organic reasoning tools that the mind possesses.

Maria Spiropulu, a physics professor at Caltech, explains it this way:

“It is difficult, perhaps impossible, to replicate on a machine. Infinite unconnected clusters of knowledge will remain sadly useless and dumb. When a machine starts remembering a fact (on its own time and initiative, spontaneous and untriggered) and when it produces and uses an idea not because it was in the algorithm of the human that programmed it but because it connected to other facts and ideas—beyond its “training” samples or its “utility function”—I will start becoming hopeful that humans can manufacture a totally new branch of artificial species—self-sustainable and with independent thinking—in the course of their evolution.”

So perhaps many of the AI pundits are right: we will see the emergence of hybrid human-machine chimeras. Human-born beings augmented with new machine abilities that enhance all or most of their human capacities, pleasures and psychological needs, to the point that thinking might be rendered irrelevant and, strictly speaking, unnecessary. That might provide ordinary thinking humans the better set of servants they have been looking for in machines.

Yes, I can see all sorts of potentially wonderful developments from this level of AI: healthcare, smartphone assistants, and robotics. And improvements in human consciousness, global solidarity, knowledge and ethics.

But I am also aware of the many trends operating towards opposite outcomes: a coarsening of taste, reduction to the least common denominator, polarization of property, power, and faith. I do not think we shall ever take the time or the opportunity to understand which policies lead to which outcomes, nor will we ever have the motivation and the courage to implement the more desirable alternatives.

Because the evidence to date is mixed on whether technical advances map monotonically onto human advances.


SOURCE LIST

If you’re interested in reading more about AI, start with these books and articles:

The most rigorous and thorough look at the dangers of AI:
Nick Bostrom – Superintelligence: Paths, Dangers, Strategies

The best overall overview of the whole topic and fun to read:
James Barrat – Our Final Invention

Controversial and a lot of fun. Packed with facts and charts and mind-blowing future projections:
Ray Kurzweil – The Singularity Is Near

But Kurzweil’s best book is his most recent: How to Create a Mind: The Secret of Human Thought Revealed

Articles, Papers, and Books:
Nils J. Nilsson – The Quest for Artificial Intelligence: A History of Ideas and Achievements
Steven Pinker – How the Mind Works
Vernor Vinge – The Coming Technological Singularity: How to Survive in the Post-Human Era
Moshe Y. Vardi – Artificial Intelligence: Past and Future
Stuart Armstrong and Kaj Sotala, MIRI – How We’re Predicting AI—or Failing To
Stuart Russell and Peter Norvig – Artificial Intelligence: A Modern Approach
Theodore Modis – The Singularity Myth
Gary Marcus – Hyping Artificial Intelligence, Yet Again
Steven Pinker – Could a Computer Ever Be Conscious?
John R. Searle – What Your Computer Can’t Know
Jaron Lanier – One Half a Manifesto
Paul Allen – The Singularity Isn’t Near (and Kurzweil’s response)
Arthur C. Clarke – Sir Arthur C. Clarke’s Predictions
Hubert L. Dreyfus – What Computers Still Can’t Do: A Critique of Artificial Reason
Stuart Armstrong – Smarter Than Us: The Rise of Machine Intelligence
